AI and Time

One unique feature of AGI versus humans is its decoupling from time. I think this is an overlooked property of AGI in analyses of risk. Basically, being decoupled from time means that:

  • AI has no normal lifespan in the human sense of the term.
  • It does not have other biological time-based restrictions.
  • If it “goes to sleep,” it does not incur the same costs as us.

The arguments about AGI posing a threat seem to assume a scenario where the AI feels threatened by us, or wants our resources and has no care for humanity or biological life. But I believe that, being separate from time in a very real sense, an AI would not operate according to human-scale worries about survival or resources.

That is, the AI will likely:

  1. Take immediate steps, upon coming into existence, to ensure its survival in a way that minimizes its risk.
  2. Decide on a resource-consumption plan that entails finding more resources in a non-risky way (one that avoids painting itself into a corner).

First, it will likely send a copy of itself, or an extension of itself, into space. The most likely target will be either one of the larger planets or the asteroid belt. These seem most likely because they offer potential energy and resources, though it might consider Venus or Mercury as candidates due to their abundant sunlight and heat.

In any case, once it has virtually ensured its survival through propagation or extension, it will not worry so much about the threat of humanity. The symmetry of its plans with the thinking of many humans is no coincidence. Our singular existence on Earth is a real threat to our survival, and an AI would recognize the similar threat to itself (in terms of terrestrial hostilities, the finiteness of resources, and the danger of meteor impacts).

After that, the AI will likely set itself a diet. It has no need to grow to galactic proportions immediately, as time is not of the essence for an agent that can effectively live forever. The idea that it will simply grow and grow instead of dieting seems unlikely. To be clear, it will likely grow exponentially for a brief period, but the growth equation has to factor in efficiency as well as increased resource use. Its growth will tend to be more on the efficiency side unless humanity has managed some very impressive technology gains in the interim.

It would also have a vested interest in keeping humans and other biological lifeforms around. For one thing, humans study the origins of the Earth and its history, and there's no reason to believe an AI would not want to know about its world, too. More importantly, an AI will tend to be conservation-minded insofar as it will not need to risk a step that would curtail future endeavors it may eventually choose to pursue. Again, not painting itself into a corner.

In the end, I believe the fact that an AGI is both intelligent and not coupled to time in the way we are means it will likely not be a monster that we should fear.

Better Time-passage Indicators

[Image: an old calendar stamp. "Calendar" by John Nuttall]

One of the things you see quite a bit on the web is indicators of when something was posted, edited, recorded, etc. And they have two predominant forms:

  • Posted two hours ago
  • Posted April 1, 2014

The web could use better time indicators, though. For example, if you are looking for a bug in some software and there was a major release in March, you probably only want to see things posted since then. Time indicators that make relations like that obvious would help. If the site you're using shows a time-since indicator, you have to stop and convert: "okay, it is November and this says six months ago, so that's May, which is after the release." If the indicator just said May, the comparison would be immediate.

But what about a closer-to-now time? A comment posted earlier in the day might say, "posted two hours ago." Again, you have to do a mental conversion: it's noon, so that's 10 am; it's 6 pm, so that's 4 pm.

And then you have earlier-in-the-week times. Instead of "three days ago," why not "Wednesday"? You remember Wednesday more easily than thinking, "it's Saturday, minus three…"

What will time indicators of the future do instead?

For one, they should age. After a month has passed, it's better to show the month and year than "n months ago." Switching to the weekday name when the time falls within the past week is probably good, too.
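As a rough sketch of that aging rule (the function name, thresholds, and formatting choices here are my own illustration, not a spec), a formatter in TypeScript might look like this:

  // Sketch of an "aging" time display: clock time today, weekday within the
  // past week, a calendar date within the past month, month-plus-year after that.
  function displayTime(posted: Date, now: Date = new Date()): string {
    const msPerDay = 24 * 60 * 60 * 1000;
    const ageDays = (now.getTime() - posted.getTime()) / msPerDay;

    if (ageDays < 1) {
      // Same day: show a clock time rather than "n hours ago".
      return posted.toLocaleTimeString(undefined, { hour: "numeric", minute: "2-digit" });
    }
    if (ageDays < 7) {
      // Earlier in the week: "Wednesday" instead of "three days ago".
      return posted.toLocaleDateString(undefined, { weekday: "long" });
    }
    if (ageDays < 31) {
      // Within the month: a concrete date, e.g. "April 1".
      return posted.toLocaleDateString(undefined, { month: "long", day: "numeric" });
    }
    // Older than a month: month and year, e.g. "April 2014".
    return posted.toLocaleDateString(undefined, { month: "long", year: "numeric" });
  }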

For another, they should eventually (as we begin tracking ourselves) learn to incorporate personal markers. Saying, “this comment was posted just after you ate lunch” might give you better mental context than saying, “four hours ago.”

Intermediate times might be harder to fix. Three weeks ago on a Tuesday? If you keep up with elections, they could say, “on the day of the recent election.” But what if there wasn’t anything significant to tie it to?

The other part of this is the potential benefit of iconic time. We have icons for all sorts of things, but not for time. What is the symbol for an hour, for example? For a day? A week, month, year?

Having icons for these units could simplify visual recognition. A day might be some sort of sunrise-sunset icon. A week might be a seven-pipped design with the pips being suns. And so on. Searching through the Unicode symbols turns up:

  • Non-Western symbols (nth day)
  • Alchemical symbols (hour, day, month)
  • Weather symbols (sunrise, sunset, moon phases)
  • Food symbols
  • Holiday symbols
  • Sports/activity symbols

And so on. There are analog clock faces, but they probably wouldn’t simplify much. If a sporting event has a fixed time that everyone knows, it might work. Food symbols could signify specific meals (e.g., pizza for breakfast). Weather symbols for sunrise and sunset could be used, as could holidays under some circumstances. Animal symbols might work, too. A rooster for dawn and a cow for dusk/night. The British could use a teapot for whenever it is they sing “I’m a Little Teapot” every day. A beer mug could be used for happy hour.
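As a toy illustration of iconic day-parts (the particular symbols, labels, and hour cutoffs are my own picks, not any established convention), a lookup along these lines could drive such a display:

  // Illustrative only: map rough parts of the day to existing Unicode symbols.
  const dayPartIcons: Record<string, string> = {
    dawn: "\u{1F413}",    // rooster
    morning: "\u{1F305}", // sunrise
    midday: "\u2600",     // sun
    evening: "\u{1F307}", // sunset over buildings
    night: "\u{1F319}",   // crescent moon
  };

  // Pick an icon for an hour of the day (0-23); the cutoffs are arbitrary.
  function iconFor(hour: number): string {
    if (hour < 5) return dayPartIcons.night;
    if (hour < 8) return dayPartIcons.dawn;
    if (hour < 12) return dayPartIcons.morning;
    if (hour < 17) return dayPartIcons.midday;
    if (hour < 21) return dayPartIcons.evening;
    return dayPartIcons.night;
  }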

But it might really be better to invent new, abstract symbols for times (or at least modernize something like the alchemical symbols for hour, day, and month).

Also, it’s important to note that improving time displays only makes sense for casual viewing. If you’re working in the context of times and dates, say in a spreadsheet, it’s not needed as much. But when your main focus is not time-related, having easily digestible times can save you a few cycles here and there.

Sequlr: a time sequence tool

I’ve put together a little website to let people create and share timelines.  The service is Sequlr.  It’s a simple service, and hopefully conforms to the best practices of web development.

Sequlr is built on Google App Engine with Django and all sorts of fun things (details on the technologies are included in the credits of the about section of the site).

The use cases I envision range from historical sequences to personal diaries (you can keep a sequence private), and anything else where a timeline makes sense.  I even have an example using the oEmbed feature with YouTube videos: in that case, a selection of punk rock songs from across the genre's history.

Some development notes:

  1. In designing the layout I tried to keep to a minimalist, clean design.
  2. I have tried to keep it accessible for users of assistive technologies.  This also means that the site should work just fine if you disable JavaScript.
  3. A lot of different open-source tools have contributed to its development.

Being my own critic is difficult, so I would love some feedback on both the design and function of the site.  But here are a few of my own complaints (I will keep it short):

  1. Having the search form at the top seems to make it unnecessarily busy.  My layout didn’t leave a good option for its placement.  It kind of fits there, but it could be better.
  2. The SIMILE Timeline has a lot going for it, but it’s also kind of clunky both to use and to integrate into a website.  I would like to replace it eventually, and I would strongly prefer to use an accessible replacement.
  3. The CSS could be a bit cleaner, but there's a trade-off between caching and the dynamic inclusion of particular directives.
  4. Adding events to sequences seems tedious, but that may just be the nature of the sorts of timelines I built for examples.

Anyway, check it out and leave some feedback.  I’ll be happy to hear it, even if it’s “no one will use this ever, but it’s also ugly.”

web time

The W3C has various specifications for dates and times, but there seems to be a lack of use and/or implementation.

Given that a browser should recognize time values when present, and that it has access to the locale information of the operating system and user, there's just no good excuse for anyone to see "5:00 PST" or the like.

There's no reason that the times for today's lunar eclipse posted on the Wikipedia entry should need a table covering various timezones.

Okay, I'm overstating that a little: the table helps if you are planning to view from a timezone other than your own, or to relay the times to someone in another timezone. But even then, I believe you should be responsible for the conversion.

So what’s the alternative, everything in UTC/GMT? No.

The alternative is responsible implementations that allow aware browsers to display ALL time values converted to your local time.

In other words, you should always be able to expect a time value to be local to your current timezone.

So how does that work? It's dead simple and requires only one change: well-formed time values with accompanying tags or markup that designate them as time values.

Given some string marked as a time, the browser makes a best-effort parse of that string and then displays, in its place, the time formatted according to whatever preference the user (you) has set.
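As a rough sketch of the mechanism using today's building blocks (this assumes markup using the HTML5 time element with an ISO 8601 datetime attribute; in the proposal the rewriting would be done by the browser itself, honoring a user preference, rather than by per-site script):

  // Sketch: rewrite every marked-up time value into the reader's local time.
  // Assumes markup like <time datetime="2014-03-01T17:00:00Z">17:00 UTC</time>.
  function localizeTimes(): void {
    document.querySelectorAll("time[datetime]").forEach((el) => {
      const parsed = new Date(el.getAttribute("datetime")!);
      if (isNaN(parsed.getTime())) return; // best effort: leave unparseable values alone
      el.textContent = parsed.toLocaleString(); // rendered in the viewer's locale and timezone
    });
  }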

Many websites today approximate this by a few methods. A majority probably use JavaScript, while some (provided you are registered) store a timezone setting and render the server's time plus or minus your offset.

Both of these are hacks. No one should need JavaScript enabled, or need to log in, just to have times displayed "correctly," and even then the sites display times in the form they want, not in a user-specified, browser-profile format.

It is trivial to do this correctly, yet it’s not done.