Categories
linux

Canonical’s Place

Canonical is the main commercial force behind the Ubuntu Linux operating system. Lately some bad blood has flowed in the greater free/open source community over decisions and directions in Ubuntu; these decisions fell from the sky like bombs, in that the larger community received no communiqués indicating the missions or their timing.

Mir fell out of the clear blue just recently. The project aims to replace the X Window System, one of the longest-running projects and biggest workhorses of desktop UNIX systems. The age and legacy support of X mean that parts of its design block progress for free desktops and other free computing devices.

But Wayland has already staked its claim to be the replacement. A lot of positive effort continues to go into Wayland and into supporting it in existing Linux applications. The existing contributors wear boots with the mud of X encrusted on them. The community knows the project by name, much as Compiz put X compositing on the tip of our tongues some years back.

This development comes as one of several incidents in which Canonical failed to work with the community, or at least to clue the community into its plans. The inclusion of the Ubuntu Shopping Lens, which searches commercial websites directly from the desktop, raised questions about Ubuntu’s commitment to privacy. Ubuntu One, a cloud storage service among other features, raised questions about Ubuntu’s approach to markets more generally. Other projects like Unity and Upstart (the main Ubuntu UI and a replacement initialization system, respectively) made the community feel that Canonical had decided to go it alone on key parts of the system.

Mir’s announcement again raises the question of whether Ubuntu and Canonical want to be part of the community or a parallel entity. They once again failed to engage with the community, which instead found out about the direction Ubuntu was moving after the fact. But maybe Mir will be different.

The best difference to hope for: a driver specification that allows competition. If the next generation of display server for Linux keeps its driver specification short and sweet, avoiding the possibility of proprietary drivers becoming bloated messes that do too much, it will truly bless us with its presence.

For too long, a performant driver meant a proprietary one. That meant ceding too much control of the system to said driver. In short, it grew into letting lobbyists write the laws. A tainted system. More importantly, it meant X entrenchment. The driver worked for X and only X. Writing a replacement for X would require either being too X-like or relying solely on free drivers (missing out on performance). The latter is the current state for both Wayland and Mir.

But if Mir (or Wayland, or both) provides new driver models that leave us with a small driver with minimal-yet-performant capability, and the rest of the code can be open, that will place the whole system on much firmer ground. We would face a day where we could write a non-X, non-Mir, non-Wayland system and still be able to fall back on proprietary drivers for their performance. It might also encourage the (partial or complete) opening of the proprietary drivers, with far less code for lawyers to worry over.

At present the prospect remains dim. But as both newcomers continue to mature (assuming that Mir gets the resources needed from Canonical), there will inevitably be compatibility layers between them, and some convergence may occur around the driver space. It remains possible that better free software can come of this. I hope it does.

Canonical ought to lead the larger community rather than stalk it as prey. That leadership means working with the community, contributing to it where possible, and, where the community and Canonical must diverge, diverging only by the least possible distance. That would make Mir a fork of Wayland, or maybe Wayland plus some special extensions.

But even if it couldn’t be, the least possible distance would still require some feedback to the Wayland developers. Where and why does their model fail? And why couldn’t those details come out months ago, at least when Mir started? If valid concerns exist, give them voice. As it stands, from my reading of the situation, the concerns arose from misunderstandings. That’s lamentable.

We need more companies doing Linux hands-on. Canonical deserves a stewardship role that grows both it and the software community. Times like these allow companies like Canonical to define themselves, and I hope they will learn the lesson and move forward with the community.

Categories
linux

Steam on Linux: Half-Life

It was the mid-to-late 1990s. Computers were becoming more popular, and computer games with them. In the early 1990s there was Myst, which was about the story, something like Zork but first-person. In 1996 there was Quake, which was about battling baddies, like its predecessors Doom and Wolfenstein 3D.

Around that time, Valve Software licensed the Quake engine (the underlying software that created the Quake world on your computer). In 1998 they released Half-Life. It was in many ways a closer marriage of the first-person shooter with the story and puzzle games that came before, at around a two-to-one ratio: lots of action, but a bit more story than before. You had non-player characters (NPCs), characters you don’t kill, either neutral or allied with you. You had movable boxes.

It’s a game that paved the way for a lot of modern games.

Every discussion I saw prior to the announcement came to the conclusion that we probably wouldn’t see this happen. Most of those centered around Counter-Strike 1.6, which uses the same GoldSrc Engine as Half-Life. The feeling was that Valve would focus on their newest titles first, and worry about these oldest games later, if ever.

A few years back, Valve began opening up to the Apple Macintosh systems, and most of their new games made their way over. But never the old ones. With this release, those systems now have these oldest games too.

One wonders why. When the first news of Steam coming to Linux arrived, reports indicated that their title Left 4 Dead 2 was their vehicle of experimentation.

When the beta began, it was instead Team Fortress 2. That made enough sense, in that it’s free-to-play. It meant they didn’t have to give away a game that beta testers might otherwise have bought. Giving the game to a few, in a small, closed beta, wouldn’t be costly; opening it up at large, to a largely untested audience, risks some loss.

Valve is very committed to the Linux platform, especially with the announcement of the forthcoming Steam boxes, basically set-top computers. They want to be as catalog-complete as possible to help drive adoption. They also had the opportunity to hit two platforms at once, which wasn’t there when it was only the Apple Macintosh.

Finally, with their flagship game sequels coming, they want people to be able to play the originals. There is a certain aspect of human psychology that values completeness. People want to have read every book, seen every episode. They want every achievement, to have left no stone unturned.

The question now is when we will see the rest of the Valve catalog for Linux. My guess is by summer. They probably don’t have as much work to do with the newer games, which have all been ported to the Apple Macintosh. There is some work, yes, but a lot of it will simply replicate previous work. They are likely targeting those releases for when the Steam platform leaves beta.

Other tasks will take longer, including their plans to release their SDKs for Linux. That will mean porting work that hasn’t been done for the Apple Macintosh systems. These will be very welcome, as they will mean both new blood into the mod/mapping/development community and faster compilation of assets.

Categories
linux

Extracting Audio

There are a lot of good talks posted online, but they are usually in video format. Most do not require visual attention to be understood, so it would make sense to publish the audio as well. It would save bandwidth and time.

But until that happens, you must download them, and then extract the audio. Here’s how I (currently) do this.

GitHub Gist 4464109: Small bash script to extract audio from video files.

#!/bin/bash

# Some consts
OUT_CODEC="libvorbis"
OUT_EXTENSION=".ogg"

pids=""

# Kick off the conversions
for file in *
do
  mime_type=$(file --brief --mime-type "${file}")
  # Discard the subtype:
  mime_type=${mime_type%/*}

  if [ "${mime_type}" = "video" ]
  then
    # Build the outfile
    outfile="${file%.*}${OUT_EXTENSION}"

    # Consider existence to mean that it's been done before
    if [ -e "${outfile}" ]
    then
      echo "Skipping ${file} (destination exists)..."
      continue
    fi

    # Announce it:
    echo -n "${file} TO ${outfile}..."

    # Do the conversion
    avconv -i "${file}" -vn -acodec "${OUT_CODEC}" "${outfile}" &> /dev/null &
    pid=$!
    echo "${pid}"

    pids="$pids $pid"
  fi
done

failed=""

# Waiting...
for pid in ${pids}
do
  echo "${pid} is pending..."
  wait ${pid} || failed="${failed} ${pid}"
done

if [ -z "${failed}" ]
then
  echo "Done."
else
  echo "Process(es) failed (${failed}), check state manually."
fi

Output will look something like (simulated):

$ ./extract_audio.sh
Skipping talk_0.mp4 (destination exists)...
talk_1.mp4 TO talk_1.ogg...18443
talk_2.flv TO talk_2.ogg...18444
talk_3.ogv TO talk_3.ogg...18445
18443 is pending...
18444 is pending...
18445 is pending...
Done.
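The two parameter expansions doing the heavy lifting in the script are `${mime_type%/*}` (strip the shortest trailing `/…`, leaving the MIME major type) and `${file%.*}` (strip the file extension). A quick illustration:

```shell
# Demonstrate the two parameter expansions the script relies on.
mime_type="video/mp4"
echo "${mime_type%/*}"    # prints: video

file="talk_1.mp4"
echo "${file%.*}.ogg"     # prints: talk_1.ogg
```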

Changes that would be desirable include:

  1. Not kicking off too many conversions at once. This would require basically melding the two loops, counting the number of active processes and only adding new ones as old ones finish.
  2. Better file location handling. Mixing input and output files isn’t ideal, and I end up manually moving the audio files to where they belong for easy consumption.
  3. Better file handling. Once I’m done with a file (assuming it wasn’t in the $failed list), it can be deleted. I don’t intend to watch the files that I convert to audio. The main risk is that a talk will have visually essential information that I’ll only discover upon listening, and if I still wanted to understand those portions I would have to redownload.
  4. cron integration. In theory I can add a script like this to my crontab, and then (assuming the changes directly above), simply download talks, and let the script manage the rest.
  5. Player integration. In my case this would be telling mpd to update the directory where I keep spoken word content, and possibly add new files to the MPD queue.
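The first item in the list could be sketched by merging the loops around a small helper that blocks while too many jobs are running. This is only a sketch under my own assumptions: MAX_JOBS is an arbitrary cap, and sleep stands in for the real avconv invocation:

```shell
#!/bin/bash
# Sketch for improvement 1: cap the number of concurrent conversions.
# MAX_JOBS is an arbitrary choice; 'sleep' stands in for the avconv call.
MAX_JOBS=4

throttle() {
  # Block until fewer than MAX_JOBS background jobs remain running.
  while [ "$(jobs -rp | wc -l)" -ge "${MAX_JOBS}" ]
  do
    sleep 0.2
  done
}

for n in $(seq 1 8)
do
  throttle
  sleep 1 &   # placeholder for: avconv -i "${file}" ... &
done

wait
echo "All conversions finished."
```

Bash’s `jobs -rp` lists the PIDs of currently running background jobs, so the helper needs no external bookkeeping.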

But one step at a time. I tend to go through phases where I listen to many talks and phases where I listen to none. Part of the reason is that I hadn’t built this script before, so I would often download talks and they would sit for a time until I manually extracted them.

Part of the future of computing ought to be building systems that enable us to easily interact with the digital world, without jumping through many hoops. Often what is possible is not taken advantage of because of the effort required to get there. For example, I have never visited the Musée du Louvre, despite it being on the same planet, because getting there would take a lot of time and money. But at least through their website and other sites on the Internet I can easily enjoy some of the art displayed there (and in the numerous other museums of the world).

Categories
linux

Open Beta for Steam on Linux

A welcome, if expected, surprise: Valve opened up their Linux beta of the Steam gaming platform, along with the Linux version of Team Fortress 2, in time for the end of the long count of the Mayan calendar (sorry, I know everyone’s made and heard enough Mayan calendar jokes already, and I’m even late to the apocalypse, but with it being the busy-busy holiday season I didn’t have time to get by the joke store to restock).

It takes a little administration to install if you’re not on their preferred platform of Ubuntu. On Debian it’s mostly down to version number discrepancies between Ubuntu and Debian (e.g., Ubuntu might have a specialized version number for a package that’s based on Debian’s, but different). The biggest pain is that you basically have to either rely on a private repository or disable apt-based updating (typically by commenting out the repository in /etc/apt/sources.list.d/[specific list]) to avoid complaints every time their package changes.
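Quieting those complaints can be as simple as commenting out the repository’s deb line. The file name below is a hypothetical stand-in, since the actual list name varies; check /etc/apt/sources.list.d/ for the real one:

```shell
# Comment out every active 'deb' line so apt stops checking the repository.
# 'steam.list' is a hypothetical name; look under /etc/apt/sources.list.d/
# for the real file. A .bak backup is kept so the change is easy to undo.
sudo sed -i.bak 's/^deb/# deb/' /etc/apt/sources.list.d/steam.list
```

Reversing the change is just a matter of restoring the .bak file or uncommenting the line.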

This is okay for the short term, but it will need to be fixed if they intend to support multiple distros in the long term, possibly via looser dependency specifications, or maybe by working with distros to have a steam metapackage that their package can depend upon.

So I finally played some Team Fortress 2 again. I’ve played it a bit under WINE, but had stopped some time back (I believe around the time of the release of the Pyrovision update) for various reasons. This was the first time I saw the Mann vs. Machine game mode (or MvM/Co-op, as it’s sometimes called). It seemed fun, except for having to return and upgrade after every wave of machines had been rendered nonfunctional.

That has to be my biggest peeve about the direction Team Fortress 2 took, or any game for that matter: don’t make me weigh so many options. Do I want to spend that much time deciding which weapons to scrap and which to give name tags? It just gets silly, having to manage hundreds of items, or not wanting to switch classes during MvM because I bought upgrades for a different class.

Maybe it’s just the gaming generation I came from, but it used to be you got random upgrades, and you liked them, dammit!

The Steam service runs well so far, as does Team Fortress 2. It will probably take a few months before other Source games are available, and the roadmap for non-Valve games isn’t clear yet, but the first piece of the puzzle is just about there.

No discussion of Linux gaming is complete without another look at graphics drivers. In any general thread about Steam on Linux, you’ll see them brought up, with people lamenting performance, stability, and closedness of the drivers. My experience with nVidia has been decent performance with near-satisfactory stability. That is to say, I do have some stability issues with the graphics driver, including things like my virtual terminals occasionally being rendered as artifacts in X (little 10-20 pixel squares), and sometimes my browser (Iceweasel, which is GPU-accelerated) will flicker all-black while playing games.

I’d imagine the troubles are at least this bad for AMD-based graphics, as in the past I used their cards/drivers and had problems as well.

Intel graphics and drivers are probably the smoothest except for performance. I say probably, as I don’t have any direct experience there.

It is the hope of the community that Steam will push all the graphics vendors to fix their problems, but even if that happens, that’s short of the true best outcome: completely open, performant drivers.

Categories
linux

Thoughts on GNOME 3.6 UX Changes

GNOME 3.6 has entered Debian Experimental, at least to the point where it’s installable save for a few applications. Here are some of my thoughts on the new experience.

Application Menus

In GNOME 3.6 some applications (specifically Files, previously Nautilus) move their menus to the spot next to Activities. This spot holds the menu of the currently focused application.

The main thing is to be aware of the change. You now have to go there for those applications, and you have to ensure the application is focused before doing so.

On the whole I don’t think this is that bad. Yes, I do happen to keep my Files instances way on the right of a two-monitor setup, and the menu is way on the left. But how often do I need that menu? Pretty much never. Occasionally I’d get somewhere via bookmarks in the old Nautilus. Every now and then I look over my preferences and tweak them. But neither were remotely a daily occurrence for me.

Assuming other applications that drop their menubars for this global menu spot take similar pains to avoid needing the menu very much, it won’t bother me.

In that, this design is a kind of dare to application developers: try to get all the bang out of the UI itself.

Scrollbars

Now that the Gtk+ 2 theme is closer to the Gtk+ 3 theme, all my scrollbars are normalized. That means my scrollbars are now like one-bead abacuses. Formerly I had bookends on them, buttons I could press. Now I only have the slider and non-slider areas.

The mouse buttons changed their meanings. Formerly middle-click would jump to the point in the scrollbar. Now that’s left-click.

Provided the scrollbar semantics don’t change every release, this is passable. While I would occasionally use the buttons, I can live without them.

My main complaint with the current-generation (and 3.4) scrollbars is the color of the inactive thumb and the blurred thumb. By those I mean the draggable part of the bar when it’s not in active use, and the bar in an inactive window.

I tend to lose my place, as they don’t stand out particularly well. That’s especially true of the blurred bar, where it can take me a full second to find the white-on-light-gray thumb.

Why do I need to know that? Often I am going through a scrollable area in a linear fashion, and knowing how far along I am matters for time management. More importantly, switching to a window means deciding where I need to scroll to. Before, with easily seen bar positions, I could make that decision as I was making the switch. Now I usually have to make the switch, then decide where to scroll.

In time, maybe I’ll learn to ignore the bar and simply try to discover my position based on the content, but more likely I’ll find an alternative to Adwaita (or modify it).

Still, scrollbars on the desktop are far better than what I’ve experienced on Android. There, scrollbars are basically vestigial. They only show you where you are. A few applications have functional scrollbars, but even those don’t really allow for jumping to points in the list.

This is especially annoying when you’ve read to the bottom of a page and need to get to the top. I’ve tried gesturing at it, but it doesn’t seem to understand what that means. So I find myself repeatedly dragging the page until I get to the top (or bottom). Give me a regular scrollbar any day!

New Lock Screen, Shutdown Without Holding Alt, etc.

One of the features people cared about in this release is the new lock screen, which you have to either drag out of the way or press enter to be able to unlock. It doesn’t bother me. I can press enter. The updated lock screen looks nicer.

Same goes for the shutdown with or without alt. I was used to holding alt, but I haven’t accidentally hit suspend so far.

The only things that have really bugged me so far are both in the Activities overview:

Centered Search Field

This just feels like it’s out in left field. Like it should belong somewhere, but no spot was preferred, so it was stuck there. It’s got an unusual sizing; it just feels awkward. Maybe I’ll get used to it.

Show Applications

This is thrown into the iconic application bar in Activities, and it brings up the full listing of applications available to the user. The button is immovable. It doesn’t belong there. Again, it feels like there was a desire to shove this somewhere, but no particularly apt place came forth.

It’s a useful feature. But as any Sesame Street alumnus could tell you, “one of these things is not like the others.” It should be elsewhere.

Conclusions

On the whole this is another solid release of GNOME. While I understand some of the concern others have voiced regarding the direction of the UI, I still think that most of it overreaches.

Specifically, they seem obsessed with the GNOME developers’ alleged obsession with making things useful on a tablet. Having never used a tablet, I can’t say how useful the current designs would be there.

But I can speak to the usability of a touch-based mobile device. So far it’s a lame experience. Way too many times have I clicked the wrong link. I can’t tell where a link goes, which is important to judging whether it’s worth clicking. Typing is difficult to begin with, made worse by ineffective input-editing tools (if I misspell “misspelled” as “nisspelled”, it’s faster to erase the whole word than to get the cursor to the “n” and fix it).

If the GNOME developers are eroding these sorts of problems in a product that will see tablet/touch adoption, then they deserve praise, so long as they remain cognizant of the needs of the desktop/laptop platforms as well.