Why Valve is Doing SteamOS

Lots of people question why Valve wants to make SteamOS, Steam Machines, Steam Link, and Steam Controllers. But Valve has been pretty open about their reasons. They are in several businesses:

  1. Making games
  2. Making game engines
  3. Selling games

They probably make the most off the third, but they still do the first two, and the third drives many of their decisions. They surely ask, as any business does, “how do we expand?” Selling more games requires more people buying games, and as the PC market declines a bit, that means other platforms.

Mobile, although increasing in power and certainly ubiquitous, is not currently a prime target. The living room is. The living room is a proven gaming environment. The living room has the big screen and the comfy couch. It makes a lot of sense for a gaming company to want to be there.

This same logic is driving decisions about engine design, not just for Valve but across the industry. Making content easier to create means a larger market with more lottery tickets to win consumer dollars. It makes the platform broader and expands what gaming means. So does game streaming, which is becoming more popular.

Other businesses could learn a lot from Valve and the gaming industry in this regard. Building out transport and lowering the friction to relocate to new opportunities would do wonders for the economy. Valve is doing both of those, in their own way, in their own market, with SteamOS.

By making a living room PC platform, they’re bridging a divide between two long-isolated groups: console gamers and PC gamers. With SteamOS in the living room, couch gamers will actually be able to play against mouse-and-keyboard gamers (whether with a Steam Controller or a mouse and keyboard of their own).

At the same time, the console makers are pushing their own initiatives to do the same. But at present it’s not clear if you will be able to buy a game for a console and play it on a PC. And even if you can, how widespread will that option be?

Valve has some challenges. They have to make sure the SteamOS platform has feature parity with consoles. That means video streaming and music. It means actively courting games to be on Linux and making sure drivers are up to the job. It even means working on better APIs like Vulkan to ensure a cleaner development-to-market process.

But they have advantages as well. The Steam Machine market is a market, not a single offering from one company. Consumers can decide when to upgrade, how much to spend, and so on. They’re delivering competition and choice to consumers and betting the consumers will make the right choices for them.

At the end of the day, Valve is working to expand their market. They are doing that to make more money, but they seem to be doing it in a way that is smart enough to mean more money for others too. And that’s what good capitalism should focus on.

Debian’s init Options

The Debian Project will choose a new default init system for its next major release (codename Jessie). The debate details (Debian Wiki: Debates: initsystem) include the following proposals:

  1. sysvinit (status quo)
  2. systemd
  3. upstart
  4. openrc
  5. One of the above for Linux, other(s) on non-Linux
  6. Multiple on Linux, at least one for every other kernel

The chief goal in switching? Bringing modern boot functionality (speed and lower resource use). Other goals include lowering the bar for packaging and maintenance, and taking advantage of newer kernel features.

Choosing an init system mainly comes down to weighing the amount of work required against the amount of benefit gained. Unfortunately, some aspects of this debate must focus on other things.

The main contenders, systemd and upstart, both have at least one strike against them:

  • systemd looks technologically superior, but that superiority makes it a non-option for at least some non-Linux kernels (owing to its use of Linux-specific features), and supporting other kernels would require much effort. It also takes a different approach to being pid 1, rolling in functionality that has long been outside of init's domain.
  • upstart can be supported more readily, though porting it to non-Linux kernels would still require similar, if slightly lesser, effort. Worse, Ubuntu’s stewardship of upstart hampers it with the Canonical Contributor License Agreement problem.
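
To make that contrast concrete, and to show why either contender would lower the packaging bar mentioned above, here is a rough sketch of what a trivial service definition might look like under each. The daemon name and paths are hypothetical, and either file stands in for what would otherwise be a few dozen lines of sysvinit boilerplate shell:

    # Hypothetical systemd unit, e.g. /etc/systemd/system/example.service
    [Unit]
    Description=Example daemon
    After=network.target

    [Service]
    ExecStart=/usr/bin/exampled --foreground
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

    # Hypothetical upstart job, e.g. /etc/init/example.conf
    description "Example daemon"
    start on runlevel [2345]
    stop on runlevel [!2345]
    respawn
    exec /usr/bin/exampled --foreground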

A Contributor License Agreement states that, by signing it, you grant rights over your contributions to the project maintainer. The Canonical CLA goes a step further, claiming for Canonical the right to relicense contributions under non-free terms.

In the Free/Open Source world, that makes it about as attractive as poison ivy. Just as important, some who contribute as part of their job may be barred from participating outright: a company that sees benefit in open source will probably see hostility in its employees’ work being tied into a CLA of this sort (or any sort).

It all adds up to one difficult decision. Since neither major contender reduces Debian’s workload, the decision will boil down to technical merit. That makes systemd the more likely choice.

What of non-Linux, then? openrc or sticking with sysvinit both seem plausible. Debian is unlikely to abandon their work with other kernels, so they will probably bite their tongues and put up with the extra work of dual systems for now. That also means their Linux decision will remain a technical hybrid for the time being.

But not forever. Post-Jessie, I expect Debian will re-evaluate and hopefully find a more useful option to shed some of the extra weight they will take on in the short-term, whether that means configuration conversion tools, or something else.

The main reason upstart seems unlikely: Ubuntu and Canonical never took the time to lead the way on non-Linux support, and while some Debian packages might have an easier time adopting upstart configurations, systemd’s feature set seems a bit more powerful.

Canonical’s Place

Canonical: the main commercial force behind the Ubuntu Linux operating system. Lately, bad blood has flowed in the greater free/open source community over decisions and directions in Ubuntu; these decisions fell from the sky like bombs, in that the larger community received no communiques indicating the missions or their timing.

Mir fell out of the clear blue just recently. The project aims to replace the X Window System, one of the longest-running projects and biggest workhorses of desktop UNIX systems. The age and legacy support of X mean that parts of its design block progress for free desktops and other free computing devices.

But Wayland already staked its claim to be the replacement. A lot of positive effort continues to go into Wayland and into supporting it in existing Linux applications. Its contributors wear boots with the mud of X encrusted on them. The community knows the project by name, much as Compiz put X compositing on the tip of our tongues some years back.

This development comes as one of several incidents in which Canonical failed to work with the community, or at least to clue the community into its plans. The inclusion of the Ubuntu Shopping Lens, which searches commercial websites directly from the desktop, raised questions about Ubuntu’s commitment to privacy. Ubuntu One, a cloud storage service among other features, raised questions about Ubuntu’s approach to markets more generally. Other projects like Unity and Upstart (the main Ubuntu UI and a replacement initialization system, respectively) made the community feel like Ubuntu had decided to go it alone on key parts of its system.

Mir’s announcement again raises the question of whether Ubuntu and Canonical want to be part of the community or a parallel entity. They once again failed to engage with the community, which instead finds out about the direction Ubuntu is moving after the fact. But maybe Mir will be different.

The best difference to hope for: a driver specification that allows competition. If the next generation of display server for Linux keeps its driver specification short and sweet, avoiding the possibility of proprietary drivers becoming bloated messes that do too much, it will be a genuine blessing.

For too long, a performant driver meant a proprietary one. That meant ceding too much control of the system to said driver. In short, it grew into letting lobbyists write the laws: a tainted system. More importantly, it meant X entrenchment. The driver worked for X and only X. Writing a replacement for X would either require being too X-like or relying solely on free drivers (and missing out on performance). The latter is the current state for Wayland and Mir.

But if Mir (or Wayland, or both) provides a new driver model that leaves us with a small driver of minimal-yet-performant capability, with the rest of the code open, the whole system will rest on much firmer ground. We could see a day when we write a non-X, non-Mir, non-Wayland system and still fall back on proprietary drivers for their performance. It might also encourage the (partial or complete) opening of the proprietary drivers, with far less code for lawyers to worry over.

At present the prospect remains dim. But as both newcomers continue to mature (assuming that Mir gets the resources needed from Canonical), there will inevitably be compatibility layers between them, and some convergence may occur around the driver space. It remains possible that better free software can come of this. I hope it does.

Canonical ought to lead the larger community rather than stalk it as prey. That leadership means working with the community, contributing to it where possible, and, where the community and Canonical must diverge, diverging by the least possible distance. That would make Mir a fork of Wayland, or maybe Wayland plus some special extensions.

But even if it couldn’t be, the least possible distance would still require some feedback to the Wayland developers. Where and why does their model fail? And why couldn’t those details come out months ago, at least when Mir started? If valid concerns exist, give them voice. As it stands, from my reading of the situation, the concerns stem from misunderstandings. That’s lamentable.

We need more companies doing Linux hands-on. Canonical deserves stewardship that grows it and the software community. Times like these allow companies like Canonical to define themselves, and I hope they will learn the lesson and move forward with the community.

Open Beta for Steam on Linux

A welcome, if expected, surprise: Valve opened up the Linux beta of their Steam gaming platform, along with the Linux version of Team Fortress 2, in time for the end of the Long Count of the Mayan calendar (sorry, I know everyone’s made and heard enough Mayan calendar jokes already, and I’m even late to the apocalypse, but with it being the busy-busy holiday season I didn’t have time to get by the joke store to restock).

It takes a little administration to install if you’re not on their preferred platform of Ubuntu. On Debian it’s mostly down to version number discrepancies between Ubuntu and Debian (e.g., Ubuntu might have a specialized version number for a package that’s based on Debian’s, but different). The biggest pain is that you basically have to either rely on a private repository or disable apt-based updating (typically by commenting out the repository entry in /etc/apt/sources.list.d/[specific list]) to avoid complaints every time their package changes.
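
For what it’s worth, the workaround itself is trivial; here is a rough sketch, where the file name and the exact repository line are illustrative rather than whatever Valve’s package actually installs:

    # Comment out the "deb" line so apt stops checking the Steam repository
    # (the file name below is illustrative):
    sudo sed -i 's/^deb /# deb /' /etc/apt/sources.list.d/steam.list
    sudo apt-get update
    # To re-enable the repository later, reverse the edit:
    # sudo sed -i 's/^# deb /deb /' /etc/apt/sources.list.d/steam.list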

This is okay for the short term, but it will need to be fixed if they intend to support multiple distros in the long term, possibly via looser dependency specifications, or maybe by working with distros on a steam metapackage that their package can depend upon.

So I finally played some Team Fortress 2 again. I had played it a bit under WINE, but stopped some time back (I believe around the release of the Pyrovision update) for various reasons. This was the first time I saw the Mann vs. Machine game mode (or MvM/Cooperative, as it might be called). It seemed fun, except for having to return and upgrade after every wave of machines had been rendered nonfunctional.

That has to be my biggest peeve about the direction Team Fortress 2 took, or any game for that matter: don’t make me weigh so many options. Do I really want to spend that much time deciding which weapons to scrap and which ones to give name tags? It just gets silly, having to manage hundreds of items, or not wanting to switch classes during MvM because I bought upgrades for a different class.

Maybe it’s just the gaming generation I came from, but it used to be you got random upgrades, and you liked them, dammit!

The Steam service runs well so far, as does Team Fortress 2. It will probably take a few months before other Source games are available, and the roadmap for non-Valve games isn’t clear yet, but the first piece of the puzzle is just about there.

No discussion of Linux gaming is complete without another look at graphics drivers. In any general thread about Steam on Linux, you’ll see them brought up, with people lamenting the performance, stability, and closedness of the drivers. My experience with nVidia has been decent performance with near-satisfactory stability. That is to say, I do have some stability issues with the graphics driver, including my virtual terminals occasionally being rendered as artifacts in X (little 10-20 pixel squares), and my browser (Iceweasel, which is GPU-accelerated) sometimes flickering all-black while I play games.

I’d imagine the troubles are at least this bad for AMD-based graphics, as in the past I used their cards/drivers and had problems as well.

Intel graphics and drivers are probably the smoothest except for performance. I say probably, as I don’t have any direct experience there.

It is the hope of the community that Steam will push all the graphics vendors to fix their problems, but even if that happens, it falls short of the true best outcome: completely open, performant drivers.

Attack of the Undead Penguins

Steam coming to Linux, you say?

People following the Valve/Steam/Linux news already know that the awesome hackers at Valve have Left 4 Dead 2 running on Ubuntu like a champ. They know that Valve has been doing some work toward a so-called three-meter display (i.e., for television viewing), and have probably speculated that Valve is at least considering building a console.

This is a post about what I’m looking forward to seeing out of Valve on Linux.

Playing Games

Foremost, I’m looking forward to playing games without even the minor inconveniences of WINE. There are often tweaks, features to turn off, or some minor thorn in just about every game I’ve played on WINE. WINE is awesome, and it’s made some money for game companies, as there are games I bought because I knew I could play them.

But it’s not perfect, and for people who eschew yak shaving just to play a game, the set of titles they might purchase and play shrinks (I’m not counting side projects like PlayOnLinux, as I’ve not tried them).

For a lot of games, if they make it to Linux, that means getting full eye candy. Full features.

Building Games

Secondly, I’m hopeful that the game creation tools will be coming to Linux. Some of these kind-of-sort-of run under WINE, but my experience with them hasn’t been nearly as good as with games. Even if the current-generation tools don’t make it, maybe the next generation will.

The lower the barrier to entry for creating game content, the more that will be created, and the better games we will see. That’s true of technology in general.

I made a few maps years ago under Windows, but the few times I tried to build maps under WINE, it was much clunkier and fraught with peril. I’m very hopeful that in another decade or so it will be commonplace for gamers to be mappers and modelers, even if the extent of their mapping and modeling is just customizing existing maps and models.

Building Bridges

But, like others, my biggest hope is that this work will result in greater support for Open Source and Linux from the four corners of the earth: that it will widen the market for gaming while making governments and businesses evaluate Linux as a stronger possibility for their employees.

Just as Android has pushed a Linux-based device into far more hands than ever before, a Valve console could do it again. But so can Steam for Linux. There are plenty of people who keep a second computer or dual-boot just for games, and there is a general perception that Windows is king because of gaming. Linux getting more gaming means that even Apple may end up supporting iTunes for Linux one day.

A side bet: assuming Steam for Linux succeeds, could that competition bring Microsoft back from the brink? For years Microsoft has had the capacity to push the computing world far beyond its current state, but it has had no reason to. That has harmed its server market, which hasn’t been very competitive.

Conclusion

In any case, as a long-term fan of Valve’s games, I look forward to playing Half-Life 3 on Linux (just as I played Half-Life 2 here), and with any luck the Black Mesa modification will be playable on Linux too.