One of the things that holds back Linux adoption is that the average user doesn’t understand how it works compared to Microsoft Windows or Apple Macintosh OS X. In some ways Apple Macintosh has reused some of the best ideas of free operating systems (e.g., package management). Note that I’ve not actually touched a modern Microsoft Windows installation, so my current knowledge is based on the little I hear through the various grapevines of the Internet Vineyard.
On Microsoft Windows, most programs come with their own installer. That installer is meant to look at your system and decide where everything goes and how to configure it. It still relies on certain services the OS provides, like the Registry for storing settings or discovering existing configurations. There is the option of a more Linux-like installer, but standalone executable installers still seem to prevail.
On Apple Macintosh OS X, programs use a more Linux-like installation method. The main difference is that they are typically acquired the way Windows programs are, with the only repository available to the package manager being the official Apple one.
On most Linux distributions, such as Debian and Fedora, the vast majority of packages are installed through a package manager. A package manager flips the control over installation a bit. Each package has certain rules associated with its installation, but installation begins by invoking the package manager, which is responsible for executing those rules.
An example will be useful. I’ll use Firefox since it’s widespread on all three platforms.
On Windows, you download the Firefox installer executable. When you run it, it checks the system out and copies its files into the installation folder (usually somewhere like C:\Program Files).
On OS X, you download a disk image file. This contains the .app structure for installation, along with some metadata.
On Linux, you might download a Debian archive. This also contains the program’s files, laid out in the directory structure they will occupy, along with some metadata.
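You can actually poke at that structure and metadata without installing anything. A minimal sketch using the standard `dpkg-deb` queries, wrapped in a shell function so nothing runs until you point it at a real .deb file:

```shell
# Sketch: inspecting a downloaded Debian archive (.deb) without installing it.
# Wrapped in a function so nothing executes until you call it yourself.
inspect_deb() {
    dpkg-deb --info "$1"      # show the package's metadata (the control information)
    dpkg-deb --contents "$1"  # list the files and where they will be placed
}
# Usage: inspect_deb firefox_*.deb
```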
The main difference is that Linux packages typically declare their dependencies: the other packages that must be installed for them to work. On Windows, most dependencies are bundled or mentioned in documentation, with no formal facility to account for them. OS X is much the same, except that developers tend to rely on the facilities the OS itself provides rather than requiring anything separate. On Linux, the expectation is that the computer should figure out what to install and in what order (though, ultimately, humans still make those calls for now).
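Those declarations live in the package metadata. Here is an illustrative Debian control stanza; the package name and version constraints are invented, but the shape of the Depends field is real:

```
Package: imagetool
Version: 1.0-1
Architecture: amd64
Depends: libc6 (>= 2.17), libgtk-3-0 (>= 3.24)
Description: hypothetical image editor, shown only to illustrate the format
```

The package manager reads the Depends line and installs those packages (and their dependencies, recursively) before this one.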
Okay, back to the beaten path. The takeaway for a Windows user is that most of the software you will install on a Linux system comes from the distribution and is managed by it. If you want to edit some images, you might install GIMP, which means you ask the package manager to install it for you, rather than downloading it from the GIMP website.
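Concretely, asking the package manager for GIMP looks roughly like this. A sketch assuming a Debian/Ubuntu system (apt) or a Fedora system (dnf), wrapped in functions so nothing runs until you call the one matching your distribution:

```shell
# Sketch: installing GIMP from the distribution's repositories rather than
# downloading it from the GIMP website. Nothing runs until you call a function.
install_gimp_debian() {
    sudo apt update        # refresh the package index from the repositories
    sudo apt install gimp  # resolve dependencies, download, and install
}
install_gimp_fedora() {
    sudo dnf install gimp  # dnf refreshes its metadata automatically
}
```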
If you want to modify GIMP, you can go the traditional Windows route of downloading the source or checking it out from the repository, but you can also tell the package manager you want the source. This lets you take advantage of the work the package maintainers have already done to make the software buildable and packageable. A few simple commands and you’ve got the source ready to build (or modify first, then build).
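On a Debian-based system, those few commands look roughly like this. A sketch that assumes deb-src entries are enabled in your APT sources, again wrapped in a function so nothing runs by itself:

```shell
# Sketch: fetching and rebuilding GIMP's packaged source on Debian/Ubuntu.
# Assumes deb-src lines are enabled in your APT sources.
build_gimp_from_source() {
    apt source gimp               # download and unpack the packaged source
    sudo apt build-dep gimp       # install everything needed to build it
    cd gimp-*/                    # enter the unpacked source tree
    dpkg-buildpackage -us -uc -b  # build unsigned binary packages
}
```

The interesting part is `apt build-dep`: the same dependency machinery that installs software also knows what's needed to build it.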
That’s an important fact about the Linux universe: the ability to easily rebuild the software is essential to its health. It means a user can fix a bug without too much hassle (beyond the actual debugging). When you have a platform of tens of thousands of pieces of software that need to work together, someone who’s only casually using one piece doesn’t want to learn the world to fix it.
I would be interested to know how much non-OS software the average person installs on their computer, broken down by platform. My guess is that it’s not that much. And my guess is that most of the third-party software that goes on OS X and Windows has free alternatives in most Linux distributions’ repositories.
Smoothing out the rough edges, making the concepts accessible, and giving people confidence in the support channels are the main challenges to adoption.