
Fork a kernel, kill an "OS" and revolutionize the desktop

The news of Con Kolivas, a Linux kernel developer, quitting that role, along with an interview in which he explains why, could and should make loud noises around the Free Software community, which often touts GNU/Linux as the best operating system one could use, and not just because of the freedom you have with it. In the interview he says certain things which should cause tectonic shifts in the mindset we have all been holding. Why didn't we realize these things before?

As you can see, the article intrigued me quite a bit, and got me thinking about a better way forward for the Free Software OS.

I'll go through some of the basic points that he makes and lay out one possible solution and its implications. However, take this article as just a discussion starter.

The Linux kernel is too bloated

It was made for servers because that's where its initial success was possible and where most of the money came from. Linux kernel developers are on the payroll of companies that care, first and foremost, about their server-related businesses. This makes it much less likely for kernel development to focus on needs related to the desktop, one of the most demanding market segments.

Developers are also developing on machines far more powerful than the machines ordinary desktop users are using, making performance issues that desktop users may experience practically invisible to these developers.

Lack of communication with desktop users

Linux kernel developers are normatively disconnected from the rest of the users, most of whom are desktop users, the kind making up 90% of all computer users. Since the companies that employ them mostly have no interests on the desktop, it is only logical that they will much of the time be oblivious to the needs of ordinary desktop users.

This is not because a desktop user does not have the freedom to participate. In the Free Software world we have the freedom, but we also have social and cultural norms which are often enough of an obstacle to healthy communication and cooperation. Some of those norms are beneficial, and some are quite hurtful.

Can a kernel developed by people not so attuned to the needs of desktop users lead an OS to success on the desktop?

Lack of innovation

It is largely unfashionable to accuse the GNU/Linux world of not being innovative enough, but accusing it of a lack of innovation is not the same as saying it isn't more innovative than the rest.

I believe Linus Torvalds once said that all operating systems suck, but "Linux just sucks less". I think this statement is very true, but useless if not taken as a call to improvement. Why does it have to suck at all?

The point that CK makes is related to performance issues on hardware which is so powerful that it shouldn't have performance issues at all. Why do we still need a specially optimized Linux kernel just to properly enable music production? Shouldn't real-time audio processing just work absolutely flawlessly on computers with dual-core CPUs? Even Windows XP doesn't have this requirement.

The question has to be asked: why does *anything* have to run slow on today's computers?

The Free Software movement is about freedom, which includes the freedom to innovate, but having this freedom won't make things just happen automatically. It is merely a beginning. We have to use this freedom to really do things better, to revolutionize the world of computing. But, so far, GNU/Linux is becoming just another bloated OS, incapable of making the most out of the most powerful hardware we have ever had (yes, that's probably your computer too).

It is leaving the revolution to the social and ideological side of the issue, while remaining stuck, on the technical side, in the same old ways everyone else is stuck in, yet it can do better.

Solutions?

One of the solutions being pondered is forking the Linux kernel into a kernel which would specifically be aimed at desktop users and their needs. It could be run in a way which would allow for seamless two-way communication between desktop users and kernel developers, because they would share many of the same interests in this project. This means that an ordinary user would be able to leave a bug report or a suggestion without being flamed as a newbie who doesn't know anything, but rather be rewarded with contribution points.

Now imagine the implications of having a desktop kernel.

Popular distributions such as Ubuntu would likely switch to this new kernel without diminishing their popularity. The new kernel would allow a lot of the lost performance to be regained, allowing us to reclaim the title of the fastest OS around. However, applications would have to follow this trend in order for the performance gains to be felt more dramatically. It is possible that a project of forking a kernel for the desktop, with a focus on performance, would also initiate a similar movement in the rest of the Free Software world. We can call it a hint that would affect the way developers think about their software, encouraging them to pay more attention to performance issues on the desktop.

This would also likely blur the lines between what is or isn't considered to be "Linux" or "GNU/Linux", especially if the new kernel doesn't carry a name involving "Linux", possibly even throwing the whole definition of an "OS" into limbo, perhaps making it irrelevant. Above all else, it would just be a Free Software OS where distributions use whichever kernel suits their users the most. The reason this would happen is the discussion that, for example, Ubuntu switching to a new kernel would spark over what we should now call the new OS: "Linux", the name of the new kernel, just "Ubuntu", or perhaps "GNU". The reason I believe this would blur the lines is simply that I doubt a proper consensus on this would ever be reached. It will likely just end up being called Ubuntu, which is merely a distribution, leaving the OS simply unnamed - a Free Software OS.

Therefore the benefits are twofold. There is the technical benefit of regained performance and moving away from the bloat of the rest of the computing world, truly innovating on the desktop rather than making a one-size-fits-all operating system. And there is the benefit of blurring the lines between operating systems based on what kernel they run, which would encourage further diversity and open-mindedness towards different platforms. Most often we would identify an OS only by the name of the given distribution (Ubuntu, Gentoo, Nexenta, Haiku, etc.).

Free Software is about restoring the freedom of computer users. But now that we have it, why not take things to the next level? Let's make a desktop operating system which will be truly irresistible, then conquer the desktop and make computers really fun again.

Thanks

Edit: There have been some comments by people who seem to have understood this article as saying that GNU/Linux sucks and is not ready for the desktop, and who then go on to point out the obvious advantages that GNU/Linux has compared to other platforms, including the advantage of performance. However, if you reread the article you will see that I in no way argued that GNU/Linux is not better or that it is not ready for the desktop. Saying that it sucks less than anything else does mean that it is the best thing around. My point was simply that things can be even better. We can be even better than merely better than anyone else. We can leave them trailing us hopelessly and obviously. Why else would I call this a "revolution" in computing? Because it is not just about beating other platforms, it is about beating ourselves. Why does merely "being better" have to be enough?

That said, yes, I do believe GNU/Linux already is ready for the desktop. It's more ready than anything else. It deserves 90% of the market share right now if you ask me. However, it is hard to win that market share without going even further than people would usually expect us to go.

Good enough is not enough. Being the best is not enough.

Comments

Great Article

You make really good points all around.

What's the point of having all kinds of stuff going on in the background that only mainframes can do well, that I will never use and won't miss if I don't have it? Why should I have to resort to a 'light' distro to make my P3 450MHz do jumping jacks again? :-)

Scott

I don't know if forking the

 

I don't know if forking the Linux kernel would be such a good idea. It would either end up being incompatible, or in a race adjusting patches for the latest vanilla kernel so that your own kernel stays (more or less) performant.
Ending up in incompatibility could bring a lot of problems; many developers are used to the way Linux handles things, therefore I don't see many developers switching.
If the ABI and API stay the same, you cannot just take away the name Linux; it'll still be Linux, just with a bunch of patches for internal workings.
Even if you fork and don't care about staying compatible it would, at least for quite some time, still be Linux with a bunch of patches.

Also I would like to mention that the kernel does not stand still in terms of performance. Of course, CK has contributed great patches to the community, but we should not forget that performance has always increased in the vanilla tree as well. It makes a big difference whether you run 2.2, 2.4 or 2.6.
My first kernel was 2.2.10 on SuSE 6.2. I'm sort of an early switcher who likes to try things as long as they're in a state that can be expected to be mostly usable (meaning things like KDE RCs, but not KDE betas; not that I dislike betas in general, I just picked KDE as an example). So as soon as Linux 2.4 was there, I had it, and it felt great. The same with 2.6: as soon as it was there, I had it, and again, it felt great. By great I don't only mean the feeling of having the cool, new stuff, but also that you really feel that something has changed for the better.

Some people say that 2.6 is

Some people say that 2.6 is actually a bit slower, but I can't really testify to this myself. At least some of the reasons for forking a kernel are not so much technical as organizational. It would allow for establishing a communication channel between developers and desktop users which doesn't exist today. We need developers who actually care enough about desktop users and their needs for this sort of thing to work. So the question is, does the current Linux have enough of those?

But I see that some of the issues you mention could be a problem. I suppose that, overall, forking would make sense only if enough developers join in for them to be able to maintain compatibility or to move away from the original Linux fast enough. But then, isn't a fork always quite similar to the original anyway? It doesn't have to be so different at once, but it would be taking a different direction, free of the Linux mainline.

Anyway, there may be alternative solutions. A desktop kernel could be based on Solaris or maybe even a more exotic non-Unix kernel such as the one from Haiku. I have even thought of possibly diverting more energy to HURD, accelerating its development and making it focused on desktop performance, although I have my doubts about how to establish proper communication between its current developers and the public on desktops.

Anyway, it's a discussion starter, so at least I hope it will get more people thinking about the issue, because a Free Software OS can do even better. Thanks to Con Kolivas we shouldn't be thinking just in terms of being good enough, or as good as, or only a bit better than Windows and Mac OS X. We should leave them trailing us for good. ;-)

Due to the distributed

Due to the distributed nature of Linux development (no central CVS server or something like that, but truly distributed git), a fork should be nothing more than an alternative outlet for the whole pool of Linux development. Technically, every single Linux developer has "forked" Linux — all these "forks" are merged together in a tree-like fashion right up to Linus' "fork". So, one would collect interesting changes together in one's own repository, try to get Linus, GregKH or somebody to merge it or cherry-pick parts of it, and publish the whole thing just as Linus publishes his tree.
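To illustrate, here is a rough sketch of what that workflow could look like with plain git commands. The repository URLs, the remote name and the commit ID below are placeholders rather than real trees; only the git commands themselves are standard.

    # Start from a clone of the mainline tree (URL illustrative for the 2.6 era).
    git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
    cd linux-2.6
    git checkout -b desktop

    # Pull interesting desktop-oriented changes from someone else's published tree.
    git remote add ck git://example.org/ck/linux-2.6.git      # hypothetical remote
    git fetch ck
    git cherry-pick <commit-id-of-a-desktop-patch>             # placeholder commit

    # Publish the result so others (or Linus) can merge or cherry-pick from it.
    git push ssh://example.org/srv/git/desktop-linux.git desktop

The "fork" is then nothing more than one more published branch in the same code-sharing pool, which is exactly the point made above.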

So basically a forking

So basically forking Linux could be a matter of picking one of these existing "forks", or creating a new one suited specifically for desktop needs, and then making that the main tree, right?

I think that kind of system only makes forking easier. The point in forking is largely in taking a direction different from the one in which Linux, under the leadership of Linus, currently goes, allowing it to focus more on the desktop.

One project that also helps

 

One project that also helps Linux run better on the common desktop, but not only there, is the one I've mentioned in my blog: the Free Linux-driver development project.
Of course this is not about performance, but about another very important point: driver support.

I still hope that CK maybe reconsiders, maybe not now, but at some other point, and continues contributing.
I have read a little about this case recently, but have forgotten most of it again. I'll have to see if I can find a few more articles and get the information again.

Are you serious?

 

This has got to be the dumbest idea ever with Linux involved. The only other two things that come to mind as even worse ideas were MS Bob and the forking of the Unix kernel. We all know how well the Unix forks and Bob went, don't we? There is no need to fix what isn't broken, just improve on the existing, which is done with every kernel release. Keeping the kernel whole and unforked and uniform is the only way to gain market share on the desktop and the server.

Although I also think that

 

Although I also think that forking might not be the best way, as explained above, I would prefer you to give some reasons for that, instead of just pointing at other stuff that, like Bob, is not really related.
Also I think that at least some Unix forks didn't do so badly; just look at the BSDs, still in active development and pretty widespread. Solaris isn't doing so badly either.

if you read my post above

If you read my post above carefully, you'll see that forking Linux wouldn't have to make it any less unforked and uniform. It'd be all about marketing a specific developer's Linux version which participates in the Linux git code-sharing ecosystem, as anyone else can already do. (The "usual" type of fork involving two repositories that eventually diverge strongly would indeed be VERY undesirable.)

Is there a "why YOU should be using MY tree instead of Linus'" wiki out there ? ;-)

"Keeping the kernel whole

 
Alphaman wrote:

Keeping the kernel whole and unforked and uniform is the only way to gain market share on the desktop and the server.

I think that in and of itself is enough of a reason. It's bad enough that most regular Joes who are toying with the thought of switching to Linux get put off by the 300 active distros. Now could you imagine the 300 active distros PLUS different kernel variations? Talk about a cluster f*ck. It also wouldn't help in making CIOs' decisions any easier, especially if they are on the fence. As for the BSDs and Solaris, that's FOUR out of how many? Let's start.

OSF/1
Tru64
HP-UX
AIX
MINIX
DG/UX
Irix
SCO Unixware
Unisys
Xenix
Ultrix
QNX
Dynix
Acorn RISCiX
Concentrix 4.1
UTS 2.4 and UTSV 5.2.6b
DomainOS SR10.0 and 10.4
A/UX
MachTen 2.1.1.D
MiNT
MINIX ST 1.5
BOS/X
UNOS
Convex/OS
DNIX
UMax
FPX
TOS

Those are the failures I can think of off the top of my head. I am sure there are WAY more to list, but I got tired of thinking. Point is, it is a plain old BAD IDEA. Period.

Wow, great idea!

 

So Linux is bloated, does not communicate properly and is not innovative. I can understand that somebody gets this impression, but how does forking help?

Do you really think that all of a sudden all the bloat will just magically melt away? It won't. It will be a huge amount of work to get the fork debloated, and you will see heated discussions: Is, e.g., device-mapper bloat? Does a desktop user really need LVM, encrypted partitions, etc.? And do not forget that today's mainframes are tomorrow's desktop systems: Where would dual-core support be today without the SMP work done for the "mainframes" in the '90s?

Where is the innovation going to come from all of a sudden? Splitting up a developer community does not automatically do that! You need to draw lots of new developers to get one or two with some fresh ideas... I doubt that this will happen without some really overpowering personality as a core developer. Where are you going to take that one from? Are you volunteering? I doubt that you are the right person for that job (both technically and PR-wise).

That leaves the communication issue. I doubt that a fork will do much good there. Most users just do not care about the kernel: It is just something that works (or sometimes does not). Why do you think that perception will change all of a sudden? Desktop users will not happily jump onto the fork-bandwagon and turn into kernel hackers overnight, being innovative.

After forking you are going to do a lot of work to keep up with the original you forked from: You want to get the good ideas, the drivers, etc. into your fork. So you have to stay pretty close to the original or spend a huge amount of time adapting patches so that they will work on your fork. Where do you take the time for your innovations from then?

Forking is not worth the effort. Improving the communication with the kernel developers is. We would all be better off with more people like CK (just in case you are reading this: thanks for the work you did, sad to see you leave for greener pastures), mediating between the users and with the technical understanding to interface their desires to the kernel hackers. Developers are a special bunch of people... they just are not compatible with Joe User.

Best Regards,
Karl

servers vs desktop

 

"MidnightBSD was forked from FreeBSD 6.1 beta. [...] With MidnightBSD, we wish to focus on optimization and usability improvements for desktop users. The FreeBSD project has developed a reliable server operating environment, but often usability and performance on the desktop is overlooked. Scheduling, allocation of resources, security settings, and available application support should be tailored to desktop users. Many of the BSD projects are tailored to servers or older hardware. Others are distributions of FreeBSD with a nice graphical user interface, but still suffer from server centric design under the hood. We did not fork FreeBSD as a result of a falling out, but rather as an excellent starting point. It should be viewed as a compliment to the FreeBSD developers who have worked very hard on FreeBSD 5.x and 6.x." http://midnightbsd.org/about/index.html

Sounds awfully similar as an idea: separating a server OS from a desktop OS. It's the same as mainstream Windows vs. their server line (2003 and the new one coming with Vista), and also Mac OS X vs. their own Mac OS X Server. And the desktop and server versions are not incompatible with each other. If it works for them, it might work for Linux too: a Linux more Solaris-like for the servers and one more like Mac OS X and plain XP (or Vista) for the desktop, maybe?

The problem is choice and authority not the kernel

 

The problem for desktop Linux is not the kernel, it is all the choices involved in using Linux. We need a single distribution that drives those choices, similar to what the OLPC people have done. Nobody in the Linux world dares to make choices because of all the discussions that follow. Define some goals, design the system according to them, and have people with the authority to make the necessary choices. If the kernel is too bloated, configure it and remove the unneeded features.

Another problem we need to address is the underlying assumption of UNIX: that there is a technically capable system administrator who can configure and administer the system.

You know, I just don't buy

You know, I just don't buy the "too many to choose from" argument anymore. It's a dead argument if you ask me. If people don't wanna choose then they can let someone else choose for them, and there are plenty of people to go around who would do that for them, from local geeks to companies specifically aiming at the undecided crowd looking for the best "default" option. I think Ubuntu is playing that role quite well, as it's increasingly becoming obvious that if people don't know which distro to go with, they'll likely just get Ubuntu and start there.

So why is choice such a big problem?

And besides, how is forking a kernel really gonna affect desktop users? They barely even know what a kernel is. All they hear about these days is Ubuntu, Mandriva, SuSE and whatnot, and also about the "Linux" thing, though many barely know what the heck that "Linux" thing actually represents. Not everyone even agrees that this "Linux" thing is "Linux", because it might as well be "GNU/Linux".

Which is why I mentioned the "kill an OS" part. Just do away with the whole notion of there being this boundary between one system and another based on which kernel it runs, as if that really matters to end users. In the Free Software world, what runs on GNU/Linux will likely run on any other *nix, and even further afield (because there is the source code), making what were once considered solid lines dividing various systems largely irrelevant. It is all just Free Software.

So if forking a kernel is a way for a certain segment of Free Software developers, advocates and companies to strengthen their focus on the desktop market, making it possible for them to innovate and evolve on the desktop faster than anyone else, then why not just go and do it? All this boo-hooing about supposed divisions, fragmentation etc. is just an illusion stemming from an old way of thinking about computing, in terms of boundaries which are not really relevant anymore. Why? Because of the very free nature of Free Software.

These may have been relevant for UNIX and they may be relevant for other proprietary platforms, but that's exactly because they were *proprietary*.

It's time to stop being so damn afraid of forking. Forking is what made Free Software what it is today. Free Software is largely exactly about the freedom to fork, or in other words, freedom to innovate *your way* for *your needs* if you believe that to be better for you.

And there is absolutely nothing wrong with that.

Welcome aboard Alphaman.

Forking a kernel is an

Forking a kernel is an idea, albeit a provocative one I would admit, but that makes it a very good discussion starter, which is really the point of this article. CK made certain points I believe should not go unnoticed, and I've wrapped them up and fired off a suggestion that some might find appalling and others perhaps brilliant, but at least we're talking about something other than the status quo, about potential alternative ways forward and also other ways of thinking.

So maybe a fork is not the best thing to do, but maybe this discussion results in more emphasis being put on the points that CK made and on doing at least *something* about them.

The problem is userspace.

 

All this blame on the kernel is totally unfounded. The Linux kernel is a high-performance kernel which is as good as or better than any. But in glibc and other crucial userspace libraries, portability over performance is the main objective (IMHO). Performance has only recently become a priority in gcc/glibc, and even then mostly on x86-class hardware.

The CK patchset was great, but part of what Con is talking about in the interview, "not being able to measure and quantify with certainty", is because of the userspace libs. One can only add so much to the kernel in terms of performance without breaking everything. If performance improvements have to happen, start with glibc and Xlib.

I agree

 

I have been thinking the same thing for some time. Don't get me wrong, I love thinking that our kernel runs everything from mainframes to cell phones, but this is just too much for one OS, or kernel as it were. Yes, each distro does build its own special kernel, but those customizations are at a higher level than they really need to be.

I have read on Slashdot and elsewhere some criticism of Con for his actions here. I ask: just how long do people expect a man who is probably the best advocate for us desktop users to keep banging his head against the wall?

Honestly, what should happen is that Canonical or Fedora should pick up other really good hackers like Con and pay them full time. The server side has this already, as anyone knows. What we do not have is commercial viability for desktop kernel hacking.

We do not have it and the kernel will regress until we do.

Brotherred

faster yet slower

 

It is galling that despite computers getting faster and faster, the software seems to get slower and slower. I've also noticed that often dual core makes little difference (especially under Windows XP, more so than Linux), where software still hangs waiting while an optical disc is loaded despite the second CPU.

It makes perfect sense about the influence of the server marketplace on Linux. So many of the new features introduced into the kernel have a corporate feel to them. One good example is virtualisation. How many ordinary desktop users are really likely to want this feature? And now there are three or four versions of it. Also the amount of work that has gone into enabling support and performance with 16 or more CPUs - again, not a desktop profile.

Perhaps it is time for some group to fork the kernel and domesticate it. Get rid of the unwanted enterprise bloat, and make a lean and mean desktop real-time kernel with features that appeal to the mainstream and much more numerous users.

After all, the enterprise people have lots of dough and can afford to patronise Microsoft.

Although I understand the

 

Although I understand the points, I prefer staying on the side supporting the current model of "one kernel for everybody".
As said before, the kernel has seen big improvements over the past years, not only in terms of features nobody needs for the next 5 or 10 years, but also in terms of performance.
Also, if you configure the kernel yourself - I know all the options are intimidating, but it's really not that hard (okay, probably still too hard for Joe User) - you can get a nice slim kernel that only has what you need, and not a lot of other stuff you don't need. Also, building everything statically into the kernel can speed things up a little, at least at boot time.
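As a rough illustration of what such a slimmed-down build involves, here is a minimal sketch; the paths are illustrative, the localmodconfig target only exists on newer kernels, and the choice of what to disable depends entirely on your machine, so treat this as an outline rather than a recipe.

    # Start from the distro's known-good configuration instead of from scratch.
    cd /usr/src/linux
    cp /boot/config-$(uname -r) .config
    make oldconfig

    # On newer kernels this drops every module not currently loaded,
    # which removes most server-oriented drivers in one step.
    make localmodconfig

    # Then switch off big subsystems a desktop box never uses
    # (e.g. LVM/device-mapper, multi-node NUMA support) and build.
    make menuconfig
    make && make modules_install && make install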

Another thing is that boot time varies from distro to distro. I can boot EasyLFS in QEmu on my notebook and can log in after 20 seconds, from LILO to login, including startup of LVM, SSH and HAL.
Fedora 7, which is installed on my notebook, so with no QEmu in between to slow things down, takes one minute and ten seconds from GRUB to login.
Linux from Scratch on my PC presents the login screen after around 35 seconds, and that is a much more complete system than the EasyLFS installation I have running in QEmu.

It depends a lot on what distros do, and what kind of stuff they want to fire up at boot time. I think you can easily see the difference between the distros in the previous paragraph.
Fedora and my Linux from Scratch are both fully usable systems, including graphical login and everything I need for my daily work and fun. EasyLFS is a pretty naked development image in QEmu, but although it runs in an emulated environment it's ready for login after about 20 seconds. Imagine how fast it could be if running directly on the CPU!

So, with that mentioned, maybe we should rather think about something that's far easier than deslacking the kernel: deslacking distros to get better boot times and better responsiveness.

That is pretty shortsighted...

 

You can go out and buy 4 CPUs in one little package right now (quad-core CPUs). So how long do you think it will take for 16-core boxes to become available at the hardware store next to you?

How can you get some use out of all those cores? The only way I see is by running virtual machines on them! Virtualisation is the big thing in commercial setups, because of this... just wait a year or two for that realisation to drift down to the home users.

Starting to develop that kind of feature once users are crying for it will be too late.

 

I very much like this article. It brings up something very important: quality as usability. Linux distros have many fine things about them, but there are only a few that come ready to rock as full multimedia desktop OSes without the infamous "post-install attentions." Now, personally, when a user has to do extensive post-install activity to get things going, I feel that is a bad thing. Sabayon Linux is a fine example of a distro not needing the user to do anything after installation except enjoy it.
.
Another issue is upgrading. Too often upgrading the system amounts to reinstallation with the /home partition kept separate and untouched. I have often experienced GRUB being smacked down by kernel-related upgrades through the built-in upgrade features, be it apt, Adept, or Synaptic: that is why I stay with Aptitude.
.
Autodetection was "invented," I believe, with Knoppix, but autodetection needs to be pushed more aggressively: why does the user not have install choices that include kernel optimization choices, just as he might choose KDE, GNOME or Xfce? To push the idea further, Linux should be able to have the kernel build fully automated with no user intervention.
.
Also, hardware detection capabilities should be a modular and expandable utility so that a running Linux can have its autodetection module updated like any other package; that way the system can truly update itself without the need for reinstallation. Linux distros need to achieve an install where the system is able to keep itself fully up-to-date, optimized, and able to auto-reconfigure itself on the fly as per hardware requirements and capabilities. The need for distros to keep releasing new ISOs as the primary way to make the next version available should be a sure sign that the updating process is unacceptable.
.
Another important area of usability is for the user to be able to identify (tag) software for upgrading, because the current way of upgrading everything is too high-risk. I have done upgrades to a bunch of things and then added a few hundred packages, and my Linux craps out. So the Debian apt updating is not bullet-proof -- hence it needs to be oriented towards being fully user-controlled, with the user easily able to select a subset of packages that the system should monitor for upgrading.
.
Performance is a great idea, Vista's window manager makes me shudder, but that is only one aspect or type of resource waste. The user base does drive development; the user base is not the enemy of the developer -- you made this point very well.
I will stop here... now, this may all seem like pie in the sky, so I will just say that some of the things I talked about are currently happening on OS X.
.
OS X, after a single install, can be upgraded - including the kernel - and be kept fully up-to-date. And after a year of poking around, I can't find maintenance chores to do on this system. It does not require me to do anything to keep it up-to-date and pumped.
.
For the record, I have Debian Linux, Windows XP and OS X boxes on a home network, with my primary workstation being the Linux box... the wife and kids live on the OS X box: whenever I use OS X I feel like I am having an affair.

Thoughts

 
albertfx wrote:

To push the idea further, Linux should be able to have the kernel build fully automated with no user intervention.

This actually is a tough one. I tried this for EasyLFS, my own little distro, and so far I haven't found a way that really makes me happy. For now I have a pre-configuration that is based on a standard setup I constructed, plus patches based on the choice of software; for example, when you choose to install dmraid you'll also have device-mapper support in the kernel.
The user then, more or less, only has to select the proper CPU and drivers for drive access and things like that (NICs, sound, etc.). But many, or most, of the "overwhelming" choices - all those options most people have no idea what they do until they read the docs and stuff on the net - are already done by my pre-configuration.
But anyway, I wouldn't claim my project to be user-friendly; it's a source distro, so I just try not to make it unnecessarily complicated. ;-)

As for most distros, usually the kernel comes with the full set of modules, and by now these usually get loaded quite reliably, so manual modprobing should not be necessary anymore, except for a few exceptions like KQEmu or NDISWrapper.

albertfx wrote:

Also, hardware detection capabilities should be a modular and expandable utility so that a running Linux can have its autodetection module updated like any other package; that way the system can truly update itself without the need for reinstallation. Linux distros need to achieve an install where the system is able to keep itself fully up-to-date, optimized, and able to auto-reconfigure itself on the fly as per hardware requirements and capabilities.

This, I guess, is where UDev and HAL are destined to go. They already work quite nicely: UDev can create and remove devices on demand, even with really useful names, like /dev/mp3stick, and HAL can integrate these devices throughout the system, for example by creating a mount icon on your KDE desktop.
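For anyone curious what such a rule looks like, here is a tiny sketch; the file name and the vendor/product IDs are made up, and the matching attributes would have to be taken from your actual device.

    # A one-line rule, written (as root) to an illustrative path; values are placeholders.
    cat > /etc/udev/rules.d/99-mp3stick.rules << 'EOF'
    # Give this particular USB player's storage device a stable /dev/mp3stick name.
    SUBSYSTEM=="block", ATTRS{idVendor}=="1234", ATTRS{idProduct}=="5678", SYMLINK+="mp3stick"
    EOF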

 

An interesting, but drastic, idea.

I would like to suggest the concerns raised might be addressable if you borrowed an idea from KDE: the kernel more than any other program deserves a "Kontrol Center." This would give two things:

1 - give support by an intelligent/easy-to-use configuration panel
2 - and bring the kernel into a point-click-use relationship to the user

1.1 - the menu choices around kernel building need a multi-view approach that must include support for auto-identification of hardware as much as possible. Currently the kernel menu is organized by hardware elements, but it needs to be supplemented by a profile perspective so that users could select a profile (desktop) and then optionally tweak the hardware pieces if they wished

2.1 - the build phase must be a single step that ties together building, installing and setting up in the boot menu (e.g., GRUB) for use, just like every other program the user enjoys.

In summary, the kernel is outside the desktop paradigm from a user perspective, in that it is not handled in the same way as any other program on the desktop.

In closing we need to remember that the desktop is above all else a paradigm of ease-of-use, and the Linux kernel is the last holdout. So I suggest that the kernel be brought into relationship with the user like all other packages in the desktop environment first.

A user friendly control

A user-friendly control panel is a really cool idea. I wonder if it has ever been pushed elsewhere before...

Somewhat off-topic, it's kinda funny that some of the things I argued for in this article might be incompatible with what I argue in my latest article, where I say "Linux is not an OS" and that specific distributions are OSes, rather than completely killing off the concept of an OS...

But this is an article from 2007 and I've changed very significantly since then. :-P

 

Sorry Albert, but I guess you are asking for the impossible. From your post I can take it that you have had a look into the kernel configuration, but I honestly do not see how this could be made even remotely easy to use or understand for the average user.
Also, I don't see a need for this. Look at modern distros and you'll see what I mean. I know how to compile a kernel for my machine that supports everything I need and want, but I haven't done this for my system since I switched to Fedora. This is simply because the kernel Fedora ships works perfectly fine for me. Most users probably have the same experience, thus there is no need to tinker with the kernel.
Furthermore, I'd say that it's outright dangerous to let an average user play around with his kernel. He's most likely to render his machine unbootable because he forgets a lot of stuff.

Hardware detection also is no silver bullet. What about hardware you want to configure to be supported but which is not connected at the time of configuration, like USB devices?
What about all the other options the kernel offers? How is a regular user supposed to make an educated choice about what he needs and what is safe for him to use?

I'd rather say keep the kernel as far from the common user as possible. Nowadays it's usually not necessary anymore for the user to build his own kernel to get something to work. With SuSE 6.2, the distro I started with back in 1999, that was different: I had to compile my own kernel to get my sound card running.

So, in conclusion I'd vote "No" for your idea, for the reasons I explained above. May everything else be easy to use and stuff, but keep the users away from the kernel.

Good points reptiler..

Good points reptiler.. Reminds me of when I recompiled my kernel in Debian about five times and couldn't get what I wanted (though back then it could've been simple lack of support in old Debian stable for the JMicron controller in my new Gigabyte board). In any case, it's very easy to miss something crucial, plus it's a bit time-consuming. It sure isn't something an average user would wanna fiddle with.

 

Kernel (re)compiling is still needed even today, sadly. In 2006, I had to recompile my kernel on Slackware because the vanilla kernel did not have the proper ACPI modules enabled. What made it worse was that it HAD to be compiled directly into the kernel and could NOT be an external module. And all of this just to get my laptop to display the information for both batteries.
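For readers who haven't done this, the rebuild described above boils down to something like the sketch below. CONFIG_ACPI_BATTERY is the mainline ACPI battery driver switch, but the source path, the exact menu location and the bootloader step vary by kernel version and distro, so treat it as an outline.

    cd /usr/src/linux
    make menuconfig                     # under Power management / ACPI, set the Battery driver to <*> (built-in)
    grep CONFIG_ACPI_BATTERY .config    # should now read CONFIG_ACPI_BATTERY=y, not =m
    make && make modules_install && make install    # on Slackware, also rerun lilo if needed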

This was just 3 years ago and sadly, this