
Self-contained packaging

libervisco's picture
Offline
Joined: 2006-05-04

I am no longer much of a believer in central package repositories and traditional (RPM- and APT-like) package management systems. I also dislike the insistence on keeping libraries, binaries etc. separated and scattered all over the system.

Today, most GNU/Linux distributions actually treat the whole software repository as an operating system. As the operating system moves, all 15,000 packages in the corresponding repository move with it. If you upgrade from Debian Stable to Debian Testing, for instance, or from Ubuntu Gutsy to Ubuntu Hardy, it will not be only the basic system that gets upgraded (kernel, GNU userland, desktop environment), but all applications as well. And that's the thing I'm trying to point out: on GNU/Linux there doesn't seem to be a practical separation between an "application" and the "OS" it runs on. Instead they're treated as if they were the same thing.

While this may seem like a good and convenient thing, I beg to differ at this point. I suppose it's great that you get all your applications updated automatically, but why not have a choice in the matter (one that doesn't involve some command-line sorcery)? And what are the other disadvantages of this approach? What if you want two versions of the same application, one pretty old and another brand new? On GNU/Linux as it currently is, this doesn't seem to be a recommended thing to do, unless you're an advanced user of course, which is missing the point.

Why not do what the successful, albeit proprietary, operating systems seem to be doing? It seems to be working quite well for them. Instead of "debianizing", "fedoranizing", "susenizing" etc. all software into distro-specific packages, which only ends up adding new bugs and prolonging the time before a stable yet brand-new package reaches the system, why not go completely decentralized and build packages which are neither .deb nor .rpm, which *don't* do any dependency resolution, but come with all dependencies built right into the package? As far as I'm concerned the package could be a mere tar.gz containing the binaries, which you can just drop anywhere you want and run the executable. A step up from there would be making it akin to setup.exe on Windows: double-click, select location, next, next and you're done.
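
Concretely, the whole "installation" under such a scheme could be as small as this (a minimal sketch; the application name and version are made up, and /programs is the directory convention suggested further down):

```sh
# Create the target directory once (may need root for a system-wide /programs).
mkdir -p /programs
# Unpack the self-contained application; it bundles its own libraries.
tar xzf someapp-2.1-linux.tar.gz -C /programs
# Run it directly; no dependency resolution, no package database.
/programs/someapp-2.1/someapp
```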

Some games are already distributed on GNU/Linux this way, through .run or .bin files or just archives as I described above. On Windows the equivalent is setup.exe, and on Mac OS X I think it is .dmg, though I hear you can just drag and drop stuff to install. On PC-BSD they are .pbi packages.

Why is this not done more on GNU/Linux? Seriously, if people are worried about losing the advantages of what we have today, I think it's quite an irrational worry. You could still have repositories and programs like Synaptic which let you fetch whatever you want from one place and install it automatically to a preset location. It wouldn't even need to do any dependency checking, just download and decompress to something like a /programs directory.

All you need is a stable core system that runs everything that is programmed for GNU/Linux and a /programs directory for your software.

What do you think?

a thing's picture
Offline
Joined: 2005-12-20
duplication

The problem with that approach is massive bloat. If I want to have both Kaffeine and Amarok on the same system, then I would need to have two copies of KDE.

libervisco's picture
Offline
Joined: 2006-05-04
In cases like those maybe

In cases like those, maybe instead of including the whole of KDE in an Amarok or Kaffeine package, you could just include a script that checks for it in all possible paths (the traditional ones and ones like /programs) and, if it doesn't find it, downloads and installs it to /programs.
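
Something like this launcher, as a rough sketch (all paths, library names and the download step are illustrative, not taken from any real package):

```sh
#!/bin/sh
# Launcher sketch: look for the KDE libraries in the usual places,
# fetch a private copy into /programs if they're absent, then start the app.
KDE_FOUND=no
for dir in /usr/lib /usr/local/lib /opt/kde/lib /programs/kde/lib; do
    if [ -e "$dir/libkdecore.so.4" ]; then   # illustrative KDE3-era library name
        KDE_FOUND=yes
        break
    fi
done
if [ "$KDE_FOUND" = no ]; then
    echo "KDE libraries not found; fetching a private copy into /programs..."
    # hypothetical: wget http://example.org/kde-runtime.tar.gz
    #               tar xzf kde-runtime.tar.gz -C /programs
fi
exec /programs/amarok/bin/amarok "$@"   # hypothetical install location
```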

Although, speaking of the separation between an operating system, applications and libraries, a desktop environment can, in addition to the Linux kernel and GNU userland, be considered part of the OS. So a package that depends on a particular environment should state that it is made for an OS which uses KDE or GNOME.

It probably doesn't sound like an ideal option, but it might be possible to work something out. It can't be that this is the only way GNU/Linux will ever be, and innovation can't hurt. That said, there is a distro which does something similar, although they do it more as a way of easing things for people who compile everything from source. The distro is GoboLinux. They put everything in its own folder, and in order to maintain compatibility with applications relying on traditional paths they use symlinks. It's quite interesting.

In comments to a review of GoboLinux someone suggested what I was thinking of after writing the last couple of topics here - basically taking the best of both worlds. As original software is usually distributed as source tarballs, I could take them, compile them and package the result as a binary tarball, then upload those to a subsite of nuxified.com. So even if we can't make all software self-contained without the issue you mentioned, there is still plenty of software we could make available this way, and therefore make it a lot easier for users of almost all GNU/Linux distros to install the latest software without touching their main system.
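
For the record, producing such a binary tarball from an upstream source release could look roughly like this (the package name, version and the /programs convention are illustrative):

```sh
# Build the software with its final /programs location baked in.
tar xzf foo-1.2.tar.gz
cd foo-1.2
./configure --prefix=/programs/foo-1.2
make
# Install into a staging directory instead of the live system.
make DESTDIR="$PWD/pkgroot" install
# Package the staged tree; users extract it with: tar xzf ... -C /programs
tar czf foo-1.2-linux-i386.tar.gz -C pkgroot/programs foo-1.2
```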

We could also include scripts to install bundles of such software. It's almost like doing a GNU/Linux-universal applications distro. Smiling And if people like it, this may become a new trend and perhaps we would be seeing more and more truly distro-universal binary tarballs (probably even original developers would start releasing those).

Cheers

Offline
Joined: 2006-03-28
I think this could be

I think this could be possible by consistently linking all packages statically, and not using any dynamic linking at all.
The advantage would probably be what you want: packages are independent of other stuff that may or may not be installed.
The disadvantage is that it will blow up package sizes. As a thing mentioned, you would end up having the same code (maybe in the same version, maybe in different versions, depending on the environments the packages were created in) in binary form multiple times on your hard drive. Packages using big libraries, like Qt or the KDE libs, will grow quite a lot, as they have to include lots of stuff in the binary itself.
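
To illustrate the size trade-off with a trivial C program (a hello.c is assumed here; the exact sizes vary from system to system):

```sh
gcc -o hello-dynamic hello.c          # linked against the shared C library
gcc -static -o hello-static hello.c   # copies the needed libc code into the binary
ls -lh hello-dynamic hello-static     # the static binary is typically many times larger
ldd hello-dynamic                     # lists the shared libraries it depends on
ldd hello-static                      # reports "not a dynamic executable"
```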

As said, I don't think it's impossible, but I don't think it really makes sense, even though hard disks are growing and prices are dropping.

The problem with upgrading from one version of a distro to the next is that essential parts of the system itself often change.
An example: Fedora 8 used glibc 2.7, which together with the kernel is the most essential piece of software you can find in any Linux system. Fedora 9 uses the not-even-released glibc 2.8.
I have personally had the experience that if you upgrade your glibc and don't recompile your userland, you are likely heading towards an unbootable system.
I tried this once, on SuSE 6.2. I think it had glibc 2.2 or so, and I tried to update to 2.4 (I think). I just compiled glibc, installed it over the current one, rebooted, and nothing worked.
Changes in newer glibc versions may not be as radical as they were at that time, but I still think it's not a good idea to do this (I may actually try it with an EasyLFS installation if I find the time: upgrade EasyLFS 0.4 from glibc 2.7 to 2.8 and see what happens).
Thus, if you upgrade parts as essential as that, and everything links against glibc, you have to upgrade your userspace too.
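
Two quick checks are relevant here: what glibc the system provides, and which versioned glibc symbols a given binary actually requires (the binary path is just an example):

```sh
getconf GNU_LIBC_VERSION    # e.g. "glibc 2.7"
# List the GLIBC_x.y symbol versions a binary was linked against:
objdump -T /usr/bin/some-app | grep -o 'GLIBC_[0-9.]*' | sort -u
```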

libervisco's picture
Offline
Joined: 2006-05-04
About blowing up package

About blowing up package sizes, true enough, but maybe then the script I mentioned in an earlier post would do, at least for software whose dependencies are too big.

Good point though, about changes in the core system requiring changes in binaries... I suppose the issue with GNU/Linux is that it simply moves too fast. With distros like Fedora and Ubuntu we're getting essentially a whole new revision of the OS every six months, whereas OS X gets a release every few years with only minor upgrades in between, thereby keeping compatibility with applications. And Windows I don't even have to mention (6 years for Vista to come around! Sticking out tongue ).

That's a tough one, honestly... In a sense, being so open and developing so fast is both a blessing and a curse. Still... there might be a way around it... hmm, I should probably take a deeper look at what exactly the GoboLinux guys do.

Offline
Joined: 2007-08-07
Last month I was watching

Last month I was watching FOSDEM 2008 recordings, and there's a session that talks about exactly what you're saying here.
The project is klik. It treats every "application" as an "image" with all the libraries and stuff included, so you can do whatever you like, such as having more than one version of the same application, etc.
I don't know whether it's the same as what GoboLinux does, but klik works across various distributions, so you don't need to switch to another system.
I haven't tried it, but I like the binary approach. I downloaded Songbird and Aptana Studio in the form of a compressed archive with binaries: just extract and run.
(By the way, IIRC installing klik requires mounting some stuff, so it will probably modify your fstab.)

a thing's picture
Offline
Joined: 2005-12-20
synthesis

I propose that installing isolated packages could be made easier with distro package managers (rpm, deb). This would be similar to the already existing --root option for rpm ("Use the file system tree rooted at DIRECTORY for all operations. Note that this means the database within DIRECTORY will be used for dependency checks and any scriptlet(s) (e.g. %post if installing, or %prep if building, a package) will be run after a chroot(2) to DIRECTORY." from the rpm man page). However, instead of using the database in DIRECTORY, the regular one would be used with specific exceptions. For example, if a release of Gnash only works with FFMpeg SVN before a certain date, FFMpeg would be the exception. The package manager would install a specific version of FFMpeg to DIRECTORY, followed by Gnash, and finishing by symlinking all of their dependencies. This could be useful not only in obscure cases, but also when not all programs have transitioned to a new API (like KDE3 to KDE4).
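
In rough command-line terms, with invented package file names, the workflow might look like this (the "regular database with exceptions" behaviour and the final symlinking are the parts that would be new):

```sh
# Prepare an isolated root with its own rpm database (today's rpm needs one).
mkdir -p /opt/gnash-root
rpm --root /opt/gnash-root --initdb
# Install the pinned dependency and the application into the isolated root;
# --nodeps stands in for the proposed "check against the regular database".
rpm --root /opt/gnash-root --nodeps -ivh ffmpeg-svn20080110-1.i386.rpm
rpm --root /opt/gnash-root --nodeps -ivh gnash-0.8.2-1.i386.rpm
# The proposed extension would then symlink the remaining dependencies
# from the regular system into /opt/gnash-root.
```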

Of course, if compiling from source, FFMpeg could just be statically linked.

libervisco's picture
Offline
Joined: 2006-05-04
Hassan, I forgot about

Hassan, I forgot about klik, but it's indeed a very interesting option and I'm eager to try it out. On sid I'm having some trouble with it, but then I don't really trust my local installation of Iceweasel 3 (RC2). It could also be that klik just doesn't support Firefox 3 yet.

a thing, that's an interesting idea, and I like it. So basically, when something different from what the main system (and its corresponding repositories) provides is required, install it to the local root; otherwise just install it to the main system and link it. And in order to be universal, the scripts contained in such packages would just have to check all the paths that distros commonly use, and also be able to somehow detect whether the distro's main package manager is apt, rpm/yum, pacman, or installpkg for Slackware's tgz's... if I understand correctly.

It'd probably work. The biggest point of potential failure might be all that detection work, but maybe it can be simplified in some way without ending up with distro-specific packages (which would defeat part of the point of doing this in the first place).
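
The detection step itself wouldn't have to be fancy; a minimal sketch (the distro coverage is obviously incomplete):

```sh
#!/bin/sh
# Detect which package manager this distro uses, falling back to the
# self-contained /programs approach when none is recognized.
if command -v apt-get >/dev/null 2>&1; then
    PKG_MANAGER=apt
elif command -v yum >/dev/null 2>&1; then
    PKG_MANAGER=yum
elif command -v pacman >/dev/null 2>&1; then
    PKG_MANAGER=pacman
elif command -v installpkg >/dev/null 2>&1; then
    PKG_MANAGER=installpkg
else
    PKG_MANAGER=none    # install self-contained into /programs instead
fi
echo "Detected package manager: $PKG_MANAGER"
```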

Anyway, I'm probably going to switch to Debian Lenny Beta 2, which was released recently. Since it's already in beta, Lenny should be in the process of becoming Debian Stable... and I intend to run that from that point on. I just hope I won't get baited by the new kernel, Xorg, desktop environments and such once Lenny becomes as old as Etch (which I tend to fall for)... But perhaps I could alleviate that by using backports and this new system we are discussing here, which I would be experimenting with.

The good thing about that system, though, is that it should work on whatever distro I'm on (though somehow I think I'll be sticking with the Debian universe for quite a while...).

Cheers

a thing's picture
Offline
Joined: 2005-12-20
one problem at a time
libervisco wrote:

And in order to be universal, the scripts contained in such packages would just have to check all the paths that distros commonly use, and also be able to somehow detect whether the distro's main package manager is apt, rpm/yum, pacman, or installpkg for Slackware's tgz's... if I understand correctly.

It'd probably work. The biggest point of potential failure might be all that detection work, but maybe it can be simplified in some way without ending up with distro-specific packages (which would defeat part of the point of doing this in the first place).

That latter paragraph describes one reason why there are different package managers in the first place. I think you are trying to tackle two issues at once (universal packages, and alternative packages on the same system). I meant that the functionality described in my previous post would be implemented in the package managers themselves (deb, rpm...).

libervisco's picture
Offline
Joined: 2006-05-04
I see, well that's not a

I see. Well, that's not a bad idea either, but then you still depend on that package manager for everything, and it's not really universal because not every package manager might have this implemented.

Anyway, I feel like I'm swimming against the stream with this. GNU/Linux moves too fast, leaving semi-geeks like me in a constant dilemma: go bleeding edge or stick with stable. No matter how much I want to find a middle ground, it keeps evading me. Debian seems to be the last sanctuary for me when it comes to GNU/Linux, but frankly it's pretty problematic itself, not to mention that Debian is the embodiment of this "stable vs. newest" dichotomy. And the thing supposedly in between, Debian Testing, can be even worse than Unstable, because bugs take longer to get fixed. So I find myself in no man's land.

I thought: I'll get Lenny going, it will become the next stable release, and then I'll compile my own packages of the new stuff I need. What would motivate me to spend time on that is if it were a project many others could benefit from, but considering the disadvantages pointed out here, I'm not sure about it anymore.

And so if I stick with stable, soon enough I'll have an outdated base, including desktop environments, which isn't easily and safely replaceable with backports or newly compiled packages. Did you know that Etch (the current Debian Stable) is using GNOME 2.16? That's considered ancient in the world of GNU/Linux. And as a GNU/Linux user running a related site, it's hard not to be exposed to the goodness of the latest desktop environments. On stable, though, it's a looong wait.

Bottom line: wherever I go it's not good, and I sometimes wish I could just pop out of the whole mess. Ubuntu is too bloated and doesn't lend itself to being stripped down. Debian Stable gets too outdated too quickly. Debian Testing and Debian Unstable are too buggy. Fedora and OpenSUSE are like Ubuntu with worse package management. Slackware, Arch and the like require making a second hobby out of setting up and maintaining them. Source-based distros are even worse in that respect. Frugalware, Zenwalk and similar "user friendly" Slackware- or Arch-based distros have far too few packages and too little support... So that's GNU/Linux "choice": 50 distros around one concept, 50 around another, etc. Soon you have 300 variations, none of which is what you really want.

Windows XP is worse, sure, and even Mac OS X too; you get only one choice there. But it's a bit of a paradox: GNU/Linux encourages you to choose and seek something better, until not even GNU/Linux itself is enough anymore and you realize that most of it is just the same old, same old. Most distros follow fundamentally the same blueprint, with the same fundamental disadvantages (as described in the original post), and meanwhile it is failing to conquer the desktop market quickly enough.

I no longer wonder why. It's a mess of "choice", and all you ultimately want to do is get your work done in a graceful manner, the way 21st-century operating systems and applications are supposed to work.

GNU/Linux supporters have a lot to learn from the Mac OS X approach. I still believe that holy grail is possible without non-free software. Ubuntu failed because they're too dependent on Debian's regime, too tolerant of bloat, and have tended to get sloppy in recent releases by letting buggy and unstable stuff in.

Offline
Joined: 2006-03-28
Distros
libervisco wrote:

Debian Stable gets too outdated too quickly.

Debian Stable, in my opinion, is actually already outdated on release. That's the problem with the Debian policy of well-tested packages.
The advantage is/should be stability and security, especially as those old versions are maintained and get security updates.

libervisco wrote:

Fedora and OpenSUSE are like Ubuntu with worse package management.

I cannot speak for (or against) Suse (although I have reasons for not using Suse, but those don't belong here right now), but the package management of Fedora is going to improve greatly with Fedora 10.
I have an installation of Fedora Rawhide in KVM, and there the package manager looks a lot better than it does in F9 ("looks", in this context, in terms of usability).

libervisco's picture
Offline
Joined: 2006-05-04
Considering I'm toying with

Considering I'm toying with Pardus and finding it quite impressive, I guess at this point I'm ready for anything, even switching to Fedora if package management improves as you say. But Fedora 10 (or shall it be Fedora X? Eye ) is only coming in October... that's quite a wait, and I have this funny desire to finally find my distro, although that just keeps evading me. I already have a rather efficient routine for switching between distros.

I tried Pardus and it is quite impressive. Out of the box you get a full-featured KDE desktop with quite a few nice touches. It also has its own package manager, including a GUI, which rivals Synaptic in usability (and is in some ways even better). They also have quite a friendly configuration tool that is slightly reminiscent of the one from Mandriva, only it seems faster...

They also have a lot of packages for a young distro (especially interesting is the number of games not usually included in other distros yet ready to go on Pardus), but some things are missing, like, for instance, the lm-sensors package. :S I don't know yet how they do updates, that is, whether I'd be stuck at the release versions or whether they upgrade some software along the way.

Anyway, it's quite interesting overall, and I seriously think it may soon rival Fedora and Ubuntu for desktop users. It's got all the ingredients and some extras, including the fact that it is completely unique yet works (it's not based on Red Hat or Debian like today's big ones are).

Offline
Joined: 2007-10-20
sorry im late to the party

Sorry I'm late to the party here, but here are my two cents.

There is a difference between a package and an application. In GNU/Linux they are treated the same, and this is the biggest problem with package management. The only things that need constant attention are libraries (in terms of packages). As long as the libraries are up to date and all applications utilize those libraries properly, there won't be any problems.

Unfortunately, this isn't the case. Making programs bundled in a nice little .deb or .rpm is great for getting your software out there, and it helps in keeping file system integrity. But if I am updating my system, why must I redownload 5+ gigs of games just to update the rpm packages from an f8 to an f9 suffix? The two packages are identical, but because one ends in .f8.i386.rpm and the other ends in .f9.i386.rpm, I am supposed to redownload the entire package?

The other cent goes to package maintenance. If I remember correctly, the package for the game Abuse-SDL for Fedora is still labeled as being for Fedora 6 and has never been updated, let alone had the bug with shooting behind you fixed. One of the reasons the developers don't bother with packaging their programs is that there are just WAY too many packages to make. Which is yet another reason for a unified packaging system for third-party applications.

So far, I think the best option is coming from the Linux Foundation: the LSB Package API.

http://www.linuxfoundation.org/en/LSB_Package_API

libervisco's picture
Offline
Joined: 2006-05-04
You're not late. There's no

You're not late. There's no specific speed at which a thread must move. Sticking out tongue

You're pointing out some valid problems indeed, which just further underscore how much room for improvement there is here. We could say that the prevalent freedomware package management systems are already quite innovative compared to the way this is done on Windows. But, like many things on freedomware platforms, "innovative" often comes with "fragmented", because there are always quite a few parties who want to do it better (as they see it). So we end up in a kind of dilemma: we are missing compatibility between all these innovative options, yet facing a clear need for some sort of standardization, universalization if not unification.

I think what a_thing described, a synergy, and ideas to that effect, may very well be the way to go. We can hardly expect the major distributions and package management systems to change so drastically as to mold into a single universal packaging system, let alone remove certain limitations such as treating essentially the whole repository of all software as a whole OS. So we can either patch these systems up with a provision for an alternate way of installing that would actually work, the way a_thing described, or simply have an additional meta-system of sorts that applies to just about all distributions - like source tarballs are today (with the obvious disadvantage of having to compile them and hunt down dependencies yourself).

Interestingly there are examples of something like this in practice scattered here and there, like those .run files distributed by certain game developers. They are pretty much distro agnostic.

The LSB Package API seems like something that would fit that "synergy" vision too, but I'd have to read more on it. Smiling

By the way, the problem of having to download a whole package for a minor change of content is, IIRC, solved by the Conary package manager, which is used by Foresight Linux.

Meanwhile I've switched to Arch and I intend to stick with it. It's come a long way. Arch is interesting to this thread in that, in addition to its own package manager, pacman (which is fast, simple and pretty powerful), it also features "ABS" (the Arch Build System), which allows you to build your own packages in a rather systematic way. You create PKGBUILD files, which are like recipe scripts telling the system what needs to be done and what needs to be acquired to build a particular piece of software. This has a bit of a learning curve, but it's not too complicated. In fact, the whole idea of Arch is predicated on the "Keep It Simple Silly" principle. Smiling

It's, in the end, definitely not THE solution I was looking for in this thread, but at least it provides a nice way (probably the best there is) to use the one type of "packaging" which truly is distro-universal: source code tarballs. Smiling ABS is further empowered by the fact that these PKGBUILDs are contributed en masse to the "AUR" repository, meaning that most packages you want to build already have the necessary recipe, and all you need to do is use a special tool like "yaourt" to download the recipe and use it to build your package. And you can also modify it in case you wish to change something specific, without writing the whole process on your own. This is how I recently installed a special version of ffmpeg with an option disabled, in order to try something out.
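
For those who haven't seen one, a stripped-down PKGBUILD looks roughly like this (the package name, URL and checksum are invented for illustration; real AUR recipes carry more metadata):

```sh
# Hypothetical minimal PKGBUILD, in the style used around this time.
pkgname=foo
pkgver=1.2
pkgrel=1
pkgdesc="An example package"
arch=('i686' 'x86_64')
url="http://example.org/foo"
license=('GPL')
source=("http://example.org/releases/$pkgname-$pkgver.tar.gz")
md5sums=('00000000000000000000000000000000')   # placeholder checksum

build() {
  cd "$srcdir/$pkgname-$pkgver"
  ./configure --prefix=/usr
  make
  make DESTDIR="$pkgdir" install
}
```

Then makepkg builds it and pacman -U installs the result.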

It's actually similar to Gentoo, except probably a bit simpler, and Arch is primarily a binary distro with the build system as a bonus. Smiling

Now here's an interesting thought: packaging ABS for other distros. All you need is a PKGBUILD file and you can install anything. Sticking out tongue The next step in its evolution then might be to become the universal package manager (instead of building stuff locally, it just gets and installs binary tarballs).

Offline
Joined: 2006-03-28
A solution that will not

A solution that will not give you packages containing everything they need, as you actually wanted here, but does give you nice packages that you can install through your system's package manager (and thus later remove or replace), might be checkinstall. With it you compile the software yourself and then use checkinstall to build a package out of it. It supports RPM, dpkg and Slackware packages.
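
The typical workflow, run from an unpacked source tree (the --type flag can also be left out, in which case checkinstall asks):

```sh
./configure --prefix=/usr
make
sudo checkinstall --type=rpm   # or --type=debian, or --type=slackware
# The result is a normal package that the system's package manager can
# install, query and later remove like any other.
```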

Offline
Joined: 2006-03-28
Some points
libervisco wrote:

Interestingly there are examples of something like this in practice scattered here and there, like those .run files distributed by certain game developers. They are pretty much distro agnostic.

As nice as these files are (and I agree that they are pretty nice), they will stop working if the ABI (Application Binary Interface) of glibc changes in an incompatible way. If this happens in only a few distros that like using recent code, Fedora comes to mind here, then the packages will continue working on distros with older code, like Debian, but no longer on the ones with newer code. Thus the developers would need to provide another package for those distros, or just tell people that they can forget about playing "Hardcore Shooter 4D Extreme" (TM ;-) ) on that specific distro, and sooner or later on any distro, as even distros like Debian will in some distant future evolve to use a newer glibc.
The same goes for OSS. Commercial games on Linux, as far as I know (and I've tried some, including Civilization: Call to Power, Unreal Tournament 2004, Doom 3 and a few others, Doom 3 being the most recent of those I have tried), use OSS for sound output. Not that this doesn't already cause some problems (at least until KDE3, Arts used to grab the sound device and lock it so that other applications could not access it; now this might be the case with PulseAudio, and also with some other systems like ESD).
But if OSS is finally removed from the kernel, and I actually do see this coming at some point as it's outdated and I think not even maintained anymore, then these games will also have a big problem: no sound output. OSS is marked "DEPRECATED" in the kernel config; that means people should not use it anymore, and I am pretty sure it will ultimately lead to its removal.
Of course ALSA does offer an OSS compatibility layer, but will it always be there? artsdsp offers re-routing of OSS output through Arts, and there's something for PulseAudio that does the same, but will those solutions be kept around forever?
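
For what it's worth, those re-routing wrappers are all invoked the same way (the game binary name here is made up):

```sh
aoss ./old-game.run      # ALSA's OSS-emulation wrapper (from alsa-oss)
artsdsp ./old-game.run   # routes OSS output through Arts (KDE3 era)
padsp ./old-game.run     # routes OSS output through PulseAudio
```
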
But I guess I'm drifting too far away from the original topic... And actually this is a problem of all binary packages; I just think these distro-independent packages of commercial software have the bigger problem here.

libervisco wrote:

It's, in the end, definitely not THE solution I was looking for in this thread, but at least it provides a nice way (probably the best there is) to use the one type of "packaging" which truly is distro-universal: source code tarballs. Smiling ABS is further empowered by the fact that these PKGBUILDs are contributed en masse to the "AUR" repository, meaning that most packages you want to build already have the necessary recipe, and all you need to do is use a special tool like "yaourt" to download the recipe and use it to build your package.

Other distros offer something like this too. Fedora and Debian have source repositories. For Fedora I am pretty sure that in this repository you find SRPM files, which are source RPMs, ready to be built and packaged.
I am quite confident in saying that this sounds more or less the same as what you just described for Arch.

libervisco wrote:

Now here's an interesting thought: packaging ABS for other distros. All you need is a PKGBUILD file and you can install anything. Sticking out tongue The next step in its evolution then might be to become the universal package manager (instead of building stuff locally, it just gets and installs binary tarballs).

Writing RPM build files isn't that hard either. ;-)

libervisco's picture
Offline
Joined: 2006-05-04
I know about checkinstall.

I know about checkinstall. Smiling It's a great program and I used it quite a few times on Ubuntu/Debian and Slackware.

About the problem with glibc and things like OSS: it's a good point, but then I think the game developer can release a new version of their .run package for new versions of the kernel and glibc as they progress. Really, those two components are what makes up the core of the operating system, Linux + the GNU stuff. Smiling Also, if the game is freedomware (at least the engine, which is enough for this), then anyone could compile and make a new universal binary like a .run for the new versions of the kernel and glibc. Besides, games get new versions occasionally too, so chances are they'll keep up with such significant system changes.

reptiler wrote:

Other distros offer something like this too. Fedora and Debian have source repositories. For Fedora I am pretty sure that in this repository you find SRPM files, which are source RPMs, ready to be built and packaged.

It's not quite the same. With ABS you don't actually have a repository of source files at all, only PKGBUILD files, which get the actual source code from wherever it is originally distributed (SourceForge, BerliOS etc.). So you don't have specific patches bundled within such packages, because there are no packages. If you want to patch something, you just make the PKGBUILD download the patch from some mirror and do the work.
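
A fragment of a hypothetical PKGBUILD showing exactly that (URLs and checksums invented for illustration):

```sh
source=("http://downloads.sourceforge.net/foo/foo-1.2.tar.gz"
        "http://example.org/patches/foo-1.2-fix-build.patch")
md5sums=('00000000000000000000000000000000'
         '00000000000000000000000000000000')   # placeholder checksums

build() {
  cd "$srcdir/foo-1.2"
  # Apply the downloaded patch before configuring.
  patch -p1 < "$srcdir/foo-1.2-fix-build.patch"
  ./configure --prefix=/usr
  make
  make DESTDIR="$pkgdir" install
}
```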

RPM and DEB are still also pretty distro-specific, source or not, so at the very least you have different ways of doing the same thing. EDIT: What I mean here is that this breaks the universality which is the goal, as every distro ends up having a different set of steps for doing the same thing. If you had a universally applicable build/packaging system (at least as an add-on to the "official" one), you could tell a new GNU/Linux user a single procedure and it would work on whatever distro he is on. This seems like the kind of thing people screaming for distro unification would like. It doesn't unite all distros into one; it just provides a sort of compatibility layer that connects them all where it matters most: obtaining software.

Also, if the source RPM repositories match the binary RPM repositories, it's possible that some source RPMs are still missing, and for those you couldn't use the same build process; you might in fact benefit more from something like PKGBUILDs, which depend on the actual original source tarballs instead of specially formatted source packages, which is much more universal. I guess you could create source RPMs and then use RPM's build process, but I don't quite see the point - why add an extra step? "KISS" seems to apply. Eye

reptiler wrote:

Writing RPM build files isn't that hard either. ;-)

Well, I can't tell, as I haven't tried it, but I've seen PKGBUILDs and I think they're fairly manageable. Also, do you need multiple files to build a source RPM? A PKGBUILD is just one file. Smiling

Here's an explanation of ABS: http://wiki.archlinux.org/index.php/ABS

And here is also a thread I've started on Arch forums about the idea of making ABS the universal package manager: http://bbs.archlinux.org/viewtopic.php?pid=389398

There's a chance we'll be compiling ABS for other distros. Sticking out tongue

Cheers

Offline
Joined: 2006-03-28
From what I remember also

From what I remember, RPM also just requires one file. But I have to admit it's been a while since I last wrote one of those.

The fact that sources are retrieved from where they actually come from really is a nice idea. The disadvantage might be that distro-specific patches are missing. I don't know if the program also takes care of that.

Offline
Joined: 2007-10-20
the point i was trying to

The point I was trying to make with my post is that, when it comes to package repositories, the only things available should be the library and core system packages. All third-party packages should be handled independently of the main repositories, much like Ubuntu does with the universe/multiverse/Medibuntu repositories: a unified repository which all distributions can tap into, hosting all third-party applications, and which, during install, does all the dependency checking and accesses the official repositories to gather any required libraries.

The best analogy I can think of when talking about package management in Linux is the segregation problem that America faced between whites and blacks. There were schools that forbade blacks from entering, there were movie theaters that forbade blacks from watching movies, there were even laundromats that disallowed blacks from washing their clothes in their establishment. It's the same way with package management in Linux: if there is an application that I want but it is not available in the format my distribution supports, I either have to package it myself or build it from source. The issue with the former is that grandma isn't going to sit there and build the latest package of Pidgin (for example) to chat with her family, whether from a lack of knowledge of Linux or from not wanting to learn something completely new. The same could be said for the latter; unfortunately, the latter also has the issue that the application can't be properly managed afterwards.

Now, the benefit of separating applications from their dependencies in the repositories is that it would then allow for user-level package installation. For example, if I am deploying a group of thin clients for a company, 20 thin clients spanning 3 departments, not everyone will need access to all of the same applications. User-level package installation would allow me to install the applications on a per-group basis, blocking the departments from accessing applications they don't need.