Daniel Imfeld
I read about a major company, I think it was HP, that created a Beowulf cluster (a group of cheap PCs linked together to act as a supercomputer) that qualifies as one of the more powerful supercomputers in existence. One advantage of this approach, they said, is that when you add or remove computers from the cluster, the total processing power changes linearly, so it's easy to calculate how many computers you'll need for a specific task. It also makes it easier to split the processing time between two projects if the need arises.

I think they used something more powerful than 486's and Pentiums, but I can't remember exactly what, as I read about it a few months ago. I do know they used only cheap off-the-shelf PCs, though. If I remember correctly, they ran it on Linux, or some variation of it, with a master computer (or computers) that controls all the others, similar to Stanford's Folding@home project, but probably more optimized for LAN use and with some proprietary pieces. So I suppose if you do it correctly you can get great results, although I imagine it would take quite a bit of work to set everything up.
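To give a rough idea of what that master/worker setup looks like, here is a minimal sketch in Python. It is purely illustrative, not HP's actual software: the port number, the four-chunk split, and the sum-of-squares "job" are all made-up assumptions, and a real controlling program would be far more sophisticated.

# Minimal master/worker sketch (illustrative only -- not HP's software).
# The master splits a sum-of-squares job into equal chunks and hands one
# chunk to each worker that connects; each worker sends back its partial
# sum. The port, chunk count, and job itself are made-up assumptions.
import socket
import struct
import sys

PORT = 5000            # assumed port; pick any free one
CHUNKS = 4             # one chunk per worker node
JOB_SIZE = 1_000_000   # sum the squares of 0 .. JOB_SIZE-1

def recv_exactly(sock, n):
    # TCP may deliver a message in pieces; loop until we have n bytes.
    buf = b""
    while len(buf) < n:
        part = sock.recv(n - len(buf))
        if not part:
            raise ConnectionError("peer disconnected early")
        buf += part
    return buf

def master():
    srv = socket.socket()
    srv.bind(("", PORT))
    srv.listen()
    step = JOB_SIZE // CHUNKS
    total = 0
    for i in range(CHUNKS):
        conn, _ = srv.accept()
        # Send this worker its half-open range [start, stop).
        conn.sendall(struct.pack("!QQ", i * step, (i + 1) * step))
        total += struct.unpack("!Q", recv_exactly(conn, 8))[0]
        conn.close()
    print("total:", total)

def worker(master_host):
    s = socket.create_connection((master_host, PORT))
    start, stop = struct.unpack("!QQ", recv_exactly(s, 16))
    partial = sum(n * n for n in range(start, stop))
    s.sendall(struct.pack("!Q", partial))
    s.close()

if __name__ == "__main__":
    if sys.argv[1] == "master":
        master()
    else:
        worker(sys.argv[2])

You would run "python cluster.py master" on one machine and "python cluster.py worker <master-host>" on each of the others. Since every worker handles an equal share, raising CHUNKS and adding workers shrinks the wall-clock time roughly in proportion, which is where the linear capacity math mentioned above comes from.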
Daniel Imfeld

----- Original Message -----

Saad,

I've heard about that, but I wonder whether it's a practical idea. The controlling software that would assign parts of the computing task to the various computers would have to be very sophisticated. Then there would be the distances between computers. It would probably be better to just use the system cards; otherwise all those old power supplies would consume lots of energy, and there would be the problem of varying power supply connectors and voltage requirements. Maybe some kind of bus communication would be necessary to avoid port bottlenecks.

It would certainly be a good project for some IT students, but I think the work involved in making it happen in a real-world situation would more than cancel out most of the benefits. But then I could be wrong; it wouldn't be the first or last time.

Jim