[quote]But dual cores are also stupid, because they only double the speed at best, and even that is probably an optimistic(!) estimate (please correct me if I'm wrong).[/quote]
This is not the way to think about it. Multiple cores don't increase speed by a direct factor of the number of cores, any more than the dual-pipeline architecture of the original Pentium doubled its speed. I already explained why two or more cores are helpful speed-wise: by this point, the large set of processes running on a machine (and even individual programs, which can often benefit from doing multiple things simultaneously) had started to require a great deal of overhead in context switching between processes (e.g. concurrency being used to emulate asynchronous execution). There were basically two ways to solve that. One was cranking up the clock speed, which is only possible up to a point (and clock speed is hardly an indicator of anything anymore, given that a Celeron is usually clocked at about twice the speed of processors with far better performance). The other was to put multiple cores in the same processor, which alleviates the context switching: roughly speaking, spreading the runnable processes across X cores cuts the number of context switches each core has to do by a factor of X.
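To make that concrete, here is a minimal sketch of my own (assuming a POSIX system with pthreads; the worker count and the busy-work loop are made up for illustration). The same fixed amount of work is split across several workers: on a single core the scheduler has to keep swapping them in and out, while with one core per worker each of them can run to completion with essentially no context switches.

[code]
/* Minimal sketch; compile with e.g.: gcc -O2 -pthread demo.c */
#include <pthread.h>
#include <stdio.h>

#define WORKERS 4                /* hypothetical worker count */

static void *worker(void *arg)
{
    long id = (long)arg;
    volatile unsigned long sum = 0;
    for (unsigned long i = 0; i < 100000000UL; i++)  /* plain busy work */
        sum += i;
    printf("worker %ld done\n", id);
    return NULL;
}

int main(void)
{
    pthread_t t[WORKERS];

    /* With >= WORKERS cores these run side by side; with one core the
     * scheduler must context-switch between them to make progress. */
    for (long i = 0; i < WORKERS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < WORKERS; i++)
        pthread_join(t[i], NULL);
    return 0;
}
[/code]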
[quote]Because I suck at Windows, I don't understand how to make nice quotes like you do, so this will have to do:[/quote]
To paraphrase Charles Babbage- "I cannot rightly apprehend the confusion of ideas that would lead to this statement"... Quoting on this forum has nothing to do with Windows...
[quote]I still think that this is the wrong approach for the future. Multi-core processors will not solve our future need for faster and faster computers[/quote]
There are two problems with this. The first is that you essentially assert an inside track on exactly what our future needs for faster computers will be; otherwise, how would you know that parallelism is not a solution to those as-yet-unseen issues?
(which "incompetent" software companies, no name, will indirectly require).
The second problem is that this expresses a complete misunderstanding of how the software industry works in general (a completely understandable one, mind you). The best way to see why is to go back to the original 8088.
Naturally, we had 8088 programs, written (usually) in x86 assembly. Intel, of course, released plenty more chips after the 8086/8088. The 80186 doesn't really count (not being used in consumer machines), but the 286 introduced new instructions and a new execution mode. These new features were not "required" by the software of the time, and at first a 286 system really just performed like a faster 8088. Eventually, though, programmers started to move to the new platform and use the new features of the architecture. This had two repercussions. First, programs written in assembly essentially had to be rewritten: even though the 286 and 386 had very similar cycle-eaters to the 8088, they added at least one new one (the data alignment cycle-eater), which meant that a lot of hand-tuned assembly written for the 8086/8088, while running faster on a 286 or 386 (because of reduced wait states and an overall improved architecture), had to be rewritten for maximum performance on them. Most of it never was, simply because it wasn't worth the effort. On the other hand, once compilers (such as the C compilers of the day) were updated to emit the new instructions, those programs simply needed to be recompiled to take advantage of the new processor features. This became particularly true around the Pentium, when new instruction sets were designed more around use by a compiler than use by a programmer working in assembly, and when the number of rules about speed, cycle-eaters and the various instructions grew dramatically, both because of the change to a superscalar architecture (the Pentium) and simply because the new chips were so different from their predecessors.
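To illustrate that "just recompile" point, here's a sketch of my own using a modern compiler's flags rather than anything from the 286 era; the flags shown and the generated code are illustrative only and depend on the compiler version.

[code]
/* The same C source, fed to gcc with different target flags, comes out
 * using different instruction sets -- no source changes needed. */
void add_arrays(int *dst, const int *a, const int *b, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = a[i] + b[i];
}

/*   gcc -O3 -m32 -march=i686   add.c   ->  plain scalar code
 *   gcc -O3 -march=core2       add.c   ->  SSE2 vector instructions
 *   gcc -O3 -march=skylake     add.c   ->  AVX2 vector instructions
 * Hand-tuned assembly, by contrast, has to be rewritten for each target. */
[/code]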
When the 386 came around, it "finished" protected mode, the exploitation of which in a program required an almost complete rewrite, since it used a completely different memory addressing scheme. Additionally, the newer chips had their own gotchas that made some classic 8088 assembly optimizations pointless. For example, using byte-sized values in preference to word-sized ones on an 8088 was a common optimization because of its 8-bit external data bus, but that advantage completely disappeared with the 286 (which was 16-bit through and through) as well as the 386 (which was 32-bit through and through, the 386SX notwithstanding).
Or take the 8-bit bus cycle-eater (which ate cycles by virtue of limiting the external data bus to 8 bits). One might reasonably think that with the 286 and the 386 that cycle-eater went away, particularly since the 8088's prefetch queue cycle-eater is a side effect of that 8-bit bus, and since the 286 and 386 have larger prefetch queues than the 8088 (6 bytes for the 286 and 16 bytes for the 386) and can perform memory accesses and instruction fetches in fewer cycles. But it doesn't, for several reasons. For one thing, instructions that branch still empty the prefetch queue, so instruction fetching slows down after most branches; when the queue is empty, it doesn't really matter how big it is. (Branching on these processors should be avoided where possible anyway, since branches take several cycles apiece.)
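As a rough illustration of that branch-avoidance idea (my own sketch, not tuned for any particular processor): the classic trick is to compute the result arithmetically instead of jumping.

[code]
/* Two ways to take an absolute value.  The first uses a conditional jump,
 * which can flush the prefetch queue; the second computes the same result
 * with pure arithmetic.  (Assumes 32-bit ints and an arithmetic right
 * shift, which holds for typical x86 compilers.) */
int abs_branchy(int x)
{
    if (x < 0)                  /* conditional branch */
        return -x;
    return x;
}

int abs_branchless(int x)
{
    int mask = x >> 31;         /* 0 for positive, -1 (all ones) for negative */
    return (x ^ mask) - mask;   /* no jump, so no queue flush */
}
[/code]

(Incidentally, a modern compiler will often make this sort of transformation on its own, which rather reinforces the earlier point about compilers versus hand-tuned assembly.)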
Anyway, as we went through new hardware, the software evolved to take advantage of it. Hardware was not pushed forward by software; hardware just inexorably marched forward, and software companies came along for the ride. For example, the much-maligned release of Vista brought with it a colossal change in the form of the desktop OS actually exploiting the capabilities of the graphics card available in most modern systems. Most people thought this was silly, but the point is that at an XP desktop, the graphics card is basically sitting there. Some people spend hundreds of dollars on a graphics card, so having it do the same job a 12-dollar special GPU could do seems rather silly. Sure, they could play... Quake 3 or whatever and run timedemos, but outside of games you never even see that expensive hardware at work. Same for memory: many PCs had 1GB or 2GB of memory (at least); XP didn't use it, and it almost always sat idle. So Vista added an in-memory disk cache (SuperFetch) that used that otherwise idle memory to increase performance.
[quote]This is exactly because of what you say with "We've reached the clock speed limit". And I say "we can't use 100 cores in the same space. We must reduce the amount of unnecessary data chewed by our poor processors". I know I sound like a dinosaur, but I mean well.[/quote]
I think what you might be confusing here is this: if you run Vista on a 1GB machine with a dual-core 2.33GHz CPU, it might boot in, say... bah... maybe 25 seconds? I dunno. But if you put, say, Windows 95 on it, it boots in mere seconds. So one might surmise that the Windows 95 machine is actually making better use of the hardware. But it is in fact its underutilization of the machine's capabilities that makes it appear fast: it uses only a small portion of the available memory, CPU capabilities (both in terms of clock speed and instruction sets) and so forth. The result is that you are not really running Windows 95 on a new Intel i7 (or what have you), but on what is effectively a really, really fast Pentium.
Anyway, for the future: since software follows hardware, there is no reason to think that software requirements will somehow march past the capabilities of the hardware. This is why parallelism is the software future: since the only way forward hardware-wise is with multiple cores (due to the quantum tunnelling issue), software is going to follow.
[quote]My friend (the one I "hated" for a while) got in contact with a friend in the USA. His friend asked him what type of computer he had. My friend said: a Mac running at 30MHz. His friend said, well, that's prehistoric! And yet he could still keep in contact with his friend. One more thing: did his computer take 100 times longer to start than mine? Guess what, no![/quote]
Oh, good, a friend-of-a-friend story. A Mac SE can be used for browsing, but it is definitely not fast at it. It also makes the same mistake as above. Of course most older machines can be used for modern purposes, if you are willing to use older software and wait a bit longer. For example, I'm sure there are IRC clients available on a system like that Mac that work perfectly fine. At the same time, I doubt there is a 3D-modelling tool for it comparable to current versions of 3ds Max in terms of capabilities. So it depends entirely on what somebody wants to do. Most of the systems people buy today are far overpowered for what they will be used for (web browsing and e-mail), and the effect is that, with that many overpowered machines out there, software has marched forward so that web browsing and e-mail have taken advantage of that otherwise untapped power.
[quote]I'm sorry, but I keep on insisting that programming can be made much more efficient than it is today.[/quote]
However, you've yet to provide anything other than anecdotal evidence toward that cause.
[quote]Today my beloved colleague and friend fixed a computer password problem. The machine ran Windows 7, with a 2.2 GHz processor and 2GB of RAM.
Turning on the computer took a while, but it was not that irritating. Logging in, however, took over a minute! Clicking around yielded the (actually quite nice) new kind of hourglass (a rotating circle). And whatever we did yielded that same hourglass. Wait, wait, wait, that is.
Windows...
Is this the future? I hope not![/quote]
That is likely confirmation bias. (Same story, IMO, with all the "terribleness" of Windows ME.)
[quote]With regard to photography, I understand your point. And I have learned. I discussed it with my friend today. I almost (note!) immediately understood that if you take a picture at a given resolution, you cannot increase the resolution afterwards.[/quote]
Actually, graphic artists today aren't really arguing amongst themselves about whether to use 32 bits per pixel at all; the debate now is whether it is worth it to go to 128 bits per pixel (with each colour component being a full 32 bits). This is of course completely silly as far as making graphics for websites or programs is concerned. Where it gets relevant, however, is when dealing with hard-copy print and magazines, since smooth gradients can occasionally show clear "lines" (banding) as a result of the lower colour resolution (paired with the colour conversion to CMYK for print).
Naturally, this is a feature only used by print artists, but consider for a moment that a lot of print artists get their subjects from a digital camera, so one could make the case for digital cameras to capture that amount of information as well. (In fact, most graphic artists who employ digital photography have camera equipment costing several thousand dollars, with advanced capabilities such as that, simply because it is something such a person is going to need for their work.)
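To put the 32-versus-128-bit debate in concrete storage terms, here's a rough sketch (the struct names and the image size are just made up for illustration, and float is assumed to be 32 bits):

[code]
/* 8 bits per channel vs. 32 bits per channel, RGBA */
struct pixel32  { unsigned char r, g, b, a; };   /*  4 bytes per pixel */
struct pixel128 { float         r, g, b, a; };   /* 16 bytes per pixel */

/* A hypothetical 4000 x 3000 print-resolution image:
 *   4000 * 3000 *  4 bytes =  48,000,000 bytes  (~46 MB)  at  32 bpp
 *   4000 * 3000 * 16 bytes = 192,000,000 bytes (~183 MB)  at 128 bpp
 * The extra precision only pays off where banding in smooth gradients
 * would otherwise show up, e.g. after the CMYK conversion for print. */
[/code]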
[quote]I am actually that much of a drunk: I tremble so much that I can't solder my CPLD to the Schmartboard that arrived. So I am desperately trying to get one of my colleagues to do it for me. I have tried for a month now, but nothing happens.[/quote]
Uuuh... not sure how to respond to that. I'm pretty sure there is a language issue here, because if I read this at face value I would have to conclude that you drink heavily at work...
[quote]Attaching the architecture of my CPU.[/quote]
Schematic, rather.