
Author Topic: Time for a new OS, reborn  (Read 11774 times)


rogerk8

    Topic Starter


    Rookie

    • Experience: Beginner
    • OS: Windows XP
    Time for a new OS, reborn
    « on: October 18, 2012, 12:48:30 PM »
    Hi!

    Due to the obvious risk of being locked I will keep this attempt nice.

    MS Excel is the best program Microsoft has ever designed (I mean that).

    But newer versions are simply unnecessarily complicated (I use a vintage version, i.e. Excel 97).

    Most people don't need all the features that come with a program today.

    My point is that new programs, more often than not, are larger and thereby consume unnecessary CPU power.

    My vision is that applications should be scaled down to a minimum when we buy them. If we then want additional features, for instance 100 fonts and sizes instead of only one, then we could simply buy them. Module by module.

    But we do not need all these features by default. And it slows down the processor...

    Because I love my simple and primitive Excel 97 so much, I want to add, for instance, that it does not have that childish animated paper clip. Because who needs that? Really? Grandma?  :D

    One more benefit of my vintage version is that it can do log-log plots, which I haven't found in later versions (2003).

    This is either because I suck at Windows or because the program has been adapted to economics (with bar graphs and other useless stuff).

    My vision is to be able to speed things up while keeping the amazing and simple concept of the single-core processor intact, and at the same time keep the visual and audio experience nice.

    Because let's face it, where is the computer evolution headed, really?

    Multi-core processors will not solve the problem in the long run.

    Neither will sci-fi quantum computers or the like (probably).

    Because what should we do later on when all programs have mutated into even larger ones? Use 100 cores?

    Of course not.

    We should use the resolution that is NEEDED for the specific application. Not a general "good to have, I can compress it later".

    In the long run, my idea of pre-compressing, for instance by directly using 8-bit audio instead of 16-bit (I have read that this automatically gives an additional 30 times compression compared to MP3), might not work either.

    But if we refrain from designing bigger and bigger programs, my solution will hold and everyone will get the resolution, quality and speed they want. We have all the advanced HW that we need! It is a software problem!!

    It is interesting to note that camera manufacturers compete on number of pixels. There is a lot more to photography than resolution. For instance, I bought a camera recently. It had some 8 megapixels. But I only wanted to upload the pictures to Facebook and chat forums like yours. I finally found a feature which could compress the pictures from the default 3.5 MB down to some 130 kB. I was happy there for a while, but then I realized that I wanted the camera to take the pictures at the low resolution by default. Guess what? It was not possible!

    So here is yet another point to my vision.

    But if I wanted to manufacture a (very) large photo to hang on my wall then yes, higher resolution is (for fun) needed.

    Take care!

    Best regards, Roger
    PS
    Attaching yet another fun picture of my pet project







    [year+ old attachment deleted by admin]
    « Last Edit: October 18, 2012, 01:05:26 PM by rogerk8 »

    BC_Programmer


      Mastermind
    • Typing is no substitute for thinking.
    • Thanked: 1140
      • BC-Programming.com
    • Experience: Beginner
    • OS: Windows 11
    Re: Time for a new OS, reborn
    « Reply #1 on: October 18, 2012, 04:29:37 PM »
    Quote
    Due to the obvious risk of being locked I will keep this attempt nice.
    It's probably going to get locked anyway.

    Quote
    MS Excel is the best program Microsoft has ever designed (I mean that).
    Subjective. Excel is mostly useless to me, personally. My vote would go to Visual Studio.

    Quote
    But newer versions are simply unnecessarily complicated (I use a vintage version, i.e. Excel 97).
    People said the exact same thing about Excel 97 compared to Excel 5.0. My point is that some people are resistant to change but unaware of it, so they come up with ad hoc justifications for their pre-existing biases. I'm not super-fond of the ribbon myself, but recognizing that I am against the change itself rather than the result of that change helps.

    Quote
    Most people don't need all the features that come with a program today.
    Very true.

    Quote
    My point is that new programs, more often than not, are larger and thereby consume unnecessary CPU power.
    Demonstrably false. Larger programs do not consume more CPU. Additionally, this ignores the issue that unused CPU cycles are wasted cycles anyway; if a program uses 50% of the CPU and completes half as fast as it would using all the available capacity, what has it achieved, other than wasting time? OS thread scheduling can be used to make processes low-priority so that they never take cycles away from other processes. My point being that it is silly to imagine that unused CPU cycles contribute positively to performance, because they don't, for obvious reasons.
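
    (As a minimal sketch of that last point, in plain Win32 C since the thread is about Windows: a background process can mark itself idle-priority so it only ever gets cycles nobody else wants.)

    Code:
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Ask the scheduler to run this process only when no
           higher-priority work is waiting for the CPU. */
        if (!SetPriorityClass(GetCurrentProcess(), IDLE_PRIORITY_CLASS)) {
            fprintf(stderr, "SetPriorityClass failed: %lu\n", GetLastError());
            return 1;
        }

        /* Burn cycles freely; foreground processes are still served first. */
        for (volatile unsigned long long n = 0; ; ++n)
            ;
    }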

    Quote
    My vision is that applications should be scaled down to a minimum when we buy them. If we then want additional features, for instance 100 fonts and sizes instead of only one, then we could simply buy them. Module by module.
    The installers do provide options to enable and disable various features of the application. One such option is installing a feature only when it is first used.

    Quote
    But we do not need all these features by default. And it slows down the processor...
    Again, the former is why options are provided during installation. The latter is demonstrably false; code that doesn't execute does not affect overall execution speed.

    Quote
    One more benefit of my vintage version is that it can do log-log plots, which I haven't found in later versions (2003).
    Insert an XY Scatter chart, select the desired sub-chart type, and change both axes to logarithmic scale. Plots are now found as part of the chart module (and yes, it quite literally is a separate component, exactly the type of functionality you suggested!), because both are graphical representations of data.


    Quote
    This is either because I suck at Windows or because the program has been adapted to economics (with bar graphs and other useless stuff).
    The fact is that while you consider it useless, a lot of other people probably don't. They don't just add these features willy-nilly; they add them through user feedback and testing. A good example: they found that a lot (the vast majority) of people were using Excel for lists of data, more like a database. So they changed features to better suit that workflow, even though arguably that could be said to be beyond the scope of the program.

    I often see people use Excel as a database too...

    Quote
    My vision is to be able to speed things up while keeping the amazing and simple concept of the single-core processor intact, and at the same time keep the visual and audio experience nice.
    But at what point is a single-core processor no longer a single-core processor? It could be argued that the original Pentium's superscalar architecture and dual pipelines offered simultaneous execution of instructions, so isn't it effectively the same concept?

    Quote
    Multi-core processors will not solve the problem in the long run.
    What problem?

    Quote
    Neither will sci-fi quantum computers or the like (probably).
    There is no way to know how post-von Neumann machines will work.

    Quote
    Because what should we do later on when all programs have mutated into even larger ones? Use 100 cores?
    This is a typical debate strategy (and fallacy) which basically takes the opposition's concept and inflates it to ridiculous proportions. The fact of the matter is that multiple cores and parallel processing are the future, simply because we cannot make a single core go much faster. We've reached the clock speed limit; therefore the hardware industry is moving forward by multiplying the number of cores. This is not really a new thing, either; servers have had multiple processors for quite a long time (mid '90s at least), because they handle a lot of different tasks simultaneously, and multiple processors and/or cores beat the approximation of concurrency used on a single processor, which has a high overhead in terms of context switches.

    Quote
    We should use the resolution that is NEEDED for the specific application. Not a general "good to have, I can compress it later".
    Using your later example: let's say you took a photo exactly for what you needed on the web, so it came out 640x480. That serves your need.

    But what if later you decide, "hmm, that would make a pretty cool poster"? You're screwed; all you have is 640x480. You can't make a poster out of it, and you can't take the picture again, for obvious reasons. You've essentially fenced off what you can do with the image by deciding that you will never need to do X with it, without understanding at the time that needs and requirements change.

    Quote
    In the long run, my idea of pre-compressing, for instance by directly using 8-bit audio instead of 16-bit (I have read that this automatically gives an additional 30 times compression compared to MP3)
    No, it doesn't. Using 8 bits per sample instead of 16 bits per sample reduces the size by exactly half, for obvious reasons. And it sounds much worse.
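
    (To make the halving concrete, a minimal C sketch of raw-PCM conversion; a 30x reduction would require an actual codec, not a narrower sample.)

    Code:
    #include <stdint.h>
    #include <stddef.h>

    /* Convert signed 16-bit PCM to unsigned 8-bit PCM. The output is
       exactly half the bytes of the input: a 2:1 reduction, nothing more. */
    void pcm16_to_pcm8(const int16_t *in, uint8_t *out, size_t samples)
    {
        for (size_t i = 0; i < samples; i++)
            out[i] = (uint8_t)((in[i] >> 8) + 128); /* keep top 8 bits, bias to unsigned */
    }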



    Quote
    But if we refrain from designing bigger and bigger programs, my solution will hold and everyone will get the resolution, quality and speed they want.
    At the time. But if the needs for that data change, they're screwed. Not a very future-proof scenario; but since you advocate the use of architecture-dependent assembly language, this lack of foresight comes as no small surprise.

    Quote
    It is interesting to note that camera manufacturers compete on number of pixels. There is a lot more to photography than resolution. For instance, I bought a camera recently. It had some 8 megapixels. But I only wanted to upload the pictures to Facebook and chat forums like yours. I finally found a feature which could compress the pictures from the default 3.5 MB down to some 130 kB. I was happy there for a while, but then I realized that I wanted the camera to take the pictures at the low resolution by default. Guess what? It was not possible!
    I don't know what camera you have (well, that's a lie, you have a Coolpix S6150), but every single digital camera I've used has had a built-in feature that lets you change the resolution. Unfortunately, on at least one occasion I switched it to 1024x768 and forgot to increase the resolution afterwards, which effectively ruined a few images (you can't get 1024x768 images developed). If I had "forgotten" in the other direction, though, creating the appropriately sized image would be a matter of resizing the larger one. Making an image smaller is easier than trying to make it larger, because it doesn't try to create data where there is none. (This is also one of the reasons that the "digital enhancement" stuff shown on TV is ludicrous; more than once I've seen a show with "security footage" of a perp where the face was maybe 8 pixels, and they basically say "enhance" and the program somehow manages to create millions of other pixels pretty much out of thin air.)

    Quote
    But if I wanted to manufacture a (very) large photo to hang on my wall then yes, higher resolution is (for fun) needed.

    The issue here is that you are assuming that, at the point of capture, a person is going to know every single possible usage for a recording or image. They won't.


    Also, regarding the image: what exactly are we looking at there? All I see is a few IC chassis with some resistors, capacitors, and LEDs attached. Also, what progress has been made since September 22nd, when the picture was taken? Are they integrated into some more functional unit? I imagine they are supposed to be plugged into the various IC sockets that are part of the "motherboard/test board" unit whose image you posted earlier, taken in June?
    I was trying to dereference Null Pointers before it was cool.

    patio

    • Moderator


    • Genius
    • Maud' Dib
    • Thanked: 1769
    • Experience: Beginner
    • OS: Windows 7
    Re: Time for a new OS, reborn
    « Reply #2 on: October 18, 2012, 04:55:36 PM »
    I shoulda left this locked...
    " Anyone who goes to a psychiatrist should have his head examined. "

    foxidrive



      Specialist
    • Thanked: 268
    • Experience: Experienced
    • OS: Windows 8
    Re: Time for a new OS, reborn
    « Reply #3 on: October 18, 2012, 05:29:35 PM »
    rogerk8,

    You don't have a multi-core processor, do you?  Or you haven't ever loaded it fully.
    A single-core CPU that is fully loaded will stop responding, but a multi-core CPU will still let you access the OS and fix things, such as killing the process that is taking all of one core.

    Multiple CPU cores are simply more useful.  Science can do more processing on multiple cores.
    You can run SETI@Home on your multiple cores while doing other things - or human protein folding, or other distributed computing projects.

    Just one other point - programs with more options don't necessarily slow down a CPU; they use more RAM.


    rogerk8

      Topic Starter


      Rookie

      • Experience: Beginner
      • OS: Windows XP
      Re: Time for a new OS, reborn
      « Reply #4 on: October 19, 2012, 05:44:25 PM »
      Hi!

      How do I reply to this amazing and educational input?  ;)

      How much am I not learning just by expressing my crazy thoughts?  :)

      With regard to the argument for 8-bit audio DRASTICALLY increasing CPU speed, I confess that that was stupid!  ;D

      But dual core is also stupid. This is because it only doubles the speed, and that is probably an optimistic(!) estimate (please correct me if I'm wrong).

      Because I suck at Windows, I do not understand how to make such nice quotes like you do, so this will have to do:

      My statement:

      "Because what should we do later on when all programs have mutated into even larger ones? Use 100 cores?"

      Your reply:

      "This is a typical debate strategy (and fallacy) which basically takes the oppositions concept and inflates it to ridiculous proportions. The fact of the matter is that multiple cores and parallel processing are the future simply because we cannot make a single core go much faster. We've reached the clock speed limit; therefore the hardware industry is moving forward by multiplying the number of cores. This is not really a new thing, either; servers have had multiple processors for quite a long time (mid 90's at least) for the reason that they are handling a lot of different tasks simultaneously and so multiple processors and/or cores are better than the approximation of concurrency used for a single processors, which have a high overhead in terms of context switches."

      I still think that this is the wrong approach for the future. Multi-core processors will not solve our future need for faster and faster computers (which "incompetent" software companies, no names, will indirectly require). This is simply because of exactly what you say with "We've reached the clock speed limit". And I say: we can't use 100 cores in the same space. We must reduce the amount of unnecessary data chewed by our poor processors. I know I sound like a dinosaur, but I mean well  :D

      I know, I'm getting boring, but let's consider this:

      My friend (the one I "hated" for a while) got in contact with a friend in the USA. His friend asked him what type of computer he had. My friend said a Mac running at 30 MHz. His friend said, well, that's prehistoric! But he could still keep in contact with his friend. One more thing: did his computer take 100 times longer to start than mine? Guess what, no!

      I'm sorry, but I keep insisting that programming can be made much more efficient than it is today.

      One final example:

      Today my beloved colleague and friend fixed a computer password problem. The machine ran Windows 7. It had a 2.2 GHz processor and 2 GB of RAM.

      Turning on the computer took a while, but not irritatingly long. Logging in, however, took over a minute! Clicking around yielded the (actually quite nice) new type of hourglass (a rotating circle). And whatever we did yielded that same hourglass. Wait, wait, wait, that is.

      Windows...

      Is this the future? I hope not!

      One more and final thing (I promise):

      With regard to photography I understand your point. And I have learned. I discussed it with my friend today. I almost (NB) immediately understood that if you take a picture at a low resolution, you cannot increase the resolution afterwards.

      So this version of my point might not be so good. Unless you only want to use the pictures on smartphones or for uploading to nice chat rooms like this. I.e. bragging about how good you are :D

      I thank you all for replying so nicely to my topic. I saw at work today that there was an answer, but I was actually afraid to read it... I love beer!  :D

      With regard to your final fun comment, BC, here are the facts:

      I am actually that much of a drunk that I tremble so much that I can't solder my CPLD to the Schmartboard that arrived. So I am trying desperately to get one of my colleagues to do it for me. I have tried for a month now but nothing happens. It is not so strange either, because who wants to stay after work just to help a friend (well, I would)? But nothing happens. So I'm getting more and more frustrated (again). Who should I begin to "hate" this time?  :D

      Best regards, Roger
      PS
      Attaching the architecture of my CPU.  ;)












      [year+ old attachment deleted by admin]

      BC_Programmer


        Mastermind
      • Typing is no substitute for thinking.
      • Thanked: 1140
        • BC-Programming.com
      • Experience: Beginner
      • OS: Windows 11
      Re: Time for a new OS, reborn
      « Reply #5 on: October 19, 2012, 06:43:33 PM »
      Quote
      But dual core is also stupid. This is because it only doubles the speed, and that is probably an optimistic(!) estimate (please correct me if I'm wrong).
      This is not the way to think about it. Multiple cores don't increase speed by a direct factor of the number of cores, any more than the dual-pipeline architecture of the original Pentium doubled its speed. I already explained why dual and more cores are helpful speed-wise: at this point, the large set of processes running on a machine (and even individual programs, which often benefit from doing multiple things simultaneously) started to require almost as much overhead in context switching between processes (e.g. concurrency being used to emulate asynchronous execution) as the work itself. There were basically two ways to solve that issue. One was cranking up the clock speed, which is only possible to a certain point; besides, clock speed is hardly an indicator of anything anymore, given that a Celeron is usually clocked at about twice the speed of processors with far better performance. The other was to add multiple cores to the same processor die, which alleviates the context switches (since each added core means fewer threads queued per core, and so fewer switches).

      Quote
      Because I suck at Windows, I do not understand how to make such nice quotes like you do, so this will have to do:
      To paraphrase Charles Babbage: "I cannot rightly apprehend the confusion of ideas that could lead to this statement"... Quoting on this forum has nothing to do with Windows...

      Quote
      I still think that this is the wrong approach for the future. Multi-core processors will not solve our future need for faster and faster computers
      The problem with this is that you essentially claim to have an inside track on exactly what our future needs for faster computers will be. Otherwise, how would you know that parallelism is not a solution to those as-yet-unseen issues?

      Quote
      (which "incompetent" software companies, no name, will indirectly require).
      This expresses a complete misunderstanding of how the software industry works, in general- a completely understandable one, mind-you, but the best way to see would be to go back to the original 8088.

      Naturally, we had 8088 programs, written (usually) in x86 assembly. Intel, of course, has released far more chips since. The 80186 doesn't count (not being in consumer machines), but the 286 introduced new instructions and a new execution mode. These new features were not "required" by the software of the time, and at first a 286 system really just performed like a faster 8088. But eventually programmers started to move to the new platform and use the new features of the architecture. This had two repercussions. First, the programs written in assembly essentially had to be rewritten: even though the 286 and 386 had very similar cycle-eaters, they added at least one new one (the data alignment cycle-eater), which meant that a lot of hand-tuned assembly written for the 8086/8088, while running faster on the 286 or 386 (because of reduced wait states and an overall improved architecture), had to be rewritten for maximum performance on them. Most of it never was, simply because it wasn't worth the effort. On the other hand, once compilers (such as C compilers) were updated to use the new instructions, compiled programs simply needed to be recompiled to take advantage of the new processor features. This is particularly the case from the Pentium onward, when new instruction sets were designed more around their use by a compiler than by a programmer working in assembly. (And the number of rules about speed, cycle-eaters and the various instructions rose sharply, both because of the change to a superscalar architecture with the Pentium and simply because the new chips were so different from their predecessors.)

      When the 386 came around, it "finished" protected mode, the exploitation of which required rewriting a program almost entirely, since it used a completely different memory addressing scheme. Additionally, it had its own gotchas that made some 8088 assembly optimizations pointless: for example, using byte-sized values in preference to word-sized ones was a common optimization on the 8088 because of its 8-bit external data bus, but this advantage completely disappeared with the 286 (which was 16-bit through and through) as well as the 386 (which was 32-bit through and through, the 386SX notwithstanding).

      Consider the 8-bit bus cycle-eater (which ate cycles by virtue of limiting the bus size to 8 bits). One might reasonably think that with the 286 and the 386 that cycle-eater went away, particularly since the 8088 prefetch queue cycle-eater is a side effect of that 8-bit bus, and since the 286 and 386 have larger prefetch queues than the 8088 (6 bytes for the 286 and 16 bytes for the 386) and can perform memory accesses and instruction fetches in fewer cycles. But it doesn't go away, for several reasons. For one thing, instructions that branch still empty the prefetch queue, so instruction fetching slows down after most branches: when the queue is empty, it doesn't really matter how big it is. (Branching on these processors should be avoided anyway, on account of it taking several cycles apiece.)
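
      (A toy illustration of that last parenthesis, in C rather than assembly: the classic branch-free absolute value, which compiles to straight-line code with no jump to flush the prefetch queue. It assumes an arithmetic right shift of negative values, which is what x86 compilers do.)

      Code:
      /* Branch-free absolute value: no conditional jump, so no
         prefetch-queue flush on an 8088/286/386. */
      int iabs_nobranch(int x)
      {
          int mask = x >> (sizeof(int) * 8 - 1); /* all ones if x is negative */
          return (x + mask) ^ mask;              /* negates x when mask == -1 */
      }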

      Anyway, as we went through new hardware, the software evolved to take advantage of it. Hardware was not pushed forward by software; hardware just inexorably marched forward, and software companies came along for the ride. For example, the much-chagrined release of Vista brought with it a colossal change in the form of the desktop OS actually exploiting the capabilities of the graphics card in most modern systems. Most people thought this was silly, but the point is that at an XP desktop, the graphics card is basically sitting there. Some people spend hundreds of dollars on a graphics card, so having it do the same job a 12-dollar special could do seems rather silly. Sure, they could play... Quake 3 or whatever and run timedemos, but outside games you never see that expensive hardware at work. Same for memory: many PCs had 1 GB or 2 GB of memory (at least); XP didn't use it. It almost always sat unused. So Vista added a memory disk cache (SuperFetch) that used that memory to increase performance.

      Quote
      This is simply because of exactly what you say with "We've reached the clock speed limit". And I say: we can't use 100 cores in the same space. We must reduce the amount of unnecessary data chewed by our poor processors. I know I sound like a dinosaur, but I mean well  :D
      I think what you might be confusing here is that, for example, if you run Vista on a 1 GB machine with a dual-core 2.33 GHz processor, it might boot in, say... bah... maybe 25 seconds? I dunno. But if you put, say, Windows 95 on it, it boots in mere seconds. So one might surmise that the Windows 95 machine is making better use of the hardware. But it is in fact its underutilization of the machine's capabilities that makes it appear fast: it uses only a small portion of the available memory and of the CPU's capabilities (both in terms of clock speed and instruction sets), and so forth. The result is that you are not really running Windows 95 on a new Intel i7 (or what have you), but on what is effectively a really, really fast Pentium.

      Anyway, as for the future: since software follows hardware, there is no reason to think that software requirements will somehow march past the capabilities of the hardware. This is why parallelism is the software future: since the only way forward hardware-wise is with multiple cores (due to the quantum tunnelling issue), software is going to follow.

      Quote
      My friend (the one I "hated" for a while) got in contact with a friend in the USA. His friend asked him what type of computer he had. My friend said a Mac running at 30 MHz. His friend said, well, that's prehistoric! But he could still keep in contact with his friend. One more thing: did his computer take 100 times longer to start than mine? Guess what, no!
      Oh, good, a friend-of-a-friend story. A Mac SE can be used for browsing, but it is definitely not fast at it. This also makes the same mistake as above. Of course most older machines can be used for modern purposes, if you are willing to use older software and wait a bit longer. For example, I'm sure there are IRC clients available for a Mac like that which work perfectly fine. At the same time, I doubt there is a 3D modelling tool for it comparable to current versions of 3ds Max. So it depends entirely on what somebody wants to do. Most of the systems people buy today are far overpowered for what they will be used for (web browsing and e-mail), so the effect is that, with that many overpowered machines, software has marched forward and web browsing and e-mail have taken advantage of that otherwise untapped power.

      Quote
      I'm sorry, but I keep insisting that programming can be made much more efficient than it is today.
      However, you've yet to provide anything other than anecdotal evidence for that claim.

      Quote
      Today my beloved colleague and friend fixed a computer password problem. The machine ran Windows 7. It had a 2.2 GHz processor and 2 GB of RAM.

      Turning on the computer took a while, but not irritatingly long. Logging in, however, took over a minute! Clicking around yielded the (actually quite nice) new type of hourglass (a rotating circle). And whatever we did yielded that same hourglass. Wait, wait, wait, that is.

      Windows...

      Is this the future? I hope not!
      Likely confirmation bias. (Same story, IMO, with all the "terribleness" of Windows ME.)

      Quote
      With regard to photography I understand your point. And I have learned. I discussed it with my friend today. I almost (NB) immediately understood that if you take a picture at a low resolution, you cannot increase the resolution afterwards.
      Actually, today graphics artists are arguing amongst themselves about whether to use 32 bits per pixel at all; the debate now is whether it is worth going to 128 bits per pixel (with each colour component being a full 32 bits). This is of course completely silly as far as making graphics for websites or programs is concerned. However, where it gets relevant is hard-copy print and magazines, since smooth gradients can occasionally show clear "bands" as a result of the lower colour resolution (paired with the colour conversion to CMYK for print).

      Naturally, this is a feature only used by print artists, but consider for a moment that a lot of print artists get their subjects from a digital camera, and one could make the case for digital cameras to be able to capture that amount of information. (In fact, most graphic artists who employ digital photography use equipment costing several thousand dollars, with advanced capabilities such as that, simply because it is something they need for their work.)


      Quote
      I am actually that much of a drunk that I tremble so much that I can't solder my CPLD to the Schmartboard that arrived. So I am trying desperately to get one of my colleagues to do it for me. I have tried for a month now but nothing happens.
      Uuuh... not sure how to respond to that. I'm pretty sure there is a language issue here because reading this at face value I would have to come to the conclusion that you drink heavily at work...

      Quote
      Attaching the architecture of my CPU.
      Schematic, rather.











      I was trying to dereference Null Pointers before it was cool.

      rogerk8

        Topic Starter


        Rookie

        • Experience: Beginner
        • OS: Windows XP
        Re: Time for a new OS, reborn
        « Reply #6 on: October 19, 2012, 07:40:38 PM »
        Jesus, how much do you not keep on amazing me?!

        Extremely interesting to read your input!

        I love it and feel honored all the time!

        I think we can close this topic now. :D

        I have nothing more to add.

        I rest my case!  ;D

        Except perhaps for more stupid pictures of my never ending pet project  :D

        But I do have to comment on this one:

        "Uuuh... not sure how to respond to that. I'm pretty sure there is a language issue here because reading this at face value I would have to come to the conclusion that you drink heavily at work..."

        I am laughing my *censored* off!  :D

        Take care!

        Best regards, Roger
        PS
        Attaching a picture of another passionate interest of mine

        [year+ old attachment deleted by admin]

        DaveLembke



          Sage
        • Thanked: 662
        • Experience: Expert
        • OS: Windows 10
        Re: Time for a new OS, reborn
        « Reply #7 on: October 19, 2012, 10:13:57 PM »
        You seem to like LEDs, if that's your passion!    ;) Why are you making your analog electronics modular with those through-hole IC sockets? I have only seen these used many years ago, for swapping different "analog" modules. Also, you spend a lot of time soldering the LEDs with the correct polarity, when LED bar arrays are better and more professional-looking for a final product, such as this one: http://www.mouser.com/ProductDetail/Lumex/SSA-LXB10IGW-GF/?qs=sGAEpiMZZMvnlkTg8UMATwn7m4RH1JwofoFCSpZH5AY%3d

        rogerk8

          Topic Starter


          Rookie

          • Experience: Beginner
          • OS: Windows XP
          Re: Time for a new OS, reborn
          « Reply #8 on: October 22, 2012, 02:15:37 PM »
          Hi!

          Thank you DaveLembke!

          The problem with your nice solution is that I am using standard 16-pin DIL sockets. I use 16 pins to be able to display one byte (or two nibbles, actually) at a time. The LED arrays you so kindly suggested are, however, 10 "bits" per module. They therefore do not fit as nicely as my hand-soldered "byte" arrays do. It was however nice to see that you could actually choose the color of the array. If you had special 30-pin sockets, that is. Thank you!

          Now I turn to BC.

          Here are my thoughts and questions after once again having read all your nice inputs:

          1) What are context switches?
          2) What are cycle-eaters?
          3) What is a superscalar architecture (Pentium)?
          4) Pentium: 32-bit wide address AND data bus?
          5) Pentium I = fastest single core available (100 MHz)?
          6) Vista, a good example?  ;D
          7) What is SuperFetch?
          8) Hardware pushing software forward. Is this really true?
          9) 32 bits times RGB = 96 bits, not 128?
          10) CMYK?
          11) SSDs and lower bus speeds destroy my point! (even though SSDs are small in capacity and expensive)
          12) With "automatically generated code" I think they are using predefined functions with lots of available parameters while only using a very few each time (please correct me if I'm wrong).
          13) There is no line b, should be before beep, yes?
          14) I am beginning to love C :-)
          15) I suck at computers :-)

          The ones that have followed my topic know that I mean no harm.

          I just want faster computers.

          And I can't see why we can't have them today already.

          I mean that the hardware is fully developed, but the software sucks.

          It might however be the other way around, but I will not sign off on that.

          I mean that the (single core) processors are fast enough.

          And we need to think about program size (how fun is it to watch a program load?) and file size (what is the real use of 24-bit picture resolution?).

          Furthermore, I offer a speed increase of (only) two times for 8-bit audio files (in your smartphone, where you can't hear the difference anyway through your poor standard iPhone earphones).

          Consider this:

          Your smartphone/iPad screen is perhaps 4" by 4". This means roughly 10,000 mm².

          If you appreciate fast downloading and really don't care how the pictures actually look, then you might have 10,000 pixels, i.e. one per mm². This is a rough picture, but it downloads fast...

          Since each pixel (in practice) has to be 3 bytes (one per color, RGB), you will now have a picture of only 30 kB, which will load very fast. Considering a "bad" connection of 1 Mbit/s, this will only take approximately a quarter of a second (and modern connections are way better than that).

          If you are a real speed freak, you may even tell your operator that you want the pictures in black & white only, tripling the speed again!
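
          (A quick sanity check of that arithmetic in C, with the screen size and link speed above taken as given:)

          Code:
          #include <stdio.h>

          int main(void)
          {
              const long pixels = 100L * 100L;       /* one pixel per mm^2 on a ~100 mm square screen */
              const long bytes  = pixels * 3L;       /* 3 bytes per pixel: R, G, B */
              const double secs = bytes * 8.0 / 1e6; /* over a 1 Mbit/s link */
              printf("%ld bytes, %.2f s at 1 Mbit/s\n", bytes, secs);
              return 0;                              /* prints: 30000 bytes, 0.24 s */
          }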

          Because let's face it, don't we want the higher resolution at home (only)?

          So what does it matter then that our portable machines don't have the highest resolution?

          I for one love speed. And maybe the ability to select the one picture I want to enlarge and put on the wall; often you can ask your friend to resend that one in higher resolution...

          I know this is kind of silly, but I do think I have some kind of point here.

          Another crazy example:

          My friend at work told me he had bought an actually functional phone for no more than 10 bucks! (poor Chinese people...)

          It was a Samsung and you could both text and make phone calls with it(!)

          And it had a stand-by time of over two weeks. Two weeks!!

          Consider then these new and "necessary" smartphones.

          How long is their stand-by time? 2 DAYS?!

          And how much more useful are they, actually?

          Aren't they just more fun?

          Because they are for sure not that much more useful (I don't care about "useless" apps).

          Finally here is an irrelevant example:

          The first tube amp I ever built (a Williamson, 2x6 W pure Class A) let me play so loud that the neighbours came knocking.

          This was before I even reached the fantastic sound level at which it began to distort (in that nice way only tube amps can).

          Consider also that most of us play at not much more than "noon" on the volume knob.

          This means that the output power is actually "maximum output power" DIVIDED by 10.

          So if you have an amp of, say, 2x50 W, you do not play at much more than 2x5 W while you are having your party (in your apartment).

          Yet, this is a high sound level, is it not?

          Conclusion:

          You do not need all this power or resolution!

          Take care you all!

          Best regards, Roger





          « Last Edit: October 22, 2012, 02:27:31 PM by rogerk8 »

          BC_Programmer


            Mastermind
          • Typing is no substitute for thinking.
          • Thanked: 1140
            • BC-Programming.com
          • Experience: Beginner
          • OS: Windows 11
          Re: Time for a new OS, reborn
          « Reply #9 on: October 22, 2012, 07:43:58 PM »
          Quote
          Here are my thoughts and questions after once again having read all your nice inputs:
          OK! I'll go through each one :)
          Quote
          1) What are context switches?
          As you know, operating systems have had some method of multitasking for decades now. Today, this is done with threads, which are managed by the OS kernel. A context switch is basically when the OS puts one thread on "hold" and starts executing another. With multiple cores, there are fewer context switches because more threads can run simultaneously. For example, let's pretend we have 16 threads. On a single-core machine, going through all 16 threads of execution takes 16 context switches. With two cores, however, 8 threads can run on one core and 8 on the other, meaning only half as many context switches. (A context switch only occurs when the context of execution on a given core changes, e.g. when the scheduler swaps the registers and stack between two threads on the same core.)
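
          (A minimal pthreads sketch of that division of labour; the worker function and names are just illustrative. Two OS threads let a dual-core machine run both halves of the work at once instead of time-slicing one core.)

          Code:
          #include <pthread.h>
          #include <stdio.h>

          #define WORKERS 2   /* one thread per core */

          static void *worker(void *arg)
          {
              long id = (long)arg;
              /* ...each thread would process 8 of the 16 work items here... */
              printf("worker %ld done\n", id);
              return NULL;
          }

          int main(void)
          {
              pthread_t t[WORKERS];
              for (long i = 0; i < WORKERS; i++)
                  pthread_create(&t[i], NULL, worker, (void *)i);
              for (int i = 0; i < WORKERS; i++)
                  pthread_join(t[i], NULL);
              return 0;   /* compile with: cc -pthread ... */
          }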

          Quote
          2) What are cycle-eaters?
          "Cycle-eater" is a term used by assembly programmers for instructions, or sequences of instructions, that take more time than one would normally expect. This is normally a result of other parts of the system contributing to a delayed execution time for the given instructions. The term comes from Michael Abrash's book, "Graphics Programming Black Book", which focuses on the use of assembly language for direct programming of the hardware in DOS. Directly from an early chapter:

          Quote
          I call these cycle-eaters because, like the monsters in a bad '50s horror movie, they lurk in those shadows, taking their share of your program's performance without regard to the forces of goodness or the U.S. Army. In this chapter, we're going to jump right in at the lowest level by examining the cycle-eaters that live beneath the programming interface; that is, beneath your application, DOS, and BIOS; in fact, beneath the instruction set itself.

          Why start at the lowest level? Simply because cycle-eaters affect the performance of all assembler code, and yet are almost unknown to most programmers. A full understanding of code optimization requires an understanding of cycle-eaters and their implications. That's no simple task, and in fact it is in precisely that area that most books and articles about assembly programming fall short.

          Nearly all literature on assembly programming discusses only the programming interface: the instruction set, the registers, the flags, and the BIOS and DOS calls. Those topics cover the functionality of assembly programs most thoroughly, but it's performance above all else that we're after. No one ever tells you about the raw stuff of performance, which lies beneath the programming interface, in the dimly-seen realm populated by instruction prefetching, dynamic RAM refresh, and wait states, where software meets hardware. This area is the domain of hardware engineers, and is almost never discussed as it relates to code performance. And yet it is only by understanding the mechanisms operating at this level that we can fully understand and properly improve the performance of our code.

          Which brings us to cycle-eaters.

          By example, some of the major cycle-eaters of the 8088 are its 8-bit external data bus, its prefetch queue, dynamic RAM refresh, and wait states. The thing about these cycle-eaters is that they aren't really documented; however, it's worth realizing that the people writing compilers are well-versed in assembly language, and therefore are almost always aware of them. In fact, in addition to changing the instructions being used, choosing a different target processor usually means different compiler output, to account for the various cycle-eaters present on different processors.

          Quote
          3) What is a superscalar architecture (Pentium)?
          A "superscalar" processor implements a form of instruction-level parallelism. It's basically a sort of "multiple core" processor, except each functional unit is not a separate core but an execution resource inside the single CPU, such as the ALU, the bit shifter, the multiplier, etc. Such processors can often execute more than one instruction during a single clock cycle by dispatching multiple instructions simultaneously to redundant functional units. One might wonder what this has to do with assembly language: well, by choosing instructions carefully, a good assembly programmer can interleave instructions to maximize the number run concurrently. (A compiler, on the other hand, will do that for you...)

          Quote
          4) Pentium: 32-bit wide address AND data bus?
          IIRC, the original Pentium has a 32-bit internal bus and address bus, and a 64-bit external data bus.

          Quote
          5) Pentium I = fastest single core available (100 MHz)?
          No. I went with the Pentium because it is the first with a superscalar architecture; it is still a single core, but it is practically two 486 processors on a single die (referred to as the U pipe and the V pipe), with the second one stripped down. Since the discussion more or less revolves around multiple cores, and the purpose of multiple cores is simultaneous execution, I was establishing that processors were using simultaneous execution long before we had multiple cores in consumer machines.
          Quote
          6) Vista, a good example?  ;D
          Not sure I follow. A good example of what? Most of the speed issues with Vista were a result of manufacturers loading the machines down with crapware, not because Vista required hardware that exceeded what was available at the time. What exceeded the hardware's capabilities was Vista plus a few hundred useless pieces of crapware, which is quite a different ostrich egg.

          Quote
          7) What is SuperFetch?
          Windows Vista and 7 cache commonly used files in areas of system memory that would otherwise sit idle. Basically, it's a disk cache. The effect is a noticeable speed improvement, which is more pronounced with more memory. (One of the big advantages being that people with gobs of memory no longer have it sitting idle when they aren't playing Skyrim.)

          Quote
          8) Hardware pushing software forward. Is this really true?
          Yes. That's why I said it. Intel and AMD release hardware with new capabilities. Software cannot be written to take advantage of those capabilities until the processor is released, or at least documented, so it's a bit difficult to follow the idea that software drives hardware forward to any sort of conclusion.
          Quote
          9) 32 bits times RGB = 96 bits, not 128?
          32 bits each for Red, Green, Blue, and Alpha makes 128 bits.
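
          (In C terms, a rough sketch of the two pixel layouts being compared; the field names are just illustrative.)

          Code:
          #include <stdint.h>

          /* Conventional 32-bit pixel: 8 bits per channel.   sizeof == 4  */
          struct pixel32  { uint8_t  r, g, b, a; };

          /* "Deep" pixel as discussed: 32 bits per channel.  sizeof == 16 */
          struct pixel128 { uint32_t r, g, b, a; };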
          Quote
          10) CMYK?
          Cyan, Magenta, Yellow, and blacK; used for printing. Printers use pigments rather than light, so they use cyan, magenta, and yellow as their primary colours.

          Quote
          12) With "automaticly generated code" I think they are using predefined functions with lots of available parameters while only using a very few each time (please correct me if I'm wrong).
          Ironically, this sort of thing is far more common in Assembly via Macros.

          Quote
          13) There is no line b, should be before beep, yes?
          I'm not sure what you are referring to, though I do remember providing a "beep" function of sorts in Assembly and C. Can't seem to find it but I recall posting it, so I think that is what you are referring to.

          The bug, for what it's worth, was that the Assembly version beeped one too many times.
          I was trying to dereference Null Pointers before it was cool.

          rogerk8

            Topic Starter


            Rookie

            • Experience: Beginner
            • OS: Windows XP
            Re: Time for a new OS, reborn
            « Reply #10 on: October 24, 2012, 01:37:18 PM »
            Hi!

            What is Alpha? Intensity, or what? I thought only RGB was needed.

            It seems that I know less about computers than I thought. And this kind of explains it all.

            I do however still think that:

            1) We do not need all the features a program provides nowadays (this will just make it load/start more slowly than necessary)
            2) We do not need the hysterical resolutions that are common nowadays (but the long-term use is hard to predict...)
            3) I am not certain anymore, but it seems like we could write more code-efficient programs (using single cores and assembly)
            4) Multiple cores are not the future in the long run (because there is a limit to how many cores you can actually use)
            5) Maybe the problem (read: slow computers) isn't the high-level language. Maybe the problem is badly designed compilers.

            I know I am being stubborn but this is what I think.

            For my next generation of CPU (using an FPGA instead of a CPLD), I think I will have a 32-bit wide address bus and a 16-bit wide data bus.

            But if I fail at this (or maybe at both) I would like to buy a similar CPU on the (second-hand) market.

            What kind of CPU should I look for?

            486?

            It doesn't matter if both the address bus and the data bus are 32 bits wide. But I kind of like the asymmetry, because this is how my first CPU will work (if I ever get it to work, that is).

            Finally, you have taught me that assembly does not work so well with multiple cores. So that is another reason why I stick to my belief.

            I think I have said all I wanted to say.

            Take care!

            Best regards, Roger
            PS
            Attaching the schematic of my CPU. And yes, the other one above was more of a block diagram than a picture of the actual architecture.

            Because I'm so bad at computers but at the same time very interested (especially in hardware), could you please recommend a book I should read? It perhaps need not be "for dummies", but approximately at that level. I am very interested in hardware protocols (like the formatting of a hard drive, for instance) and the way a (modern) computer actually works. All hardware considered. And driver routines (freely translated from Swedish)  :)

            If it isn't you, Mr G, then it's got to be you, Mr B!  :D



            [year+ old attachment deleted by admin]
            « Last Edit: October 24, 2012, 01:47:59 PM by rogerk8 »

            BC_Programmer


              Mastermind
            • Typing is no substitute for thinking.
            • Thanked: 1140
              • BC-Programming.com
            • Experience: Beginner
            • OS: Windows 11
            Re: Time for a new OS, reborn
            « Reply #11 on: October 24, 2012, 07:13:53 PM »
            Quote
            What is Alpha? Intensity, or what? I thought only RGB was needed.
            Per-pixel blending. Essentially translucency. For photos it's not typically used, but it's used heavily in graphics work for blending. Obviously, once the image is "complete" it is merged into a single layer and, depending on the use case, it will probably be 24-bit or 32-bit. If it's for the web it usually gets optimized to an 8-bit palette. The big issue there is that an 8-bit palette means one-bit alpha (essentially a transparency mask).

            Quote
            1) We do not need all the features a program provides nowadays (this will just make it load/start more slowly than necessary)
            It does not increase load or start times, in and of itself.
            Quote
            2) We do not need the hysterical resolutions that are common nowadays (but the long-term use is hard to predict...)

            That depends. New laptops are coming out with resolutions as high as 2560x1440. In order to take advantage of this extra fidelity, raster images will have to be larger, or scalable graphics will need to be used. The latter can be more processor-intensive than the former, but cheaper in transfer size.

            Quote
            3) I am not certain anymore, but it seems like we could write more code-efficient programs (using single cores and assembly)
            I already explained this. I'm not sure if I should bother to explain it again, but I will: claiming there is a major divide between single-core and multi-core machines is fallacious. Again, the first Pentium was essentially two 486 processors. Why does having those two execution units in a single core suddenly change things? The processor needs special attention from assembly programmers (and compiler writers) for performance improvements, in particular to take advantage of the dual execution pipes. And if you are taking my statements about the Pentium's dual execution pipes to mean that single-core processors are "better" to use, then you are mistaken, since the Pentium is a single-core processor.


            Quote
            4) Multiple cores are not the future in the long run (because there is a limit to how many cores you can actually use)
            There is only such a limit with imperative programming. Functional programming, and languages like Erlang, discard the current ideas about concurrent programming (such as threads) and instead favour immutable data structures and a different concurrency concept, which can take the form of tasks, coroutines, or "processes". These languages automatically take advantage of as many cores as exist on the machine, and they are heavily used on large 64-core server machines as well as in distributed computing models.

            Quote
            5) Maybe the problem (read: slow computers) isn't the high-level language. Maybe the problem is badly designed compilers.
            Part of the problem is people with ideas similar to yours, who think they "know better" than the compiler writers and essentially write assembly in a high-level language. The result is that their code is even slower... so they blame the compiler, even though they took steps to subvert it.

            Another important point: let's go with the assumption that assembly is indeed more efficient overall than high-level languages. Therefore, we can declare that the current end products are not as efficient as possible, and so we allegedly should be taking efforts to make them more efficient.

            Even following this logic there are several fallacies. The best example would be an analogy. Let's go with cars, since they are so popular in computing analogies. The same could very well be said of car manufacturers: they use steel, plastic, and aluminium... but wouldn't cars be better and more efficient if they were made out of tungsten? They would, indisputably. The metal has a very high melting point, so it would essentially be heat resistant, and it's practically indestructible.

            But building cars out of tungsten is fundamentally a bad idea. First, tungsten is not cheap, and it takes longer and is more expensive to machine. Second, in a car accident the tungsten could become a cage that traps victims; the "jaws of life" tool that firefighters use to get into wrecked cars would be useless, since the tungsten would be pretty much indestructible. So there is an overall safety issue.

            Of course the answer is for people to "not crash their cars", but that is obviously not a realistic expectation. Accidents happen. Additionally, all these extra costs would just be passed on to the consumer.

            The same would be true of a modern application written in assembly language. The development costs would be exorbitantly higher than even using a language like C, for very little gain, and the consumer would bear the brunt of those costs. A triple-A game such as Skyrim costs around 60 dollars today and took maybe 5-6 years to develop. If the entire thing were written in assembly, it would be in development maybe three times as long, cost six times as much, and the speed improvement wouldn't matter, because in six years even the cheapest possible machine would run it on the highest settings. It's also worth remembering that early on, when games <had> to be written in assembly to be performant, they cost much the same as they do now. The advantage that high-level languages have provided is to make larger projects more manageable. Compare the second Ultima (which cost, I believe, over $100) to Skyrim. Skyrim is indisputably a more complex project overall, but that complexity is managed well with higher-level languages and modular design. Ultima II was written in assembly, but is plagued with problems: the programmer used a large variety of undocumented "hacks" of the PC platform that modern systems discard entirely and emulators cannot emulate. The end result is that the game cannot even be made playable on a modern machine. That is, while the game was finicky at the time, it is now essentially useless without hardware from that era. A program that doesn't run at all cannot be proclaimed efficient; and that isn't even counting the constant divide errors one gets because the game wasn't designed to run on today's faster computers. Basically, the entire argument falls apart. Assembly language was fine when the resulting program was only going to run on one specific type of machine whose specifications were well known (such as the PDP-11), but today the variety of hardware available and its differing capabilities mean that any such assumption basically invalidates future portability. It doesn't matter how fast or efficient a program is if you cannot run it at all.



            Quote
            If it isn't you, Mr G, then it's got to be you, Mr B!  :D

            I don't know who Mr G is, and I don't know who Mr B is. My last name starts with B, but then again, if you did even the most basic research (e.g. my blog -> main website -> name in the bottom footer) you could easily find my full name and conclude that I am neither of the people you have conjectured me to be.

            I was trying to dereference Null Pointers before it was cool.

            rogerk8

              Topic Starter


              Rookie

              • Experience: Beginner
              • OS: Windows XP
              Re: Time for a new OS, reborn
              « Reply #12 on: October 25, 2012, 04:17:28 PM »
              Hi!

              Could someone please explain alpha in layman's terms? I do not understand English so well. But as far as I understand, RGB is enough. And I can't understand why it isn't always enough.

              As usual it is always interesting to read your marvellous input, BC.

              And yes, I had already looked at your fantastic blog.

              Guess what? I did not understand a single thing  ;D This is simply because I suck at programming languages. So when I saw all your fantastic programming examples, I just came to the conclusion that this was nothing I could learn from (even if I wanted to...)

              One nice thing that struck me reading the first part of your blog, though, was that we are on the same quest. That is: faster computers!

              I did however think that you wanted to stay anonymous (and wouldn't have your actual name even on the blog), so I didn't even try to search for your name. Actually, when you recently described how to, I didn't find it then either. But it doesn't matter now. You are a skilled and nice guy, and I am at least a nice guy. One who doesn't know it all, that is  ;)

              Take care!

              Best regards, Roger Knopp
              PS
              Attaching a picture of my CPU progress. Just got 156 more wires to solder...

              [year+ old attachment deleted by admin]

              TechnoGeek

              • Guest
              Re: Time for a new OS, reborn
              « Reply #13 on: October 25, 2012, 04:29:34 PM »
              Quote
              Could someone please explain alpha in layman's terms? I do not understand English so well. But as far as I understand, RGB is enough. And I can't understand why it isn't always enough.

              Alpha is transparency. This is important in graphics use, especially on the internet, where the graphics artist can specify certain parts of the image as a mixture of red, green, and blue plus a transparency amount: 0 = fully transparent, 255 = fully opaque. If you have a square of red (#FF0000) plus alpha (0x80, about 50%), this results in a pinkish square on a white background, but on a blue background it will look more purple. Essentially, alpha is a technique used for colour blending. Another use is this: because images are always rectangular, a circular logo requires the circle itself, but the outside must be either a solid colour or transparent; transparency allows the image to be used seamlessly on any colour background.
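
              (A minimal C sketch of exactly that blend, using the usual "source over" formula out = (src*alpha + dst*(255-alpha))/255; running it reproduces the pinkish-on-white, purplish-on-blue result described above.)

              Code:
              #include <stdint.h>
              #include <stdio.h>

              /* Blend one 8-bit channel of src over dst, alpha in 0..255. */
              static uint8_t blend(uint8_t src, uint8_t dst, uint8_t alpha)
              {
                  return (uint8_t)((src * alpha + dst * (255 - alpha)) / 255);
              }

              int main(void)
              {
                  /* Red (#FF0000) at alpha 0x80 over white, then over blue. */
                  printf("over white: #%02X%02X%02X\n",
                         blend(0xFF, 0xFF, 0x80), blend(0x00, 0xFF, 0x80),
                         blend(0x00, 0xFF, 0x80));   /* pinkish  #FF7F7F */
                  printf("over blue:  #%02X%02X%02X\n",
                         blend(0xFF, 0x00, 0x80), blend(0x00, 0x00, 0x80),
                         blend(0x00, 0xFF, 0x80));   /* purplish #80007F */
                  return 0;
              }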

              rogerk8

                Topic Starter


                Rookie

                • Experience: Beginner
                • OS: Windows XP
                Re: Time for a new OS, reborn
                « Reply #14 on: October 25, 2012, 05:04:07 PM »
                Thank you, Technogeek!

                I still think I do not understand this. But I can appreciate the difficulty of making a colour appear correctly against different backgrounds. And this might not be as simple as ordinary RGB blending (yielding all the rainbow colours). But I have never heard of this before. It amazes me! And it fascinates me, because a CRT did not have a "fourth" gun. It was only RGB.

                In short, this means that one pixel does not need only three bytes (256 levels for each colour); it also needs a fourth byte (the alpha byte), yes? And is this always embedded in ordinary pictures, or? So you have to calculate with four bytes per pixel? Is this true?

                Best regards, Roger