Intel's upcoming 10nm and beyond

Discussion in 'Hardware Components and Aftermarket Upgrades' started by ajc9988, Apr 25, 2019.

  1. jaybee83

    jaybee83 Biotech-Doc

    Reputations:
    4,015
    Messages:
    11,508
    Likes Received:
    9,060
    Trophy Points:
    931
    bwahahahahahahahahaaaaa..... I'm right there with you! :D
     
    Rei Fukai likes this.
  2. AlexusR

    AlexusR Guest

    Reputations:
    0
    Yes, CPU encode will still give a better quality and you're right, the 16-core can be useful for that for people who don't want to go dual PC (one for playing and one for encoding). I just wonder how well the core sharing will work with games that will be trying to use as many cores as possible (though you can always set Processor Affinity manually for games and programs like OBS for most optimal performance).
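
    Affinity doesn't have to be re-set by hand in Task Manager every launch, either; here is a minimal sketch of scripting it with the psutil library (the core split and process names are just placeholder assumptions, not recommendations):

        # Pin OBS and the game to separate core groups; the core ranges and process
        # names below are placeholders -- adjust for the actual CPU and apps.
        import psutil

        GAME_CORES = list(range(0, 6))    # e.g. game on cores 0-5
        OBS_CORES  = list(range(6, 12))   # e.g. OBS/encoder on cores 6-11

        for proc in psutil.process_iter(['name']):
            name = (proc.info['name'] or '').lower()
            try:
                if name.startswith('obs'):
                    proc.cpu_affinity(OBS_CORES)
                elif name == 'game.exe':          # placeholder process name
                    proc.cpu_affinity(GAME_CORES)
            except (psutil.AccessDenied, psutil.NoSuchProcess):
                pass                              # skip processes we can't touch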
     
    ajc9988 likes this.
  3. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,526
    Messages:
    5,679
    Likes Received:
    8,018
    Trophy Points:
    681
    Have you ever tried process lasso?

    Sent from my SM-G900P using Tapatalk
     
  4. AlexusR

    AlexusR Guest

    Reputations:
    0
    No, I only tried streaming using dedicated hardware encoders. I can see how this program would be useful on a multi-core CPU, since it can apply persistent affinity to individual apps and games, so it should work great for 16-core CPU users.
     
    ajc9988 likes this.
  5. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    4,765
    Messages:
    12,218
    Likes Received:
    2,256
    Trophy Points:
    631
    For the stated use cases, two 4-core or 8-core platforms will be a lot more productive and future-proof than any single 16-core platform you can buy today, even if each setup comes with 64GB RAM for every 8 cores used.

    Having 16 cores today is the equivalent of racing stripes and flame decals on '90s cars...

    I've been hearing for almost three years now how AMD will change the PC landscape with their high-core-count offerings.

    Yeah, still waiting.

    There is a reason that Intel didn't lose 80% (net income) as AMD did, but that side of the business is conveniently ignored by the blind AMD allegiance here. That reason? Intel is still delivering more performance, period.

    The fact that Intel can do this with their oh so old process node(s) is even more noteworthy. But I'm sure I'll be told how 'out of touch' they are again.

    It may look like I'm bashing AMD and/or putting Intel on a pedestal. But people, come to your senses: neither you nor I can influence the numbers that matter one iota. When Intel is surpassed, I'll be the first to admit it.

    But continually coming up with imaginary uses for 16 cores when there's still not much use for them (especially in this topic/context), is getting a little tiring.
     
  6. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,526
    Messages:
    5,679
    Likes Received:
    8,018
    Trophy Points:
    681
    Says the person running programs optimized for single-threaded performance on an aging Adobe suite.

    We can revisit this in a month or two, then again in about five or six months if Comet Lake drops around then.

    Part of the issue is software designers not parallelizing their workloads. As we all mentioned, games are only starting to regularly scale beyond six cores. Adobe actually went backwards on scaling in some programs, whereas their competitors do scale but are less used (compare Premiere and Resolve for video editing).
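
    To make 'parallelizing a workload' concrete, here is a minimal sketch using nothing but the Python standard library; busy_work is a stand-in for any CPU-bound task, not any particular program:

        # Spread independent CPU-bound chunks across every core with a process pool.
        import os
        from multiprocessing import Pool

        def busy_work(n):
            return sum(i * i for i in range(n))   # deliberately CPU-bound

        if __name__ == '__main__':
            jobs = [2_000_000] * 16                        # 16 independent chunks
            with Pool(processes=os.cpu_count()) as pool:   # one worker per core
                results = pool.map(busy_work, jobs)
            print(f'{len(results)} chunks done on {os.cpu_count()} cores')

    Scaling then tracks core count up to whatever fraction of the job is actually independent, which is exactly the design work most consumer software hasn't done yet.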

    The use of 12 and 16 cores or above is understood for professionals. Consumers, until now, have not had such a luxury. Because of that, and how consumer software is designed, you do have a point on consumers looking for ways to use all that extra power, mainly because software companies haven't designed their products for the commercial space, instead focusing on lower core counts. That is a temporary issue, fixed through changes that are coming.

    Hell, with the optimizations in a game like Civ VI: Gathering Storm, the late-game AI processing on my 1950X now demolishes ANY 8-core chip out there. Once the programming is in place, your comment will not age well.

    Sent from my SM-G900P using Tapatalk
     
    bennyg likes this.
  7. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    4,765
    Messages:
    12,218
    Likes Received:
    2,256
    Trophy Points:
    631
    Please stop trying to pigeonhole me into a single workload. I have always stated my workloads are varied, but that they were 'most' like PS. Not only do you not know my actual workloads (I would be a fool (yeah; competitors) to divulge them fully and publicly), but most here have an instant bias: anything or anyone that seems against AMD or pro-Intel is ridiculed instead of engaged in conversation. Not only are most of my workloads not based on Adobe products currently (and for quite a while now), they are also made up of custom and proprietary code. ;)

    Let's try this again: I'm pro-productivity, period. I support the hardware that actually increases my productivity at the end of the day, not how much I'm liked in a little corner of a web forum, or how good the 'scores' look on mere synthetic tests. Yeah, I would love to revisit this in a month, half a year or even another three years from now, but I don't see the movement required to go beyond 8C/16T in any meaningful way in this time frame, just like I predicted in another thread so many years ago too.

    Professionals who actually need more than 8 Cores were always served well enough, and right now, they have great options of choice. For that, we can thank AMD. But if productivity is their goal and they have a normal, varied workload, just like I do and most of the people I know do too, then mere additional cores are not the answer, even today. This is known by most of the professionals in my circle, intimately.

    At most? One, two or even three high (and very high) core count platforms are commonly used much more efficiently in an organization vs. having every workstation be capable of the highest-demand process, yet come in second, third or last in its most-used processes/workloads. It just makes sense, because that is still the reality of how software works, currently and for the foreseeable future.

    Stating that software designers are slow in adopting and utilizing these additional (sometimes available) cores misses the point. I don't go to my suppliers and whine about how I wish things to be and then go about buying and configuring products based on those imaginary wishes. I tell them to provide me with their ultimate platform example and then I test it in my environment. Either it flies or it dives. Next.

    While it is nice for consumers to have access to multicore platforms that resemble the best of a few years ago at cheap prices, that has always been the case. Are they now getting mostly on par with those older platforms and in some ways surpassing them? Great! But that doesn't make a multicore platform the 'go-to platform' either, for most.

    My comments will age well because I don't say these changes will never come. I'm saying they are not here yet. And until they do, these computers are frequently toys for most because there are other, more suitable and finely tuned options out there and for much less too.

    Will a $1K processor 'demolish' a $500 8-core chip in the single example you've given? Yeah, and yawn... (and I'm simply taking your word for it) of course it will. When I say the same thing about Intel's offerings for my workloads, why am I wrong then?

    I hope that the very slow momentum for (very high) multicore support, and especially for parallelizing software workloads, over the last three years or more will accelerate at a much more exponential pace going forward. As long as it is not at the expense of the performance we can get now, or even with three-year-old hardware, in less parallelized workloads.

    When a current desktop runs my workloads slower than what I had as a 'mobile' workstation years ago, that is not something I will pay for.

    And, I have always agreed that, once the programming is in place, we'll all be singing from a different songbook.

    But then, so will Intel. ;)


     
  8. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,526
    Messages:
    5,679
    Likes Received:
    8,018
    Trophy Points:
    681
    You always hide behind this BS ambiguous workload that you never specify, because doing so would allow people to dig into it.

    Now, with that running critique out of the way, let's get to where you are absolutely correct: getting the machine right for the job. That is why a while back I recommended a cheap AMD capture machine plus an Intel 6-core for streaming rather than getting an 8-core. At the time, there were basically no games getting much scaling from the two extra cores. This is just an example, as you've seen me make vastly different recommendations for server and other workloads.

    Finally, we are starting to see increased parallelization in programs, allowing them to scale further. That is only going to grow as higher core counts become more common in mainstream and HEDT chips as well as servers. Chiplets are coming to everything. And because, at least until graphene or something similar is viable, we are hitting frequency limits on current technologies, and the costs of miniaturization keep increasing, we really only have one choice: moar cores! That is just where we are at with computer tech.
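
    As a rough sanity check on what 'moar cores' can deliver, Amdahl's law gives the ceiling; assuming, purely for illustration, that a fraction p = 0.9 of a job parallelizes across N cores:

        S(N) = \frac{1}{(1 - p) + p/N}, \qquad
        S(16) = \frac{1}{0.1 + 0.9/16} \approx 6.4, \qquad
        \lim_{N \to \infty} S(N) = \frac{1}{1 - p} = 10

    So a 16-core part buys roughly a 6.4x speedup on a 90%-parallel job and never more than 10x regardless of core count, which is why the software side raising p matters as much as the hardware side adding cores.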

    Now, for business purchases or for enthusiasts, upgrades happen quicker: once it is necessary or makes fiduciary sense for a business on TCO and ROI, or once there is a real jump in performance for an enthusiast. But the overall trend is that ordinary people buy systems and then hold onto them for longer periods of time. In part this is due to the global economy contracting; in part it is the increased cost of systems while wages are stagnant, making it harder to allocate funds; in part it is the proliferation and higher cost of phones and mobile devices (tablets, laptops) forcing people to plan which device gets upgraded when, etc.

    Now, that last point is where we disagree and agree. When it makes sense for us to upgrade, we do it. I rely more on CPU multithreading, which is why I still have a 980 Ti paired with a 1950X in my workstation (the extra 60% to grab the Intel 16-core wasn't within budget at the time; not saying it is a bad product). Your workloads seem lighter on multithreading by comparison, but benefit from high-frequency single-threaded work. Nothing wrong with that. And for systems in various work environments, you might want to have a 9900K sitting beside a 2990WX machine loaded with 8 graphics cards.

    But for average consumers, keeping a system 5+ years is happening more often. Given that, and the info above, a higher core count will age better.

    Now, we both have our caveats on stated goals. And depending on where you fall on purchasing, different goals make sense. But it is not always better to buy the best for today while ignoring where things are going, especially for non-business consumers.

    Sent from my SM-G900P using Tapatalk
     
    rlk likes this.
  9. rlk

    rlk Notebook Evangelist

    Reputations:
    97
    Messages:
    483
    Likes Received:
    236
    Trophy Points:
    56
    There's no question that parallel programming, particularly for things that aren't embarrassingly parallel, is more difficult than sequential programming. All manner of programming models (a lot of which I've been involved with, starting with Thinking Machines) -- from pure data parallel, to data flow, message passing, explicit threading, what have you -- have been tried, but they tend to be more difficult than trusty old single-threaded for a lot of problems, and no one parallel programming model works well for everything. But it's also true that clock rates and CPI aren't improving as fast as in yesteryear, while core (and thread) counts continue to increase, and we can thank physics for that. Faster clock rates demand faster switching, which means greater power consumption (superlinear) to drive faster transitions. And then we hit the simple speed-of-light limit, and the fact that matter is composed of atoms, which are not negligible in size compared with circuit features. Those limits are much less of a problem in parallel systems, particularly when synchronization can be minimized.
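
    The usual back-of-the-envelope relation behind that 'superlinear' point, with \alpha as activity factor, C as switched capacitance, V as supply voltage and f as clock frequency (and assuming V has to rise roughly in step with f to reach the higher clocks):

        P_{\text{dyn}} \approx \alpha C V^{2} f \quad\Longrightarrow\quad P_{\text{dyn}} \propto f^{3} \ \text{(when } V \propto f\text{)}, \qquad 1.2^{3} \approx 1.73

    In other words, a 20% clock bump can cost on the order of 70% more dynamic power, while adding a core scales power roughly linearly, which is the physics pushing designs toward width rather than frequency.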

    For that matter, even single threads are not purely sequential at the instruction level, and haven't been for ages. Pipelined instructions are themselves parallel (control parallel, not data parallel), and it's important to issue instructions in a way that doesn't break those pipelines. If you don't use assembly, you're relying on the compiler to do that work for you, but it's still there.

    Bottom line, getting better performance will intrinsically require greater use of parallelism, be it at the data or the task level.
     
    Kyle, hmscott and ajc9988 like this.
  10. rlk

    rlk Notebook Evangelist

    Reputations:
    97
    Messages:
    483
    Likes Received:
    236
    Trophy Points:
    56
    I don't particularly care what your exact workload is. I do find it interesting that other professionals in your circle (presumably your competitors) understand your workload but you can't tell anybody else. Whatever.

    It's not "still the reality" that software, currently and for the foreseeable future, can't take advantage of lots of cores. That may be the case for your field, whatever it may be. Or at least for the custom and proprietary code that may not have been updated. But in my field -- software development (working on Kubernetes/OpenShift by day, various other FOSS packages off hours) -- having lots of cores means that builds run that much more quickly, and testing (when written in a way that's not inherently sequential) also completes more quickly.
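
    The same idea in miniature, purely as an illustration and not any project's actual build: fanning independent test/lint commands out across the machine with the Python standard library (the go commands are placeholder targets):

        # Run independent build/test steps concurrently; each step is its own OS
        # process, so the threads here only launch the commands and wait on them.
        import os
        import subprocess
        from concurrent.futures import ThreadPoolExecutor

        COMMANDS = [                       # placeholder targets, not a real build
            ['go', 'test', './pkg/...'],
            ['go', 'test', './cmd/...'],
            ['go', 'vet', './...'],
        ]

        def run(cmd):
            return subprocess.run(cmd, capture_output=True).returncode

        if __name__ == '__main__':
            workers = min(len(COMMANDS), os.cpu_count() or 1)
            with ThreadPoolExecutor(max_workers=workers) as pool:
                codes = list(pool.map(run, COMMANDS))
            print('all green' if all(c == 0 for c in codes) else f'failures: {codes}')

    With enough cores, wall-clock time collapses toward the slowest single step, which is the whole appeal of high core counts for build and test work.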
     
    hmscott and ajc9988 like this.