*Official* NBR Desktop Overclocker's Lounge [laptop owners welcome, too]

Discussion in 'Desktop Hardware' started by Mr. Fox, Nov 5, 2017.

  1. Robbo99999

    Robbo99999 Notebook Prophet

    Reputations:
    4,100
    Messages:
    6,642
    Likes Received:
    5,773
    Trophy Points:
    681
    Ha, with the multi-core power of your 2700X you might be able to wait till 2030! ;-) No, I don't overthink buying scenarios...I enjoy thinking about buying scenarios! But I also like to buy intelligently at the right time, which is the plan anyway. I believe in riding the crest of the wave as early as possible rather than buying something at the end of a cycle - that way you enjoy better performance for longer. But if my PC weren't performing like I needed it to and I were having to make compromises, then for sure I would upgrade as soon as that arose; I hope that won't happen until the end of next year for my platform choice. At that point I'd weigh the AMD 4000 series CPUs against Intel and buy the platform with the greater gaming performance. In the interim I'm almost definitely buying a 3000 series NVidia GPU next year, because I have a G-Sync monitor and because I see the next gen as the first to be actually relevant when it comes to ray tracing performance/implementation. So, yeah, I'm thinking it through, I enjoy the process and I enjoy trying to be smart...it remains to be seen!
     
    Convel, Papusan, Raiderman and 2 others like this.
  2. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,597
    Messages:
    5,815
    Likes Received:
    8,207
    Trophy Points:
    681
    I agree to a degree. If it is for business or you have an immediate need, bite the bullet. Otherwise, it is important to plan around known quantities. PCIe changes are one; RAM changes are another.

    For example, PCIe 5.0 will be available around 2021-22. PCIe 6.0 is due to be published in 2021 and will likely hit servers 18-24 months later, though it's unknown when consumers may get it. DDR5, though, will double bandwidth and will be available 2020-21. That standard will likely last longer than the PCIe revision and will likely require a new platform.

    So, there is a convergence of new standards being implemented in 2021-22 time frame, making it a good time to buy. Also, if Intel can deliver on 7nm, then you will have another potential inversion of competition between Intel and AMD, meaning pricing could be very good at that time.

    As to graphics cards, their cadence is roughly a gen every two years, with some exceptions. With big Navi and Ampere (Nvidia's die shrink) both landing on a mature 7nm node, it will be a very competitive time as well. Since Nvidia's monster Turing die was designed for 10nm or smaller, the die on 7nm will still be large for that node but much smaller than Turing, meaning yields will be up and defects less likely to create unusable dies. That means the cards can be sold at lower prices (unless both Nvidia and AMD decide not to bring prices down and simply mark price to relative performance, which happens a LOT). Intel won't make a splash next year, so they don't factor in.

    So that is why next year is a good time to buy, although you may want to wait until both companies products are released.

    But all of that assumes you can wait. If you can't, or if a great deal lands in your lap, take it! Otherwise, patience is fine.

    I mean, Intel isn't innovating on desktop, and the soonest I would want to grab something would be Rocket Lake, which is a back-port of the Willow Cove core to 14nm+++++. AMD may also release Zen 3 earlier than mid-summer, like they did this year with Zen 2 (that is a bit speculative). Most people already have some version of a Coffee Lake refresh or Zen. As such, Comet Lake doesn't add anything, Cascade Lake-X is a rehash, and Zen 2 is roughly equal in performance to Intel. So unless you have a need, why buy now?
     
    Robbo99999 likes this.
  3. ssj92

    ssj92 Neutron Star

    Reputations:
    2,098
    Messages:
    4,001
    Likes Received:
    4,979
    Trophy Points:
    331
    Idk what I am doing wrong. Either I got a really crappy CPU for overclocking or I'm doing something wrong.

    I can't get this thing stable at 4.4GHz+. I even tried a +0.300 offset on CPU & cache in the BIOS, which brings it to 1.37v, and it still BSODs under load.

    Auto everything with multipliers at 43x brings it to 1.27v, and it seems to work at this frequency.

    My goal was at least 4.5GHz all-core and maybe 4.6-4.7GHz single-core 24/7.

     
    Robbo99999 and Mr. Fox like this.
  4. Rage Set

    Rage Set A Fusioner of Technologies

    Reputations:
    1,145
    Messages:
    1,351
    Likes Received:
    3,792
    Trophy Points:
    281
    It doesn't appear like you are doing anything wrong. What it likely boils down to is the CPU isn't the strongest overclocker and that is to be expected with the majority of Xeon chips. I wouldn't stress about it.
     
    ssj92, Papusan and Mr. Fox like this.
  5. Raiderman

    Raiderman Notebook Deity

    Reputations:
    720
    Messages:
    986
    Likes Received:
    2,409
    Trophy Points:
    156
    I would love to have a ~1300MHz overclock. If AMD would give us a little more headroom, I would be much happier.
     
    ssj92, Rage Set and Mr. Fox like this.
  6. jaybee83

    jaybee83 Biotech-Doc

    Reputations:
    4,095
    Messages:
    11,564
    Likes Received:
    9,125
    Trophy Points:
    931
    so you would be happier if they downclocked their CPUs at stock so you could push them further via manual OC? :D

    Sent from my Huawei Mate 20 X EVR-AL00 using Tapatalk
     
    Raiderman and ajc9988 like this.
  7. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    30,522
    Messages:
    35,779
    Likes Received:
    57,920
    Trophy Points:
    931
    Some people ask a similar question about the CPU, and I call BS on that as a lame excuse for it being unsuitable for overclocking enthusiasts. People who think in that vein do not really understand or appreciate overclocking; they're just making excuses for a product they like. If it cannot handle meaningful overclocking after elevating the voltage and running it colder, it's a product that sucks at overclocking, because it doesn't scale and respond properly under more favorable conditions.

    There is a "reference" clock speed and voltage specification that chip manufacturers use to establish a normal baseline for their chips to function well for average users with average motherboards and average thermal solutions. They have to do that to avoid stability and functionality variances in motherboard manufacturing (or GPU PCB design) and variances in operating conditions (thermals) that are outside of their control or influence. If they didn't, people running things stock would have a hit-or-miss experience depending on their hardware purchasing decisions and operating conditions. If they pushed all of their chips to the edge of their functionality, lots of people would have instability issues and end up labeling those chips as buggy and unstable, even though their bad experiences might be their own fault for not making good choices in cooling components, buying low-quality supporting hardware, or using the chip in an environment that is too warm.

    Having a CPU or GPU that does not respond well to increases in clocks and voltage beyond the stock reference values, no matter how well designed the motherboard (or GPU PCB) and thermal solution might be, doesn't mean it is "optimized" or "already overclocked" from the manufacturer. It only means it sucks at overclocking and is not well suited for it. The reasons for that can vary as well. The manufacturer may engineer limitations into the product, or may under-engineer it in such a way that it cannot properly handle stress beyond the conditions it was intended to run under. Referring to those products as "unlocked" is just a marketing scam to lure overclocking enthusiasts. They might as well be locked, because unlocked clock multipliers serve no useful or beneficial purpose if adjusting them has no practical effect.
     
    Last edited: Dec 9, 2019
  8. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,597
    Messages:
    5,815
    Likes Received:
    8,207
    Trophy Points:
    681
    You are evidently so stuck in thinking about how products used to be made that you don't understand how boost works.

    First, AMD does scale with voltage and cold. GN showed that just making it colder gives scaling - about 25MHz per 7°C. You see similar scaling with temperature on Nvidia. You also get scaling with voltage; otherwise the CPUs could not reach 5.3+GHz. Period. So you mischaracterize a product because it doesn't give you a higher frequency. That's ignorant.
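    As a rough sanity check on that cold-scaling figure, here is a minimal sketch assuming a simple linear model of ~25MHz per 7°C; the 50°C temperature drop in the example is a hypothetical illustration, not a measured value:

```python
# Linear sketch of the cold-scaling figure cited above (~25 MHz per 7 C).
# The 50 C drop below is a hypothetical example, not measured data.

def est_freq_gain_mhz(temp_drop_c: float,
                      mhz_per_step: float = 25.0,
                      step_c: float = 7.0) -> float:
    """Estimate extra sustainable frequency (MHz) from a die temperature drop (C)."""
    return temp_drop_c / step_c * mhz_per_step

# e.g. dropping the die ~50 C only buys a modest bump:
print(round(est_freq_gain_mhz(50)))  # -> 179 (MHz)
```

    Which is the point: cold does scale, but per-degree gains on modern nodes are small unless the delta is extreme (LN2 territory).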

    Intel left so much on the table because they produced so many chips and had much higher product variance in silicon, which is why Silicon Lottery's binning would show a 300-500MHz swing from worst to best silicon, while AMD's binning reduces that to a 200MHz swing. You may not like it, but those are facts.

    Moreover, since AMD's binning is so good, Intel has been forced to clock their chips closer to the max. See the 9900KS: you only get a 100-300MHz OC, depending on silicon and cooling.

    Further, even looking at performance after OCing vs AMD boost, depending on workload, AMD is slightly behind or slightly ahead.

    Next, the boost algorithm already adjusts for the factors you describe in different deployments, such as inadequate cooling, meaning there is no reason to leave that performance on the table anymore from a commercial perspective.

    What you don't like is that you don't get a high frequency number, due to process and architecture differences - even while AMD is literally taking more and more overclocking records. Why ignore that?

    Edit: cold scale analysis starts at 11:26
     
    Last edited: Dec 10, 2019
    Raiderman likes this.
  9. Mr. Fox

    Mr. Fox BGA Filth-Hating Elitist

    Reputations:
    30,522
    Messages:
    35,779
    Likes Received:
    57,920
    Trophy Points:
    931
    No, I do get all of that. I see it and understand it. I am a victim of the same stupid BS as an NVIDIA owner. Boost sucks. Manual overclocking is best. I get that the control freaks prefer boost, and that it takes less skill, so noobs get to feel special. Low frequency gains mean crappy overclocking as far as I am concerned. Having to use LN2 for a meager 25-30% frequency gain is stupid. I don't buy the notion that overclockers need to change. Hardware manufacturers need to do what their customers want if they want to keep them as customers. I don't like it when the tail wags the dog... I'd say chop off the tail. It's all about the dog, not the tail.
     
    Raiderman likes this.
  10. ajc9988

    ajc9988 Death by a thousand paper cuts

    Reputations:
    1,597
    Messages:
    5,815
    Likes Received:
    8,207
    Trophy Points:
    681
    Overclockers are a niche market, not a market driver. That means what we want would be the tail wagging the dog, not the other way around.

    Also, a 25-30% gain, when Intel under LN2 only gets 40% (literally: the 9900KS at 5GHz reaches 6.9-7.1GHz under LN2, which is only about 40%), shows they are not as different as you state.

    Now, I stand by my statement that boost is killing our hobby. It is. But I also won't stand for mischaracterizing a product, even as hyperbole to stress a point, if that hyperbole may mislead others.

    For AMD, clocked closer to 4GHz, a 200MHz OC is a 5% increase. For Intel, clocked at 5GHz, 250MHz - a 5.2-5.3GHz all-core OC - is the same 5%. Please stop acting like they are all that different. Intel trying to leave less on the table, to compete or keep some form of lead, has reduced them to giving roughly similar overclocking headroom. On HEDT they still leave a lot of room, because AVX2 and AVX512 workloads are more common there and force a downclock, and because long rendering or scientific tasks would eventually raise temps high enough to kill a chip; AMD's boost automatically adjusts the frequency instead, negating the need for an AVX offset. So we are seeing the industry head a certain way. We don't have to like it, but we also don't need to act as though we are self-important, or pretend one company is not reacting to its competitor by reducing that OC headroom.
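    The percentage comparison above can be checked in a few lines (the stock and OC clocks are the figures from this post, used purely as an arithmetic illustration, not benchmark data):

```python
def oc_headroom_pct(stock_mhz: float, oc_mhz: float) -> float:
    """Overclock gain expressed as a percentage of the stock clock."""
    return (oc_mhz - stock_mhz) / stock_mhz * 100.0

print(round(oc_headroom_pct(4000, 4200), 1))  # AMD: +200 MHz on ~4.0 GHz -> 5.0 (%)
print(round(oc_headroom_pct(5000, 5250), 1))  # Intel: +250 MHz on 5.0 GHz -> 5.0 (%)
print(round(oc_headroom_pct(5000, 7000)))     # 9900KS under LN2: 5.0 -> 7.0 GHz -> 40 (%)
```

    Same relative headroom on both, despite the very different absolute frequency numbers.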

    Now, I do wish they would expose the steps so we could better fine-tune the hardware manually on AMD, because I'm betting that spending hours getting the voltages dialed in would give more performance than their algorithm. But that is trying to work within the system we have (similar to manually adjusting the curve on Nvidia's cards since Pascal).

    Once again, don't have to like it (I don't because there goes my hobby), but it's something that is understood.
     
    Raiderman and jaybee83 like this.