AMD Carrizo Laptops spotted in EU

Discussion in 'Hardware Components and Aftermarket Upgrades' started by Deks, Jun 22, 2015.

Thread Status:
Not open for further replies.
  1. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    4,815
    Messages:
    12,253
    Likes Received:
    2,290
    Trophy Points:
    631

    See:
    http://www.anandtech.com/show/9185/intel-xeon-d-review-performance-per-watt-server-soc-champion


    Please bear with me - I know this is a Xeon server review, but it does showcase what Broadwell is capable of.

    More importantly, it also shows what AMD can only wish it could achieve. Carrizo is not the answer to Intel's Broadwell at this time, let alone to Skylake, which is coming soon.

    In other words, the IPC gain is more than the commonly stated 5% per-generation advantage - this is no 'yawn' upgrade.

    Performance is up, cost is down - just as expected of a new platform.

    The review compares the new Broadwell Xeon very favorably to Haswell i7 quad-cores in all aspects... performance, price and power efficiency (yeah... for this workflow).

    The above quote is the most direct proof I have of how a Broadwell platform 'feels' vs. even an optimized (by me) Haswell solution. Yeah, it will take some faith to believe this from a consumer's point of view, but this review came up today and I've been saying this very thing for a few months now.

    The above is why I always say new platform. Not new cpu. Not new gpu. Not new storage subsystem. PLATFORM. The whole is greater than the sum of the parts.

    HSA? DX12? All of that will be included if it proves a successful attack angle on all processors going forward. It is not an AMD exclusive except for 'right now'. And even then, if/when Intel decides it is needed to stay ahead (or keep up with their competition), we'll get it. But it is of no use when almost every other relevant performance factor is in favor of Intel. (LibreOffice? Why?)

    To AMD... keep up the good fight. Stay strong. Stay focused and deliver tangible real-world results, not empty promises that cater to gaming-oriented and other non-professional workflows. Games, like all other software in existence, will always grow to dominate and choke any platform, no matter how great it is at introduction.

    Sure, the same applies to Intel. But what Intel is concentrating on is what IS most important: performance, power efficiency and price. Paying more isn't a sin or a sign of incompetence. At least not when the higher-priced part is more efficient, offers more performance and also offers a longer usable lifecycle (allowing me to be more productive for longer, for the same relatively low initial cost).

    That is what AMD should concentrate on, imo, to give Intel a sense of urgency once more (like in 2005...).

    Is Carrizo a good step? Yeah, no arguing there. And I really hope the implied improvements show in real world tasks too.

    But again; at the $1K price point... it is another swing and a miss for AMD.

    For me to recommend such a system even for a 13-year-old 'gamer', it would have to be at half price or less. Why? Because another will be needed once again in less than two years (and in my experience, nobody buys AMD twice).
     
    Kent T likes this.
  2. triturbo

    triturbo Long live 16:10 and MXM-B

    Reputations:
    1,576
    Messages:
    3,793
    Likes Received:
    1,212
    Trophy Points:
    231
    I haven't read all of what you wrote, but I'll ask - have you ever wondered why the GPU takes up more and more space on newer Intel CPUs? It's been quite some time since it was there "just to get a picture on the display and be light on the battery". I think AMD has always pushed the envelope, it just lacked the resources to carry it on. More and more applications benefit from GPUs, just like more and more applications benefit from more cores. It wasn't that long ago that I kept coming across opinions like - why all the cores, no application can take advantage of them anyway. You know which hardware is the most expensive? The unused one. You have it there at your disposal, yet you can't take advantage of it.
     
    Starlight5 likes this.
  3. Starlight5

    Starlight5 W I N T E R B O R N

    Reputations:
    551
    Messages:
    2,916
    Likes Received:
    1,377
    Trophy Points:
    181
    I personally see no point in coupling an APU that is anemic CPU-wise, and whose only strength is better-than-competitors' GPU performance, with a mediocre dedicated GPU and a 15.6" chassis. I simply don't get why such a monstrosity even exists, yet laptops with top AMD APUs come only in this flavor, and any less-than-top AMD APU is so weak it's not even worth mentioning. Could someone please explain to me what is so fundamentally wrong with the idea of putting that friggin FX-8800p inside an 11.6" or 12.5" ultraportable and calling it a day?
     
  4. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    4,815
    Messages:
    12,253
    Likes Received:
    2,290
    Trophy Points:
    631
    A $1K, 12", 12-hour, 2 lb notebook with a 1920x1200 touchscreen and Wacom-like pen support, and I would buy one too with the FX-8800p inside, just to use as my 'digital notebook' - and I would be recommending them to anyone who would stop and listen. But the same old chassis and performance? I'll keep my U30Jc for those tasks even if the screen is weak, the notebook too heavy and the battery life half - it's still free to keep using, and I'm sure it's still more powerful than an AMD APU.



    Intel plays it smart. It introduces things when it needs to (based on economics and actual need/usability by its customers). Sure, we all ***** that what we can get today should have been available last year - but the complaints of a small fraction of a percent of Intel's users don't hold much water. When the environment is right (especially for them), Intel delivers.

    Yeah, the gpu is taking up too much space for my tastes - but Intel does not go willingly down a road without a large (expected) reward at the end. I'll bide my time and reap the rewards when the rewards are there to reap.

    Right now, theoretically (LibreOffice... sigh), AMD is winning. But while this molehill was claimed first by AMD, Intel will be the one to clean up big time... in due time.

    A great product is one that performs in harmony with the (whole) environment into which it is introduced. AMD has not found that sync yet. Intel has, consistently - even when they were behind AMD a decade ago (the designs they had been working on from even before then got them out of that slump and they've never looked back).
     
    Starlight5 likes this.
  5. nipsen

    nipsen Notebook Ditty

    Reputations:
    694
    Messages:
    1,685
    Likes Received:
    130
    Trophy Points:
    81
    ..well. It's kind of the other way around. A gpu is actually the monstrosity. But it's cheap and useful, while we're... still taught in school that a general-purpose computing engine is necessary to have. So the convention is kept. I.e., an x86-compatible cpu has expensive math units that execute atomic operations (or run a reduced assembly language), while your typical gpu is fitted with a set of arithmetic units that can perform a limited amount of SIMD operations in parallel across the graphics card's RAM.

    Now, an x86-compatible cpu already runs longer command words than just the basic atomic operations. An Intel processor could be called a CISC engine, for example (complex instruction set computing, where the system can collapse common assembly commands into more complex ones, cache the result, and allow this to be computed quicker the next time - most of the optimisation since the Pentium has been in that area). As opposed to a RISC engine, which would run longer command words with potentially completely different commands and return that complex result every clock cycle -- two main strategies for increasing computation power, with completely different limitations and requirements. The other improvement on cpus has been that they have also started to run limited SIMD operations with various instruction set standards.
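    To make the SIMD part a bit more concrete, here's a rough C sketch (purely my own illustration, assuming an x86 compiler with SSE support such as gcc; the intrinsics are the standard ones from <immintrin.h>). Four float additions are issued as one vector instruction instead of four scalar ones:

        #include <immintrin.h>  /* standard x86 SIMD intrinsics (SSE and later) */
        #include <stdio.h>

        int main(void) {
            float a[4] = {1, 2, 3, 4};
            float b[4] = {10, 20, 30, 40};
            float c[4];

            __m128 va = _mm_loadu_ps(a);     /* load 4 floats into one 128-bit register */
            __m128 vb = _mm_loadu_ps(b);
            __m128 vc = _mm_add_ps(va, vb);  /* a single instruction adds all 4 lanes */
            _mm_storeu_ps(c, vc);            /* store the 4 results back to memory */

            printf("%.0f %.0f %.0f %.0f\n", c[0], c[1], c[2], c[3]);  /* 11 22 33 44 */
            return 0;
        }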

    In other words, neither your average PC CPU nor your typical GPU is a programmable engine with a specialized instruction set allowing parallelism across working RAM with any sort of concurrency. They run a specific instruction set standard, to which more and more specific algorithm reductions are added in hardware.

    So, I hear you ask, what is actually the difference between a gpu's arithmetic unit and a cpu's arithmetic unit? It's basically that gpu instructions are very often easily parallelizable, in that you can render each pixel independently of the next one, etc. So the processor doesn't have to run as quickly, nor be as complex, and the ram doesn't need to be very fast. Running the same thing on the cpu is just a terrible waste: cost-efficiency goes down the drain, multipurpose cores are wasted, and they would have to be made in huge numbers to work -- and when you get down to it, designing for parallel operations with concurrent access to the bus is not trivial, and also extremely expensive.
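    As a toy example of that per-pixel independence (again just a sketch of mine, assuming a compiler with OpenMP, e.g. gcc -fopenmp): every pixel in the brightness pass below can be computed without looking at any other pixel, which is exactly what makes this kind of work cheap to spread across many simple cores.

        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>

        enum { W = 1920, H = 1080 };

        int main(void) {
            uint8_t *img = malloc((size_t)W * H);
            for (long i = 0; i < (long)W * H; ++i)
                img[i] = (uint8_t)(i % 256);          /* stand-in for a real frame */

            /* every pixel is independent of its neighbours, so the loop
               parallelizes trivially - the same property a gpu exploits */
            #pragma omp parallel for
            for (long i = 0; i < (long)W * H; ++i) {
                int v = img[i] * 3 / 2;               /* simple brightness boost */
                img[i] = (uint8_t)(v > 255 ? 255 : v);
            }

            printf("pixel 0 = %u\n", img[0]);
            free(img);
            return 0;
        }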

    But. What if you could get a RISC engine to be relatively cheap, while simply structuring the compiler and the hardware in it to run general code? It might not always be extremely fast, but code goes through the compiler anyway, so why not go for that? Basically, that's ARM. They did that.

    So now you have a general-purpose engine with programmable instruction sets, one that can essentially execute "gpu code" as well as "cpu code" on the same chip. It's just a matter of adding any number of arithmetic units and improving the bus speed, and you're going to have a cpu that can execute limited SIMD-type operations across enough memory to function as a gpu, while also being able to run general-purpose code (to a certain extent). This is basically the Tegra chip - except they added an IO bus, memory bus and so on on the same die.

    Of course, before that Intel claimed that there'd be a copyright infringement on their intellectual property if such a general purpose engine was made, as that would then have to be called a cpu in intel's definition. And that's that. So Tegra devices, by some curious and unexplainable coincidence, are now limited to tablets and phones only. Rather than, say, replacing Intel's entire motherboard and chip array sandwich in, say, the EeePCs with one single chip.

    So what's AMD's apu? It's an offshoot where the gpu and cpu computing units are on the same die - but where they remain separate types of computation cores with a separate pipeline, rather than simply being a mass of general-purpose cores in different clusters, etc., which was the initial design. Which, as explained, Intel put a stop to while essentially being about to sink AMD for good. But AMD still stuck with that offshoot design, and have worked their way to fairly good results with OpenCL thanks to the improved pipeline - basically a fast bus - between the gpu and cpu devices. Another concern is the cost of the chip if the number of general-purpose cores were increased. And I don't know if it's entirely certain that such a system would actually be easily compatible with low-level and high-level standards all at once.
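    For what running OpenCL on such a chip looks like from the programmer's side, here is a bare-bones vector-add host program in C (my own sketch, error checking stripped, assuming an OpenCL 1.x runtime like the one AMD shipped for these APUs). The kernel source is compiled at runtime and executed on the gpu cores, while the rest stays ordinary C on the cpu cores:

        #include <CL/cl.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* kernel built at runtime; each work-item handles one element */
        static const char *src =
            "__kernel void add(__global const float *a,"
            "                  __global const float *b,"
            "                  __global float *c) {"
            "    size_t i = get_global_id(0);"
            "    c[i] = a[i] + b[i];"
            "}";

        int main(void) {
            enum { N = 1 << 20 };
            float *a = malloc(N * sizeof *a), *b = malloc(N * sizeof *b), *c = malloc(N * sizeof *c);
            for (int i = 0; i < N; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

            cl_platform_id plat; cl_device_id dev;
            clGetPlatformIDs(1, &plat, NULL);
            clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

            cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
            cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

            cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
            clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
            cl_kernel k = clCreateKernel(prog, "add", NULL);

            /* copy the input arrays into device-visible buffers */
            cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, N * sizeof(float), a, NULL);
            cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, N * sizeof(float), b, NULL);
            cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, N * sizeof(float), NULL, NULL);

            clSetKernelArg(k, 0, sizeof da, &da);
            clSetKernelArg(k, 1, sizeof db, &db);
            clSetKernelArg(k, 2, sizeof dc, &dc);

            size_t global = N;                               /* one work-item per element */
            clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
            clEnqueueReadBuffer(q, dc, CL_TRUE, 0, N * sizeof(float), c, 0, NULL, NULL);

            printf("c[42] = %.1f\n", c[42]);                 /* expect 126.0 */
            return 0;
        }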

    Anyway - the downside to this apu design is that these gpu cores aren't very efficient when it comes to space, and creating a separate pipe increases the size again. So even though it's a pretty nifty engineering feat, it's not nearly as energy efficient as it might be. And while you get quite impressive performance, it's not realistic to expect the same speed or energy consumption on that overpopulated die as on a larger module. So it has certain limitations that AMD will never get through in the long run.

    What they do offer, however, is the possibility of having a general-purpose x86 cpu along with a decent gpu - more than decent when it comes to decoding video, running OpenCL, etc. - with a very low power draw at full load, compared to the competition (read: Intel).

    In an ideal world, we wouldn't be having this discussion now. Basically ARM would have taken over the smaller laptop market long ago, and we would all be sporting laptops with a week of running time for music, text-typing, movies and internet browsing. While AMD would be well on their way to designing a cpu with programmable arithmetic units along a common bus with concurrent parallel access to working ram.

    Also, I suppose IBM would have already done that 10 years ago, and we would have had their PowerPC designs still running in 64-bit land. But that didn't happen either, of course, because those products as well were just too useful I guess. Besides, why pay through the nose for hardware that would force Microsoft to scrap their entire toolchain, while giving a monstrous boost to any company offering a solution that wasn't bound into proprietary code-bases that literally are written by undergraduates pressed for time.

    So that's basically why we have an apu-design turning up. It's an attempt to optimize the gpu/cpu design down to the size where it no longer makes any sense to have it - while keeping the design there on the concept-level to please patent-lawyers at Intel, allowing Microsoft to still design bad software, and annoying the hell out of engineers everywhere, including, I'm sure, at Intel. So there you go: History in computing since ever, according to me.

    Of course - a computer that can overheat from running YouTube, but does so while scoring well on Cinebench, is obviously going to sell better than, say, a computer that doesn't overheat while running YouTube and is about 1/20th the size... but doesn't score incredibly high on Cinebench and on artificial sequential tasks that no computer used by humans, or running programs created for something other than computing "1" in binary over and over again, will ever actually execute. I mean, everyone can see that... right?

    But hey, I don't work at marketing over at Intel, or write my blog from whitepapers sent over from their PR office. So of course I'm not entirely sure about that last one.
     
    alexhawker and dzedi like this.
  6. Apollo13

    Apollo13 100% 16:10 Screens

    Reputations:
    1,432
    Messages:
    2,582
    Likes Received:
    210
    Trophy Points:
    81
    I don't get it, either. It seems like a 13.3" or 14" laptop with a top-end APU and no dedicated GPU would be a nice sweet spot - the lack of a GPU would keep the weight, power consumption, and price lower, while the top-end APU would deliver respectable graphics performance. It'd wind up being $100-$200 cheaper than an Intel Iris system, with similar graphics performance - and thus likely a compelling option for someone who wants both mobility and gaming at a reasonable price.

    Going down to 12.5" or 11.6", you'd need better cooling that would likely cost more for 35W TDP, but the 12.5" size at least ought to be doable. It'd still be cheaper than Intel Iris. If you're willing to add a little thickness, 11.6" should be doable, too - Alienware had an 11.6" with a dedicated GPU a couple years ago that seems to have sold well, and I'm sure you could make it slightly thinner than that with only an APU. Or put it in the 15W configuration on the 11.6" and have it ultrabook-thin.

    Not that there's anything wrong with a 15.6" or 17" system with this APU and a dGPU, but I agree it seems like it's missing the sweet spot.
     
    Starlight5 likes this.
  7. Deks

    Deks Notebook Prophet

    Reputations:
    1,115
    Messages:
    4,689
    Likes Received:
    1,851
    Trophy Points:
    231
    With respect, these are early models with Carrizo, so it's possible we might see more form factors over the next few months.

    On the other hand - and do correct me if I'm wrong - Carrizo laptops seem to have become available for purchase faster than laptops from most recent mobile APU releases, have they not?
     
  8. Atom Ant

    Atom Ant Hello, here I go again

    Reputations:
    1,340
    Messages:
    1,497
    Likes Received:
    267
    Trophy Points:
    101
    I just do not get why AMD is calling Carrizo the 6th generation???

    -Llano,
    -Trinity/Richland,
    -Kaveri,
    -Carrizo

    So it is just the 4th generation, or at best the 5th if we consider Richland a new generation.
     
  9. triturbo

    triturbo Long live 16:10 and MXM-B

    Reputations:
    1,576
    Messages:
    3,793
    Likes Received:
    1,212
    Trophy Points:
    231
    So, lack of innovation is playing it smart? Did we just hit rock bottom? If Intel is indeed the leader, the miracle maker and so forth, as you try to make it out to be in every single post of yours, why don't they take charge of innovation? It's not like they lack the resources or anything. With so much cash behind their back, it would've been a thing already - and yet you're sarcastically pointing at a single application.
     
  10. tilleroftheearth

    tilleroftheearth Wisdom listens quietly...

    Reputations:
    4,815
    Messages:
    12,253
    Likes Received:
    2,290
    Trophy Points:
    631
    Please re-read the original post, and in context too.

    In a nutshell; They don't play it smart for you or me, but for themselves and their lifeblood (the shareholders).

    And when all is said and done, they still deliver the best we can buy as consumers.
     
    Starlight5 likes this.