A Noob’s Shopping Guide: Video Cards


69 Responses

  1. pspfanMOHH says:

    Jmqm, you into spamming too? :surprised:

  2. Tech1 says:

    Great info thx!

  3. John says:

    I bought a Radeon 7950 (& FX-8320) the other day.

    I like AMD over Nvidia because AMD graphics cards offer better performance for the money (the 660 Ti costs about the same as the 7950, but it usually offers sub-par performance in games).

    Of course, Nvidia is always the way to go for video editing and especially 3D modelling.

  4. garrei says:

    gtx 700 series will be released soon.. like… VERY soon

  5. Mike Litoris says:

    “Sorry AMD, but right now Nvidia has the upper hand!”

    I stopped reading there. Right now AMD is not just the best quality/price ratio; it has the best video card ATM (the Ares II, which, curiously, is even older than the Titan and is more powerful).

    • SilverInfinity says:

      That right there is misleading. The Ares II is certainly more powerful, but it is a dual-GPU solution (packaged into one card), akin to the GTX 690. The Titan makes no claim to being the fastest card ever, just the most powerful single-GPU solution on the market at this time, almost a whole generation ahead of most single-GPU setups.

      That being said, the price is astronomical, pretty much impractical for most users. Entry-level compute devs may be interested in it over the much pricier Quadro. The filthy rich may SLI or tri-SLI it for the biggest E-peen to show off to friends. :P

      • TStrauss says:

        I didn’t mean to start an AMD/ Nvidia fanboy war!

        If you take a look at the benchmarks, AMD beat Nvidia in the 6000/ GTX 500 hardware generation:
        http://www.tomshardware.com/charts/2011-gaming-graphics-charts/3DMark11-Gamer,2659.html

        Nvidia has consistently outperformed AMD in the current 7000/ GTX 600 hardware generation:
        http://www.videocardbenchmark.net/high_end_gpus.html

        Regarding the Ares II and the GTX 690, SilverInfinity hit it on the head: both are dual-GPU cards. The benefits of dual-GPU: you get raw horsepower. The drawbacks: these cards tend to be glitchy. Caveat emptor I suppose.

        Also, if you want to compare apples to apples, the Titan supports SLI. Compare the performance of a dual Titan system to the Ares II, and I suspect the Titan will come out on top.

        • SilverInfinity says:

          You’d have to be loaded to even consider SLI with Titan. Or if it’s essential to your business.

          I’m currently staying with the Green team only because Red has had poor driver support for OpenGL, which is used by so many of my legacy games (like Jedi Academy and KOTOR). Those games simply fail to run unless you’re on one specific driver revision. I’ve not had any issues with the Green team.

          That said, I do appreciate that both sides tend to trade places at the top from time to time – it keeps both companies in healthy competition. There really is no point in being a fanboy if the other side has a clearly superior product in any given generation.

          • TStrauss says:

            This. There is no point in being a fanboy. Just get whatever is at the top right now. I’m sure it will go back and forth every year or two, just like it always has.

            Also, Mike brought the Titan into this, not me. I just wanted to point out that a dual high end GPU vs. a single high end GPU isn’t a very fair competition. If I could afford to SLI Titans, I probably wouldn’t be living in an apartment!

          • Thrawn says:

            STAY AWAY FROM SLI AND DUAL GPU CARDS.
            This comes from five years of experience.
            My first SLI system was dual Asus 8800 GTX cards for about two years; they were not only power hungry but also ran very, very hot (used with Crysis in DX10 mode back then).
            My second SLI system was a dual Palit GTX 260 setup: not as power hungry, not as hot, used with Crysis Warhead, Crysis 2, and several Unreal Engine 3 titles.
            Now I’m back to single-GPU systems, as they are not as glitchy. Half of the time with SLI I had driver problems and endless BSODs; picking a working official driver was more of a gamble than playing Russian roulette. And worst of all: SLI does not bring the expected performance boost in every game. There are games that work well (usually Unreal titles), but games like Metro 2033, Hard Reset, or STALKER f*ck up pretty badly. Those engines are not good with SLI, so most games will not benefit from it. Also, when playing a game with SLI, expect micro stutters; they are COMMON and do not go away. Another downside: do not expect SLI to work with a B-rate CPU and motherboard. Most boards claim to be SLI compatible, but in the end they do not deliver the performance the GPUs need. If you still want to go for SLI, be warned: everything in that category is expensive and most of it is glitchy.

          • aces says:

            I run dual GTX 680s without problems.
            Just because it was glitchy in the 2xx days doesn’t mean it still is.

    • Zyrkl says:

      http://www.videocardbenchmark.net/gpu.php?gpu=GeForce+GTX+Titan

      I rest my case. Nvidia is better, but is money hungry.

      AMD isn’t very power efficient, but costs less.

      SLI (Nvidia) is better than CrossfireX (AMD), because CrossfireX has too many bugs.

      Both of these companies have their ups and downs. Deal with it, fanboys.

  6. adx2 says:

    Hey bro, I have a quad-core AMD Bulldozer unlocked processor that can be overclocked to 4.5 GHz (currently running at 3.4 GHz), 8 GB of DDR3 RAM, a GTX 550 GDDR5 graphics card, and a Gigabyte motherboard (3 TB unlocked), and now I’m thinking of upgrading my graphics card to AMD Radeon SLI.

    • PSY says:

      I have some unpleasant news for you: compared to an i7, Bulldozer sucks badly.

    • gunblade says:

      Would be mean if your board had four CPU sockets and eight PCI slots, like the dev kits. Cool. So the PS4 is around an i5 or i3…

    • Aririnkitaku says:

      “AMD Radeon SLI”

      Wat.

    • TStrauss says:

      For this discussion, your CPU only matters if it is Intel, because the PCIe controller is on the CPU die with Intel setups (it is on the motherboard with AMD). So that doesn’t really matter here. If you can identify your motherboard, I can tell you whether it supports PCIe 3.0.

      The AMD Radeon HD 7000 series are good cards, just not the best right now. But if you prefer AMD cards, by all means, go for it! I wouldn’t count on SLI on your AMD Radeon though. Are you sure you don’t mean CrossFire?

  7. tinostar91 says:

    In reality, the performance loss when you put a PCIe 3.0 card into a lower PCIe slot isn’t anything dramatic. If you put a card like the NVIDIA GeForce GTX 680 into PCIe 1.0, the performance loss will be only 4-5%, so it’s almost unmeasurable. If you put a PCIe 3.0 x16 card into a PCIe 1.0 x4 slot, the performance loss will be much higher, but it still isn’t as high as you may think it will be (~25% performance loss).
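
    For a sense of scale, here is a back-of-the-envelope sketch (Python; the per-lane figures are the published theoretical rates, and real transfers run lower) of the raw link bandwidth involved:

        # Approximate theoretical bandwidth per PCIe lane, in GB/s,
        # by generation (after encoding overhead).
        PCIE_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985}

        def pcie_bandwidth(gen, lanes):
            """Approximate total link bandwidth in GB/s."""
            return PCIE_LANE_GBPS[gen] * lanes

        print(pcie_bandwidth(3, 16))  # ~15.8 GB/s: PCIe 3.0 x16
        print(pcie_bandwidth(1, 16))  # 4.0 GB/s: same card in a PCIe 1.0 slot
        print(pcie_bandwidth(1, 4))   # 1.0 GB/s: the x4 worst case above

    The link gets over ten times slower, yet frame rates only drop 4-25%: games simply don’t saturate the bus most of the time.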

    • SilverInfinity says:

      That is probably true, but sometimes I get the feeling that my PCIE bus is holding my performance back. I have a GTX460 1GB (OC’d) that constantly has to swap data between RAM and VRAM in a heavily modded Skyrim, and I see the system swap thrashing a lot (HDD activity) with insane stutters at times. All my RAM hasn’t been consumed yet (1+GB left), so RAM is not the problem. I’m getting the feeling that my bus is choked and can’t swap fast enough. Any games that fit right into the VRAM (or just a little over) run like butter.

      Compare that to the PC I built for my brother, running PCIe 3.0 with a GTX 650 Ti, also with 1 GB of VRAM. No thrashing, but some minor stuttering when loading a large area for the first time.

      tl;dr It’s probably important if your game doesn’t fit in your VRAM.

      • gunblade says:

        Yeah, with an old-school board you could at least upgrade more; just pop out the chip. Nowadays they have you soldering chips in to add performance where the company just didn’t add them, saving that for the next-gen model or a more expensive one.

  8. svenn says:

    “more RAM is always better”
    For most mid-range cards, more RAM doesn’t affect anything, as games that could use more RAM need a much stronger GPU to make use of it.

    I saw a couple arguing over whether to buy an 8 GB RAM machine or a 16 GB machine (system RAM, obviously, not GPU). They ended up with 16 GB. The intended use is surfing the web and office work, so a 4 GB RAM machine would probably have cut it for many years to come.

    While I understand your point, do not be fooled by vendors selling old crap with more gigabytes of RAM … even vRAM.

    Nice article, really enjoyed reading it; keep it coming!

    • gunblade says:

      Yeah, like 8 GB of DDR for real cheap… or 16 GB of DDR3. I was thinking about getting an older motherboard that had 16 GB but in DDR2; it had two CPU sockets but only two PCI slots and no mini PCI. Then I saw a three-hundred-dollar board with two sockets and 16 GB of DDR3. I would buy both, but knowing DDR3 would be a bit better.

    • TStrauss says:

      1 GB of vRAM is probably adequate today, but as next-gen consoles start raising the bar for cross-platform gaming, I could easily see 2 or even 4 GB of vRAM becoming the new baseline. Thinking ahead, I really wouldn’t recommend going less than 2 GB of vRAM.

  9. FishSticks says:

    I have a laptop with a GTX 670MX, does the job!

  10. jon says:

    You made this article into a nit-picky fanboy discussion.

    AMD is less power efficient; the temperature in your room will increase by 15 degrees while playing a game.
    An AMD 7970 has a 275-watt TDP. AMD is pretty much raw power, but at the cost of inefficiency.

    Nvidia is power efficient: high-end cards usually draw 70 watts less than their AMD counterparts. You see thousands of Nvidia cards inside supercomputers because they will not raise electricity costs. They are also more precise with their parts: you usually see Nvidia with nearly half as many CUDA cores as AMD has stream processors, yet both cards perform equally.

    Battlefield 3 will only use 6 cores and 3 GB of DDR3 RAM, since DX11 is locked at these settings.

    Two 660 Tis in SLI are only 6% faster than a single 7970 GHz Edition.

    • TStrauss says:

      I did not realize there was so much fanboyism built around GPU manufacturers! My only purpose was to explain the details of video cards for those who don’t know what they’re looking at. There is no reason for GPU fanboyism; they all accomplish the same thing. There are no Nvidia or AMD exclusives! It’s all about making sure you get the best bang for your buck. Personally, I’ve gone back and forth between AMD and Nvidia cards (and 3dfx back when they existed), based on what struck me as the most bang for my buck at the time. If you like AMD, buy AMD.

      There are some problems in your technical analysis, though. First, I don’t see why you would compare a ~$300 card (the GTX 660 Ti) to a ~$450 card (the HD 7970 GHz). Apples and oranges. SLI doesn’t make this a fair comparison either; now you’re looking at a $600 setup vs. a $450 setup. I would propose comparing a comparably priced card like the GTX 680 to the HD 7970 GHz.

      Power consumption depends on use; the 7970 GHz idles better, but the GTX lineup is more power efficient under an average load (http://www.tomshardware.com/reviews/radeon-hd-7970-ghz-edition-review-benchmark,3232-18.html).

      Looking at benchmarks, the HD 7970 GHz outperforms the GTX 680 at some tasks but is outperformed at others (www.anandtech.com/bench/Product/618?vs=555). So your claim that “AMD is pretty much raw power” simply does not line up with the facts. A more accurate claim is that the current line of Nvidia cards is comparable with the current line of AMD cards in terms of processing power, but there is a power consumption tradeoff. If you turn your computer off when not using it (please do this!), then the tables are tilted in favor of Nvidia. At least, they are for this generation. These things change year over year.

    • Z says:

      A lot of the computers in the top 50 fastest supercomputers use AMD GPUs.
      The maximum number of cores BF3 uses is 4, not 6.
      True, current AMD cards run a bit hotter than their Nvidia counterparts, but they do not raise the room temperature by 15 degrees. To raise the room’s temperature anywhere close to 15, the GPU would probably have to be molten.
      You said AMD cards in general are hotter and less efficient than Nvidia’s cards. But since you didn’t specify which generation, I can call bullshit on that: the 400-500 series ran hotter and were less efficient than their AMD counterparts (the 5000-6000 series).

  11. jd20dog says:

    CrossFire can run on different cards as long as they both support CrossFire (the split line where the two cards’ images connect will move up or down on your screen depending on which card is better). AMD also has an infrastructure mode that is used by some emulators, like the PS2 one, something that still kills Nvidia cards’ RAM usage to reproduce. Or have I been away from these things too long and this has been dealt with?

    Also, try to match the card’s RAM type to that of your PC, or you’ll get scan lines on your screen when the refresh rate of the card outperforms what your PC can work with.

  12. Gikero says:

    I would have pointed out power consumption and heat output. It’s a shame when I see someone buy a card and not have enough 6- or 8-pin connectors, or have an inadequate power supply.

  13. Spyder2k5 says:

    Just not accurate. Not only are AMD drivers better optimized and developed, their cards are the fastest and best on the market, and have been for years.

    Nvidia WAS best when the 8800 was king, but they sat on that card too long and let their drivers rot as well. Ever since gaining PhysX they have been more focused on that than on producing a good card or proper drivers; each driver release loses performance and introduces bugs.

    Do proper research and try some cards yourself and you will see: AMD is on top.

  14. Ivo says:

    Crap, I don’t think they still supply AGP cards with HDMI drivers… I’m still waiting for that HDMI driver for Windows. Luckily Linux supports old drivers and writes its own for updating the old AGP card. Better to forget that and buy a tuned new PC, or I’d have to give up my northbridge; old PCs don’t upgrade, they go in the bin.

  15. Qwizarrds says:

    i got spells that’ll shrink you down to size kid!!

  16. Z says:

    1. CrossFire allows you to use different models of GPUs; SLI is the one that doesn’t.
    2. AMD and Nvidia are pretty much on the same level right now. AMD is even pulling a bit ahead since they started optimizing their drivers.
    2. The 400 series was terrible. Those cards ran hot, consumed more power, and performed slower than their AMD counterparts (the 5000 series).
    3. The 7000 series is not comparable to the 400 series. I’d understand comparing it to the 500 series, but considering the 400s ran slower than the 5000s, which are slower than the 7000s, no.
    4. Current-generation cards (and probably next-gen too) barely saturate the full bandwidth of PCI-E 2.0; even the Titan just saturates 2.1. For now, 3.0 is just a gimmick.
    5. Unless you have anything less than 512 MB, video RAM is not that important. It is mainly used for textures and driving resolutions, so unless you plan on gaming at 2K res or above with high-res textures, 1 GB is enough. (A GT 610 with 2 GB of RAM costs as much as a 620 with 1 GB; the 610 has more RAM, but you’ll need to replace it sooner than the 620. I also point this out to console fanboys who think the PS4 is gonna be way faster than any GPU just because it has 8 GB of unified system RAM.)
    6. Stream processors and CUDA cores are not the same. They are alike in the jobs they do, but they are not the same thing.
    7. In relation to point 6, more processors/cores does not equal better performance. If that were the case (and if CUDA and Stream were the same), then a $300 7950 should perform way better than a $450 680 (which, if I remember correctly, has fewer CUDA cores than the 7950 has stream processors).
    8. The processors/cores are not just shaders.
    9. The longest slot isn’t always x16. Sometimes it might be x8 or x4.
    10. HDMI can be converted into an analog signal.
    11. A DDR3 GPU is just fine, but it will run a bit slower than its DDR5 counterpart.

    • SilverInfinity says:

      You probably don’t want a DDR3 GPU if you’re gaming at any resolution close to 1080p. I’ve seen the lack of memory bandwidth choke my own system when I bought a 1080p screen for a machine running an ATI HD 4650. Any game that is fairly memory hungry (especially with AA) will require plenty of memory bandwidth. For example, in a bandwidth-hungry game like AvP2, AMD’s high-frequency GDDR5 has a clear performance advantage at higher resolutions over comparable Nvidia offerings.
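
      The gap is easy to ballpark. A minimal sketch (Python; the example specs are approximate figures for illustration, so check your card’s datasheet):

          def mem_bandwidth_gbs(effective_mtps, bus_bits):
              """Peak memory bandwidth: transfers per second times bytes per transfer."""
              return effective_mtps * 1e6 * (bus_bits / 8) / 1e9

          # High-end GDDR5 on a wide bus vs. DDR3 on a budget card:
          print(mem_bandwidth_gbs(5500, 384))  # e.g. an HD 7970: ~264 GB/s
          print(mem_bandwidth_gbs(1800, 128))  # a typical DDR3 card: ~28.8 GB/s

      That is nearly an order of magnitude, which is why DDR3 cards choke at 1080p once AA and big textures start demanding bandwidth.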

    • TStrauss says:

      I’ll bite.

      1. This is an error of fact. It has already been pointed out (you would know this if you read previous comments), but it is noted.
      2. See the benchmarks and notes posted earlier. Current generation, Nvidia has a lead over AMD in terms of power consumption, and is comparable in terms of performance to AMD over a variety of real world and benchmark applications. I still call that a win for Nvidia.
      3. Read more carefully. I never compared the GT 400 series to the HD 7000 series. Why would anyone do that?
      4. I’m guessing you don’t use USB 3.0 or SATA III because they’re “gimmicks” too. There is a known benefit to PCIe 3.0 for GPGPU purposes, but a noob shopper (the target audience) doesn’t need to be troubled with such details. Going there would just be showing off, and that isn’t the point.
      5. That is true–today. The next gen consoles have an awful lot of unified memory. The target platform for development is usually a console, since this offers the largest potential shopper base. I don’t expect games to require 6 GB of vRAM in the near future, but 2 GB does not sound unreasonable. It’s called forward thinking.
      6. When you explain things at a layman’s level, you often sacrifice linguistic precision for comprehension. I also have to consider the length of the article. I could have said “CUDA and Stream Processing both perform GPGPU functions, as well as serving as vertex, pixel, geometry, and tessellation shaders under a unified shader architecture/Shader Model 5.0’s unified shader model.” But would I be helping people, or just confusing them while I stoked my ego?
      7. Within a given architecture, more cores is an indicator of performance.
      8. Covered in #6. Just because you know something doesn’t mean it is helpful or worth saying. I could mention the different types of shaders, but then I’d have to explain them all. Unless of course I was just trying to show off.
      9. I never said this. I would reread the article more carefully. The slots in the picture I showed were PCIe x16 (unless the lanes are split between them, but even then a PCIe x16 card will fit and work in them). I would be surprised to see a current motherboard where the biggest slot was x8.
      10. No, that simply isn’t true. HDMI is a digital technology (DVI is both analog and digital, which sucks when buying cables). You would need a digital converter box to take a signal from HDMI and put it into analog VGA. I know they make converter cables for HDMI to VGA. They don’t work.
      11. Yes, you’re right. DDR3 is slower than DDR5. DDR5 first appeared on commercial cards in 2008 (an AMD card, actually). This article is about upgrading your video card. Why would you buy a card that uses memory technology that is over 5 years old? A DDR3 GPU is not “just fine” for upgrade purposes; it’s incredibly stupid.

      If you want to be a part of the discussion, I recommend reading more. You missed things commented on earlier, and you downright misread several things in the article. Also, this article is for people who don’t know video cards. You clearly do. I’m guessing you are less interested in being helpful than you are in flaunting your technical knowledge. That kind of seems sad, actually.

      • Kbrown says:

        Although I’m not a noob with computers and love reading about the more technical details of components, I really thought this article was a good start for learning what to watch for when upgrading your video card (I remember when I bought my first TNT2 card – woo, 32 bits of color!). I also fully agree with your reasoning and reply to an earlier poster.

        Probably the trickiest thing newcomers need to remember is that whatever parts you end up buying, make sure there’s enough power being supplied (check the wattage output of the PSU you are considering against the power required by your video cards) and that your motherboard is compatible with what you are hoping to plug into it. The motherboard is the single most important piece, the one everything else connects to. Maybe down the road you could look into a motherboards-for-beginners article.

        Again, kudos for taking the time to share this with others!
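
        To make that wattage check concrete, here is a minimal sketch (Python; the wattages are hypothetical placeholders, so look up the actual TDPs of your parts):

            COMPONENT_WATTS = {                  # hypothetical example build
                "cpu": 125,
                "gpu": 250,
                "board_ram_drives": 75,
                "fans_misc": 25,
            }

            def psu_ok(psu_watts, headroom=0.8):
                """True if total draw stays under 80% of the PSU's rated output."""
                return sum(COMPONENT_WATTS.values()) <= psu_watts * headroom

            print(psu_ok(500))  # False: 475 W of draw vs. 400 W usable
            print(psu_ok(650))  # True: 475 W of draw vs. 520 W usable

        The 80% headroom factor is a common rule of thumb, not a law; and as Gikero says above, count your 6/8-pin connectors too.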

      • Z says:

        I never meant for my post to be read by those who don’t know this stuff. I meant it to correct the facts that I originally thought you didn’t know or had wrong. I never meant for it to seem like I was flaunting my knowledge; as I said, I was merely trying to correct what I thought you didn’t know or had wrong. And I certainly didn’t mean to offend you in any way, so if I did, I apologize.

        1. I just wrote that part in case you didn’t read the earlier one.
        2. I thought we were talking about performance, so I didn’t include the fact that Nvidia is more power efficient. Depending on how we interpret it, we can come up with different outcomes. If you believe Nvidia wins, so be it.
        3. You did say, “I recommend either a Fermi (GeForce 400-500 cards) or Kepler (GeForce 600-Titan) based board since these will support DirectX 11/ Shader Model 5.0. If you choose to go AMD, the comparable cards are the HD 7000 series.” I just assumed that you were talking about both the Kepler and Fermi series.
        4. Yes, I do know about those two. I just didn’t mention them since I thought we were strictly talking about graphics cards. There is some benefit, but it’s less noticeable with GPUs than with other devices.
        5. I’m not saying 2 GB of vRAM is bad. I was trying to say that 1 GB is still decent, and you’ll most likely not need to upgrade it within a year’s time.
        6. As I said, I was just mentioning it in case you didn’t know.
        7. Yes, it is. But Nvidia’s and AMD’s architectures are very different, so it doesn’t apply as much.
        8. Like you said, I also covered it in point 6 of this post.
        9. You did kind of say it. What I was trying to get at was that the longest/x16 slot doesn’t always have 16 lanes. It is actually still quite a common practice among vendors, though much less so than before.
        10. That’s what I meant: the digital converter box.
        11. It’s not incredibly stupid, just a little unreasonable these days. The performance hit isn’t that big of a deal; as long as you’re upgrading from a weaker model, you’ll still see a performance boost. It’s still a good option for those on a really tight budget who are already set on a specific card whose DDR5 model might be too expensive.

        • TStrauss says:

          Let me apologize. My response was overly harsh; I’m just frustrated by the fanboyism that seems to have taken hold here (both AMD fans and Nvidia fans). I misread your post through those lenses and got something different than you seem to have intended. I hope you will forgive my outburst.

          • Z says:

            Of course, and I hope you accept mine if I offended you. I know where you are coming from. Some people here just reek of fanboyism, going as far as arguing with false facts. I’m surprised there are even AMD fanboys, since most are either Intel or Nvidia. Though I am a little biased toward AMD, I hate over-the-top fanboys, so I apologize on their behalf.

            BTW, there’s this forum called Tek Syndicate (teksyndicate.com) which, surprisingly for a tech-oriented website, has few to no fanboys. I was just wondering if you would like to register. The community there would be lucky to have someone like you.

          • TStrauss says:

            Sounds cool, I’ll check it out!

  17. cscash241 says:

    It should also be mentioned that if you are buying a GPU for hash cracking or Bitcoin mining, you want an AMD card; due to differences in architecture, AMD kicks Nvidia’s ass at hash cracking.
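
    The reason is that hash cracking is embarrassingly parallel: every candidate password is hashed independently, so a GPU can test thousands of them at once. A toy CPU version of the loop in Python (illustrative only; real crackers run this as a GPU kernel):

        import hashlib
        from itertools import product
        from string import ascii_lowercase

        def crack_md5(target_hex, length=4):
            """Brute-force every lowercase candidate of the given length."""
            for chars in product(ascii_lowercase, repeat=length):
                candidate = "".join(chars)
                if hashlib.md5(candidate.encode()).hexdigest() == target_hex:
                    return candidate
            return None

        print(crack_md5(hashlib.md5(b"abcd").hexdigest()))  # prints: abcd

    Since no iteration depends on any other, the work splits cleanly across the thousands of simple cores a GPU offers, and architectures differ in how fast they grind through it.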

  18. papaguan says:

    The only problem I had with this article is the part about vRAM. You have to take into account two things: whether you will use a lot of vRAM, and whether the card is even capable of utilizing a lot of vRAM. If you’re looking at the low-end segment of the market (anything under a GTX 650 or 7750), I guarantee you that the card will never be fast enough to use over 1 GB. That said, if you’re looking for a decent gaming GPU, the amount of vRAM matters depending on your resolution. If you’re going for a resolution under 1080p and don’t think you will be upgrading your monitor for a while, 1 GB will be fine for most games. And unless you’re going for a 4K setup or something with multiple monitors, jumping from 2 GB to 4 GB is extremely unnecessary.

    Just my $0.02.
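
    For a rough sense of how resolution drives vRAM use, here is a sketch (Python; render targets only, with the buffer count as a simplifying assumption, while textures, which dominate, scale in the same direction):

        def framebuffer_mb(width, height, bytes_per_pixel=4, buffers=3):
            """Rough MB for color + depth + back buffer at 4 bytes per pixel."""
            return width * height * bytes_per_pixel * buffers / (1024 ** 2)

        print(framebuffer_mb(1920, 1080))  # ~23.7 MB at 1080p
        print(framebuffer_mb(3840, 2160))  # ~94.9 MB at 4K: 4x the footprint

    The buffers themselves are small; it’s the high-resolution texture packs people enable alongside high resolutions that eat into the gigabytes.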

  19. LinDo says:

    I have an Asus Sabertooth Z77 + i7-3770K + Corsair Neutron GTX 240 GB SSD + 32 GB Dominator Platinum 1866 MHz, inside a Corsair Carbide 300R + H80i. The only thing I need now is an Asus GTX 680 DC TOP.

  20. adx2 says:

    Hey PSY, a Core i7 runs at a speed of maybe 4 GHz, but as I said, my AMD Bulldozer processor can be overclocked from 3.4 GHz to 4.3 GHz without affecting the PC environment. I have done it several times. I played Black Ops 2 with overclocking that was not needed, but I overclocked it anyway and played for two continuous hours without any heating problem. Intel sucks, AMD rules, and that’s why I’m gonna buy a PS4, cuz it has all AMD parts in it. I also own a PS3.

  21. kuagelo says:

    I personally use a Sapphire Radeon HD 7750 with *only* 1GB of GDDR5 memory, and I’m having no problems so far. :)

    Then again I only play NFS MW 2012 on a 1080p monitor.

  22. jon says:

    Even though AMD’s CrossFire allows different cards, there are huge performance drops, and sometimes the video output will be ruined altogether.

    CrossFire a 7970 with a 5750 and Catalyst Control Center will freeze and crash.

    It’s best to just use high-end cards or two of the same card.

    If you use two different cards, CrossFire will only buffer the lower-powered card.

  23. duffman says:

    I stay away from AMD; after all these years they still have trouble with drivers. I’ve never had such trouble with Nvidia.

  24. Hektik says:

    I built a PC a few months back. Here are the specs:
    Asus Sabertooth 990FX motherboard
    16 GB RAM at 1666 MHz
    6x 500 GB hard drives
    AMD FX-8350 eight-core processor
    Fractal Design Define R4 in white (with matching keyboard, mouse, and 24-inch BenQ flat-panel screen)
    HIS graphics card with 3 GB GDDR5 vRAM

    What drove me to this? Well, I tried to play Far Cry 3 on my old (4-year-old) PC and it was slow.

  25. u|ir says:

    Sorry, but I like the HD 7970 better than the GTX 680, cus I use it to crack hashes too :P

  26. vitralizer says:

    The 670 gives you 95% of the 680’s performance at only 75% of the cost. And the 760 Ti is coming out at $300, and that is basically a 670.

  27. Kittens says:

    The 7970 beats the 680 in a lot of benchmarks/games. It also doesn’t have memory bandwidth issues. Its two biggest drawbacks are that the drivers suck and CrossFire is terrible (because the drivers suck).
