A Noob’s Shopping Guide: Video Cards

If you ever dabble in the world of PC gaming, one thing is certain: at some point, you will need to upgrade your video card.

It used to be that when you upgraded your Voodoo2, you simply bought a Voodoo3 (or maybe an Nvidia RIVA TNT2 if you wanted full 32-bit color).  Nowadays, though, reading the feature list of a video card is like deciphering some strange sort of code.  Never fear, dear reader!  I will walk through the features of a video card on Newegg (the EVGA GTX 650 Ti 2 GB, if you are curious) and try to explain them so even the most basic enthusiast can understand what they are looking at.

[Image: the Newegg listing for the EVGA GTX 650 Ti, showing its spec table]

Huh?  Which thing tells me how fast it is?  I want it fast.

The brand and model number don’t really matter.  I don’t know of any disreputable manufacturers of mid- to high-end video cards, so you’re fine buying from just about anyone.  The interface, on the other hand, does matter.  Unless your computer is an antique, you will be using a PCI Express (PCIe) interface.  This replaced AGP (which was slower), which in turn replaced PCI (which was slower still).  The x16 at the end refers to the number of “lanes” (and therefore how big the slot is and how much data can move through it at once).  This will almost always be x16, so just go with that and don’t worry about it.  The revision number (1.0, 2.0, and 3.0; 4.0 is slated to be finalized in 2014 or 2015) refers to how fast the video card can communicate with the rest of the system, and it matters in a big way.  Let’s say you get a GTX 650.  That card supports revision 3.0, which is the fastest.  But maybe you don’t have a motherboard or CPU that supports PCIe 3.0 (for Intel users, Sandy Bridge and earlier CPUs do not).  The result is that you might be paying for bandwidth that you cannot use!  PCIe is backward compatible, of course, so a newer-revision card will still work just fine in an older slot.  Just know that you are missing out on performance that you paid for.
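
If you like numbers, here is a quick back-of-the-envelope sketch of what the revision actually buys you.  It is plain Python, and the figures are the usual theoretical per-direction maximums rather than anything you would measure in a game:

    # Rough one-way PCIe bandwidth: transfer rate per lane, minus encoding overhead.
    GT_PER_S = {"1.0": 2.5, "2.0": 5.0, "3.0": 8.0}              # giga-transfers/s per lane
    ENCODING = {"1.0": 8 / 10, "2.0": 8 / 10, "3.0": 128 / 130}  # line-code efficiency

    def pcie_bandwidth_gb_s(revision, lanes=16):
        """Approximate one-way bandwidth in gigabytes per second."""
        return GT_PER_S[revision] * ENCODING[revision] * lanes / 8  # bits -> bytes

    for rev in ("1.0", "2.0", "3.0"):
        print(f"PCIe {rev} x16: ~{pcie_bandwidth_gb_s(rev):.1f} GB/s")
    # PCIe 1.0 x16: ~4.0 GB/s
    # PCIe 2.0 x16: ~8.0 GB/s
    # PCIe 3.0 x16: ~15.8 GB/s

Each revision roughly doubles the pipe, which is why pairing a 3.0 card with a 1.0 or 2.0 board leaves bandwidth on the table (whether a given game actually notices is another question).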

[Image: PCIe x16 slots on a motherboard]

Those big long slots are PCIe x16.  Don’t worry about the other ones; they don’t matter for our purposes here.  Just be sure to use the PCIe slot closest to your CPU!

On to the chipset!  The chipset manufacturer actually does matter.  There are two major players in the world of video cards: AMD and Nvidia.  These two companies have gone back and forth as far as features and performance go, but at the moment, Nvidia is in the lead.  I recommend either a Fermi-based (GeForce 400-500) or Kepler-based (GeForce 600 through Titan) card, since these support DirectX 11/Shader Model 5.0.  If you choose to go AMD, the comparable cards are the HD 7000 series.

[Image: AMD vs. Nvidia logos]

Sorry AMD, but right now Nvidia has the upper hand!

Don’t worry about clock speeds; these will not hold you up.  What will hold you up, however, is the number of CUDA cores/stream processing units (they’re the same idea; Nvidia and AMD just use different names).  These are the programmable shaders that I’ve waxed poetic about in previous updates, and the more you have, the better.

[Image: die shot of a high-end Kepler GPU, with its SMX units outlined]

In this die shot, the boxes marked SMX are where your cores are (this is a high-end Kepler chip, so they are CUDA cores).  There are 192 CUDA cores in each of the 15 SMX units, for a whopping total of 2880 CUDA cores.  Wow!
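
The caption’s total is just multiplication, assuming the full 15-SMX die shown here:

    smx_units = 15          # full high-end Kepler die (implied by the 2880 total)
    cores_per_smx = 192
    print(smx_units * cores_per_smx)   # 2880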

On to memory.  Don’t pay too much attention to memory clock speeds; the manufacturers manipulate these numbers to look as big as possible without telling you anything.  Unless you are into heavy overclocking, your factory default memory speed will be fine (if you are an overclocker, you probably don’t need this guide).  Regarding the quantity of RAM, more is always better.  Consider: the PS4 will have 8 GB of unified memory.  Assuming non-graphics related tasks use 2 GB of that memory, it still has a sizeable 6 GB of video memory.  I can’t overstate this: if you don’t want to have to upgrade within a year, go big on video RAM.  I recommend at least 2 GB.
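
To get a feel for where that video RAM goes, here is a loose sketch.  The buffer sizes are exact arithmetic; the texture figure is an illustration, not a measurement of any particular game:

    # Rough idea of where video RAM goes at 1080p.
    def buffer_mb(width, height, bytes_per_pixel=4):
        return width * height * bytes_per_pixel / 1024**2

    color = buffer_mb(1920, 1080)   # one 32-bit color buffer: ~8 MB
    depth = buffer_mb(1920, 1080)   # depth/stencil: roughly the same again
    print(f"1080p color + depth buffers: ~{color + depth:.0f} MB")

    # The buffers themselves are small; it is textures that add up.
    # One uncompressed 2048x2048 RGBA texture:
    print(f"One 2048x2048 texture: ~{buffer_mb(2048, 2048):.0f} MB")
    # A game streaming hundreds of textures like that is how 1 GB stops being enough.

The exact numbers matter less than the trend: higher resolutions and bigger texture packs chew through video RAM quickly.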

[Image: a chimpanzee wearing people clothes]

I could have a picture of RAM here, but that’s boring.  So here’s a chimpanzee wearing people clothes.

The memory interface and memory type are other thorny memory-related issues.  The memory interface number (128 bit on the GTX 650 Ti) refers to the width of the memory bus.  The higher this number, the faster information can move to and from your video RAM.  Newer does not always mean better here: on the GTX 650 Ti, Nvidia actually narrowed the memory interface compared to the GTX 550 Ti (which is 192 bit).  I’m not sure why this was done, but as far as memory access goes, the newer card is actually slower (of course, a GTX 680 is quicker than both, but it’s also ~$500!).  The memory type tells you how fast the memory chips themselves run; GDDR5 is the current industry standard, so do not buy a card with anything less.
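
If you want to see how the two interact, peak memory bandwidth is just the bus width (in bytes) multiplied by the effective memory clock.  A small sketch, using roughly the reference GDDR5 clocks for these cards (check the listing you are actually buying, since board partners tweak them):

    # Peak memory bandwidth = bus width (bytes) * effective memory clock (transfers/s).
    def mem_bandwidth_gb_s(bus_width_bits, effective_clock_mhz):
        return bus_width_bits / 8 * effective_clock_mhz * 1e6 / 1e9

    print(mem_bandwidth_gb_s(192, 4100))   # GTX 550 Ti (192 bit):  ~98 GB/s
    print(mem_bandwidth_gb_s(128, 5400))   # GTX 650 Ti (128 bit):  ~86 GB/s
    print(mem_bandwidth_gb_s(256, 6000))   # GTX 680   (256 bit):  ~192 GB/s

This is exactly why the newer 650 Ti can lag the older 550 Ti in memory-bound situations despite having faster GDDR5 chips.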

[Image: the GTX 680]

Oh GTX 680, why are you so desirable, yet so far away from me?

Under APIs, you want to make sure your card supports the latest 3D APIs (otherwise you are buying something that is already obsolete).  The current APIs are DirectX 11 (technically 11.1, but no hardware revisions are required for 11.1 support) and OpenGL 4.3.
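
If you are curious what your current card and driver actually expose, it is easy to check.  Here is a minimal sketch for Linux users; it assumes the glxinfo tool (from the mesa-utils package) is installed, and Windows users can find the same information in their driver control panel or a GPU information utility instead:

    # Ask the installed driver which OpenGL version it currently exposes.
    import subprocess

    out = subprocess.run(["glxinfo"], capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "OpenGL version string" in line:
            print(line.strip())   # e.g. "OpenGL version string: 4.3.0 ..."
            break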

Final notes: Under ports, make sure you have the right port for your monitor (a DVI output can be converted to an older VGA one with a simple adapter, but an HDMI output cannot).  If you’re planning on CrossFire/SLI (connecting two video cards together for faster performance), just make sure that CrossFire/SLI is supported.  For SLI, be sure that you get two identical cards; CrossFire is a bit more flexible about mixing cards from the same family, but matching cards is still the safest bet.  Finally, look at the power requirement (measured in watts) and the power connector (probably 6 pin).  You want to make sure your power supply (PSU) can run the video card you are about to buy!  Otherwise, the rest of the stuff is marketing bull: all current Nvidia cards should support PhysX and 3D Vision, and all current AMD cards should support Eyefinity (you might need to turn down in-game graphics for this if you get a lower-end card).  Almost all video cards come with fans; the fact that they list that on the website confuses me!
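
As a quick sanity check on the power question, just add up the big consumers and compare the total against the PSU rating.  The part names and wattages below are made-up examples for illustration; substitute the numbers from your own parts’ spec sheets:

    # Very rough PSU headroom check; real draw varies with load and PSU efficiency.
    system_draw_watts = {
        "CPU": 95,                 # TDP from the CPU spec sheet
        "video card": 110,         # board power from the card's spec sheet
        "motherboard + RAM": 50,
        "drives + fans": 30,
    }
    psu_rating_watts = 450

    total = sum(system_draw_watts.values())
    headroom = psu_rating_watts - total
    print(f"Estimated draw: {total} W, headroom: {headroom} W")
    if headroom < 0.2 * psu_rating_watts:
        print("Cutting it close; consider a bigger power supply.")

Leave yourself comfortable headroom; a PSU running near its limit runs hot, loud, and inefficient.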

[Image: an old, fanless video card]

This is a video card without a fan.  It is 9 years old.

Finally, before buying a video card, check out the benchmarks.  I’ve seen tech that looked great on paper but performed like crap in real life.  Why be a test guinea pig when someone else has done that for you?  Learn from the testing that others have done, and make a great purchase.  Happy shopping!

  1. pspfanMOHH’s avatar

    Jmqm you into spamming to :surprised:

    Reply

  2. Tech1’s avatar

    Great info thx!

    Reply

  3. John’s avatar

    I bought a Radeon 7950 (& FX-8320) the other day.

    I like AMD over Nvidia b/c AMD graphics cards offer better performance for cheaper (like the 660Ti is the same price as the 7950, but it usually offers sub-par performance in games).

    Of course, Nvidia is always the way to go for video editing and especially 3D modelling.

    Reply

  4. garrei’s avatar

    gtx 700 series will be released soon.. like… VERY soon

    Reply

  5. Mike Litoris’s avatar

    “Sorry AMD, but right now Nvidia has the upper hand!”

    I stopped reading there. Right now AMD is not just the best quality/price; it has the best video card ATM (the Ares II, which, curiously, is even older than the Titan and is more powerful)

    Reply

    1. SilverInfinity’s avatar

      That right there is misleading. The Ares II is certainly more powerful, but it is a Dual-GPU solution (packaged into one card), akin to the GTX690. The Titan makes no such claim that it is the fastest GPU ever, but rather the most powerful single GPU solution in the market at this time, almost a whole generation ahead of most single GPU setups.

      That being said, the price is astronomical, pretty much impractical for most users. Entry-level compute devs may be interested in it over the much pricier Quadro. The filthy rich may SLI or tri-SLI it for the biggest E-peen to show off to friends. :P

      Reply

      1. TStrauss’s avatar

        I didn’t mean to start an AMD/ Nvidia fanboy war!

        If you take a look at the benchmarks, AMD beat Nvidia in the 6000/ GTX 500 hardware generation:
        http://www.tomshardware.com/charts/2011-gaming-graphics-charts/3DMark11-Gamer,2659.html

        Nvidia has consistently outperformed AMD in the current 7000/ GTX 600 hardware generation:
        http://www.videocardbenchmark.net/high_end_gpus.html

        Regarding the Ares II and the GTX 690, SilverInfinity hit it on the head: both are dual-GPU cards. The benefits of dual-GPU: you get raw horsepower. The drawbacks: these cards tend to be glitchy. Caveat emptor I suppose.

        Also, if you want to compare apples to apples, the Titan supports SLI. Compare the performance of a dual Titan system to the Ares II, and I suspect the Titan will come out on top.

        Reply

        1. SilverInfinity’s avatar

          You’d have to be loaded to even consider SLI with Titan. Or if it’s essential to your business.

          I’m currently staying with the Green team only because Red has had poor driver support for OpenGL, which is used by so many of my legacy games (like Jedi Academy and KOTOR). Those games simply fail to run unless you’re on one specific driver revision. I’ve not had any issues with the Green team.

          That said, I do appreciate that both sides tend to trade places at the top from time to time – keeps both companies in healthy competition. There really is no point in being a fanboy if the other side has a clearly superior product in any given generation.

          Reply

          1. TStrauss’s avatar

            This. There is no point in being a fanboy. Just get whatever is at the top right now. I’m sure it will go back and forth every year or two, just like it always has.

            Also, Mike brought the Titan into this, not me. I just wanted to point out that a dual high end GPU vs. a single high end GPU isn’t a very fair competition. If I could afford to SLI Titans, I probably wouldn’t be living in an apartment!

          2. Thrawn’s avatar

            STAY AWAY FROM SLI AND DUAL GPU CARDS.
            This is a 5 year long experience.
            My first SLI system was dual asus 8800gtx for about two years; they are not only energy hungry but also get very, very hot (used with crysis in dx10 mode back then).
            Second SLI system was a dual palit 260gtx setup: not so energy hungry, not so hot, used with crysis warhead, crysis 2 and several unreal engine 3 titles.
            Now I’m back to single gpu systems, as they are not that glitchy. Half of the time with SLI I had driver problems and endless bsods; picking a working official driver was more of a gamble than playing russian roulette. And worst of all: SLI does not bring the expected performance boost in every game. There are games that work well (usually unreal titles), but games like metro 2033, hard reset, stalker… they f*ck up pretty bad. Those engines are not good with SLI, so most games will not benefit from it. Also, when playing a game with SLI, expect micro stutters; those are COMMON and do not go away. Another negative part of SLI: do not expect it to work with a B-rate CPU and mainboard. Most MBs claim to be SLI compatible, but in the end they do not deliver the performance needed by the gpus. If you still wanna go for SLI, be warned: everything in that category is expensive and most things are glitchy.

          3. aces’s avatar

            I run dual GTX 680s without problems.
            Just because it was glitchy in the 2xx days doesn’t mean it still is

    2. Zyrkl’s avatar

      http://www.videocardbenchmark.net/gpu.php?gpu=GeForce+GTX+Titan

      I rest my case. Nvidia is better, but is money hungry.

      AMD isn’t very power efficient, but costs less.

      SLI (Nvidia) is better than CrossfireX (AMD), because CrossfireX has too many bugs.

      Both these companies have their ups and downs. Deal with it, fanboys.

      Reply

  6. adx2’s avatar

    hey bro i have a quad core amd bulldozer unlocked processor which can be overclocked to 4.5ghz (currently running 3.4ghz), 8 gb of ddr3 ram, a gtx 550 ddr5 graphics card, and a gigabyte motherboard which is 3 tb unlocked, and now i’m thinking of upgrading my graphics card to amd radeon sli

    Reply

    1. PSY’s avatar

      I have some unpleasant news for you: compared to an i7, Bulldozer sucks badly

      Reply

      1. gunblade’s avatar

        heard third gen icore be boss iam on one pentium 4 with ht dont think its as good as third gen.

        Reply

    2. gunblade’s avatar

      would be mean if ur board had four cpu slots and eight pci slots dev kits coool. so the psfour around icore five or icore tree..

      Reply

    3. Aririnkitaku’s avatar

      “AMD Radeon SLI”

      Wat.

      Reply

    4. TStrauss’s avatar

      For this discussion, your CPU only matters if it is Intel, because the PCIe controller is on the CPU die with Intel setups (it is on the motherboard with AMD). So that doesn’t really matter. If you can identify your motherboard, I can tell you whether it supports PCIe 3.0 or not.

      The AMD Radeon HD 7000 series are good cards, just not the best right now. But if you prefer AMD cards, by all means, go for it! I wouldn’t count on SLI on your AMD Radeon though. Are you sure you don’t mean CrossFire?

      Reply

  7. tinostar91’s avatar

    In reality, the performance loss when you put a PCIe 3 card into a lower PCIe slot isn’t anything dramatic. If you put a card like the NVIDIA GeForce GTX 680 into a PCIe 1 slot, the performance loss will be only 4-5%, so it’s almost unmeasurable. If you use a PCIe 3 x16 card in a PCIe 1 x4 slot, the loss will be much higher, but it still isn’t as high as you may think it will be (~25% performance loss)

    Reply

    1. SilverInfinity’s avatar

      That is probably true, but sometimes I get the feeling that my PCIE bus is holding my performance back. I have a GTX460 1GB (OC’d) that constantly has to swap data between RAM and VRAM in a heavily modded Skyrim, and I see the system swap thrashing a lot (HDD activity) with insane stutters at times. All my RAM hasn’t been consumed yet (1+GB left), so RAM is not the problem. I’m getting the feeling that my bus is choked and can’t swap fast enough. Any games that fit right into the VRAM (or just a little over) run like butter.

      Compare the PC I built for my brother, running PCIE 3.0. Running a GTX650 TI also with 1GB VRAM. No thrashing, but some minor stuttering when loading a large area for the first time.

      tl;dr It’s probably important if your game doesn’t fit in your VRAM.

      Reply

      1. gunblade’s avatar

        yea old school board u could atlest make more ugrade to the board jus pop out the chip now day they got u soldering chips in trying to add perfomence were the company jus had not add the chips for the next gen model or a more expensive model.

        Reply

  8. svenn’s avatar

    “more RAM is always better”
    For most mid-range cards more RAM doesn’t affect anything, as games that could use more RAM need way stronger GPUs to use it;

    I’ve seen a couple argue over whether they would buy an 8GB RAM machine or a 16 GB machine (obv. not GPU); they ended up with 16GB. The intended use is surfing the web and office, so a 4GB RAM machine would probably have cut it for many years to come.

    While I understand your point, do not be fooled by vendors selling old crap with more GBs of RAM … even in vRAM;

    Nice article, really enjoyed reading it; keep it coming!

    Reply

    1. gunblade’s avatar

      yea like ddr 8gig for real cheap …ddr3 16 gig i was thinking about getting an older mouerbord thst had 16gig but in ddr two but it had two core slots but only two pci slots none mini pci but then seen a tree hundred dallller bourd with two cores but 16 gig ddr 3 i would by both but nowing the ddr tree be a bit better

      Reply

    2. TStrauss’s avatar

      1 GB of vRAM is probably adequate today, but as next-gen consoles start raising the bar for cross-platform gaming, I could easily see 2 or even 4 GB of vRAM becoming the new baseline. Thinking ahead, I really wouldn’t recommend going less than 2 GB of vRAM.

      Reply

  9. FishSticks’s avatar

    I have a laptop with a GTX 670MX, does the job!

    Reply

  10. jon’s avatar

    You made this article into a nit picky fanboy discussion.

    AMD is less power efficient; the temperature in your room will increase by 15 degrees while playing a game.
    An AMD 7970 is 275 watts TDP. AMD is pretty much raw power, but at the cost of inefficiency.

    Nvidia is power efficient: high-end cards usually draw about 70 watts less than their AMD counterparts, and you see thousands of Nvidia cards inside supercomputers because they will not raise the electricity costs. They are also more precise with their parts; you usually see Nvidia with nearly half as many CUDA cores as AMD has stream processors, yet the cards perform about equally.

    Battlefield 3 will only use 6 cores and 3 gb of DDR3 ram, since DX11 is locked at these settings.

    2 660 Ti’s in SLI are only 6% faster than a single 7970 GHz Edition.

    Reply

    1. TStrauss’s avatar

      I did not realize there was so much fanboyism built around GPU manufacturers! My only purpose was to explain the details of video cards for those who don’t know what they’re looking at. There is no reason for GPU fanboyism; they all accomplish the same thing. There are no Nvidia or AMD exclusives! It’s all about making sure you get the best bang for your buck. Personally, I’ve gone back and forth between AMD and Nvidia cards (and 3dfx back when they existed), based on what struck me as the most bang for my buck at the time. If you like AMD, buy AMD.

      There are some problems in your technical analysis, though. First, I don’t see why you would compare a ~$300 card (the GTX 660 Ti) to a ~$450 card (the HD 7970 GHz). Apples and oranges. SLI doesn’t make this a fair comparison either; now you’re looking at a $600 setup vs. a $450 setup. I would propose comparing a comparably priced card like the GTX 680 to the HD 7970 GHz.

      Power consumption depends on use; the 7970 GHz idles better, but the GTX line up are more power efficient under an average load (http://www.tomshardware.com/reviews/radeon-hd-7970-ghz-edition-review-benchmark,3232-18.html).

      Looking at benchmarks, the HD 7970 GHz outperforms the GTX 680 at some tasks, but is outperformed at others (www.anandtech.com/bench/Product/618?vs=555). So your claim that “AMD is pretty much raw power” simply does not line up with the facts. A more accurate claim is that the current line of Nvidia cards is comparable with the current line of AMD cards in terms of processing power, but there is a power consumption tradeoff. If you turn your computer off when not using it (please do this!), then the tables are tilted in favor of Nvidia. At least, they are for this generation. These things change year over year.

      Reply

    2. Z’s avatar

      A lot of the computers in the top 50 fastest super computers use AMD GPUs.
      The max amount of cores BF3 uses is only 4, not 6
      True, current AMD cards run a bit hotter than their Nvidia counterparts, but they do not raise the room temperature by 15 degrees. To even raise the room’s temperature close to 15, the GPU would probably have to be molten.
      You said AMD cards in general are hotter and less efficient than Nvidia’s cards. But since you didn’t specify which generation, I can call bullshit on that, since the 400-500 Series ran hotter and were less efficient than their AMD counterparts (5000-6000 Series)

      Reply

  11. jd20dog’s avatar

    crossfire can run on different cards as long as they both support crossfire (your split line, where the 2 cards’ images connect, will move up or down on your screen depending on which card is better). amd also has an infrastructure mode which is used by some emulators like ps2, something that still kills nvidia’s cards’ ram usage to reproduce, or have i been away from these things too long and has that been dealt with?

    also try to match the ram type on the card to that of your pc, or you’ll get scan lines on your screen when the refresh rate of the card outperforms what your pc can work with

    Reply

  12. Gikero’s avatar

    I would have pointed out power consumption and heat output. It’s a shame when I see someone buy a card and not have enough 6-8 pin connectors or an inadequate power supply.

    Reply

    1. TStrauss’s avatar

      I did mention power consumption and power connectors . . .

      Reply

      1. Gikero’s avatar

        I feel foolish, you did. Sorry. =)

        Reply

  13. Spyder2k5’s avatar

    Just not accurate. Not only are AMD drivers better optimized and developed, their cards are the fastest and best on the market, and have been for years.

    nVidia WAS best when the 8800 was king, but they sat on that card too long and let their drivers rot as well. Ever since gaining PhysX they have been more focused on that than on producing a good card or proper drivers; each driver loses performance and introduces bugs.

    Do proper research and try some cards yourself and you will see, AMD is on top.

    Reply

    1. Spyder2k5’s avatar

      Let me add that nVidia cards run way too hot, and nVidia cares not that they run overly hot; AMD uses less power and stays cooler while performing better in just about EVERY case.

      Reply

      1. TStrauss’s avatar

        Ok fanboy.

        Reply

      2. SilverInfinity’s avatar

        Fanboy detected – your news is old. In this generation, Green team performs generally better, uses much less power, and is significantly cooler. Price is a different question.

        In previous generations, yes. In this one, not at all.

        Reply

    2. svenn’s avatar

      AMD drivers better optimized ? really now …

      Reply

      1. TStrauss’s avatar

        You know you’re a fanboy when your best argument is “our drivers are the best!” :D

        Reply

    3. Gikero’s avatar

      I think both companies have good cards. It really depends on your situation.

      Anyone who is interested in graphic cards, I’d suggest PCPer Podcast and This Week in Computer Hardware. Both are really informative.

      Reply

  14. Ivo’s avatar

    Crap, i dont think they still supply agp cards with hdmi drivers … Crap im still waiting for that driver hdmi for windows. Lucky enough that linux supports old drivers and writes their own. For updating the old agp card . Better forget that and buy a tuned new pc or id have to forget my north bridge and old pcs dont upgrade they sit at the bin.

    Reply

    1. fakeer’s avatar

      you’re not a real wizard. so dont post REAL WIZARD SHI!!!

      Reply

  15. Qwizarrds’s avatar

    i got spells that’ll shrink you down to size kid!!

    Reply

    1. fakeer’s avatar

      @Qwizarrds forever alone

      Reply

  16. Z’s avatar

    1. Crossfire allows you to use different models of GPUs; SLI is the only one that doesn’t
    2. AMD and Nvidia are pretty much on the same level right now. AMD is even going a bit faster since they started optimizing their drivers
    2. The 400 Series was terrible. They ran hot, consumed more power and performed slower than their AMD counterparts (5000 Series).
    3. The 7000 Series is not comparable to the 400 Series. I’d understand comparing it to the 500 Series, but considering the 400s ran slower than the 5000s, which are slower than the 7000s, then no.
    4. Current generation cards (and probably also next gen) barely saturate the full bandwidth of PCI-E 2.0; even the Titan just saturates 2.1. For now, 3.0 is just a gimmick
    5. Unless you have anything less than 512MB, video RAM is not that important. It is mainly used for textures and driving resolutions, so unless you plan on gaming at 2K res or above with high res textures, 1GB is enough (a GT 610 with 2GB of RAM costs as much as a 620 with 1GB. The 610 has more RAM, but you’ll need to replace it sooner than a 620. I also point this out to console fanboys who think the PS4 is gonna be way faster than any GPU just because it has 8GB of unified system RAM)
    6. Stream Processors and CUDA cores are not the same. They are alike in the jobs that they do, but they are not the same thing.
    7. In relation to point 6, more processors/cores does not equal better performance. If that was the case (and if CUDA and Stream were the same), then a $300 7950 should perform way better than a $450 680 (which, if I remember correctly, has fewer CUDA cores than the 7950’s Stream processors)
    8. The processors/cores are not just shaders
    9. The longest slot isn’t always x16. Sometimes it might be x8, or x4
    10. HDMI can be converted into an analog signal
    11. A DDR3 GPU is just fine, but it will run a bit slower than its DDR5 counterpart.

    Reply

    1. SilverInfinity’s avatar

      You probably don’t want to get a DDR3 GPU if you’re gaming at any resolution close to 1080. I’ve seen the lack of memory bandwidth choke my own system when I bought a 1080 screen on an ATI HD4650. All games that are fairly memory hungry (especially w/ AA) will require plenty of memory bandwidth. For example, in a bandwidth-hungry game like AvP2, AMD’s high-frequency GDDR5 has a clear performance advantage at higher resolutions over comparable NVidia offerings.

      Reply

      1. Z’s avatar

        True, you’d probably want a GDDR5 GPU, but that doesn’t mean DDR3 is bad or isn’t an option

        Reply

    2. TStrauss’s avatar

      I’ll bite.

      1. This is an error of fact. It has already been pointed out (you would know this if you read previous comments), but it is noted.
      2. See the benchmarks and notes posted earlier. Current generation, Nvidia has a lead over AMD in terms of power consumption, and is comparable in terms of performance to AMD over a variety of real world and benchmark applications. I still call that a win for Nvidia.
      3. Read more carefully. I never compared the GT 400 series to the HD 7000 series. Why would anyone do that?
      4. I’m guessing you don’t use USB 3.0 or SATA III because they’re “gimmicks” too. There is a known benefit to PCIe 3.0 for GPGPU purposes, but a noob shopper (the target audience) doesn’t need to be troubled with such details. Going there would just be showing off, and that isn’t the point.
      5. That is true–today. The next gen consoles have an awful lot of unified memory. The target platform for development is usually a console, since this offers the largest potential shopper base. I don’t expect games to require 6 GB of vRAM in the near future, but 2 GB does not sound unreasonable. It’s called forward thinking.
      6. When you explain things to a layman’s level, you often sacrifice linguistic precision for comprehension. I also have to consider the length of the article. I could have said “CUDA and Stream Processing both perform GPGPU functions, as well as serving as vertex, pixel, geometry, and tessellation shaders under a unified shader architecture/ Shader Model 5.0′s unified shader model.” But would I be helping people, or just confusing them while I stoked my ego?
      7. Within a given architecture, more cores is an indicator of performance.
      8. Covered in #6. Just because you know something doesn’t mean it is helpful or worth saying. I could mention the different types of shaders, but then I’d have to explain them all. Unless of course I was just trying to show off.
      9. I never said this; I would reread the article more carefully. The slots in the picture I showed were PCIe x16 (unless the lanes are split between them, but even then a PCIe x16 card will fit and work in them). I would be surprised to see a current motherboard where the biggest slot was x8.
      10. No, that simply isn’t true. HDMI is a digital technology (DVI is both analog and digital, which sucks when buying cables). You would need a digital converter box to take a signal from HDMI and put it into analog VGA. I know they make converter cables for HDMI to VGA. They don’t work.
      11. Yes, you’re right. DDR3 is slower than DDR5. DDR5 first appeared on commercial cards in 2008 (an AMD card, actually). This article is about upgrading your video card. Why would you buy a card that uses memory technology that is over 5 years old? A DDR3 GPU is not “just fine” for upgrade purposes, it’s incredibly stupid.

      If you want to be a part of the discussion, I recommend reading more. You missed things commented on earlier, and you downright misread several things in the article. Also, this article is for people who don’t know video cards. You clearly do. I’m guessing you are less interested in being helpful than you are in flaunting your technical knowledge. That kind of seems sad, actually.

      Reply

      1. Kbrown’s avatar

        Although I’m not a noob with computers and love reading about more of the technical details of components, I really thought this article was a good start to learning what to watch for when upgrading your video card (I remember when I bought my first TNT 2 card – woo 32bits of color!). I also fully agree with your reasoning and reply to an earlier poster.

        Probably the trickiest thing newcomers need to remember is whatever you end up buying for parts, make sure there’s enough power being supplied to it (Check the wattage output of the PSU you are considering and the power required for your video cards) and that your motherboard is compatible with what you are hoping to plug into it. The computer’s motherboard is the single most important piece and one that everything else connects to. Maybe down the road you could look into Motherboards for beginners.

        Again, kudos for taking the time to share this with others!

        Reply

      2. Z’s avatar

        I never meant for my post to be read by those who don’t know this stuff. I meant for it to correct the facts that I originally thought you didn’t know or had wrong. I never meant for it to seem like I was flaunting my knowledge; as I said, I was merely trying to correct what I thought you didn’t know or had wrong. And I certainly didn’t mean for it to offend you in any way, so if it did, I apologize.

        1. I just wrote that part in case you didn’t read the earlier one
        2. I thought we were talking about performance, so I didn’t include the fact that Nvidia is more power efficient. However we interpret it, we can come up with different outcomes. If you believe Nvidia wins, so be it
        3. You did say that ” I recommend either a Fermi (GeForce 400-500 cards) or Kepler (GeForce 600-Titan) based board since these will support DirectX 11/ Shader Model 5.0. If you choose to go AMD, the comparable cards are the HD 7000 series.” I just assumed that you were talking about both the Kepler and Fermi Series
        4. Yes, I do know about those two. I just didn’t mention them since I thought we were strictly talking about graphics cards. There is some benefit, but it’s less noticeable in GPUs than in other accessories.
        5. I’m not saying 2GB of VRAM is bad. I was trying to say that 1GB is still decent, and you’ll most likely not need to upgrade in a year’s time
        6. As I said, I was just mentioning it in case you didn’t know
        7. Yes, it is. But Nvidia’s and AMD’s architectures are very different, so it doesn’t apply as much
        8. Like you said, I also covered it in my point 6 of this post
        9. You did kind of say it. What I was trying to get at was that the longest/x16 slot doesn’t always have 16 lanes. And it is actually still quite a common practice among vendors, though it has dropped significantly from before
        10. That’s what I meant, the digital converter box.
        11. It’s not incredibly stupid, it’s just a little unreasonable these days. The performance hit isn’t that big of a deal. As long as you were upgrading from a weaker model, you’ll still see a performance boost. It’s still a good option for someone on a really tight budget who is already set on a specific card whose DDR5 model is too expensive.

        Reply

        1. TStrauss’s avatar

          Let me apologize. My response was overly harsh, I’m just frustrated by the fanboyism that seems to have taken hold here (both AMD fans and Nvidia fans). I misread your post through those lenses, and got something different than you seem to have intended. I hope you will forgive me my outburst.

          Reply

          1. Z’s avatar

            Of course, and I hope you accept mine if I offended you. I know where you are coming from. Some people here just reek with fanboyism, going as far as arguing with false facts. I’m surprised there are even AMD fanboys, since most are either Intel or Nvidia. Though I am a little biased towards AMD, I hate over-the-top fanboys, so I apologize on their behalf.

            BTW, there’s this forum called Tek Syndicate (teksyndicate.com), which surprisingly, for a tech oriented website, has little to no fanboys. I was just wondering if you would like to register. The community there would be lucky to have someone like you.

          2. TStrauss’s avatar

            Sounds cool, I’ll check it out!

  17. cscash241’s avatar

    It should also be mentioned that if you are buying a GPU for hash cracking or Bitcoin mining, you want an AMD card; due to the differences in architecture, AMD kicks Nvidia’s ass at hash cracking

    Reply

  18. papaguan’s avatar

    The only problem I had with this article is about vRAM. You have to take two things into account: whether you will use a lot of vRAM and whether the card is even capable of utilizing a lot of vRAM. If you’re looking at the low end segment of the market (anything under a GTX 650 or 7750), I guarantee you that the card will never be fast enough to use over 1GB. That said, if you’re looking for a decent gaming GPU, the amount of vRAM is important depending on your resolution. If you’re going for a resolution under 1080p and don’t think you will be upgrading your monitor for a while, 1GB will be fine for most games. That said, unless you’re going for a 4k setup or something with multiple monitors, jumping from 2GB to 4GB is extremely unnecessary.

    Just my $0.02.

    Reply

  19. LinDo’s avatar

    i have an asus sabertooth z77 + i7-3770k + corsair neutron gtx 240gb ssd + 32gb dominator platinum 1866mhz, inside a corsair carbide 300r + h80i. only thing i need now is an asus gtx 680 dc top.

    Reply

  20. adx2’s avatar

    hey PSY, a core i7 runs at the speed of maybe 4 ghz, but as i said my amd bulldozer processor can be overclocked from 3.4 ghz to 4.3ghz without affecting the pc environment. i have done it several times. i played black ops 2 with overclocking that was not needed, but i overclocked it and played a continuous 2 hours without any heating problem. intel sucks, amd rules, and that’s why i’m gonna buy a ps4, cuz it has all amd parts in it, and i also own a ps3

    Reply

  21. kuagelo’s avatar

    I personally use a Sapphire Radeon HD 7750 with *only* 1GB of GDDR5 memory, and I’m having no problems so far. :)

    Then again I only play NFS MW 2012 on a 1080p monitor.

    Reply

  22. jon’s avatar

    Even though AMD’s CrossFire allows different cards, there are huge performance drops and sometimes the video source will be ruined altogether.

    Crossfire 7970 and a 5750, Catalyst Control Center will freeze and crash.

    It’s best to just use high end cards or 2 of the same cards.

    If you use 2 different cards then the crossfire will only buffer the lower-powered card.

    Reply

  23. duffman’s avatar

    i stay away from amd. after all these years they still have trouble with drivers; ive never had such trouble with nvidia.

    Reply

      1. SilverInfinity’s avatar

        I’d probably chalk that up to the game being heavily optimized for AMD cards, somewhat akin to what happened with Dirt 2 with me before.

        Reply

  24. Hektik’s avatar

    I built a PC a few months back, Here are the specs:
    Asus Sabertooth 990fx Motherboard
    16GB ram 1666mhz
    6x 500gb Hard drives
    AMD FX 8350 Eight core Processor
    Fractal Design Define R4 in white (with matching keyboard, mouse, and 24 inch BenQ flat panel screen)
    HIS Graphics Card gddr5 3gb vram

    What drove me to this? Well, I tried to play Far Cry 3 on my old (4-year-old) pc and it was slow.

    Reply

  25. u|ir’s avatar

    sorry, but i like HD 7970 better than GTX 680 cus i used it to crack hashes too :P

    Reply

  26. vitralizer’s avatar

    The 670 is 95% of the 680, with only 75% of the cost. And the 760Ti is coming out for $300 and that is basically a 670.

    Reply

  27. Kittens’s avatar

    The 7970 beats the 680 in a lot of benchmarks/games. It also doesn’t have memory bandwidth issues. The two biggest drawbacks to it is that the drivers suck and Crossfire is terrible (because the drivers suck).

    Reply
