Just my 2 cents: The Illusory Specs Wars – part 2
If you ended up here, I recommend reading part 1 first (https://wololo.net/2015/10/12/just-2-cents-illusory-specs-wars-part-1/)
Early 90s – The Bit Wars
I know that the Bit Wars is one of the most documented periods of gaming around (like here in this awesome article by Acid_Snake https://wololo.net/2014/07/03/8-days-of-gaming-day-4-the-bit-wars/), but my focus here is to show how the “bits” actually mattered little, so bear with me.
Just after launching the Master System, Sega had already figured out that, to counter Nintendo's heavily dominant market share, they had to create a more advanced hardware iteration. Sega chose to make a system based on their successful arcade board, the Sega System 16, which had, well, a 16-bit processor (http://www.siliconera.com/2013/09/18/segas-original-hardware-developer-talks-about-the-companys-past-consoles/).
So, when launching their shiny new system (Mega Drive worldwide, Genesis in the US), they had to find a way to market it properly, showing how much better it was than its current competition. The problem is that they couldn't use games for that, as the NES library was monumental in comparison at that point, while the Genesis had only some arcade ports. Let's remember once again that few people back then (thought they) knew anything about computer hardware, so Sega needed some buzzword jargon that single-handedly showcased the Genesis' vast superiority. So, they came up with the "16-bit" fad, implying it was at least "twice as good" as the NES, since the latter only had an 8-bit processor. This "bit factor" was deemed so important to Sega's marketing that it was even engraved in golden letters on the console.

So, the console was indeed more advanced, promised to bring the arcade experience home, and even had double the "bitness"; so the Mega Drive would crush the competition, right? Once again, not quite, because none of that mattered. Even with a better piece of hardware and beautiful, technically more advanced games, Nintendo stole the spotlight even at the Genesis' launch, because just a week earlier it had released what would become known as one of the greatest games of all time: Super Mario Bros. 3. Many people were too busy exploring the Mushroom Kingdom to notice the Genesis' launch (http://www.ign.com/articles/2009/04/21/ign-presents-the-history-of-sega?page=4). Talk about bad timing.

So, there was Sega, with an awesome console, still being outdone by their competition despite being a full hardware iteration ahead. They realized that they couldn't "16-bit" their way out, so what to do? First, they created a mascot to compete with Mario, the once-great Sonic The Hedgehog, and bundled his game with the console. Second, they focused their marketing on Generation X (children to teenagers in the 90s) with an "in your face" attitude, led by their mascot.
But all that wasn't enough: as I already said many times, the Genesis needed games, at a time when Nintendo had most third parties locked in. What Sega did to circumvent that was to focus on licensed games (sports, celebrities, heroes) that Nintendo lacked. This was especially important when launching the Genesis in the US, since those licenses had great appeal to the US market and made it easier for players to identify with the console. C'mon, they even had the legendary Michael Jackson at their side.

After a couple of years, Nintendo eventually made its debut in the 16-bit world with the almighty SNES, which rendered the 16-bit part of Sega's marketing useless. So Sega came up with yet another marketing gimmick to illustrate the Genesis' supposed superiority over the SNES, and thus Blast Processing was born. Blast Processing was a term coined to summarize the Genesis' supposed processing-power superiority (higher processor clock, DMA controller, etc.) (http://segaretro.org/Blast_Processing), implying that it had something Nintendo hadn't. Obviously, that was heavily advertised, trying to make Nintendo look like child's play while Sega was the "cool" and "fast" one.
And, as silly as it seems, it worked for some time. People at first argued that the Genesis had more bits, and later that it had Blast Processing and so on, closing the gap between Sega's and Nintendo's market shares. But let's reflect a little on what we saw.
Although Sega clearly had far superior hardware (compared to the NES) and a way to market it (16-bit), that by itself wasn't enough to gain penetration, since Nintendo still had more popular and recognizable games (like Super Mario Bros. 3). It took games and IP deals to get it going. Even after the SNES debut, what made some people choose the Genesis over the SNES was a charismatic mascot with targeted games, not Blast Processing (http://www.usgamer.net/articles/the-true-16-bit-experience-segas-genesis-turns-25). These were only marketing moves to give people ammo to argue about something they didn't understand.
Why did none of this "16-bit" and "Blast Processing" stuff actually matter?
Now let's talk more about the hardware side of this. Keep in mind that this is a very complex subject, so I will try to simplify it as much as possible. First of all, what the heck does that "16-bit" Sega advertised even mean? To understand that, let's sketch a very simplified console architecture:
CPU (Central Processing Unit): Handles and processes all data between components. It is basically the computer’s brain.
Registers: The most internal and elemental form of memory. They are small and very fast (many times faster than RAM) and sit inside the CPU. Their purpose is to hold basic data, such as memory addresses and instructions.
Cache: It's another kind of memory, slower than registers but still much faster than RAM. Simplifying, it holds the last chunks of data used (instructions or data per se; some computers have separate caches for each), and if the computer happens to need these chunks again in the near future (which is common, since there is a lot of repetition in any typical program), it gets them from the cache instead of fetching them from RAM all over again. In some cases, this can give a great boost in performance.
RAM (Random Access Memory): It’s a higher capacity slower memory where instructions and data are kept. That’s where the CPU reads and writes data to perform tasks. It’s volatile, so as soon as the power is off, all the data is gone.
Storage: It's where the program is kept, to be read and loaded into memory. It is typically the slowest type of memory in any computer. In videogame consoles, it is typically read-only (cartridges, CD-ROM, Blu-ray, etc.). In more modern consoles, there is usually a separate writable internal part to hold updateable firmware.

GPU (Graphics Processing Unit): It’s a specialized chip to process graphics, like sprites, polygons, rotations, color palettes. It typically has its own dedicated memory, VRAM (Video RAM).
Sound Chip: Specialized chip to process sound output.
Bus: Circuitry where data travels between components. Think of buses as data lanes.
DMA (Direct Memory Access): It's a way for a component (e.g. the GPU) to read and write data in RAM without CPU intervention. Since the CPU is needed by almost every operation, avoiding CPU interrupts is a great way to boost performance.
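As a rough illustration, the components above can be modeled as a toy C struct. Every name and size below is invented for this sketch (it doesn't correspond to any real console), and `memcpy` merely stands in for a DMA engine:

```c
#include <stdint.h>
#include <string.h>

/* Toy model of the simplified console architecture described above.
   Sizes and names are invented for illustration only. */
typedef struct {
    uint16_t registers[8];  /* tiny, fastest memory, inside the CPU */
    uint8_t  ram[4096];     /* working memory; volatile             */
    uint8_t  vram[2048];    /* GPU's dedicated video memory         */
    const uint8_t *storage; /* read-only program (the "cartridge")  */
} Console;

/* DMA-style transfer: the DMA controller moves a whole block from RAM
   into VRAM on its own, instead of the CPU copying it byte by byte. */
static void dma_to_vram(Console *c, size_t vram_off, size_t ram_off, size_t n) {
    memcpy(&c->vram[vram_off], &c->ram[ram_off], n);
}
```

The point of the `dma_to_vram` helper is only to show where DMA sits in the picture: data flows between two memories without the "CPU" (our calling code) touching each byte.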
So, at a very superficial level, when someone writes a computer program in a high-level (as in, close to human understanding) programming language like C or C++, they run it through a compiler, which is a program that translates the high-level language into a low-level (close to the machine) language (assembly), which the computer then executes. Actually, the computer executes binary code generated from the assembly by an assembler, but for the sake of simplicity, let's say the computer executes assembly directly.
Once again, this is simplified: one line of high-level code can (and probably will) generate many assembly instructions, but let's keep it simple. Anyway, after the program is compiled, it generates many, many instructions. These instructions have a basic (and heavily simplified here) anatomy:
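As a tiny sketch of that pipeline, here is a one-line C function, with a comment showing the kind of 68000-style assembly a compiler might emit for it. The assembly is illustrative only; real output depends on the compiler, its flags and the calling convention:

```c
/* One high-level statement... */
int add_scores(int a, int b) {
    return a + b;
    /* ...becomes a handful of low-level instructions. A 68000 compiler
       might emit something roughly like (illustrative only):
           move.w  d1,d0   ; copy one operand into the result register
           add.w   d2,d0   ; add the other operand to it
           rts             ; return to the caller                      */
}
```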
Take, for instance, an instruction like: ADD [A], [B]
Opcode: A numeric code that represents an assembly instruction mnemonic, like MOV, ADD, JMP, etc. In our example above, it was ADD.
Operands: The parameters given to said instruction. The number of operands accepted depends on each individual instruction. For instance, JMP (jump) may accept one operand, while ADD can accept more. In our example, there are two operands: [A] and [B], meaning the contents of the memory addresses contained in registers A and B.
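To make the opcode/operand split concrete, here is a hypothetical 16-bit instruction format in C: the top 4 bits pick the operation and the remaining bits hold two operand fields. This encoding is made up for the example; it is not the 68000's (or any other CPU's) real one:

```c
#include <stdint.h>

/* Hypothetical 16-bit instruction word (not a real ISA):
   bits 15..12 = opcode | bits 11..6 = operand A | bits 5..0 = operand B */
typedef struct { uint8_t opcode, op_a, op_b; } Instruction;

static Instruction decode(uint16_t word) {
    Instruction ins;
    ins.opcode = (uint8_t)((word >> 12) & 0x0F); /* which operation, e.g. ADD */
    ins.op_a   = (uint8_t)((word >> 6)  & 0x3F); /* first operand field       */
    ins.op_b   = (uint8_t)( word        & 0x3F); /* second operand field      */
    return ins;
}
```

Note that with only 4 opcode bits, this toy ISA can encode at most 16 different operations; widening the instruction word is what buys you more opcodes and bigger operand fields.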
So, what does all of this have to do with the Bit Wars? You see, the maximum size of these instructions is what the "bitness" of the CPU means. Higher-"bitness" CPUs can have more instructions (more possible opcodes) and bigger operands (bigger registers, addresses and so on), which theoretically means you can move more data around in less time. One of the most prominent effects of higher CPU "bitness" is more addressable RAM (like the 4GB limit of 32-bit CPUs).
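The RAM limit follows directly from the address width: with n address bits you can name 2^n distinct bytes. A quick check of that arithmetic (using 64-bit math so the 32-bit case doesn't overflow):

```c
#include <stdint.h>

/* How many distinct byte addresses `bits` address bits can name. */
static uint64_t addressable_bytes(unsigned bits) {
    return (uint64_t)1 << bits;
}
```

So a 16-bit address reaches 64 KB, a 24-bit address bus (like the 68000's) reaches 16 MB, and 32 bits give the familiar 4 GB ceiling.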
However, address buses, CPU registers and other components can each have a different "bitness". The Genesis' processor, the Motorola 68000, was such a hybrid, designed to be forward compatible: 32-bit internal registers paired with a 16-bit data bus and ALU (http://www.cpu-world.com/CPUs/68000/). So, unimportant as it is, the Genesis wasn't "genuinely" a 16-bit system.
Further, the thing is that CPU "bitness" on its own translates poorly into gaming performance. Let's look at the differences between the SNES and the Genesis:
[Spec comparison chart: SNES vs. Genesis – CPU, colors, resolution, sprites]
Like I said in part 1, what matters much more in games is stuff like resolution, colors, sprites and their sizes. And if we look at it, we can see that the SNES whoops the Genesis graphically in every way. It's also worth noting that, while some state that the Genesis' processor had more "speed", it actually only had a higher clock frequency. The SNES' processor could output more MIPS (millions of instructions per second) even at a lower frequency (I will talk more about this ahead, in the Frequency Wars). The DMA chipset is where the Genesis was genuinely better than the SNES (http://segaretro.org/Blast_Processing). It also didn't help Sega's case that the SNES had superior audio capabilities. So much for the Blast Processing fad…
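The clock-vs-throughput point is simple arithmetic: rough throughput is clock frequency times the average number of instructions completed per cycle (IPC), so a lower-clocked CPU with a higher IPC can still come out ahead. The IPC values in the check below are invented purely to make the math concrete; they are not measured figures for either console:

```c
/* Rough throughput in millions of instructions per second:
   MIPS = clock (MHz) * average instructions completed per cycle (IPC). */
static double mips(double clock_mhz, double ipc) {
    return clock_mhz * ipc;
}
```

With hypothetical IPCs, a 7.6 MHz CPU averaging 0.1 instructions per cycle delivers 0.76 MIPS, while a 3.58 MHz CPU averaging 0.25 delivers about 0.9 MIPS: the lower clock wins.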
All this (overly simplified) technical stuff would be very hard to market, which is why Sega came up with "bitness" and Blast Processing in the first place, even though it all mattered very little to gaming. The biggest proof is that the TurboGrafx-16, which is considered a competitor in the 16-bit era, had an 8-bit CPU but dual 16-bit GPUs (http://www.retrogamingconsoles.com/consoles/pc-engine-turbografx-16/#specifications), among other stuff that made it possible to produce games comparable in quality to the SNES' and Genesis' ones.
Amusingly enough, while Sega seemed worried about marketing "deeper" technical stuff, they left aside much simpler (and more urgent) matters, like the gamepad. When Street Fighter II came to the SNES, it was the biggest system seller around, because it was possible to play the arcade hit at home with freaking six buttons, while the Genesis pad only had 3. Eventually Sega would release a revised 6-button gamepad along with a new Street Fighter II revision, but by then the Street Fighter II hype wasn't at its peak anymore.
Later Sega would turn it around when the Genesis version of Mortal Kombat was less censored than the SNES one.

But Sega, Nintendo and the other players were far from done trying to market their consoles on specs, so stay tuned for part 3. And what about you? Did you get hooked by the Blast Processing or by the games? Share your thoughts.

This reminds me of the Mario sprite hidden in Sega’s game: Astal.
I'm glad you took my request into consideration; trust me when I say you're going the right route. Many people, whether they know it or not, need to understand that the retro consoles are the way and always will be, as long as there is homebrew access. Or better yet, try owning one of the consoles to experience the whole thing. The best part is knowing the origin of the games that exist now. Next: Neo Geo, Sega Saturn, PC Engine Duo and PC-FX, and of course, since this is a PlayStation fanboy site, the PSX aka PS1.
Neutopia is a Zelda clone
Star Fox on Sega Genesis?
I only ever owned an NES. I never owned an SNES or a Genesis. I still have my NES to this day.
Genesis would have never won me over with sports games since I don’t play them.
It’s all about the games. Without games you want to play, the power of the system is meaningless.
“The SNES processor could output more MIPS (million instructions per second) even at a lower frequency (I will talk more about this ahead in the Frequency Wars).”
Which didn’t matter because the SNES had bus problems of its own and an outdated instruction set. For example, it did not have a MOV operation.
“However, address buses, different cpu registers and other components can have different ‘bitness’. As such, the Genesis processor was a hybrid processor designed to be forward compatible (http://www.cpu-world.com/CPUs/68000/). So, unimportant as it is, the Genesis wasn’t ‘genuinely’ a 16-bit system.”
The SNES wasn't a "genuine" 16-bit system either. Its CPU is a late product of the previous generation. In fact, when you turn on your SNES, for a very short time it runs in 8-bit mode, until software switches it to 16-bit.
The "Resolution" entry in your graph is misleading. While the Mega Drive indeed commonly used 320×224, which was also its maximum resolution, the SNES commonly used 256×224. Technically the SNES could output 512×448, but as it required more memory and more processing, it was used sparingly, for things like Secret of Mana's load/save screen and Chrono Trigger's Day of Lavos sequence.
These are great points. I’m not trying to say that one is better/worse than the other, my point is precisely that this stuff matters very little, although SEGA was the one bragging about it.