Tuesday, 30 November 2010

Overclock Your Computer

There are two schools of thought as to why you can, or would even want to, overclock most CPUs and GPUs. One of them takes the peace, love and understanding route, namely that the manufacturing process is never 100 per cent reliable, so not every chip that rolls off the same production line is born equal. Those with the most lustrous coats and shiniest eyes (bred on Pedigree, presumably) are ready to be high-end components, but those with a bit of a squint and a runny nose may have a funny turn if they exert themselves too much.


Hence, some chips are slapped with a lower official clock-speed and sold for less groats than their beefier brethren. The potential for their intended glory remains, however. Overclocking techniques can unlock at least some of that potential, albeit at the risk of frying the chip completely.

The tinfoil hat/Angry Internet Men theory is based on the same concept but chucks in a bit of paranoia. In this scenario, every same-series processor is born equal, but The Man artificially neuters most of them and slaps different badges on what are fundamentally the same chips. Overclocking, then, is simply a way of taking back what's rightfully yours.

The truth likely lies somewhere between the two. Mass production certainly makes more financial sense than dozens of separate lines, and it's true that a low-end CPU or GPU can be made to punch far above its weight, but its stability isn't as guaranteed as that of a chip officially rated to run at the higher speed. No manufacturer wants to deal with a steady trickle of returned parts, after all, so official ratings err on the side of caution - which is precisely why home overclocking is almost always productive, and seemingly more so with every new hardware generation.

It's also increasingly easy. The earliest overclocking, on the 4 to 10MHz 8088-based CPUs of 1983, involved desoldering a clock crystal from the motherboard and replacing it with a third-party one, with only partially successful results. Ouch. Still, the precedent was set: a dedicated guy-at-home could exceed his chip's official spec. IBM, then very much the top dog of PC land, wasn't entirely happy about this, so follow-up hardware included hard-wired overclock blocks.

More soldering, this time of a BIOS chip, got around this. By 1986 IBM's stranglehold had been broken, resulting in a raft of 'clone' systems - and a wealth of choice. Intel's 286 and 386 processors became the de facto standard chips, and bus speed and voltage controls began to shift from physical switches and jumpers to BIOS options and settings.

It was the 486 that really changed everything, however. It's telling that this was the chip most prevalent during the era that birthed the first-person shooter as we know it: 1993's Doom very much popularized performance PCs for gaming, driving system upgrades in the same way a Half-Life 2 or Crysis does these days. At the same time, the 486 introduced two concepts absolutely crucial to overclocking both then and now. Firstly, it popularized split product lines; no longer was it simply a matter of buying a processor, but of choosing which processor. The 486SX and DX offered a serious performance differential, and notably the SXs were hobbled/failed DXs, giving rise to the ongoing practice of assigning different speeds and names to what was essentially the same chip.

For a while too, the 25MHz SXs could be overclocked to 33MHz by adjusting a jumper on the motherboard; something less salubrious retailers took full advantage of. Secondly, it introduced the multiplier: performing multiple clock cycles for every one mustered by the system's front side bus. The 486's 2x multiplier thus effectively doubled the bus frequency. This was something overclockers would make the best of for successive processor generations - bumping up the multiplier was the simplest and often most effective way of increasing CPU speed. Nowadays (since the Pentium II, in fact), the multiplier is locked to prevent this, save for high-end chips such as Intel's Extreme Edition series. For a while, there were complicated ways of defeating the multiplier lock: soldering on a PCB for earlier chips, third-party add-ons and the infamous practice of drawing a line onto certain AMD CPUs with a pencil. No CPU manufacturer's likely to make that mistake again.

Around this time, RAM overclocking became more commonplace, as memory speeds were ratified, and with that came more tweaking of the front side bus to compensate for the locked multipliers. Overclocking shifted further towards the BIOS and away from jumpers, which in turn led to overclocking software.

The first was 1998's SoftFSB, which enabled bus-tweaking from within Windows for the first time. With the Pentium III era came aftermarket coolers, as processors now chucked out so much heat that a standard cooling block and fan wasn't enough to cope with an overclocked chip. And so it continued, overclocking largely becoming easier and more commonplace with each processor generation. This leads us to the Core 2 chips of today, and Intel's current terrifyingly unassailable dominance of the CPU market. Generally drawing as little as half the power of the Pentium 4s that preceded them, most of the range offers a vast amount of overclocking headroom, to the point that a low-end Core 2 Duo can almost go toe-to-toe with the top of the line.

So how's it done? Key to processor overclocking is the front side bus (FSB). In the very simplest terms, this is the connection between the CPU and the rest of the PC, and its speed defines the processor's speed to a significant extent. An Intel CPU's final speed is the FSB times the multiplier - so if you've got an FSB of 266MHz and a multiplier of 9, your chip will run at approximately 2.4GHz. While the multiplier is usually locked - though some chips let you at least lower it, to conserve power and reduce heat - the FSB isn't. Bump up the FSB and you bump up the chip. In our example, taking the bus to 290MHz gives us a 2.6GHz processor. This is no random example, incidentally: it's the speed we run the Intel Core 2 Quad Q6600 in one of our office test systems at, giving it a healthy 200MHz boost that makes a noticeable difference in CPU-intensive games and hi-def video re-encodes.
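
If it helps to see that sum written down, here's a minimal Python sketch of the FSB-times-multiplier arithmetic using the figures from this article; it's purely an illustration, not something you need to run to overclock.

# Minimal sketch of how an Intel CPU's final speed falls out of FSB x multiplier.
# The 9x multiplier and the FSB figures are the Q6600 numbers quoted in this article.

def cpu_clock_mhz(fsb_mhz, multiplier):
    # Core clock is simply the front side bus speed times the multiplier.
    return fsb_mhz * multiplier

MULTIPLIER = 9  # locked on the Q6600, so the FSB is the only lever we can pull

for fsb in (266, 290, 370):
    print("FSB %dMHz x %d = %.2fGHz" % (fsb, MULTIPLIER, cpu_clock_mhz(fsb, MULTIPLIER) / 1000.0))

# FSB 266MHz x 9 = 2.39GHz  (stock, sold as 2.4GHz)
# FSB 290MHz x 9 = 2.61GHz  (the office-safe overclock)
# FSB 370MHz x 9 = 3.33GHz  (the air-cooled maximum we mention a little further on)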

What stops us from going higher? Not a lot in the case of this particular chip. We're playing it safe for desktop work, cos we're in a particularly sweaty office. When we're playing around with high-end tasks, we can have it running stably at over 3.3GHz (with an FSB of 370 or so) on a decentish, third-party air cooler. That's more or less trading blows with the best Intel has to offer on a $200 chip. But while going to 290MHz on the FSB took a BIOS tweak, a reboot and Microsoft Bob's your uncle, going much higher does involve more fuss.

First up, when our Q6600 is at 3.3GHz, it's also running at nearly 70°C under maximum load (and around 50°C when idling). It's perfectly stable, but that kind of heat could damage the chip in the long run, and on top of that the fan is making enough noise to wake the deaf pensioner in the next street over. Watercooling, a fancier air-cooler or even just a spot of dust-cleaning will bring the heat down, but there can come a point where that stuff becomes more expense and hassle than simply buying a better processor.

Hurdle the second is the motherboard. Pushing up the FSB doesn't only affect the CPU, but also the motherboard and, in many cases, the RAM and PCI-e slot to boot. In our case, we're using a motherboard that supports a monstrously high FSB. When shopping for a motherboard, its max FSB will usually be referred to as four times the actual speed, due to the way the processor actually fetches data. So when we've got the FSB set to 266MHz, in effect that's 1,066MHz. When it's up to 372MHz, we need a motherboard that's happy at nearly 1,500MHz. That simply isn't a given, especially on cheaper boards, so shop carefully.
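
As a rough illustration of how board makers arrive at those quoted numbers, here's a hedged Python sketch; the four-transfers-per-cycle figure reflects the quad-pumped bus described above, and the FSB values are this article's own.

# Hedged sketch: spec sheets quote the quad-pumped figure, i.e. four transfers
# per base FSB cycle, which is why a 266MHz bus is marketed as 1,066MHz.

def quoted_fsb_mhz(base_fsb_mhz, transfers_per_cycle=4):
    return base_fsb_mhz * transfers_per_cycle

for base in (266, 372):
    print("%dMHz base FSB is quoted as a %dMHz bus" % (base, quoted_fsb_mhz(base)))

# 266MHz base FSB is quoted as a 1064MHz bus (rounded up to the familiar 1,066MHz)
# 372MHz base FSB is quoted as a 1488MHz bus (the 'nearly 1,500MHz' above)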

As well as that, if you've got a board with a stingy BIOS, you may not be able to alter RAM and PCI timings independently of the FSB, which can lead to those falling over. Ours does allow it, and for our mighty near-gigahertz Q6600 overclock we have to lower the RAM's clock speed a little to compensate for the strain put on it by the raised FSB - we have it sitting pretty at 893MHz. It could comfortably go higher, but the real-world benefits (as opposed to the willy-waving benefits, which are a different matter entirely) would be so minuscule that it's simply not worth placing the extra pressure on the RAM.
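
To show roughly how a figure like that 893MHz falls out of the raised FSB, here's a hedged Python sketch; the 5:6 FSB-to-DRAM divider is our assumption (it happens to reproduce the number above), so check which ratios your own BIOS actually offers.

# Hedged sketch: with DDR2, the effective memory speed is the real memory clock
# doubled, and that clock is derived from the FSB via a BIOS-selectable divider.
# The 5:6 FSB:DRAM ratio here is assumed, chosen because it lands on ~893MHz.

def ddr2_effective_mhz(fsb_mhz, fsb_part, dram_part):
    dram_clock = fsb_mhz * dram_part / float(fsb_part)  # actual memory clock
    return dram_clock * 2                               # DDR moves data twice per cycle

print(ddr2_effective_mhz(372, 5, 6))  # ~892.8MHz - the 893MHz figure quoted above
print(ddr2_effective_mhz(372, 1, 1))  # 744.0MHz - a lower ratio eases the strain further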


Similarly, while faster and, most likely, more expensive RAM will cope better at its stock speeds with a massive FSB, the pay-off is often so minor that value RAM running at a lower clock speed may well be enough to make your overclocking masterplan hugely successful. Even the best memory will net you something in the region of just a five per cent performance boost - worth having if every little helps, but it's the FSB that makes the big difference. And for that, the motherboard is critical.

Thirdly, there's the matter of voltage. The faster your chip runs, the more power it needs. As the FSB goes up, you'll find your motherboard's North Bridge and your RAM also get hungrier.

Unfortunately, your hardware won't automatically report its revised power requirements, so trial and miserable error are required to find the sweet spot. Volt tweaking is a fiddly and danger-fraught business.

Some overclocking-friendly motherboards can automatically adjust voltages for you, but they're understandably conservative about it, so for the really big overclocks you'll need to set them yourself. This needs to be done in the tiniest increments possible, establishing reboot by reboot how many volts your embiggened CPU needs - as low as possible, essentially, as firing too many into it can fry it.

Establish in advance what your chip's out-of-the-box volts are and, through a mix of common sense and googling, decide on a number you're not going to risk going higher than. We pushed our Q6600 from 1.3 to 1.4V, which is a fairly big increase as volt modding goes. It's not just a matter of the so-called vCore either - as you go for the big overclocks, you'll find you're having to play with the arcane likes of CPU PLL and FSB termination voltage. Again, so long as you raise stuff in tiny increments, the risk of killing your chip, RAM or motherboard is fairly minimal.
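
If it helps to see the tiny-increments idea written out, here's a hedged Python sketch that simply prints a vCore step plan between the stock voltage and a self-imposed ceiling; the 0.0125V step is an assumption (a common BIOS granularity), and the actual stress testing between reboots is still down to you.

# Hedged sketch: walk the vCore up in small, fixed steps between the stock value
# and a ceiling decided on in advance. The 0.0125V increment is an assumption -
# substitute whatever granularity your own BIOS exposes.

STOCK_VCORE = 1.30    # volts - the Q6600 starting point from this article
CEILING_VCORE = 1.40  # volts - the number you've decided never to exceed
STEP = 0.0125         # volts - assumed BIOS increment

vcore = STOCK_VCORE
while vcore + STEP <= CEILING_VCORE + 1e-9:
    vcore += STEP
    print("Try %.4fV, reboot and stress-test; settle on the lowest stable setting" % vcore)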


It's a different matter with AMD processors, which for a while now have had an onboard memory controller. This allows the chip to communicate more directly with the RAM, which in turn means there isn't an FSB as such. Instead, you're overclocking something known as the HyperTransport bus, which is achieved in more or less the same way, but can require lowering the HT bus's own multiplier to retain stability when you bump the speed. If you've gone for one of the recent AMD Phenom Black Editions, you'll find it comes with the multiplier unlocked, which makes overclocking an easier affair.
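
To show why the HyperTransport multiplier sometimes has to come down, here's a hedged Python sketch; the 200MHz reference clock, the multiplier values and the 2,000MHz HT ceiling are our assumed, ballpark figures for a Phenom-era chip, not numbers from this article.

# Hedged sketch of the AMD side: CPU and HyperTransport speeds are both derived
# from one reference clock, so raising it pushes both up at once. All figures
# below are assumed, typical values for a Phenom-era system.

HT_CEILING_MHZ = 2000  # rough stability ceiling for the HT link

def amd_clocks(ref_mhz, cpu_mult, ht_mult):
    return ref_mhz * cpu_mult, ref_mhz * ht_mult

print(amd_clocks(200, 13, 10))  # stock-ish: 2,600MHz CPU, 2,000MHz HT - fine
print(amd_clocks(230, 13, 10))  # raised reference: 2,990MHz CPU, 2,300MHz HT - HT too fast
print(amd_clocks(230, 13, 8))   # drop the HT multiplier: 2,990MHz CPU, 1,840MHz HT - stable again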

By contrast, overclocking a graphics card is dead simple. Because it's a more self-contained piece of hardware, there's none of this confusing multiplier or FSB business: you just overclock the card itself, finding the right speeds for both the GPU and its onboard memory. Free software - some of it official NVIDIA/ATI driver plug-ins - will do the trick from within Windows, and built-in safety cut-offs and stability tests make it incredibly hard to damage the card, though of course you are going beyond the warranty.

It's also grown a little more complicated of late, in that you may need to overclock the shader clock as well as the GPU and RAM for the best boosts. In the case of NVIDIA cards, it used to be that this was twinned to the GPU speed, meaning a raise in one had a synchronous effect on the other, but for a little while now the two have been adjustable independently. So if you hit the speed ceiling on the GPU, it may yet be possible to eke more performance out of the card by pushing the shader clock a little further.
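
For illustration, here's a hedged Python sketch of that linked-versus-independent idea; the 2.5x core-to-shader ratio is an assumed, ballpark figure for NVIDIA cards of that era rather than anything quoted in this article.

# Hedged sketch: older NVIDIA drivers tied the shader clock to the core clock at
# a fixed ratio; newer ones let the two be set independently. The 2.5x ratio is
# an assumption for illustration only.

LINKED_RATIO = 2.5  # assumed core-to-shader ratio when the clocks move together

def shader_clock_mhz(core_mhz, linked=True, manual_shader_mhz=None):
    if linked:
        return core_mhz * LINKED_RATIO  # shader moves in step with the core clock
    return manual_shader_mhz            # otherwise it can be pushed on its own

print(shader_clock_mhz(600))                                        # linked: 1500.0MHz
print(shader_clock_mhz(600, linked=False, manual_shader_mhz=1566))  # unlinked: 1566MHz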


While the present situation is that you can overclock everything and be pretty confident it'll work, the future of the form is harder to call. One thing seems sure: it's not a dirty little nerdy secret anymore, but an increasingly common practice, most especially with Core 2 chips. There's a vast aftermarket cooler industry to support it, and even cheap motherboards can handle a bit of a free boost. If anything, overclocking will become easier, with more and better applications to achieve it within Windows rather than from the BIOS, and possibly more in the way of automatic volt-modding. But much depends on the future of desktop processing. There's a big war brewing between Intel and NVIDIA as to whether the CPU or the GPU will be the major element in the PC of the near future.

Intel are pushing ray-tracing using a multi-core CPU to render game graphics, while NVIDIA's CUDA enables its recent GeForce cards to perform parallel processing, such as video encoding and in-game physics, far faster than a CPU could manage. If either of these beds in, overclocking will need to take it into account. At the same time, the slow move to ever-more cores potentially reduces the need for conventional overclocking, as raw clock speed becomes a lesser concern than multi-threading and, in the case of 3D cards, the number of stream processors and texture units. That's hardly going to stop anyone from trying it, of course. Even when its effects are minimal, overclocking's always going to be a sure-fire way of making a system feel like it's yours rather than simply a collection of mass-produced parts.

Modding the case is one thing, but what makes a PC is its performance. When you've painstakingly tweaked that performance into something that suits your own purposes, and it feels like you've got far more than you paid for, the system will seem more uniquely yours than all the green neon tubing in the world could ever make it.

Author: Sandra Prior

Websites: http://usacomputers.rr.nu and http://sacomputers.rr.nu
