ATI Radeon HD 4870 X2


ATI Radeon HD 4870 X2 - AMD Back On Top

by Marco Chiappetta

AMD hasn't exactly kept the product we're going to be showing you here today a secret. Once NVIDIA launched the GeForce GTX 200 series, and AMD had a look at what the cards could do first hand, AMD's marketing machine was tuned up and revved to its redline, extolling the features and benefits of their upcoming GPU. Then, when the initial products in the Radeon HD 4800 series launched, AMD's plan became quite clear.

The Radeon HD 4800 series didn't overwhelm NVIDIA's GTX 200 series with raw performance. In fact, the GeForce GTX 280 and 9800 GX2 were more powerful than the Radeon HD 4870. The Radeon HD 4800 series cards were still excellent, however, and they were offered at extremely competitive prices, which put significant pressure on NVIDIA. At the time of their launch, the Radeon HD 4850 and Radeon HD 4870 were both less expensive and more powerful than the GeForce 9800 GTX and GeForce GTX 260, respectively. Since then, NVIDIA has reacted with a quick round of price cuts.

While enthusiasts were contemplating the purchase of a new Radeon or GeForce, AMD then planted another seed and released some concrete details regarding the Radeon HD 4870 X2, as if to say, "Yeah, we've got you covered at the $300 price point and a new, ultra powerful behemoth is coming real soon too. Maybe you should hold onto your upgrade money for a bit?"

That behemoth is the Radeon HD 4870 X2. As its name suggests, the card features two RV770 GPUs running in tandem, for what is effectively a Radeon HD 4870 CrossFire configuration on a single PCB. Other than its pair of GPUs, however, the Radeon HD 4870 X2 has a few more differentiating factors we'll need to tell you about. Read on for the full scoop...


http://hothardware.com/Articles/ATI%2DRadeon%2DHD%2D4870%2DX2%2D%2DAMD%2DBack%2DOn%2DTop/

MSI N280GTX-T2D1GOC, GeForce GTX 280 Redux


by Dave Altavilla

NVIDIA's GeForce GTX 200 series of graphics cards, which launched about 60 days prior to the publishing of this article, has undergone a significant price reduction since it first debuted. At the time, NVIDIA flexed its GPU muscle with a high-end, single-GPU graphics design that their primary competition (AMD) could only compete with by throwing two GPUs at the problem. As such, NVIDIA initially priced their flagship GeForce GTX 280 at a significant premium: $649 upon its introduction on launch day. Would you believe us if we told you back then that these cards would be selling for $200 less in only about two months' time? You can believe it today, however, as many etailers have cards on the shelf right now at $449 and even less, with rebates etc.

Since then, we've seen the launch of AMD's latest ATI dual-GPU flagship, the Radeon HD 4870 X2. There's no question, in all of the numbers we've shown you, it's faster than any standard retail GeForce GTX 280. On the other hand, AMD's Radeon HD 4870 X2 also currently costs $100 - $150 more, generates a lot more heat, consumes more power and in general has a more prominent acoustical signature versus a GeForce GTX 280. So with that stage set, we'll take you through the ins and outs of another retail version of the GeForce GTX 280 from MSI, as well as compare and contrast it against other cards above and below its price range. Journey on for our view of the GeForce GTX 280 redux and see how MSI's N280GTX-T2D1GOC shapes up in the current high-end 3D graphics landscape.


•MSI N280GTX-T2D1G Features

650MHz Core clock
2300MHz Memory clock
2nd Generation NVIDIA® Unified Architecture
PhysX™ Ready
3-way NVIDIA® SLI™ Technology
Microsoft® DirectX 10 Support
NVIDIA® CUDA™ Technology
PCI Express 2.0 Support
GigaThread™ Technology
NVIDIA® Lumenex™ Engine
16x Anti-aliasing Technology
128-bit floating point High Dynamic-Range (HDR) Lighting
OpenGL® 2.1 Optimization and Support
Dual-link DVI Support
NVIDIA® PureVideo™ HD Technology
Discrete, Programmable Video Processor
Dual-stream Hardware Acceleration
Dynamic Contrast Enhancement & Color Stretch
HDCP Capable
Integrated SD and HD TV Output


•Video Output Function

TV-Out (via S-Video to Composite)
HDTV-out
Dual-link DVI x 2
VGA (via DVI to D-Sub adaptor)
HDMI (DVI to HDMI adaptor)

Gigabyte GeForce 7950 GT


by Shane Unrein

NVIDIA has enjoyed quite a bit of success the last couple of years thanks in part to its GeForce 6 and 7 Series graphics cards. Like every other company in the world, NVIDIA likes to ride its success as long as possible. In NVIDIA's case, this can be done with product refreshes. The most recent refresh brought us the GeForce 7950 GT, interestingly positioned between the 7900 GT and 7900 GTX.

What's attractive about the GeForce 7950 GT is that it doesn't cost much more than a 7900 GT, yet it offers nearly the same performance as the much pricier 7900 GTX. Additionally, the 7950 GT sports 512MB of GDDR3 clocked at 1.4GHz (effective). The GPU core runs at a speedy 550MHz. Just like the 7900 GT and GTX, the 7950 GT also boasts 8 vertex shaders and 24 pixel shaders.

Today, we have a 7950 GT from Gigabyte in the labs to see just how well the new GeForce 7 Series card can perform. The GV-NX795T512H-RH features reference clock speeds and a Zalman cooler for a little differentiation outside of the typical reference design HSF assembly. Read on to find out more...


http://hothardware.com/articles/Gigabyte-GeForce-7950-GT/

Gigabyte Radeon X800




Overclocking

As I suspected, the Gigabyte Radeon X800 did not allow us to overclock its core much at all. In fact, it took after the Gigabyte GeForce 6800, and I couldn't even push 2 more MHz out of it without causing freezes or crashes in a 3DMark 2005 loop. A stock 400 MHz it is. The GPU temperature after a loop or two at 400 MHz was around 75C.

The memory, however, is a completely different story. In the introduction I mentioned that the stock memory speed on this board was 480 MHz (960 MHz DDR). I was able to push that memory far beyond that, all the way to 545 MHz! This is a whopping 65 MHz overclock (130 MHz DDR) on memory chips that are rated at only 20 MHz more than the stock speed of the card. Very impressive.

Doing some quick benchmarks in Half-Life 2, I saw up to a 5% difference in performance at the higher resolutions with AA/AF.
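
For readers who like to see the math, here is a quick back-of-the-envelope check of the memory headroom described above, written as a small Python snippet. The clock figures are the ones quoted in the text; the percentage is a simple ratio, not a measured result.

# Memory overclocking headroom on the Gigabyte Radeon X800 (figures from the text).
stock_mem_mhz = 480   # real clock, 960 MHz DDR effective
oc_mem_mhz = 545      # highest stable real clock we reached

gain_mhz = oc_mem_mhz - stock_mem_mhz
gain_pct = gain_mhz / stock_mem_mhz * 100

print(f"Memory: {stock_mem_mhz * 2} -> {oc_mem_mhz * 2} MHz DDR effective")
print(f"Headroom: +{gain_mhz} MHz real ({gain_pct:.1f}% overclock)")
# Prints: Memory: 960 -> 1090 MHz DDR effective
#         Headroom: +65 MHz real (13.5% overclock)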

Conclusion

The Gigabyte X800 is a tough cookie to summarize. It is an excellent product from a price/performance perspective, just like the Gigabyte GeForce 6800. For anyone interested in an HTPC/silent PC, products of this genre are perfect: the R423 core is a first-class DX9 core ideal for medium to heavy gaming, yet the card is completely silent. The X800 delivers in spades and will present a serious challenge to the 6800 and especially the 6600GT. Again, the Gigabyte X800's pricing is not exactly at ATI's suggested $199 SRP, but arguably the extra cost is justified by the extra (and faster) memory along with the heatpipe design. There is still the X700 Pro in the $199 range, and I guess board partners don't want their boards cannibalizing each other.

As a customer trying to make a purchase decision, if I were aiming for a 100% silent PC, I would probably narrow my choices down to one of the two heatpipe solutions from Gigabyte: their Radeon X800 or their GeForce 6800. At that point, I would decide based on which games are more important to me and which brand (ATI vs. NVIDIA) I've had better experiences with in the past. If I were more interested in games like Far Cry or Half-Life 2, I'd pick the X800. If Call of Duty, Doom 3, or Jedi Knight 2 were my favourites, I'd probably pick the GeForce 6800. The biggest weakness on the ATI side is the OpenGL thorn, and with other marquee titles like Quake 4 this will remain a lingering question. On the D3D side, the X800 is consistently fast.

As it stands right now, the Gigabyte X800 is a worthy product for both the enthusiast gamer and casual gamer, as well as one of few obvious choices for anyone on a quest to reduce computer noise. While it may not be the screamer that the X800 XT or 6800 Ultra are, it is still head and shoulders above most of the graphics solutions that the average person possesses.


http://www.neoseeker.com/Articles/Hardware/Reviews/gigabytex800/9.html

GeCube Radeon HD3870X2 Video Card



GeCube Radeon HD3870X2
GeCube is a brand owned by Info-TEK, a company from Taiwan that specializes in making graphics cards based on AMD/ATI chips. The HD3870X2 card is based on the RV670 chip, or actually on two of them, both on the same PCB (that's where the X2 comes from).

Box Contents

In the box, along with the card, you will find a thick manual, a CD with the drivers, two DVI to VGA adapters, one DVI to HDMI adapter, an S-Video to RGB cable and one CrossFire bridge.

Specifications

Compared to the HD3870X2, even the massive 8800GTS 640MB looks small. The HD3870X2 is 10.5 inches (26.67cm) long, which is longer than the width of an ATX board. The card has 640 stream processors, 32 texture units and 32 rendering units. That's a total of 1.332 billion transistors, which give it computing power of over one teraflop. You have to admit that these are some impressive numbers. The GPU supports the latest DX 10.1 and Shader Model 4.1, 32-bit FPU filtering and OpenGL 2.0. The GPUs work at 825MHz each (the normal HD3870 works at 775MHz). The card comes with 1GB of GDDR3 memory working at 900MHz (1800MHz effective). The ATI PowerPlay technology controls the GPUs and changes the voltage and frequency depending on load, which is why this card has very low power consumption when idling in Windows. Other features include the Unified Video Decoder (H.264/AVC and VC-1) that allows playback of HD DVD and Blu-ray content in 1080p, and the Ultimate Image Quality feature that goes beyond the 1080p standard and allows resolutions up to 2560x1600. It also supports HDMI and has AMD's new Xilleon HDTV encoder.
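
As a rough sanity check of the "over one teraflop" claim, the peak figure can be reproduced from the unit count and clock quoted above, assuming each stream processor issues one multiply-add (two floating-point operations) per clock. This is a simplified sketch of our own, not an official AMD calculation.

# Peak shader throughput estimate for the HD3870X2 (two RV670 GPUs).
stream_processors = 640      # 2 GPUs x 320 stream processors
core_clock_ghz = 0.825       # 825 MHz per GPU
flops_per_sp_per_clock = 2   # one multiply-add per clock (assumption)

peak_gflops = stream_processors * core_clock_ghz * flops_per_sp_per_clock
print(f"Theoretical peak: {peak_gflops:.0f} GFLOPS")   # ~1056 GFLOPS, just over 1 TFLOP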

Overclocking and temperature

Overclocking this card is only possible if you have an 8-pin PCIe power connector. Combining the 8-pin and the 6-pin connectors unlocks the ATI Overdrive option in Catalyst Control Center, and you can then raise the GPU and memory frequencies. Unfortunately, ATI Overdrive has a limit of 878MHz for the GPUs and 955MHz (1910MHz DDR) for the memory. At those frequencies the card completed all the benchmarks without any problems, and I'm sure it could be overclocked further if ATI Overdrive didn't have the limit. They will probably raise the limit in a future driver version though.

Temperatures:

Idle:

GeCube 3870X2 - 65°C (149°F)
GeCube 3870X2 @ OC - 66°C (150.8°F)

Load:

GeCube 3870X2 - 83°C (181.4°F)
GeCube 3870X2 @ OC - 86°C (186.8°F)

Benchmarks

3DMark 06

HD3870X2 - 15180 marks

COD4 Modern Warfare DX10 (1280x1024, 4xAA, 16xAF)

HD3870X2 - 77.06 fps
HD3870X2 @ OC - 77.18 fps

Company Of Heroes - Opposing Fronts DX10 (1280x1024, 4xAA, 16xAF)

HD3870X2 - 48 fps
HD3870X2 @ OC - 49.3 fps

Crysis DX10 (1280x1024, 4xAA, 16xAF)

HD3870X2 - 21.50 fps
HD3870X2 @ OC - 26.13 fps

Half Life 2 Episode Two (1680x1050, 4xAA, 16xAF)

HD3870X2 - 121.4 fps
HD3870X2 @ OC - 123.8 fps

Conclusion

The GeCube Radeon HD3870X2 is a card with lots of potential. I believe that performance will go up with every driver update, and the overclocking limits should also be removed. Considering that multi-card CrossFire and SLI setups still have lots of bugs, this single-card solution should be a better choice. But if you want the ultimate gaming machine, this card can also be paired with another one, which would make a total of 4 GPUs in one PC, a real power-sucking monster. All in all, I think that this card is not a bad choice for people who can afford it.

Links

HD3870X2 at Rage3D.com - tons of good pictures and benchmarks

HD3870X2 vs. 9800GX2 at Tom's Hardware

NVIDIA in the red last quarter due to manufacturing issue

By Joel Hruska

NVIDIA has published its results for the second quarter of fiscal 2009. The company's performance fell short of previous expectations for multiple reasons, and NVIDIA's forecast for the third quarter isn't much brighter, with projected sales below seasonal levels. NVIDIA reported revenue of $892.7 million for the second quarter, a decrease of some 5 percent. The company dipped into the red this quarter, with an operating loss of $120.9 million (22¢ per share).

The single most important factor that contributed to that loss, of course, is the $196 million warranty charge NVIDIA admitted it was taking a month ago. The company fielded several questions regarding the size and nature of that problem and once again reiterated that the manufacturing issue in question affected a relatively small batch of parts, that the company remains fully committed to repairing those parts, and is working closely with all of its OEM partners. When asked whether or not it believed the problem had affected other GPUs, expected additional costs, or predicted design losses as a result of the issue, NVIDIA said no. Based on the company's profile of the flaw, the one-time $196 million charge will cover it.

A challenging market
NVIDIA could have potentially tried to blame its quarterly loss entirely on that one-time charge, but the company declined to do so. Instead, it presented a very candid picture of its own weak performance during the quarter and the multiple challenges it faced. One problem, according to the company, was that it had "underestimated its competitor," and had mispositioned its new product introductions. This issue has since been corrected, and the GTX 260 / GTX 280 are now far more accurately positioned against the HD 4800 series from ATI than they were when those products launched.

Weak sales in the second quarter drove up inventory levels, and greater-than-expected inventory levels, in turn, have slowed NVIDIA's transition to 55nm. NVIDIA CEO Jen-Hsun Huang claimed that the transition to 55nm technology is ongoing, but that we won't see these cards across the market until existing 65nm supplies have been depleted. The weak US dollar drove US OEM sales in European markets, but it also weakened the sales performance of native European manufacturers. NVIDIA claims this substantially impacted its business, as desktops in Europe have a substantially higher GPU attach rate than their American counterparts.

Huang pulled no punches in his evaluation of the company's second quarter. "Our second quarter financial performance was disappointing," he said. "The desktop PC market around the world weakened during the quarter, and our miscalculation of competitive price position further pressured our desktop GPU business. We have a great product line-up, and having taken the necessary pricing actions, we are strongly positioned again. Our focus now is to drive cost improvements and to further enhance our competitiveness through the many exciting initiatives we have planned for the rest of the year."
He then added, "In contrast, the rest of our businesses did not exhibit the same dynamics as our desktop business. The notebook GPU, MCP, and Professional Solutions groups grew a combined 27 percent year-over-year. Though we approach the near term with caution, we remain very optimistic about the expanding universe of visual computing and the exciting growth opportunities made possible by CUDA, our general purpose parallel computing architecture."

No plans for UMPCs
CUDA, Tegra, and mobile Internet Devices (MIDs) were all hot topics this afternoon. Huang was dismissive of the UMPC market, implying that NVIDIA has no plans to compete in that space, but talked extensively about how MIDs are the future of computing. The company is also putting a massive amount of weight behind CUDA, which it apparently envisions as the software engine that will power adoption of NVIDIA GPUs within everything from MIDs to supercomputers. None of NVIDIA's forward-looking statements touched on NV70 or any particular aspect of a next-generation architecture; they focused entirely on massively parallel computing and the ways in which the current generation of GeForce products can be adapted to deliver tremendous performance increases.

NVIDIA plans to address its revenue shortfall by significantly cutting operating expenditures, which ballooned by 34 percent year-on-year, and believes strong sales of its 55nm products will improve the situation. The company does not believe it lost market share in its second fiscal quarter, and while it forecast a tough third quarter, it said nothing about a long-term stay in the red. Barring further one-time charges, the company may quickly return to profitability; take away that charge and NVIDIA pops back into the black by some $60 million.

As for the rumors and allegations of widespread GPU/chipset failures, NVIDIA has now denied them in its quarterly earnings report. If the company's statements are accurate, there's absolutely nothing to them. If the company is lying about the scope or nature of the problem, it's opening the doors to potential shareholder lawsuits.

Personally, I'm ready for this topic to go away. The scope of the allegations continues to expand, proof remains nonexistent, and NVIDIA has now officially stated that the scope and nature of the problem is confined to a relatively small batch of parts—which, incidentally, is what it has been saying all along.

http://arstechnica.com/news.ars/post/20080813-nvidia-in-the-red-last-quarter-due-to-manufacturing-issue.html

ASUS P5GC-MX

ASUS P5GC-MX (R2.02) LGA 775 Intel 945GC Micro ATX Intel Motherboard

Best-Performing All-in-One Solution Motherboard with Intel Core 2 Duo Support

- Intel LGA775 Pentium CPU
- Intel® Core™2 Duo Ready
- Dual-Core CPU Support
- Intel® 945GC/ICH7
- Dual channel DDR2
- 6-channel High Definition Audio
- EZ Flash


Best-Performing All-in-One Solution Motherboard

The P5GC-MX, with the Intel 945GC chipset inside, supports Intel dual-core CPUs and features an 800MHz FSB, PCI Express x16, a Serial ATA interface, a high-performance integrated graphics engine, dual-channel DDR2 memory, and an HD Audio CODEC. Users can experience faster graphics performance and higher video quality today. The P5GC-MX is the most affordable all-in-one solution platform for Intel® Core™2 processors with the Intel 945GC chipset inside.


LGA775 Intel® Core™2 Processor Ready
This motherboard supports the latest Intel® Core™2 processors in the LGA775 package. With the new Intel® Core™ microarchitecture and an 800 MHz FSB, the Intel® Core™2 processor is one of the most powerful and energy-efficient CPUs in the world.




Dual-Core CPU
Enjoy the extraordinary CPU power from the latest dual-core CPU. The advanced processing technology contains two physical CPU cores with individually dedicated L2 cache to satisfy the rising demand for more powerful processing capability.


Intel® 945GC Chipset
The Intel 945GC graphics memory controller hub (GMCH) and the ICH7 I/O controller hub provide the vital interfaces for the motherboard. The GMCH features the Intel Graphics Media Accelerator 950, an integrated graphics engine for enhanced 3D, 2D, and video capabilities. The GMCH contains one 16-lane PCI Express port intended for an external PCI Express graphics card and provides the interface for processors in the 775-land package with an 800/533MHz FSB and dual-channel DDR2 at speeds of up to 533MHz.



Intel Graphics Media Accelerator 950
The Intel graphics engine has new capabilities that provide a significant increase in graphics performance. DirectX 9 hardware acceleration, a 400MHz core clock, and up to 224MB of video memory together provide a full-value, high-performance graphics solution. Through dual-independent display technology, different content can be displayed on each monitor, or the desktop can be stretched across both displays for more workspace.


Dual-Channel DDR2 533
Dual-channel DDR2 technology doubles the bandwidth of your system memory, boosting system performance beyond that of existing single-channel memory solutions on the market.
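
To put a rough number on that claim, here is an illustrative calculation of peak DDR2-533 bandwidth, assuming a standard 64-bit (8-byte) memory channel; real-world throughput will be lower.

# Theoretical peak bandwidth for DDR2-533, single vs. dual channel.
transfers_per_second = 533_000_000   # DDR2-533: 533 million transfers/s per channel
bytes_per_transfer = 8               # 64-bit channel width (assumption: standard DIMM)

single_channel_gbs = transfers_per_second * bytes_per_transfer / 1e9
print(f"Single channel: {single_channel_gbs:.1f} GB/s")       # ~4.3 GB/s
print(f"Dual channel:   {single_channel_gbs * 2:.1f} GB/s")   # ~8.5 GB/s peak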


PCI Express Architecture
PCI Express is the latest I/O interconnect technology that will replace the existing PCI bus. With bus bandwidth 4 times higher than that of the AGP 8X interface, the PCI Express x16 bus performs much better than AGP 8X in applications such as 3D gaming. PCI Express x1 and x4 also outperform the PCI interface with their exceptionally high bandwidth. The high-speed PCI Express interface enables new usages on desktop PCs, e.g. Gigabit LAN, 1394b, and high-speed RAID systems.


High Definition Audio
Enjoy a high-end sound system on your PC! The onboard 6-channel HD audio (High Definition Audio, previously codenamed Azalia) CODEC enables high-quality 192KHz/24-bit audio output, jack-sensing, retasking functions and multi-streaming technology that simultaneously sends different audio streams to different destinations. You can now talk to your partners on a headset while playing a multi-channel network game, all on one computer.


SATA 3Gb/s
Serial ATA 3.0Gb/s is the next-generation ATA specification that provides scalable performance for today and tomorrow. With up to 300MB/s data transfer rates, Serial ATA II is faster than current Parallel ATA while providing 100% software compatibility.


Supports up to 8 USB 2.0 ports
USB 2.0 is the latest connectivity standard for next-generation components and peripherals. Backwards compatible with current USB 1.1 peripherals, USB 2.0 delivers transfer speeds up to 40 times faster, at 480Mb/s, for easy connectivity and ultra-fast data transfers.


EZ Flash
EZ Flash is a user-friendly BIOS update utility. Simply press the pre-defined hotkey to launch this tool and update BIOS from a floppy disk before entering the OS.


CrashFree BIOS 2
The CrashFree BIOS2 feature now includes the BIOS auto-recovery function in a support CD. Users can reboot their system through the support CD when a bootable disk is not available, and go through the simple BIOS auto-recovery process. ASUS motherboards now enable users to enjoy this protection feature without the need to pay for an optional ROM.


MyLogo
ASUS MyLogo personalizes and adds style to your system with customizable boot logos.


Q-Fan
Constant, high-pitched noise generated by heatsink fans is a thing of the past thanks to ASUS Q-Fan. This motherboard's Q-Fan technology intelligently adjusts fan speeds according to system load to ensure quiet, cool and efficient operation.


Compliance with the RoHS Directive
The motherboard and its packaging comply with the European Union's Restriction on the use of Hazardous Substances (RoHS). This is in line with the ASUS vision of creating environment-friendly and recyclable products and packaging to safeguard consumers' health while minimizing the impact on the environment.

ASUS Xonar DX PCI Express 7.1 Audio Card


by Jeff Bouton in Sound

Just a few years ago, if someone said that integrated audio would deliver great performance with many of the features of a full-blown add-in card, it might have raised an eyebrow or two. Today, however, thanks to vastly improved integrated technologies, onboard audio solutions are often more than satisfactory for a large portion of consumers. Nonetheless, there are those who want a better audio experience, which can only be delivered by the likes of an add-in card.

A few months ago, ASUS jumped into the audio card market with both feet with the introduction of the Xonar product line. One of the first models to market was their premium-class Xonar D2 Ultra Fidelity 7.1 sound card. The D2 boasted a superb retail package with all of the cabling, software and extras one would need thrown in, along with a slick, eye-catching design, for around $180. Not only did the Xonar D2 Ultra Fidelity 7.1 deliver on all fronts, it proved to be solid competition for Creative, which holds the majority of the add-in audio card market.

To further infuse themselves into the audio card market, ASUS didn't stop with the Xonar D2 Ultra Fidelity 7.1. ASUS has also developed several other models to meet the needs of consumers of all budgets. Today, we're going to take a look at ASUS' latest offering which targets the mainstream market, with our assessment of the Xonar DX PCI Express 7.1 Audio Card. Selling for about $89, the Xonar DX takes many of the Xonar D2's more desirable features and condenses them into a smaller PCI Express ready PCB with the goal of delivering a major improvement over common integrated audio solutions.

AMD 790GX Chipset Platform Launch


by Marco Chiappetta in Motherboards

If you've been on top of the PC scene for any length of time, you probably know that, whether by choice or necessity, AMD has taken a different tack as of late. Whereas the company was all about bigger, faster, and better during the Athlon's heyday, AMD is now more about touting the performance per dollar and value of their products. While they may not have a CPU with the horsepower to compete in the benchmark war with Intel's $1000 behemoths, AMD's affordably priced Phenoms do offer good bang for the buck.

The value-conscious mentality that has permeated AMD's recent graphics card and processor launches rings true in their motherboard chipset business as well. The 690G and 780G, for example, offered solid feature sets and excellent IGPs at very affordable prices. And today, AMD continues this recent tradition with the introduction of the 790GX chipset.


The AMD 790GX is a tough product to categorize. It is targeted at value-conscious gamers, enthusiasts, and multimedia buffs all at the same time. The block diagram above gives a high-level overview of the chipset's main features and illustrates how each component is connected in the architecture.

As you can see, the AMD 790GX Northbridge is connected to the AM2+ socket through a HyperTransport 3.0 link, and it sports an integrated graphics core along with a flexible PCI Express lane configuration. PATA, 6 SATA ports, HD audio, and 12 USB ports are supported by the SB750 Southbridge. Also, at the bottom of the diagram, a new feature you may not be familiar with makes its debut: ACC, or Advanced Clock Calibration. More on ACC a bit later.

The AMD 790GX is manufactured at 55nm and features an integrated Radeon HD 3300 graphics processor (IGP) that combines a DirectX 10-compliant Shader Model 4.0 graphics core, a Unified Video Decoder (UVD), two x8 PCI Express 2.0 links or one x16 link, HyperTransport 3.0, DVI / HDMI interfaces, and internal / external TMDS and DisplayPort capability in a single chip. The graphics core is actually identical to the one found in the 780G, but in the 790GX it is clocked much higher (700MHz) for up to 33% better performance, PowerPlay features have been enhanced to support lower power states, and many boards featuring the 790GX will be equipped with dedicated sideport memory for increased performance. Of course, the 790GX supports ATI Hybrid CrossFire technology as well, for increased performance or low-power operation.

The AMD SB750 Southbridge communicates with the Northbridge through the A-Link Express II interface. The AMD SB750 offers support for both SATA RAID and IDE drives, and it is the key piece in the Advanced Clock Calibration puzzle. In total, the SB750 supports 6x SATA 3.0 Gb/s ports that can be set up in IDE, AHCI, JBOD, RAID 0, RAID 1, RAID 5 or RAID 10 modes, 12x USB 2.0 and 2x USB 1.1 ports, DASH 1.0, 6x PCI slots, HD Audio, IDE, and Serial and Parallel ports.

DisplayPort Cards Coming, AMD's RV635 Unveiled


by Dave Altavilla

We've got a bit of show-and-tell for you here this morning. You might recall recent announcements of a new digital display interface, dubbed DisplayPort by VESA, the Video Electronics Standards Association. This new interface will supplant DVI and VGA connections eventually and its micropacket architecture offers significantly more bandwidth with multi-monitor support over a single cable. Like HDMI, a DisplayPort connection can carry 8-channel 24-bit audio, but also offers a dedicated auxiliary link for control communications of things like panel I/O and microphone connections. There are hundreds of big brand name companies behind the standard that is set to compete with HDMI for desktop and notebook dominance, including the likes of AMD, Intel, NVIDIA, Samsung and Dell. However, DisplayPort is more likely to co-exist with HDMI, since HDMI is specifically targeted for consumer electronics like set-top boxes, DVD players etc, while DisplayPort was designed from the ground up for computing.

Though we've heard through the grapevine that NVIDIA is readying DisplayPort capable graphics cards for sometime early next year, AMD has stepped up with the first DisplayPort-enabled graphics card to hit our test labs.


We've got an RV635 XT board here and have been testing it out on an unreleased LCD panel that we'll be showing you in the coming weeks. On the board you'll note that the surrounding circuitry for each DisplayPort connection is minimal and devoid of those all-too familiar Silicon Image TMDS chips that add cost to any dual link DVI-D connection. Since each DisplayPort cable can run multiple monitors in a daisy-chain configuration, imagine a four panel setup from a single graphics card and even possibly a single cable connection. We'll have more to come on the LCD side of the equation, soon.

Can RAM Speed Up My Computer?

By Kris Mainieri

Many people are looking for ways to speed up their PC. And adding RAM seems to be a viable option. To answer the question: yes, RAM can speed up your computer. RAM is an acronym for Random Access Memory. Everything you do using your computer uses up RAM.

The amount of RAM used depends on what kind of program is running. Here is how it works. All operating systems, such as Microsoft Windows, use a component called the virtual memory manager, or VMM. Running a program, such as an internet browser or an instant messenger, will trigger your computer's microprocessor to load an executable file into RAM. For larger programs, this will typically take up around 5 megabytes of RAM. The program also uses shared DLLs, or dynamic link libraries, which can take another 20 to 30 megabytes of RAM.

Most users open several programs simultaneously. For example, when doing research, you might have a word processor running alongside multiple internet browser windows. Sometimes a music player is also running. All of this adds to the RAM used. If you use more RAM than what is currently installed in your unit, then you will surely experience slower PC speeds.

In this case, all you have to do is increase your RAM. To find out exactly how much RAM you need to add to your hardware, you first have to find out how much RAM you already have. If you don't know your RAM status, right-click My Computer and choose Properties. On the General tab, various information, including the amount of installed RAM, will be displayed. Then press Ctrl+Alt+Delete to open Task Manager. In the Processes tab, you will see how much RAM each running program uses. Add this up and you will have the amount of RAM in use. Calculate the deficit against your installed RAM. This will, more or less, give you an idea of how much RAM you need.
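
If you prefer to script the same check rather than click through Task Manager, the sketch below shows one way to do it in Python with the third-party psutil library (installed separately with pip). The figures it reports are approximate, and the process listing is only meant to mirror what the Processes tab shows.

# Report installed, used and available RAM, plus the biggest memory users.
import psutil

mem = psutil.virtual_memory()
print(f"Installed RAM : {mem.total / 2**30:.1f} GiB")
print(f"In use        : {mem.used / 2**30:.1f} GiB ({mem.percent}%)")
print(f"Available     : {mem.available / 2**30:.1f} GiB")

# Five largest processes by resident memory, similar to the Task Manager view.
procs = sorted(psutil.process_iter(['name', 'memory_info']),
               key=lambda p: p.info['memory_info'].rss if p.info['memory_info'] else 0,
               reverse=True)
for proc in procs[:5]:
    rss_mb = proc.info['memory_info'].rss / 2**20 if proc.info['memory_info'] else 0
    print(f"{proc.info['name']:<25} {rss_mb:7.0f} MB")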

Adding RAM is a cheap way to speed up your computer as opposed to buying a brand new, faster unit. And while you're at it, perhaps you might consider buying an external hard drive. This is useful for saving and transferring important files that you don't often use. Keeping only regularly used files on your hard disk will leave more free space, which results in a faster PC experience.

Aside from adding RAM, another way of speeding up your computer is to continually free up space on your hard disk. Uninstall and remove unused programs, as they eat up space you could use for something else. Clear your internet cache of temporary files and offline content every so often.

If you don't have the budget for upgrading hardware such as RAM, you can employ certain techniques to make the most of your current RAM. Make sure you only open programs that you need, and avoid opening unnecessary programs or software. This will lead to less RAM usage, giving you faster PC speeds and fewer of the headaches and frustrations that a slow PC can bring.

Kris Mainieri is an accomplished Computer Tech focusing on innovative and unique ways to help people take their computing performance to a whole new level, for free.


Dell - New BIOS For NVIDIA Laptop GPU Problems

By John Xu

NVIDIA's mobile graphics failure woes are increasing, as Dell's BIOS fixes, which mitigate the chance of graphics chip failure in its laptops, come at the cost of noise and battery life.

NVIDIA has already admitted that at least a portion of the rumors involving graphics failures were true, and announced a $200 million one-time set-aside to cover costs related to the failures. The fault was not, says NVIDIA, in the silicon pattern of the GPUs themselves, but was caused by the packaging, which was sensitive and could fail under heat. It took pains to deny rumors of a broad pattern of failing chips, including problems with desktop cards.

Now, Dell is releasing new firmware for a huge number of laptops, including the Inspiron, Latitude, Precision, Vostro, and XPS lines. The new firmware makes fan control more aggressive, spinning the fans up more to keep the GPUs cooler on systems with potentially affected NVIDIA GPUs. Dell emphasizes that failed GPUs won't be fixed by the new BIOS, but that systems running it should fail at a lower rate. According to Dell, the fans on the laptop may run more frequently, but will not spin up to full speed as often; users are essentially trading short periods of full fan usage for longer periods where the fan might run at 25-50 percent. Despite these drawbacks, Dell urges everyone with the potentially affected laptops to update, and has announced that all new laptops will ship with the new BIOS.

The move by Dell is interesting. It's possible that Dell knows more than we do, and more than NVIDIA cares to admit, and is shipping the new BIOS to forestall a massive wave of failures. It's also possible that the explanation is significantly less sinister; Dell could simply want to minimize the number of warranty service calls it gets. It could also be worried about triggering a CPSC product recall or product liability lawsuits, and view fixing as much as it can as due diligence, so that the recall or lawsuit never materializes.

The full extent of NVIDIA's difficulties with G84 and G86 won't be known for a while, but Dell is being proactive about minimizing it as much as possible. Owners of potentially affected laptops should cross their fingers.


Article Source: http://EzineArticles.com/?expert=John_Xu

USB Flash Drives a Security Threat?

By Chris Proth

Progress in new technology, driven by the fall in NAND flash prices, has led to a number of small devices that can hold a significant amount of data. The device that epitomises all of these is the USB Flash Drive. In 1999 a 16MB USB Flash Drive cost £50; now a 4GB USB Flash Drive costs only £10.

USB Flash Drives have been a great friend to businesses over the last few years, making it quick and easy to transfer data from one machine to another and to carry work home to finish without needing to haul around a heavy laptop. However, with flash drives being able to hold ever growing amounts of data and transfer data faster than ever, USB flash drives are being looked at by some companies as one of their biggest threats to security.

The use of USB Flash Drives in business poses three categories of risk: the contraction of a computer virus, the loss of data, and the theft of data. All three of these are considerably serious. A computer virus could totally wipe out a company's IT infrastructure and delete its most important files; if the company doesn't have an effective data backup, this could ruin it. If a company holds sensitive information and loses it, the company could be found to have broken the DPA and could be fined a crippling sum of money, not to mention the damage to its reputation. And if a salesperson walks out with your entire client database and gives it to his new firm, you're going to lose at least a few customers.

Fortunately there are some solutions out there to reduce the chances of businesses being a victim of one of the sad situations mentioned above.

1. Removal of USB ports - The removal of USB ports is a rather severe reaction to this potential threat and is often practically difficult to implement; a lot of firms use USB ports for their mice and keyboards, and these would all have to be changed.

2. Encryption of USB Drives - Encryption of a USB Flash Drive can take one of two forms: hardware encryption or software encryption. Forms of hardware encryption may include only allowing access to the USB Flash Drive after the user has authenticated his or her authority using a password, an encryption certificate or biometric authentication (most commonly fingerprint recognition, but sometimes also retina recognition). The encryption of storage media can also be managed via software; two examples are Microsoft's Active Directory and Novell's eDirectory, both of which can be set up to use certificates to encrypt data held on USB Flash Drives (a minimal software-encryption sketch follows this list).

3. Restrict access to important files on critical servers - As with all company information, access should be given on a need-to-know basis and taken away from employees as soon as they no longer need it: if you're going to give someone their notice, is it worth letting them keep access while they sit it out?

4. Monitor access of company employees to sensitive files - Just because your employees have access to data doesn't mean they have any right to do with it as they please. Monitoring their behaviour with sensitive data is the best way to spot any unusual pattern, and can give you time to act and stop any possible data leaks.

5. Limit the size of data transferred to USB drives - Data transfer restriction software can be used to protect sensitive files by capping the size of files that can be copied to a USB Flash Drive. The file size limit simply needs to be set below the size of the smallest sensitive file.

6. Enforce USB Flash Drive policies - Enforcing USB Flash Drive policies can go a long way towards reducing the risk of accidental loss of data, although policies alone aren't likely to stop a disgruntled employee from doing any damage. Here are some policies you might want to consider: USB Flash Drives should only be used for data transfer and not storage; data should not be kept on USB Flash Drives for over a week; data should not be transferred to USB Flash Drives without prior consent from a manager.

7. Firewalls and antivirus - Viruses can be contracted from USB Flash Drives just as they can be contracted from any other type of media. To keep yourself safe, make sure you have the latest updates installed for whichever antivirus software you use.
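
As promised under point 2, here is a minimal sketch of the software-encryption idea using Python and the third-party cryptography package (Fernet, which is AES-based). The file names and the drive letter are hypothetical, and the key handling is deliberately simplified; in practice the key would be derived from a passphrase or issued as a managed certificate.

# Encrypt a sensitive file before it is copied to a USB flash drive.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # simplified: store/derive this securely in real use
cipher = Fernet(key)

with open("clients.csv", "rb") as source:              # hypothetical sensitive file
    encrypted = cipher.encrypt(source.read())

with open("E:/clients.csv.enc", "wb") as target:       # hypothetical USB drive path
    target.write(encrypted)

# Later, on an authorised machine that holds the key:
with open("E:/clients.csv.enc", "rb") as stored:
    plaintext = cipher.decrypt(stored.read())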

Although the removal of USB ports is the only 100% certain way of making sure USB flash drives pose you no risk, it is not a practical solution; the benefits of USB flash drives are vast, and getting rid of them would be a step back for computing. By staying vigilant about the behaviour of business staff and using one or a variety of the security solutions mentioned above, you can protect yourself from flash drive misuse while still benefiting from their versatility.

Chris Proth is a writer with a keen interest in the USB Flash Drive market; for promotional USB Flash Drives he recommends Flashbay Ltd.

Article Source: http://EzineArticles.com/?expert=Chris_Proth

Intel DG45ID MotherBoard



Intel broke new ground when it launched the GMA X3000 graphics core in the G965 chipset, as it was the first Intel graphics core to use unified shaders. The next big step was the GMA X3500 core in the G35 chipset, which supports Shader Model 4.0 and DirectX 10.0; however, you couldn't really claim that the graphics were up to the job of playing games. Today we're looking at a pre-release sample of the Intel DG45ID with the G45 chipset, which uses the GMA X4500HD graphics core, and once again Intel has come up with something new.


G45 has had the same 65nm die shrink as the mainstream P45 chipset, and Intel has used the space that has been freed up to increase the unified shader count from eight to ten. The clock speed of the core has been reduced from 667MHz in the G35 to 533MHz, so the amount of graphics power has remained roughly constant (10x533 is close to 8x667), but we would expect that the power consumption has been reduced.


You'll have noticed the HD suffix that Intel has added to the GMA X4500 model code, as Intel has worked on the High Definition support in this chipset. Intel has supported hardware decoding of H.264, VC1 and MPEG2 since the introduction of the GMA X3000 core, and it maintained that list of features in the GMA X3500. With the GMA X4500HD it has enhanced its H.264 hardware support by adding Motion Compensation and Inverse Transform, which were already present and correct for VC1 and MPEG2. The new core also gains Variable Length Decode support for all three codecs, and some post-processing features, such as Non-Linear Anamorphic Scaling, that are, frankly, incomprehensible to your reviewer.


We get things back on track as we scroll down the list of features and reach the ports and connectors. Until G45 came along, Intel supported HDMI through SDVO (Serial Digital Video Out), a connection carried over the PCI Express interface. With the latest graphics core, HDMI support is integrated and DisplayPort is added, but the really interesting thing is that the DG45ID motherboard has dual digital outputs in the shape of an HDMI port and a DVI-I connector.


As far as we can see there is no change in monitor support from G35 to G45 as they both support dual independent displays with resolutions up to 2,048 x 1,536 so both GPUs can handle 1,920 x 1,080 HD. In the past we’ve only ever seen one digital port and one analogue connection on an Intel motherboard.

If you plug in a PCIe graphics card the DVI-I port is disabled so you shouldn’t get carried away with the idea of running four displays on one PC.


This is a thoroughly modern motherboard that doesn’t have any legacy support whatsoever. The graphics outputs are digital, there are no PS/2 ports for mouse or keyboard, the only storage connectors are SATA and eSATA so it’s finally time to ditch your IDE DVD writer with the horrid ribbon cable. The only problem we have with the layout is that Intel has used open chokes around the CPU socket and the capacitors appear to be the old style paper-and-electrolyte design. This suggests that the DG45ID might not have the same long life that you would expect from a recent model from the likes of Asus, Gigabyte or MSI.


Intel has managed to comfortably fit all of the features onto the motherboard despite the fact it is MicroATX. You’d only want the extra room that is provided by the ATX form factor if you intended to install a host of expansion cards and frankly we’re struggling to see how you’d even use the slots that Intel has provided.

nVidia GeForce 8600 GTS Graphic cards


Ever since the release of nVidia's 8800 series way back in November, the graphics industry has been in a kind of suspended animation. Those willing to spend more than £200 on a graphics card were well catered for but the rest of us were left to wonder whether it was worth buying a DX9 card now or whether it was best to wait until midrange DX10 hardware started to appear.


Well, at last we no longer have to make that decision as nVidia has stepped up to the plate and announced three new DirectX10 products, the GeForce 8600 GTS, GeForce 8600 GT and GeForce 8500 GT. Priced from $229 all the way down to $89, these cards now mean that everyone has at least one DX10 capable option, whatever their budget. All we need now is for ATI to provide us with some competition!


While all three cards were announced last week, nVidia has said that the 8500 GT and 8600 GT won't be available immediately and the 8600 GTS was the only card to truly 'hard launch' last week. So, today we're just taking a look at a pair of these cards. Rest assured though, the other two cards should be appearing before the end of the month and we will be reviewing them as soon as we get our hands on some.


For this launch we were very graciously provided with full retail cards from MSI and Leadtek so later on we shall compare them against each other and check them against other cards available to see how they stack up. First, though, I'll go into a little more detail about just what lies beneath those heatsinks.


Unlike the 8800 GTS, these new cards are not cut-down versions of the G80 (8800 GTX) core and are based on brand new designs named G84 and G86. The former powers the 8600 GTS and 8600 GT, while the latter lies under the hood of the 8500 GT. As well as the obvious reduction in sheer performance, these new cores feature a brand new video processing engine dubbed Video Processor 2 - I’ll talk more about this later.


There are 289 million transistors squeezed into the G84 core, which is about half that of the G80's 691 million. No great surprises there. However, it's interesting to note it actually has more than nVidia's previous-generation high-end offering - the G71 - which had a measly 278 million. Progress, hey!


Within that silicon you'll find 32 stream processors which, if you recall, are nVidia's answer to the unified shader architecture of DirectX 10. Rather than having separate hardware for vertex shading and pixel shading, these stream processors can perform whatever function is required of them. This results in a much more efficient use of the graphics hardware, reducing potential bottlenecks and theoretically boosting performance. We saw in our reviews of the 8800 GTX and 8800 GTS that this logic held up, as they were two very fast cards. However, they respectively featured 128 and 96 stream processors, which would account for a lot of their performance prowess. Whether a mere 32 stream processors will be able to keep up with the more traditional solutions used by the likes of the 7600 GT and X1650 XT remains to be seen.

NVIDIA GeForce 9600 GT Graphic cards

Part 1: Theory and architecture
Much time has passed since the launch of the mid-range NVIDIA GeForce 8600 GTS (G84). Unlike the top solution (G80), it had very few ALUs and TMUs, and this product failed to provide the expected performance level. The performance gap between the GeForce 8800 GTX and the GeForce 8600 GTS was too wide. Later on, AMD and NVIDIA launched graphics cards of a higher level: the GeForce 8800 GT and the RADEON HD 3870. But AMD also offered the lower-end HD 3850 to compete with the GeForce 8600 GTS. A better fabrication process and a much earlier launch gave AMD a performance advantage, and the HD 3850 was much faster than the GeForce 8600 GTS in many applications.

And now NVIDIA announces the G94 chip, based on the overhauled G9x unified architecture. The GeForce 9600 GT based on this GPU pushes the GeForce 8600 GTS down in the price line. The new solution ranks in between the 8800 GT and the 8600 GTS. The GeForce 9600 GT is based on the G94, which differs from the G92 only in fewer unified processors and texture units, bringing a 256-bit bus into the segment of cards below $200. Thus, the key features of the G94 are a 256-bit memory bus, and fewer ALUs and TMUs. Let's examine the new solution from NVIDIA...

Before you read this article, you may want to study the baseline theoretical articles: DX Current, DX Next, and Longhorn. They describe various aspects of modern graphics cards and architectural peculiarities of products from NVIDIA and AMD.


GeForce 9600 GT
Codename: G94
Fabrication process: 65 nm
505 million transistors
Unified architecture with an array of common processors for streaming processing of vertices and pixels, as well as other data
Hardware support for DirectX 10, including new Shader Model 4.0, geometry generation, and stream output
256-bit memory bus, four independent 64-bit controllers
Core clock: 650 MHz (GeForce 9600 GT)
ALUs operate at more than double the core frequency (1.625 GHz for the GeForce 9600 GT)
64 scalar floating-point ALUs (integer and floating-point formats, support for FP32 according to IEEE 754, MAD+MUL without penalties)
32 texture address units, support for FP16 and FP32 components in textures
32 bilinear filtering units (as in the G84 and the G92, this gives more bilinear samples, but no free trilinear filtering or more efficient anisotropic filtering)
Dynamic branching in pixel and vertex shaders
4 wide ROPs (16 pixels) supporting antialiasing with up to 16 samples per pixel, including FP16 or FP32 frame buffer. Each unit consists of an array of flexibly configurable ALUs and is responsible for Z generation and comparison, MSAA, blending. Peak performance of the entire subsystem is up to 64 MSAA samples (+ 64 Z) per cycle, in Z only mode — 128 samples per cycle
Multiple render targets (up to 8 buffers)
All interfaces (2xRAMDAC, 2xDual DVI, HDMI, DisplayPort) are integrated into the chip


Specifications of the reference GeForce 9600 GT
Core clock: 650 MHz
Frequency of unified processors: 1625 MHz
Unified processors: 64
32 texture units, 16 blending units
Effective memory clock: 1.8 GHz (2*900 MHz)
Memory type: GDDR3
Memory: 512 MB
Memory bandwidth: 57.6 GB/sec
Maximum theoretical fill rate: 10.4 gigapixels per second
Theoretical texture sampling rate: 20.8 gigatexels per second
2 x DVI-I Dual Link, 2560x1600 video output
SLI connector
PCI Express 2.0
TV-Out, HDTV-Out, support for HDMI and DisplayPort with HDCP
Power consumption: up to 95 W
Recommended price: $169-$189
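
For reference, the three derived figures in the list above (fill rate, texture sampling rate and memory bandwidth) follow directly from the unit counts and clocks. The short sketch below reproduces them; it is our own arithmetic rather than NVIDIA's.

# Reproduce the reference GeForce 9600 GT's theoretical figures from its specs.
core_clock_mhz = 650
rops = 16                   # blending/raster units
tmus = 32                   # texture units
bus_width_bits = 256
effective_mem_mhz = 1800    # 2 x 900 MHz GDDR3

fill_rate_gpix = rops * core_clock_mhz / 1000                    # 10.4 gigapixels/s
texture_rate_gtex = tmus * core_clock_mhz / 1000                 # 20.8 gigatexels/s
bandwidth_gbs = (bus_width_bits / 8) * effective_mem_mhz / 1000  # 57.6 GB/s

print(fill_rate_gpix, texture_rate_gtex, bandwidth_gbs)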


Dell Studio Hybrid Desktop



A desktop designed to fit in the office, the living room, or anywhere space is at a premium.

Specification:

Intel Core 2 Duo T9500 processor
up to 4GB DDR2 RAM
320GB HDD
Blu-ray Disc Combo
Intel Integrated Graphics Media Accelerator X3100
Optional TV Tuner
Optional wireless Keyboard and Mouse
5x USB 2.0 ports
Wireless N, HDMI
Windows Vista Home Premium / Ultimate

starting at $499

PNY XLR8 GeForce 9800 GTX




PNY XLR8 GeForce 9800 GTX Video Card
The NVIDIA® GeForce® 9800 GTX GPU offers a powerfully immersive entertainment experience designed for extreme high-definition gaming and video playback. Play the hottest DirectX 10 games with awesome speed and watch the latest HD DVD and Blu-ray movies with brilliant clarity, powered by the revolutionary PureVideo® HD engine. Couple a GeForce 9800 GTX-based graphics card with an NVIDIA nForce® motherboard for an optimal graphics platform, allowing you to ramp up your gaming horsepower with NVIDIA SLI® technology support for the most demanding games. With the GeForce 9800 GTX GPU, amazing graphics performance is now within your reach.

NVIDIA GeForce 9800 GX2 Dual-GPU Graphic card




NVIDIA's GeForce 9800 GX2 is a dual-PCB adapter that is directly derived from the new GeForce 8800 GT and GTS series based upon the G92 chipset, which was itself derived from the initial GeForce 8800 series graphics cards. The GeForce 9800 GX2 takes two GeForce 8800 boards and joins them via internal PCI Express lanes. The GeForce 9800 GX2 is designed to go up against the recently introduced Radeon HD 3870 X2 graphics card from AMD. Since this dual-GPU monster from ATI has become the card to beat, it wouldn't be long before NVIDIA would try to regain their long-held throne of having the fastest graphics card on the market. Yes, their GeForce 8800 Ultra took some beating from the AMD dual-GPU edition, which can also run in pairs to form quad CrossFire.

Just like the former GeForce 7950 GX2, the GeForce 9800 GX2 card is built up out of two PCBs that are paired together. So in essence you have a dual-GPU graphics card, based upon GeForce 8800 power coming from two G92 chips. However, only one of these two PCBs has the PCI Express interface connection which plugs into the mainboard. This is the design that NVIDIA has used in the past on its GX2 solutions, and it remains the same with this GeForce 9800 GX2 version. ATI, on the other hand, has taken a different approach and put both RV670 cores of their Radeon HD 3870 X2 on the same PCB, which of course results in a longer card in the end. As NVIDIA could build on their previous experience, the cooling solution of the dual-PCB design has been improved substantially.

This new GeForce 9800 GX2 looks more finished and better designed than their previous GX2 versions. The card has a decent casing that holds the two PCBs firmly together, and it feels very solid as a whole. From this angle you wouldn't even be able to deduce that this is a dual-GPU edition; it actually looks very familiar. Just looking at the casing covering the full graphics card, you could mistake it for a more refined GeForce 8800 Ultra, which came with a similar-looking cooling solution. From the top and sides you can see the venting holes, which are required for air intake but also to get the hot air out again. We don't know how effective this design will be just from the looks, although the looks of this card are certainly quite attractive and very solid.

motherboard VIA EPIA PX-Series Pico-ITX




The VIA EPIA PX embedded board is the first commercial embedded board based on VIA's Pico-ITX form factor measuring just 10cm x 7.2cm. Designed to enable x86 to be built into embedded systems where it was previously impractical for space reasons, the VIA EPIA PX provides a full complement of multimedia and connectivity options on a platform smaller than any standard mainboard or x86 system on module.

Powered by the 1GHz VIA C7® or a fanless 500MHz VIA Eden™ ULV processor and supporting up to 1GB of DDR2 533 SO-DIMM system memory, the 10-layer VIA EPIA PX mainboard is based on the single-chip VIA VX700 system media processor, which boasts the VIA UniChrome™ Pro II IGP 3D/2D graphics core, MPEG-2/-4 and WMV9 hardware decoding acceleration and display flexibility. The onboard VIA VT1708A HD audio codec also contributes a rich entertainment experience.

This highly power efficient board runs standard productivity and multimedia applications at under 13 watts, thanks to the combination of VIA's energy efficient processor and core logic platform and the significantly lower power DDR2 system memory.

The VIA EPIA PX Pico-ITX embedded board also supports flexible hard drive storage options, with one SATA and one UltraDMA 133 connector, as well as 10/100Mbps Fast Ethernet through the RJ-45 LAN port, an LVDS/DVI connector and extensive connectivity options including USB2.0, COM and PS2. A multimedia connector supports external TV-out, video capture port interface & LPC interface (an add-on card is required), while an audio connector supports line-out, line-in, mic-in, S/PDIF in & 7.1 channel audio output.


Model Name :: VIA EPIA PX10000G
VIA EPIA PX5000EG

Operating System :: Windows XP, Linux, Win CE, XPe

NVIDIA GeForce GTX 200




The graphics leader today introduced a new graphics processor family, the GeForce GTX 200, that can effectively reduce the amount of time it takes to perform day-to-day PC tasks. NVIDIA today released two GPUs under the new series: the GeForce GTX 280 and GeForce GTX 260. According to the firm, the latest graphics processors take "graphics beyond gaming and gaming beyond anything that's ever been possible before on a consumer computing platform." As per NVIDIA's estimate, the GPUs deliver 50% more gaming performance over the GeForce 8800 Ultra through 240 enhanced processor cores that provide incredible shading horsepower at resolutions as high as 2560 x 1600.


240 processor cores
602MHz graphics clock speed
1296MHz processor clock speed
1GB GDDR3 memory
512-bit memory interface
141.7GB/sec memory bandwidth
support for 3-way NVIDIA SLI
2560 X 1600 max resolution
HDMI, HDCP

Graphic cards AMD Radeon HD 4870 X2



Hello everybody, and welcome to our look at the AMD Radeon HD 4870 X2.

The Radeon HD 4870 X2 card is AMD's answer to NVIDIA's flagship GeForce GTX 280 card in terms of pricing and placement. Even though NVIDIA recently dropped a bomb and lowered the price of the GeForce GTX 280 from $649 to $499, the estimated pricing we have seen for AMD's card is in the above-$500 range, and it will likely fall on the lower end of that open-ended range now. Isn't competition a wonderful thing? The card continues in the tradition of long PCBs, matching the size of the Radeon HD 3870 X2 as well as the GeForce 9800 GX2 design. The cooler for the Radeon HD 4870 X2 is of course a two-slot design.

While not completely enclosed like NVIDIA's dual-GPU cards, the back of the card has a thin metal heatsink plate that, coupled with the all-black PCB, is actually pretty cool looking. Though the Radeon HD 4870 X2 is much heavier, it is about the same size as the GeForce 9800 GX2. There is just a single CrossFire connector on the Radeon HD 4870 X2, which allows you to connect either one or two single-GPU Radeon HD 4870 cards, or one more Radeon HD 4870 X2 for quad CrossFireX support. Unfortunately we only received a single card for this preview, so testing quad CrossFireX was out of the question for now.

MetaRAM quadruples DDR2 DIMM capacities


A MetaRAM DIMM

Since its launch in January 2006, the only thing that has been publicly known about former AMD CTO Fred Weber's new venture is its name: MetaRAM. Clearly, the stealth-mode company was working on something to do with RAM, but what? As of today, MetaRAM is finally ready to talk about its technology, and it appears to be a pretty solid evolutionary step for the tried-and-true SDRAM DIMM module. In short, MetaRAM's technology enables DIMM capacity increases of two or four times, so that a single DDR2 MetaSDRAM DIMM can hold 4GB or 8GB of memory while still being a drop-in replacement for a normal DIMM.

Because MetaRAM's high-capacity DIMMs look to an Intel or AMD system like normal DDR2 DIMMs, the company expects servers with memory configurations that would normally require expensive custom hardware to become significantly cheaper. One of MetaRAM's channel partners will soon announce a server with 256GB of main memory for under $50,000, with 500GB boxes on tap at higher price points.
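To put that 256GB figure in perspective, here is a rough, hypothetical breakdown; the article does not state the server's slot count, so the 32-DIMM configuration below is purely an assumption for illustration.

# Hypothetical illustration only: the announced server's slot count is not given,
# so 32 slots (e.g. a 4-socket box with 8 slots per CPU) is an assumed configuration.
dimm_slots = 32
metaram_dimm_gb = 8                   # quadrupled-capacity DDR2 MetaSDRAM DIMM
plain_dimm_gb = 2                     # a typical standard DDR2 DIMM of the era
print(dimm_slots * metaram_dimm_gb)   # 256 GB with MetaSDRAM DIMMs
print(dimm_slots * plain_dimm_gb)     # 64 GB with ordinary DIMMs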

I'm tempted to suggest that "500GB of memory oughta be enough for anybody," but MetaRAM is looking to virtualization and enterprise databases as application domains that provide a rationale for putting that much memory in a single server. MetaRAM claims that its own research indicates that 80 percent of enterprise server databases are under 500GB in size, and if this is true, then hosting those databases entirely in main memory could get a lot cheaper after today.

MetaRAM is a fabless semiconductor company, and its manufacturing partners are Hynix and SMART Modular. Both partners are currently sampling 8GB DDR2 DIMMs, and MetaRAM expects servers and workstations that include the technology to be available from Rackable and other launch partners later this quarter.


Seagate's Latest Desktop HDD Has 1.5TB Capacity



WOW

Seagate announced three new consumer-level hard drives today, which it claims are the "industry's first 1.5-terabyte desktop and half-terabyte notebook hard drives." The company claims that it is able to greatly increase the areal density of its drive substrates by utilizing perpendicular magnetic recording (PMR) technology. Wikipedia states that PMR is "capable of delivering more than triple the storage density of traditional longitudinal recording."

Seagate's latest desktop-class hard drive, the Barracuda 7200.11, will be available in a 1.5TB capacity starting in August. The 3.5-inch drive is made up of four 375GB platters and has a 7,200-rpm rotational speed. It uses a 3Gb/second SATA interface with Native Command Queuing (NCQ) support. Seagate also claims that the new 1.5TB drive supports a sustained data rate of up to 120MB/second. This represents a slight improvement over the existing drives in Seagate's Barracuda 7200.11 series, which have stated sustained data rates between 105 and 115MB/second, with the 1TB Barracuda 7200.11 on the slow end of that scale at 105MB/second. While many of the existing drives in the 7200.11 series come in both 16MB and 32MB cache versions, the 1.5TB drive will likely only be available with a 32MB cache, like its 1TB sibling. Pricing has yet to be announced.
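Two quick figures fall out of those specs: the four 375GB platters account for the 1.5TB total, and at the claimed 120MB/second sustained rate, filling the whole drive end to end would take roughly three and a half hours. A minimal sketch of that arithmetic, using decimal units as drive makers count them:

# Rough arithmetic from the quoted specs (decimal units, as drive vendors count them).
platters = 4
platter_gb = 375
capacity_gb = platters * platter_gb                 # 1500 GB = 1.5 TB
sustained_mb_s = 120
fill_seconds = capacity_gb * 1000 / sustained_mb_s
print(capacity_gb, round(fill_seconds / 3600, 1))   # 1500 GB, ~3.5 hours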


credit :: Wikipedia

XFX GeForce GTX 260 640M XXX !!



The GeForce GTX 260 is the most affordable solution in nVidia's highest-end GPU family, the GeForce GTX 200, especially now that nVidia is pushing a massive price cut. The XFX GeForce GTX 260 640M XXX is an overclocked version of the GeForce GTX 260. How does it compare to the standard GTX 260 and to its main competitors? Is it worth paying a little more for this version instead of the standard GTX 260? Read on.

GeForce GTX 260 standard clocks are 576 MHz for the GPU, 1,242 MHz for the shader processors and 1 GHz (2 GHz DDR) for the memories. The main difference between GTX 260 and the top-of-line GTX 280 is the number of shader processors (192 vs. 240), memory interface (448-bit vs. 512-bit) and memory size (896 MB vs. 1 GB), besides the clock rates, of course.

The XFX GeForce GTX 260 640M XXX runs at 640 MHz (hence the “640M” in the model name), with its shader processors running at 1,363 MHz and its memory at 1,150 MHz (2,300 MHz DDR), clock rates that are higher than the GTX 280's (that GPU runs at 602 MHz, with shaders at 1,296 MHz and memory at 1,107 MHz, or 2,214 MHz DDR). But, as mentioned, this GPU has fewer shader processors and a narrower memory interface. During our review we will compare this overclocked card from XFX with a GeForce GTX 280.
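Those memory numbers illustrate why the higher clocks don't automatically close the gap: on a 448-bit bus, 2,300 MHz DDR still delivers less raw bandwidth than the GTX 280's 2,214 MHz DDR on a 512-bit bus. A minimal sketch of that comparison, using only the clock rates quoted above:

# Peak memory bandwidth from the clock rates quoted above (GB/s, decimal).
def bandwidth_gbs(bus_bits, effective_mtps):
    return bus_bits / 8 * effective_mtps / 1000

print(bandwidth_gbs(448, 2300))   # XFX GTX 260 640M XXX: ~128.8 GB/s
print(bandwidth_gbs(512, 2214))   # GeForce GTX 280:      ~141.7 GB/s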

AMD Socket G34





Hello and good morning.

AMD's 12-core and 8-core processors get a new home in 2010

AMD's newest roadmap reveals a major shift in early 2010: the company will once again overhaul its socket architecture to make way for DDR3 support.

The new socket, dubbed G34, will also ship with two new second-generation 45nm processors. The first of these processors, the 8-core Sao Paulo, is described as a "twin native quad-core Shanghai processor" by one AMD engineer. Shanghai, expected to ship late this year, is AMD's first 45nm shrink of the ill-fated Barcelona processor.

This past April, AMD guidance hinted at a 12-core behemoth of a processor. This CPU is now named Magny-Cours after the French town made famous by its Formula One French Grand Prix circuit.

Both of these new processors will feature four HyperTransport 3 interconnects, 12MB of L3 cache and 512KB L2 cache per core.

Intel's next-generation Nehalem chip, scheduled for launch late this year but already well leaked, is the first to feature tri-channel DDR3 memory support. AMD will up the ante in 2010, with registered and unregistered quad-channel DDR3 support. Current roadmaps claim standard support will include speeds from 800 to 1600 MHz.

AMD insiders would reveal very little about the G34 socket, other than that it is a derivative of the highly secretive G3 socket that was to replace Socket F (1207). As far as company documentation goes, G3 ceased to exist in March 2008 and has been replaced with the G34 program instead. The first of these sockets will be available to developers in early 2009.

We counted 1,974 pin contacts on the leaked G34 diagram -- 767 more than AMD's current LGA1207 socket. Given the additional interconnect pathways for DDR3 and the HyperTransport buses, a significant increase in the number of pins was to be expected.

The addition of a fourth HyperTransport link may prove to be one of the most interesting features of the Sao Paulo and Magny-Cours processors. In a full four-socket configuration, each physical processor will dedicate a HyperTransport link to each of the other sockets. This leaves one additional HyperTransport lane per processor, which AMD documentation claims will finally be used for its long-discussed Torrenza program.
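A rough way to picture that link budget (assuming a fully connected four-socket layout, which the roadmap details don't spell out explicitly): with four HyperTransport links per processor and three peer sockets to reach, exactly one link per socket is left free for a Torrenza-style accelerator.

# Sketch of the assumed fully connected 4-socket topology: each socket spends one
# HyperTransport 3 link per peer socket, leaving one spare link for an accelerator.
from itertools import combinations

sockets = ["P0", "P1", "P2", "P3"]
links_per_socket = 4
peer_links = list(combinations(sockets, 2))       # 6 coherent point-to-point links
spare_per_socket = links_per_socket - (len(sockets) - 1)
print(len(peer_links), spare_per_socket)          # 6 links total, 1 spare per socket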

The hype behind Torrenza largely disappeared after AMD's Barcelona launch went sour, though the company has hinted before that Torrenza would make a perfect interconnect for GPUs or IBM Cell processors. This is exactly the type of setup roadmapped for the fastest public supercomputer in the world.

Thanks.

AMD Athlon XP 2800+ 2.08GHz 333FSB 512KB Processor Retail


Hello

The AMD Athlon XP processor with QuantiSpeed architecture powers the next generation in computing platforms, delivering extra performance for cutting-edge applications and an extraordinary computing experience. The AMD Athlon XP processor is the latest member of the AMD Athlon family of processors designed to meet the computation-intensive requirements of cutting-edge software applications running on high-performance desktop systems. AMD delivers tremendous performance by increasing the amount of work done per clock cycle and improving the operating frequency at the same time. The end result is a processor design that produces a high volume of work done per cycle and high operating frequencies - an optimum combination for compelling application performance.

The AMD Athlon XP processor offers fast results when working with digital media like audio, video, and image files. It provides for outstanding near real-time voice, video, and CAD/CAM as a result of features like larger cache ...

Thanks.

iPhone 3G




It's not the groundbreaking, industry-changing event that the original iPhone was. But the iPhone 3G is a worthy upgrade to Apple's smartphone, and fixes a few flaws that kept many people from buying the first version.
The addition of fast 3G wireless data, GPS and a more flexible, extensible operating system mean the iPhone is now entirely competitive with almost every other smartphone on the market. And its new, lower purchase price will remove the final obstacle to purchase for many people. In short, this phone is about to become very, very popular, as it deserves to be.
Physically, the phone is nearly identical to the previous generation. It measures 2.4 x 4.5 x 0.4 inches, and weighs 4.8 ounces, making it just 0.1 inch wider and 0.3 ounces lighter than the iPhone 1.0. It feels substantially thinner, thanks to tapered edges, which make it sit more comfortably in the hand. Instead of a silver aluminum back, the new iPhone has a plastic backing, available in either glossy black or shiny, iMac white. The screen and 2-megapixel camera are identical to those in the older iPhone, but the external speakers are much improved. Call quality was noticeably clearer in our initial tests.

Fast Data and GPS
One of the biggest shortcomings of the first-generation iPhone was its reliance on AT&T's slow EDGE data network. The new 3G support means that the iPhone can download data 2-3 times faster than the old model. Of course, it still has WiFi support, and based on early reviews, you may want to use WiFi whenever possible, because 3G usage will drain the iPhone's battery quickly.
It's probably not a must-have feature for most users, but the addition of a GPS receiver is a welcome enhancement. The iPhone OS can now use a combination of GPS data and triangulation from WiFi hotspots and cell-tower locations to establish its location. This feature has worked well in our testing so far, and we expect its usefulness to expand as an increasing number of applications start to take advantage of it.


Specs: Apple iPhone 3G
3G wireless data
WiFi
Bluetooth
GPS
2-megapixel camera
iPhone 2.0 OS
3.5-inch multi-touch LCD
320 x 480 pixel display