AMD's long-awaited Bulldozer processor finally hit the market this week, and the Web has been flooded with benchmark results. One thing is clear: this won't kill Intel's Sandy Bridge, as some were hoping. Indeed, in some tests, Bulldozer can't even keep up with its predecessor. The launch of the Phenom in 2007 was similarly underwhelming—it arrived late, broken, and slow—but AMD managed to turn things around with Phenom II to produce a viable competitor to many of Intel's processors.
AMD's future success will depend on the company's ability to make lemonade from the Bulldozer lemons. And its ability to do that will be governed by the Bulldozer architecture: is it fundamentally flawed, or are the performance issues merely teething trouble?
Oracle said this week that it's building a cloud service to host many of its key software products, including Java, database, middleware, and CRM. As if anticipating concerns that the aptly named Oracle Public Cloud might be another vehicle for locking customers into Oracle software, though, CEO Larry Ellison tore into rival Salesforce.com, claiming Oracle will differentiate itself with industry standards and support for “full interoperability with other clouds and your data center on premise.”
The Oracle Public Cloud is a broad mix of platform-as-a-service and software-as-a-service, and a potential competitor to Salesforce, Microsoft, and others. The Oracle Fusion CRM Cloud Service and Oracle’s workforce management tools are already available, while the database and Java services, as well as a new business-focused social network, will be released “under controlled availability in the near future,” Oracle says. Oracle boasts the Public Cloud will provide “all the productivity of Java, without the IT,” and “the Oracle database you love, now in the cloud.”
My desktop isn't the only computer I plan to replace in the next few months. I need a new laptop too, and my goal is simple: to find a 13" MacBook Air that isn't made by Apple.
It turns out that I'm not the only one wanting this mythical non-Apple MacBook Air. Intel wants them too—it calls them Ultrabooks. The chip company has been kicking the Ultrabook idea around for a few months now, and it has grand ambitions: by the end of next year, it wants 40 percent of PC laptops to be Ultrabooks.
The BlueGene/Q processors that will power the 20-petaflops Sequoia supercomputer being built by IBM for Lawrence Livermore National Laboratory will be the first commercial processors to include hardware support for transactional memory. Transactional memory could prove to be a versatile solution to many of the issues that currently make highly scalable parallel programming such a difficult task. Most research so far has focused on software-based transactional memory implementations; the BlueGene/Q-powered supercomputer will allow much more extensive real-world testing of the technology and its underlying concepts. The feature was revealed at Hot Chips last week.
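To make the idea concrete, here is a minimal sketch of how a software transactional memory might work, in the optimistic style most research systems use: reads record a version number, writes are buffered, and the commit either applies everything atomically or retries the whole transaction. All names (`TVar`, `atomically`) are hypothetical; this illustrates the concept, not IBM's hardware implementation.

```python
import threading

class TVar:
    """A shared variable managed by the transaction system."""
    def __init__(self, value):
        self.value = value
        self.version = 0          # bumped on every committed write
        self.lock = threading.Lock()

def atomically(transaction):
    """Run transaction(read, write) with optimistic concurrency.

    Reads record the version they saw; writes are buffered. At commit
    time we lock the involved variables, check that nothing we read
    has changed underneath us, and either apply the writes or retry.
    """
    while True:
        read_versions, write_buffer = {}, {}

        def read(tv):
            if tv in write_buffer:          # read-your-own-writes
                return write_buffer[tv]
            read_versions[tv] = tv.version
            return tv.value

        def write(tv, new_value):
            write_buffer[tv] = new_value

        result = transaction(read, write)

        # Lock in a stable global order to avoid deadlock.
        involved = sorted(set(read_versions) | set(write_buffer), key=id)
        for tv in involved:
            tv.lock.acquire()
        try:
            if all(tv.version == v for tv, v in read_versions.items()):
                for tv, new_value in write_buffer.items():
                    tv.value = new_value
                    tv.version += 1
                return result               # commit succeeded
        finally:
            for tv in involved:
                tv.lock.release()
        # Conflict detected: loop and re-run the transaction.

# Example: an atomic transfer between two "accounts."
a, b = TVar(100), TVar(0)
atomically(lambda read, write: (write(a, read(a) - 30),
                                write(b, read(b) + 30)))
print(a.value, b.value)   # 70 30
```

The appeal over conventional locking is that the programmer just describes what must happen atomically; the system handles conflict detection, which is exactly the part BlueGene/Q moves into hardware.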
BlueGene/Q itself is a multicore 64-bit PowerPC-based system-on-chip built on IBM's multicore-oriented, 4-way multithreaded PowerPC A2 design. Each 1.47-billion-transistor chip includes 18 cores: sixteen for running actual computations, one for running the operating system, and a final spare core to improve chip reliability. For BlueGene/Q, a quad floating point unit, capable of four double-precision fused multiply-add operations per cycle, has been added to every A2 core. At the intended 1.6GHz clock speed, each chip will be capable of a total of 204.8 GFLOPS within a 55 W power envelope. The chips also include memory controllers and I/O connectivity.
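The 204.8 GFLOPS figure follows directly from the clock speed and core count, assuming the quad FPU's four fused multiply-adds count as eight floating point operations per cycle (the usual convention for peak-FLOPS marketing):

```python
# Peak throughput per BlueGene/Q chip. The quad FPU is assumed to
# issue 4-wide fused multiply-adds, i.e. 8 flops per core per cycle.
clock_hz = 1.6e9       # 1.6 GHz
compute_cores = 16     # the OS and reliability cores don't count
flops_per_cycle = 8    # 4 multiplies + 4 adds per cycle

peak_gflops = clock_hz * compute_cores * flops_per_cycle / 1e9
print(peak_gflops)   # 204.8
```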
Although watching TV shows from the 1970s suggests otherwise, the era wasn't completely devoid of all things resembling modern communication systems. Sure, the 50Kbps modems that the ARPANET ran on were the size of refrigerators, and the widely used Bell 103 modems only transferred 300 bits per second. But long-distance digital communication was common enough, relative to the number of computers deployed. Terminals could also be hooked up to mainframes and minicomputers over relatively short distances with simple serial lines or with more complex multidrop systems. This was all well known; what was new in the '70s was the local area network (LAN). But how to connect all these machines?
The point of a LAN is to connect many more than just two systems, so a simple cable back and forth doesn't get the job done. Connecting several thousand computers to a LAN can in theory be done using a star, a ring, or a bus topology. A star is obvious enough: every computer is connected to some central point. A bus consists of a single, long cable that computers connect to along its run. With a ring, a cable runs from the first computer to the second, from there to the third and so on until all participating systems are connected, and then the last is connected to the first, completing the ring.
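The three topologies can be sketched as link lists; this toy code (all function names are invented for illustration) shows how the cabling requirements differ as machines are added:

```python
def star_links(n):
    """Star: each of the other n-1 computers gets its own cable
    running to a central hub (node 0)."""
    return [(0, i) for i in range(1, n)]

def ring_links(n):
    """Ring: computer i connects to i+1, and the last loops back
    to the first, so n computers need exactly n links."""
    return [(i, (i + 1) % n) for i in range(n)]

# A bus needs no per-pair links at all: every computer taps the
# same shared cable, so adding a machine adds a tap, not a cable run.

print(star_links(4))  # [(0, 1), (0, 2), (0, 3)]
print(ring_links(4))  # [(0, 1), (1, 2), (2, 3), (3, 0)]
```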
Not content with making bold claims about the performance and efficiency of future iterations of its Atom processor line, Intel used its investor relations day to point out just how much better Windows would be on Intel than on ARM.
Intel Senior Vice President Renée James said that Windows on ARM would offer no backwards compatibility at all with existing x86 software. Instead, James said that Windows on ARM processors would exclusively offer a new, mobile-oriented, touch-friendly interface. In contrast, x86 versions would include both the new interface and a "legacy" interface suitable for conventional laptops and desktops. x86 systems would, therefore, offer the best of both worlds: a new interface for new tablet form factors, and a conventional interface for the enormous body of existing x86 Windows software. The chance of ARM ever running such software? In James' words, "Not now. Not ever."
Mohamed Hassan had just installed anti-malware software on his new Samsung laptop when, much to his surprise, the software alerted him to the presence of a keystroke logger. A brand-new machine, and it was apparently already recording every password and username he typed. He returned the computer for an unrelated reason, and bought a second Samsung laptop to replace it. Lo and behold, the same keylogger was apparently found on this new machine.
Naturally, he asked Samsung about this, only to receive a range of confused answers. Initially, the support person he talked to denied any Samsung involvement, claiming "all Samsung did was to manufacture the hardware." On escalating the issue, a supervisor claimed to have no idea how the software might have got onto his PC; Hassan was then told that Samsung installed the software so that it could "monitor the performance of the machine and to find out how it is being used."
Ask Ars was one of the first features of the newly born Ars Technica back in 1998. It's all about your questions and our community's answers. Each week, we'll dig into our bag of questions, answer a few based on our own know-how, and then we'll turn to the community for your take. To submit your own question, see our helpful tips page.
Question: How much of a difference do "green" drives actually make in a system build? Do you save enough power for it to be worthwhile, or is it just a marketing gimmick?
When a drive is "green," the designation usually just means that it runs on the slower side—5400 rotations per minute, as opposed to the more ubiquitous 7200 RPM. But in some cases, this slowdown can translate to drives that are quieter, cooler, and less power-hungry. We're not talking the same power savings as, say, switching from incandescent lightbulbs to fluorescent ones. But there are a few watts to be saved here, which makes green drives a decent choice for a platform that will see a lot of use, but doesn't necessarily need to be high-performance. (If you're really looking for power savings above all else, though, the absolute best option is a solid-state drive.)
The three features most often touted by manufacturers of green drives, as we said, are their relatively quiet and cool operation and their lower power consumption. These specs are measured in decibels, degrees Celsius, and watts, respectively, and can usually be found on the fact sheets for various drive models on the manufacturer's website (here's a Western Digital sampling) or in third-party benchmarks, if you don't trust Big Data Storage.
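Those "few watts" are easy to put a dollar figure on. This back-of-the-envelope calculation uses hypothetical but plausible numbers: assume a green drive draws about 3 W less than its 7200 RPM counterpart, runs around the clock, and electricity costs $0.12 per kilowatt-hour:

```python
# Back-of-the-envelope annual savings for a "green" drive.
# All figures below are illustrative assumptions, not measurements.
watts_saved = 3.0                                # assumed power difference
hours_per_year = 24 * 365                        # drive spinning 24/7
kwh_saved = watts_saved * hours_per_year / 1000  # 26.28 kWh per year

price_per_kwh = 0.12                             # assumed rate, USD
dollars_saved = kwh_saved * price_per_kwh
print(round(dollars_saved, 2))                   # 3.15 per year
```

A few dollars a year per drive, in other words: negligible for one desktop, meaningful across a rack of always-on storage.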
Welcome to the re-launch of Ask Ars, brought to you by CDW!
Re-launch, you ask? Why, yes! Ask Ars was one of the first features of the newly born Ars Technica back in 1998. Ask Ars is all about your questions and our community's answers. Each week, we'll dig into our bag of questions, answer a few based on our own know-how, and then comes the best part: we turn to the community for your take.
To launch, we reached out to some of our geekiest friends to solicit their burning questions. Without further ado, let's dive into our first question. Don't forget to send us your questions, too! To submit your question, see our helpful tips page.
Let's get started with a question that was unthinkable in 1998!
Q: I've heard that some SSD controllers do "garbage collection" while others don't. Is this really that big of a deal, and if so, which controllers should I be on the lookout for?
To begin with, an SSD that doesn't do garbage collection would be like an elevator that only goes up—that is, it would never delete anything. However, some drives are able to do it more quickly than others, and some engage in a process called "idle garbage collection" that distributes the workload across periods of inactivity. But before we get into that, we'll take a minute to describe how and why an SSD does garbage collection, and why a drive that does only that would be a weak one indeed.
Solid state drives have two hangups that force them to deal with data differently than hard disk drives do: they can only erase data in larger chunks than they can write it, and their storage cells can only be written a certain number of times (10,000 is standard) before they start to fail. This makes tasks like modifying files much harder for SSDs than HDDs.
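The asymmetry described above—pages can be written individually, but only whole blocks can be erased—is exactly what garbage collection works around. Here's a minimal sketch (the names and four-page block size are invented for illustration; real drives use blocks of hundreds of pages): instead of overwriting a page in place, the drive writes the new version elsewhere and marks the old copy stale, and GC later relocates the surviving live pages so the block can be erased in one shot.

```python
PAGES_PER_BLOCK = 4   # flash writes pages, but erases whole blocks

def garbage_collect(block, spare):
    """Copy the live pages out of `block` into the erased `spare`
    block, then erase `block`. Pages are represented as
    ("live", data) or ("stale", data); None means erased."""
    cursor = 0
    for page in block:
        if page is not None and page[0] == "live":
            spare[cursor] = page          # relocate still-valid data
            cursor += 1
    block[:] = [None] * PAGES_PER_BLOCK   # whole-block erase
    return spare, block                   # spare is now the data block

# Two pages were "overwritten," leaving stale copies behind.
block = [("live", "a"), ("stale", "old-b"),
         ("live", "c"), ("stale", "old-d")]
spare = [None] * PAGES_PER_BLOCK
data, free = garbage_collect(block, spare)
print(data)  # [('live', 'a'), ('live', 'c'), None, None]
print(free)  # [None, None, None, None]
```

Note that every collection rewrites live data, which eats into the limited write budget of the cells—one reason drives that defer this work to idle periods feel faster and wear more evenly.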
Yesterday, Steve Ballmer took the stage to orchestrate the introduction of his company's long-awaited revamp of its phone operating system. We've picked up a few review units, but aren't able to talk about them at the moment, so we thought we'd share some of our photos of the event to hold you over until the reviews are ready. Some of them reveal a bit about the Windows Phone 7 interface.
Intel is about to experiment with a new concept in mass-market processors with its forthcoming Pentium G6951 CPU: upgradability. The chips will be upgradable by end users via a purchased code that is punched into a special program. Previews of the processor quietly hit the Web last month, and with Engadget's post of the retail packaging, people took notice with reactions ranging from surprise to outright disgust.
The Pentium G6951 is a low-end processor. Dual core, 2.8GHz, 3 MB cache, and expected to be around $90 each when bought in bulk—identical to the already-shipping Pentium G6950. The special part is the software unlock. Buy an unlock code for around $50, run the software downloaded from Intel's site, and your processor will get two new features: hyperthreading will be enabled, and another 1 MB of cache will be unlocked, giving the chip a specification just short of Intel's lowest Core i3-branded processor, the 2.93 GHz Core i3-530. Once unlocked, the G6951 becomes a G6952.
A DARPA-funded processor start-up has made bold claims about a new kind of processor that computes using probabilities, rather than the traditional ones and zeroes of conventional processors. Lyric Semiconductor, an MIT spin-off, claims that its probabilistic processors could speed up some kinds of computation by a factor of a thousand, allowing racks of servers to be replaced with small processing appliances.
Calculations involving probabilities have a wide range of applications. Many spam filters, for example, work on the basis of probability; if an e-mail contains the word "Viagra" it's more likely to be spam than one which doesn't, and with enough of these likely-to-be-spam words, the filter can flag the mail as being spam with a high degree of confidence. Probabilities are represented as numbers between 0, impossible, and 1, certain. A fair coin toss has a probability of 0.5 of coming up heads.
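The spam-filter example can be made concrete. A common way such filters combine the per-word evidence is the naive-Bayes formula popularized by early Bayesian spam filters; the per-word probabilities below are hypothetical:

```python
from math import prod

def spam_probability(word_probs):
    """Combine per-word spam probabilities the way classic
    Bayesian spam filters do:
        P = p1*...*pn / (p1*...*pn + (1-p1)*...*(1-pn))
    """
    p = prod(word_probs)
    q = prod(1 - x for x in word_probs)
    return p / (p + q)

# Hypothetical per-word scores: "viagra" is a strong spam signal
# (0.99), while "meeting" leans legitimate (0.2).
print(round(spam_probability([0.99, 0.2]), 3))  # 0.961
```

Every step here is multiply-and-normalize on values between 0 and 1—precisely the arithmetic a probabilistic processor like Lyric's aims to do natively rather than by simulating it with ordinary binary logic.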
The tire pressure monitors built into modern cars have been shown to be insecure by researchers from Rutgers University and the University of South Carolina. The wireless sensors, compulsory in new automobiles in the US since 2008, can be used to track vehicles or feed bad data to the electronic control units (ECU), causing them to malfunction.
Earlier in the year, researchers from the University of Washington and University of California San Diego showed that the ECUs could be hacked, giving attackers the ability to be both annoying, by enabling wipers or honking the horn, and dangerous, by disabling the brakes or jamming the accelerator.
The new research shows that other systems in the vehicle are similarly insecure. The tire pressure monitors are notable because they're wireless, allowing attacks to be made from adjacent vehicles. The researchers used equipment costing $1,500, including radio sensors and special software, to eavesdrop on, and interfere with, two different tire pressure monitoring systems.
The pressure sensors contain unique IDs, so merely eavesdropping enabled the researchers to identify and track vehicles remotely. Beyond this, they could alter and forge the readings to cause warning lights on the dashboard to turn on, or even crash the ECU completely.
Unlike the work earlier this year, these attacks are more of a nuisance than any real danger; the tire sensors only send a message every 60-90 seconds, giving attackers little opportunity to compromise systems or cause any real damage. Nonetheless, both pieces of research demonstrate that these in-car computers have been designed with ineffective security measures.
The Rutgers and South Carolina research will be presented at the USENIX Security conference later this week.
A newly developed driving interface may let blind people independently drive cars on the open road. Designed by engineers at Virginia Tech, the system incorporates various nonvisual cues into the driver's seat of a dune buggy that could help the blind navigate roads without assistance.
The project began in 2007, when a group of Virginia Tech researchers placed in a DARPA competition to develop a vehicle that could drive itself. Later, the same group received a grant from the National Federation of the Blind to incorporate the laser detection system that allowed the car to navigate and detect obstacles into an interface that could be understood through senses other than sight.
The new, nonvisual interfaces use a combination of tactile cues to inform the driver. One is called Drive Grip; it's a set of gloves that vibrate on various portions of the knuckles to signal the driver when it's time to turn.
The interfaces are set to be incorporated into a Ford Escape and demonstrated at the Rolex 24 At Daytona on January 29, 2011. The ultimate goal is an ambitious one: the car could lead to a change in longstanding legislation that prohibits driving while blind, so long as the vehicle is equipped with the appropriate technology.
Today Toshiba announced the Libretto W100, an ultra-mobile PC sporting a pair of 7" 1024 × 600 multitouch screens, a 1.2GHz Pentium U5400 processor, 2GB RAM, and a 62GB solid state disk. The all-touch device is designed to be used as a conventional laptop, and vertically, like a book.
The W100 includes haptic technology, giving the touchscreens tactile feedback; there's also 802.11b/g/n support, Bluetooth, and a built-in camera. This is all in a slightly bulky—7.95" × 4.84" × 1.2"—but lightweight—1.8 lbs (just a hair more than the iPad)—package. In spite of the size, it is certainly a fully-featured machine.
Toshiba is describing the W100 as a "concept PC," an acknowledgement that it won't be a machine suitable for everyone. It will hit the market in August, with prices starting at $1099, albeit with limited availability. The device was shown as part of Toshiba's celebration of 25 years of laptops; the first clamshell laptop was released by Toshiba some 25 years ago.
The company is positioning the W100 as an Ultra Mobile PC—something highly portable, but still in every sense a PC, with all the functionality that entails. The similarity to Microsoft's Courier concept, however, is striking. Courier paired the dual-screen, book-like form-factor with specialized software that fully exploited the touch capabilities to provide a natural, intuitive interface.
However, as with so many tablet-like devices before, the W100 does not do this. The W100 includes Windows 7 Home Premium, which is a perfectly good operating system, but it is not purpose-built for pure touch machines. The user interface is designed for a mouse and a keyboard, and though Windows 7 does include some concessions to touch (for example, it includes an on-screen keyboard with multitouch support, and it enlarges certain interface elements when used with touch machines), it still falls a long way short of the purpose-built interfaces found in so many cell phones and the iPad.
To fill this gap, the W100 does include some custom software: a "Toshiba Bulletin Board," that provides a touch-friendly, widget-based desktop, and "Toshiba ReelTime," with touch-friendly file management. The device can also be used as a more conventional laptop, with one screen serving as a keyboard. A number of keyboard layouts are supported, including a neat split mode for use with thumbs.
The software problem is a continued issue for Microsoft. Given the hardware specs of the W100, Windows 7 is in some ways a natural fit: this is a piece of hardware that's got the horsepower to run fully fledged desktop apps without a problem (in terms of computational capabilities, it has something like five times the integer performance of the A4 processor in the iPad). Using one screen as a keyboard—a keyboard with tactile feedback, no less—arguably also justifies the use of full Windows 7, as it makes the W100 functionally equivalent to a standard laptop.
But if that's all the device is going to be used for, it might as well abandon the second screen and just use a regular keyboard. The unique value of the W100 is that it can be tilted sideways and held like a book with a pair of screens—only it lacks the software to really make use of this mode.
As such, it's hard to see the point of the W100. A similar device based on, say, Android would make sense with the touchscreens, but would then be (in comparison to other Android devices) immensely overpowered, with the drop in battery life that implies. Sticking with Windows 7 limits the utility of the touchscreens, but justifies the stuff under the hood. Combined with the price, it's not hard to see why Toshiba is labeling this a "concept PC." The W100 is unlikely to emulate the iPad's sales figures, and isn't enough—yet—to herald a new era of portable computing.
So we were sent a Wii Classic Controller Pro, the $20 update to the first-generation Classic Controller for the Nintendo Wii. Does it look more "pro" to you?
The controller is laid out very similarly to the first Classic Controller, with the exception of the fins coming down from the bottom of the controller. These fins are thinner than they look in pictures, and they take some getting used to. The idea here is to give the player a better grip than the SNES-style original, but in practice they never seemed to be positioned where we'd like them.
The black controller is also glossy, which means it looks like a mess of fingerprints within seconds of being taken out of its box. It also looks goofy as hell hooked into all my white Wiimotes. You can buy a white version of the controller for the same price if you're into a matching home theater setup.
As nice as the controller is, it never really seemed to fit perfectly in my hand. The Z-button is now placed behind the L and R buttons, giving you two triggers on each side, much like a DualShock 3. This also took a little while to get used to. For some odd reason, it always felt as if it was going to fall out of my hands.
If you already have a Classic Controller, there isn't much reason to upgrade to the Pro. If you don't, however, head to your local game store and try them both out before making your purchase. More choice is never a bad thing.
Microsoft has applied for a neat patent for a smart inductive charger (via Being Manan). Inductive charging, used, for example, in the Palm Pre's Touchstone, allows for contactless charging of devices in close proximity.
The charger couples inductive charging with an LCD screen that can be used to show off "weather conditions, sports scores, news headlines, and/or other selected items" through a wireless connection to a PC. More useful, I would think, would be some indication of the charging status of the device.
Pictures of an apparent prototype of the device have emerged. Though we can't be certain that it uses the patented technology, the prototype is all but identical to the design shown in the patent, suggesting that development is quite far advanced. The prototype is shown charging a wireless mouse. This seems a rather mundane use for such a fancy charger, and the thing is rather smaller than might be expected—the mouse covers the LCD screen when it's on the charger, which renders it all a bit useless.
A larger device that could be used to charge a range of devices (mouse, Windows Phone, some future Zune) would seem a lot more compelling—a one-stop charging shop for all your wireless Microsoft gadgets.
A recent patent application (via WMPowerUser.com) describes a system devised by Microsoft to enable automatic pairing of devices over short-range wireless connections such as Bluetooth and Wireless USB. After an initial manual pairing, say between a phone and a PC, the system would allow those devices to automatically pair with other related devices, such as a second PC.
The pairing mechanism would act as an alternative to the preexisting pairing mechanisms already built into these protocols, and would require device support on both sides of the operation. Public-key cryptography is used to securely share pairing information among different devices; that information might be transmitted via USB key, network connection, or any other convenient method. The described system respects user identities, so merely pairing with a computer would not mean that anyone logged into the machine would be able to use an automatically paired device.
Scenarios in which suitably enabled devices would be useful are not too hard to envision. Having phones automatically paired to all the PCs you own is perhaps the most obvious example of when this would be useful, but more broadly, any peripheral could be used: headsets that you pair with your PC but also work automatically with your phone, mice that work with every PC you own, and so on.
Of course, filing a patent does not mean that this will ever materialize in any shipping product, and there's no indication thus far that this will form a part of Windows Phone 7 Series. That said, phones and PCs are probably the best-suited devices to this kind of technology as, being software-driven, they're the easiest to update to include this kind of extension. Seamless wireless syncing (Zune already performs syncing over WiFi, unlike the iPhone), including seamless wireless pairing, would certainly be another way in which Microsoft could distinguish its phone platform from the iPhone, and would enable the company to promote the more connected, less wired merits of Windows Phone.
Virtualization is a key enabling technology for the modern datacenter. Without virtualization, tricks like load balancing and multitenancy wouldn't be available from datacenters that use commodity x86 hardware to supply the on-demand compute cycles and networked storage that power the current generation of cloud-based Web applications.
Even though it has been used pervasively in datacenters for the past few years, virtualization isn't standing still. Rather, the technology is still evolving, and with the launch of I/O virtualization support from Intel and AMD, it's poised to reach new levels of performance and flexibility. Our past virtualization coverage looked at the basics of what virtualization is and how processors are virtualized. The current installment will take a close look at how I/O virtualization is used to boost the performance of individual servers by better virtualizing parts of the machine besides the CPU.
The AMD Radeon HD 5570 has bowed and the usual suspects have reviews up.