12.10.2009
Google.org unveils deforestation monitor
Google.org's new "high-performance satellite imagery-processing engine" can process terabytes of data on thousands of Google servers while making the results accessible online.
The platform, demonstrated on Thursday at the International Climate Change Conference in Copenhagen, would allow anyone using the tool to monitor whether trees are being cut down in a given forest. It analyzes satellite images to show how forest cover changes over a given time period.
Google.org's test platform of the software shows new deforestation in red, new degradation in orange, old deforestation in yellow, and intact forest in green for the time period between August 17 and September 15.
(Credit: Google.org)
The platform could be used as a tool for countries to conform to REDD (Reducing Emissions from Deforestation and Forest Degradation in Developing Countries), a program proposed by the United Nations.
If REDD is implemented, it would require member nations to monitor the state of their forests and land use, and would offer money in exchange for those nations preventing the felling of forests deemed significant to curbing emissions. The aim is essentially to make the trees worth more standing than loggers could earn by cutting them down or farmers could earn by converting forestland into farmland.
The program draws on the recommendations of several reports, most notably the "Stern Review on the Economics of Climate Change," commissioned by the U.K. government.
The Stern Review's analysis of land use change (PDF) found that keeping forests intact is one of the most cost-effective ways to quickly cut annual world carbon emissions. The hesitation in implementing REDD has centered on how the program would be administered, and how forests would be accurately monitored in developing nations where technological resources are limited.
"We hope this technology will help stop the destruction of the world's rapidly-disappearing forests. Emissions from tropical deforestation are comparable to the emissions of all of the European Union, and are greater than those of all cars, trucks, planes, ships, and trains worldwide," Google.org said on its blog.
Currently, the program is in a testing phase with plans from Google.org to "make it more broadly available over the next year." The organization also said it's willing to provide the platform as a "not-for-profit service," an indication that the satellite imagery may not be freely available to the general public, but restricted to scientists, governments, or environmental monitoring agencies.
Google Earth has long provided satellite imagery of forests over time, but an accurate way to quickly analyze the incoming raw satellite imagery was not readily available, according to Google.org. The new platform provides that missing analysis, the group said.
The platform was developed in conjunction with Greg Asner, professor of the Department of Global Ecology at the Carnegie Institution for Science and of the Department of Geological and Environmental Sciences at Stanford University; and Carlos Souza, a geologist at Imazon, the Amazon Institute for the People and Environment.
12.08.2009
AMD upgraded as 'Fusion,' 16-core chip future looms
Advanced Micro Devices stock was upgraded Thursday by Broadpoint AmTech analyst Doug Freedman, who cited a solid product road map and debt restructuring efforts.
AMD was trading above $7 midday on Thursday, well above the lows of roughly $3.50 seen in July.
Freedman said in a research note Thursday that he is upgrading AMD to "buy" from "neutral" and raising the price target to $10 from $5.80.
"Positive events...lead us to believe that AMD's risk/reward is now compelling," he said. One of the biggest positives was AMD's move on Wednesday to pay off $1 billion in debt using part of its $1.25 billion settlement income from Intel and a new $500 million bond offering. "We believe AMD's debt of $3.7B will be reduced by 25 percent," Freedman said.
Future "Fusion" chips also point toward a more competitive AMD. Fusion silicon--which combines the main CPU with the graphics chip, or GPU--is due in 2011. "We believe Fusion (CPU+GPU) will deliver discrete-like performance on an integrated chip," Freedman said, referring to high-performance standalone "discrete" graphics processors. "Fusion will likely be a low-cost product--targeting mainstream and lower-end," according to Freedman.
Chips that go into servers are also likely set for market share gains, Freedman said. "We estimate that server share could grow from ~8 percent currently, by our own forecast, to ~12 percent by FY10 year-end," he wrote. High-end "Maranello" chips boasting as many as 12 processing cores are due in the first half of next year and 16-core processors are coming in 2011.
Graphics chips compatible with Windows 7's DirectX 11 technology for accelerating games and general multimedia tasks, such as the company's HD 5000 series, are also expected to do well.
Source: news.cnet.com
Intel unveils supercomputer chip, NEC partnership
Intel on Monday disclosed a version of its Xeon processor line optimized for supercomputers and announced a partnership with NEC to develop future supercomputers.
At Supercomputing 2009 in Portland, Ore., Intel unveiled a future version of its "Nehalem-EX" processor optimized for supercomputers. The six-core chip will run at higher speeds than the eight-core versions of the Nehalem-EX processor and will offer advantages for supercomputer-specific tasks, Intel said in a statement. Intel also refers to supercomputing as high-performance computing, or HPC.
The chip architecture will offer greater memory speeds and capacity and will allow customers to build single computers or "nodes" with up to 256 such processors, according to Intel. This will be available next year, Intel said.
Intel said Monday that four out of every five supercomputers on the Top500 list published Monday are powered by Intel processors.
Intel also announced that it is partnering with Japan's NEC--that country's largest supercomputer vendor--to jointly develop technologies "that will push the boundaries of supercomputing performance," according to a joint statement.
NEC will use the technologies in future supercomputers based on the Intel Xeon processor and other technologies such as AVX (Advanced Vector Extensions), an extension to Intel's x86 instruction set architecture.
AVX will be used with Intel's upcoming Sandy Bridge microarchitecture due in 2011, according to Intel.
"With NEC further innovating on Intel Xeon processor-based systems, Intel is poised to bring Intel Xeon processor performance to an even wider supercomputing audience," said Richard Dracott, general manager of Intel's High Performance Computing Group, in a statement.
Fumihiko Hisamitsu, general manager of HPC Division at NEC, said: "NEC's substantial experience in the development of vector processing systems...is a natural fit for taking Intel architecture further into new markets."
A vector processor design can perform operations on multiple data elements simultaneously. Intel Xeon chips are good at scalar processing, which handles one data item at a time.
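The scalar-versus-vector distinction can be sketched in a few lines of Python. This is an illustrative model only, not NEC or Intel code: the explicit loop stands in for a scalar core handling one element per operation, while NumPy's whole-array add stands in for vector hardware applying one operation across many elements at once.

```python
import numpy as np

# Scalar style: one data element per operation, as on a conventional core.
def scalar_add(a, b):
    out = []
    for x, y in zip(a, b):   # one element handled per step
        out.append(x + y)
    return out

# Vector style: a single operation applied to a whole array of elements.
# NumPy stands in here for SX-style vector hardware.
a = np.arange(4, dtype=np.float64)   # [0., 1., 2., 3.]
b = np.ones(4)                       # [1., 1., 1., 1.]
vector_sum = a + b                   # whole-array add in one expression

# Both styles compute the same result; only the execution model differs.
assert scalar_add(list(a), list(b)) == vector_sum.tolist()
```

On real vector machines the payoff is that the single vector instruction amortizes instruction-issue overhead across all the elements it touches.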
The initial focus of the collaboration will be developing technology to boost memory speed and scalability--the latter meaning the ability to expand a system to increase performance or capacity. "Such enhancements are intended to benefit systems targeting not only the very high end of the scientific computing market segment, but also to benefit smaller HPC installations," the two companies said.
NEC will also continue to sell its existing SX vector processor-based products. NEC, for example, currently markets the SX-9 supercomputer.
IBM: Envisioning the world's fastest supercomputer
IBM will release a radical new chip next year that will go into a University of Illinois machine in a quest to build what may become the world's fastest supercomputer.
That university's supercomputing center is a storied place, home to famous supercomputers both fictional and real. The HAL 9000, the sentient computer of "2001: A Space Odyssey," was built in Urbana, Illinois, presumably on the University of Illinois Urbana-Champaign campus.
Though not aspiring to artificial intelligence, the IBM Blue Waters supercomputer, like the HAL 9000 series, will be able to do massively complex calculations in an instant and, like HAL, will be built in Urbana-Champaign. It will be housed in a building on the Urbana-Champaign campus constructed specifically for the machine, which will theoretically be capable of 10 petaflops, about 10 times as fast as the fastest supercomputer today. (A petaflop is 1 quadrillion floating-point operations per second, a key indicator of supercomputer performance.)
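The petaflop figures above reduce to simple powers of ten, which a short back-of-the-envelope calculation makes concrete. The one-second workload is a made-up example, not an IBM benchmark:

```python
# A petaflop is 10**15 floating-point operations per second (1 quadrillion).
PETAFLOP = 10**15

# Blue Waters' cited theoretical target: 10 petaflops.
blue_waters_peak = 10 * PETAFLOP
print(f"{blue_waters_peak:.0e} ops/sec")   # 1e+16

# One second of work at that rate would take a 1-petaflop machine
# (roughly the fastest class of 2009) ten times as long:
work = blue_waters_peak * 1.0              # ops in one Blue Waters second
print(work / PETAFLOP, "seconds at 1 petaflop")   # 10.0
```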
Part of the National Center for Supercomputing Applications (NCSA) at the University of Illinois, it will be the largest publicly accessible supercomputer in the world when it's turned on sometime in 2011.
A supercomputer is essentially a large collection of microprocessors acting in concert on a complex problem. As processor designs go, Blue Waters' IBM Power7 chip--due in the first half of 2010--is a big step for IBM: it fuses the flagship Power chip design with key technology from the separate "Cell" processor, which was part of IBM's "Roadrunner" supercomputer at the Los Alamos National Laboratory, often ranked the fastest in the world, according to Bradley McCredie, an IBM Fellow in the Systems and Technology Group.
"We took some of that genetic material from the Cell program--ways to do floating point (calculations)--and embedded that right into the Power7 core," McCredie said in an interview with CNET.
But that's not the only thing that makes the Power7 chip special. It integrates eight processing cores in one chip package and each core can execute four tasks--called "threads"--turning an individual chip into a virtual 32-core processor. As a yardstick, Intel's high-end Xeon processors typically have two threads per processing core.
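The thread arithmetic behind those figures is straightforward. Note the eight-core Xeon used for comparison is a hypothetical configuration for illustration, not a specific Intel part:

```python
# Hardware threads = cores per chip * simultaneous threads per core (SMT).
power7_threads = 8 * 4   # Power7: 8 cores, 4 threads each
xeon_threads = 8 * 2     # hypothetical 8-core Xeon, 2 threads per core

print(power7_threads)    # 32
print(xeon_threads)      # 16
```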
Artist rendering of University of Illinois center that will house IBM's Blue Waters supercomputer
(Credit: University of Illinois)
IBM is also using novel memory technology. Static RAM, used as the on-chip cache memory in almost all processors today, can add as much as a billion transistors to a high-end processor. IBM wanted to avoid that ballooning--and costly--transistor count and elected to use a technology called E-DRAM, keeping the chip's total to 1.2 billion transistors. "The equivalent number of transistors if we had done all of the cache in (static RAM) is well in excess of two billion," McCredie said.
And the chip's speed? Between 3GHz and 4GHz (IBM has yet to make a final decision)--actually a lower clock than the previous Power6 chip, which ran at 5GHz. "We have gotten performance from other spots, such as the dense E-DRAM. We had to back off from the gigahertz in order to get eight of these cores on to the chip and not have it melt," according to McCredie.
IBM has also made other tweaks to boost performance. It has brought the circuitry for communicating with system memory--previously external to the processor--onto the chip, and returned to "out-of-order" instruction execution.
And how does IBM keep this dense collection of ultrafast processors cool? In a word, water. "We actually went a bit further environmentally," said Ed Seminaro, an IBM Fellow who is involved with the University of Illinois project. "We took a lot of the infrastructure that's typically inside of the computer room for cooling and powering and moved the equivalent of that infrastructure right into that same cabinet with the server, storage, and interconnect hardware."
Seminaro continued: "The whole rack is water-cooled. We actually water-cool the processor directly to pull the heat out. We take it right to water, which is very power efficient."
World's fastest supercomputer?
The Blue Waters project is funded by the National Science Foundation and follows a Defense Advanced Research Projects Agency (DARPA) project that also uses IBM's Power7 chip, according to Seminaro.
"We did a lot more than what we would have typically done because of this (DARPA) engagement," Seminaro said. "And what followed was the Blue Waters contract. Blue Waters was a bid to the National Science Foundation. We had a strong story based on what we did for DARPA," he said.
Blue Waters will theoretically be able to hook together 16,384 Power7 chips--referred to as "nodes"--for a total theoretical performance of 16 petaflops. IBM said, however, that at least initially the theoretical peak will likely be closer to 10 petaflops, and that the much stricter (and more realistic) "sustained" performance on real-world software applications--not the figure cited in Top 500 supercomputer statistics--will be about one petaflop.
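Dividing those headline numbers through gives a feel for the per-node contribution. This is simple arithmetic on the figures in the article, not an IBM specification:

```python
nodes = 16_384               # Power7 chips ("nodes") in the full system
peak_total = 16e15           # 16 petaflops, total theoretical peak

per_node = peak_total / nodes
print(f"{per_node / 1e12:.2f} teraflops per node")   # 0.98 teraflops per node

# The gap between peak and sustained performance cited above:
sustained = 1e15             # ~1 petaflop sustained on real applications
print(f"{sustained / peak_total:.0%} of theoretical peak")   # 6% of theoretical peak
```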
But organizations like DARPA and the NSF are not looking only at Top 500 "peak" benchmarks, which can be achieved rather crudely, according to Seminaro. "You can get a pretty good number without a lot of bandwidth (speed) between nodes," he said, referring to the Top 500 Supercomputer "LINPACK" benchmark. "Because there's almost no communication between nodes that you have to do for this benchmark. And you can get away with very poor memory bandwidth."
In Blue Waters' case, the transfer rate between nodes is a game changer, Seminaro believes. "The transfer of data between any of those two nodes in the system is at the full rate of 192GB per second--peak," he said. "So, you can get data from anyplace to anyplace at that kind of speed with latency on the order of less than one microsecond."
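Those two figures--192GB per second of bandwidth and sub-microsecond latency--combine into a simple transfer-time model. The 1GB payload below is a made-up example for illustration:

```python
# Time to move a message between two nodes ~ latency + size / bandwidth.
RATE = 192 * 10**9    # bytes/sec (192GB/s peak, decimal gigabytes assumed)
LATENCY = 1e-6        # "on the order of less than one microsecond"

payload = 10**9       # hypothetical 1GB message
transfer_time = LATENCY + payload / RATE
print(f"{transfer_time * 1000:.2f} ms")   # 5.21 ms
```

For large messages the bandwidth term dominates; the sub-microsecond latency matters most for the many small messages that tightly coupled simulations exchange.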
This kind of performance is crucial for big companies and governments alike. "Companies [including] Boeing, GM, and Ford use these systems heavily. Most of the crash tests are now done on these machines. And weather prediction--a large percentage is done on this platform," Seminaro said. More specialized government-centric applications include simulations of how to properly dispose of nuclear waste, he added.
Blue Waters is the largest supercomputer project that Seminaro has been involved in since 1999, when he first participated in a supercomputer project. "This is really the biggest," he said.
And watch out, Intel: Power7 is coming to commercial server products, too. Said Seminaro: "We will be shipping [Power7 processors] sometime in the first half of next year in some [of] our commercial products."
Source: news.cnet.com
Apple MacBook vs. HP Envy
The trend-setting MacBook Pro and Air both now face tough competition from Hewlett-Packard, which has the resources to match--and in some cases exceed--Apple's laptop designs.
HP Envy 13
(Credit: Hewlett-Packard)
In response to some reader comments, I will expand briefly on a previous post in which I compared, on technological merits, the 13-inch Apple MacBook Pro and Air laptops with the HP Envy 13.
I had stated, as an opinion, that the aluminum-clad HP Envy 13 had eclipsed Apple's MacBooks technologically in some crucial areas: the processors offered, screen resolution, graphics, and battery life.
The assertion that the HP Envy 13 has surpassed, in some important respects, the MacBook Air and 13-inch MacBook Pro in technology shouldn't be that surprising considering the financial and technological resources that HP has.
Companies like HP and Dell bifurcate their lineups into inexpensive models (typically retail consumer machines) and more expensive ones (often business models). Some are of decidedly lower quality than Apple's--as many comments quickly point out--but some equal or better a roughly equivalent Apple laptop in both quality and technology.
The Envy 13--which is HP's entry into the luxury laptop category--falls into the better-than-Apple-laptop-technology category, in my opinion. The luxury Adamo line from Dell is also making a play to, at the very least, achieve parity with Apple's MacBook line.
Again, this is an opinion, not a be-all, end-all verdict on the fate of Apple--and not a review, per se, that gets into benchmarks. I'm just looking at the raw technology.
Opinion pieces invariably elicit strong counterarguments--not to mention strong opinions (or invective)--especially when Apple is involved.
Source: news.cnet.com