Multicore Chips: Getting Difficult for Programmers

Adding more cores is desirable to meet growing computing demands, but it creates new challenges for programmers writing code that enables applications to work effectively with multicore chips.
As technology develops at a fast rate, developers face the challenge of adapting to programming for multicore systems, said Doug Davis, vice president of the digital enterprise group at Intel, during a speech at the Multicore Expo in Santa Clara, California. Programmers will have to transition from writing for single-core processors to multiple cores, while future-proofing their code so it keeps pace as additional cores are added to a computing system.
Programming models can be designed that take advantage of hyperthreading, which exploits the parallel processing capabilities of multiple cores to boost application performance cost-effectively. Intel is working with universities and funding programs that will train programmers to develop applications that solve these problems.
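The kind of restructuring this transition demands can be sketched in a few lines. The example below is a hypothetical illustration, not code from Intel's training programs: a CPU-bound computation is split into independent chunks so that each chunk can run on its own core, and the same code keeps scaling as more workers (cores) become available.

```python
# Hypothetical sketch of structuring work for multiple cores
# (illustrative only; not code from Intel's curriculum).
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Sum of squares over [lo, hi) -- one independent chunk of work."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_squares(n, workers=4):
    """Split [0, n) into one chunk per worker so each can run on its own core."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    # Separate processes sidestep Python's GIL, so chunks truly run in parallel.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_squares(1_000_000))
```

Because each chunk is independent, raising `workers` as more cores appear requires no change to the algorithm, which is the kind of future-proofing the article describes.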
Intel, along with Microsoft, has donated $20 million to the University of California at Berkeley and the University of Illinois at Urbana-Champaign to train students and conduct research on multicore programming. The centers will tackle the challenges of writing programs for multicore processors that carry out more than one set of instructions at a time, a scenario known as parallel computing.
Beyond future-proofing code for parallelism, adapting legacy applications to new computing environments that take advantage of multicore processing is another challenge coders face. Writing code from scratch is the ideal option, but it can be expensive.
Every major processor architecture has evolved quickly at the pace described by Moore's Law, which has set the expectation of better processor and application performance every two years. Now the challenge is to deliver that performance within a defined power envelope. Power consumption is driving multicore chip development, and programmers need to write code that works within that envelope.
Adding cores to a chip is a better power-saving way to boost performance than cranking up the clock frequency of a single-core processor, Davis said: it increases performance while holding down power consumption.
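This trade-off can be illustrated with the standard back-of-the-envelope model for dynamic CMOS power, P ≈ C·V²·f. The constants below are made-up placeholders, not Intel data: doubling a core's frequency usually requires raising its voltage as well, so one fast core burns more power than two slower cores delivering similar combined throughput.

```python
# Back-of-the-envelope dynamic CMOS power model: P ≈ C * V^2 * f.
# All constants here are illustrative placeholders, not real chip figures.
def dynamic_power(capacitance, voltage, freq_ghz):
    """Relative dynamic power of one core."""
    return capacitance * voltage ** 2 * freq_ghz

one_core_2ghz = dynamic_power(1.0, 1.0, 2.0)   # baseline core
two_cores_2ghz = 2 * one_core_2ghz             # ~2x throughput for ~2x power

# Doubling frequency typically needs a voltage bump too (assume ~1.3x here),
# so power grows much faster than performance:
one_core_4ghz = dynamic_power(1.0, 1.3, 4.0)

print(two_cores_2ghz, one_core_4ghz)           # two slower cores use less power
```

Under these assumed numbers, the two-core option costs 4.0 units of power against roughly 6.76 for the single fast core, which is why multicore designs win inside a fixed power envelope.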
In 2007, about 40% of desktops, laptops and servers shipped with multicore processors. By 2011, about 90% of shipping PCs are expected to have multicore processors. Almost all Microsoft Windows Vista PCs shipping in 2008 were multicore, Davis said.
Intel is also working on the 80-core Polaris chip, which delivers teraflops of performance. The next 'killer' application for multicore computing could be tools that enable real-time collection, mining and analysis of data. For example, military personnel using wearable multicore computers could simulate, analyze and synthesize data in real time to show how a situation might unfold, without putting themselves at risk.
As cores are added, the performance boost may also enable new applications. The oil and gas industry will demand one petaflop of computing capacity by 2010, up from 400 teraflops in 2008, to cost-effectively collect seismic data, compare it with historical data and analyze the results. Explorers can already collect and analyze data far faster than in the past.
Glossary:
Petaflop: A measure of computing speed equal to one thousand trillion (10^15) floating-point operations per second (flops).
Teraflop: A measure of computing speed equal to one trillion (10^12) floating-point operations per second.
Nadeem Khan Khattak

The writer is an international journalist and commentator with vast experience in international politics and finance. Readers who want to contact him or ask a question can reach him in the comment section.
