Photoshop and Multicore

John Nack at Adobe writes an interesting post about Adobe Photoshop and what types of things will go better on systems with multi-core processors (which are really like having multiple processors on one chip — both the Mac I just got and the Sony Vaio I’m typing to you on right now have two cores, by the way).

Intel is spending a lot of time figuring out how to put more cores on each chip. This is a place where the hardware is leading the software. I remember my days talking to the Kernel and C++ teams at Microsoft and how they were scratching their heads over how to make it easier to write multicore software. See, only the most advanced software engineers can write software for multicores, and it’ll be years before we see most software really take advantage of multicore systems. John’s post gives you some idea of what Adobe’s engineers are looking at.

16 thoughts on “Photoshop and Multicore”

  1. I think that most software that matters will take advantage of dual or more cores fairly soon — before the end of 2008.

    A good friend of mine is a graphics developer and he has to have the latest thing to keep up with demand for work — Maya rendering takes a lot of power and RAM. I can see people that do hard-core gaming, graphics and photo/video stuff using and taking advantage of bleeding-edge systems, but what about the other 99% of people? You know — normal people with normal needs.

    I’ve been in the IT industry now for almost 10 years, and I’ve noticed what I think is an alarming trend, and that is waste. The industry makes these ultra-fast systems with gobs and gobs of fast processors, tons of RAM, huge-capability video cards, etc. Whatever happened to writing really tight code that didn’t NEED to suck up so many resources? It IS possible. For example, even though the guys at Opera ASA have what is arguably the best Web browser on the market AND it has everything but the kitchen sink, it still comes in at 6.2 MB. That’s coding. Firefox is about the same size, but it doesn’t include a mail client, a bit-torrent client, or a fraction of the useful little widgets that Opera comes with by default. Firefox requires tons of add-ons in the form of extensions to be usable as anything more than a basic browser. The only reason companies don’t care is that bigger, better, faster, etc., drives sales and makes money. The average customer doesn’t need multi-core processors and 2 GB of RAM. MS could have written their new OS and required almost no hardware upgrades, but since MS gets a chunk of money for each PC shipped with their OS, it makes sense for them to require more and more hardware to do so much more.

  2. “only the most advanced software engineers can write software for multicores”

    It’s pretty easy to chuck in a couple of extra threads and get a measurable benefit on multiple cores. Getting anything above 1.5x the performance is the hard bit.

    Of course, one of the great things about multi-processor/multi-core machines is that even if an application maxes out a core, the machine still stays responsive and usable. That alone can be a great productivity benefit.

  3. “Getting anything above 1.5x the performance is the hard bit.”

    Well, yes – although how easily performance scales with the number of cores very much depends on the particular problem at hand. However, I think by far the biggest reason why even the most talented developers find writing multi-threaded code hard is that: a) it’s difficult to avoid introducing bugs; and b) because a multi-threaded program runs differently each time it executes, it’s often difficult to reproduce the circumstances in which the bugs manifest themselves – which can make debugging time-consuming.
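
    The non-determinism being described is easy to see in the classic lost-update bug: two threads incrementing a shared counter. `counter += 1` is really a read, an add, and a write, and without a lock two threads can interleave between those steps — but whether an update actually gets lost depends on how the scheduler happens to slice things up, which is exactly why the bug is so hard to reproduce. A minimal sketch (with the lock the result is deterministic; remove it and the final count becomes timing-dependent):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:        # remove this lock and `counter += 1` becomes a
            counter += 1  # read-add-write that two threads can interleave

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 every time with the lock; without it, sometimes less
```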

  4. Adobe might have an advantage in multicore development, because they spent many years developing their applications to work on multi-processor machines. Back before the multi-core boom, I had a multi-processor machine, and the only app I could find that really took advantage of it was Photoshop.

  5. I dunno about the software not being there. A standard Mac OS X application that does nothing to explicitly create threads may still have 10+ threads running that OS X itself creates. Applications that use threading explicitly (like Safari) may have 45+ threads running. So I think it’s MS’s software that is lagging, more or less.

  6. “… and it’ll be years before we see most software really take advantage of multicore systems.”

    Hehe, so I don’t have to worry about my poverty. I still use my Pentium III notebook and don’t care about multi-core and Vista and stuff.

    Adobe products are too expensive for me as well! 😦

  7. There are other tools available for adding concurrency to your applications besides threads, and some of those tools are significantly safer to use, although not always as well suited to the kinds of concurrency problems that threads are really good at.

    There are two danger points with threads. The first is inexperienced engineers who look at the APIs and think that’s all there is to the threading game. This is dangerous because it is very easy for them to introduce multithreaded bugs that only turn up on the production machines at their largest client. I often see people add support for threads and then test only on a single-processor box. This is a mistake.

    The second danger point is, as Simon stated, related to debugging those threading bugs. Threading is the software development community’s equivalent of the Heisenberg Uncertainty Principle. Threading problems are timing-related, and when we try to observe a thread, we change the timing. Timing can be changed by our actions, by other processes running on the box, by available resources, and so on. It is the developer’s responsibility to make sure that all threads produce the expected results… every time.

    It’s that attention to detail that makes multithreaded development so difficult, and engineers who can successfully deliver stable multithreaded applications so valuable.
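
    One common example of the “safer tools” the comment mentions is message passing: instead of sharing mutable state, threads hand each other messages through a queue, so only one thread ever touches a piece of data at a time and whole classes of the bugs described above simply cannot occur. A minimal producer/consumer sketch in Python (the squaring step is just a placeholder for real per-item work):

```python
import queue
import threading

tasks = queue.Queue()
results = queue.Queue()

def consumer():
    # The worker pulls messages off the queue; no state is shared,
    # so there is nothing to lock and no interleaving to reason about.
    while True:
        item = tasks.get()
        if item is None:          # sentinel meaning "no more work"
            break
        results.put(item * item)  # stand-in for real per-item work

worker = threading.Thread(target=consumer)
worker.start()
for i in range(5):
    tasks.put(i)
tasks.put(None)
worker.join()

print(sorted(results.queue))  # -> [0, 1, 4, 9, 16]
```

    The queue does the synchronization internally, which is the point: the locking is written once, by the library author, instead of being re-derived at every shared variable.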


Comments are closed.