dl@cs.oswego.edu
(All of these are of course unedited; cut-and-pasted from the indicated pages or articles.)
Concurrent Programming in Java: Design Principles and Patterns by Doug Lea (Addison-Wesley, $40) covers how to use threads effectively and the common design issues in multithreaded programming. Most books will tell you the mechanics of how to use threads. This book goes a step beyond that and discusses in detail when using threads is appropriate, and when it is not. Many topics are covered (synchronization, data flow, scheduling, etc.) and there are numerous examples throughout the book. Each chapter also includes a "Further Readings" section that lists other books and papers that are useful references.
I highly recommend this book. I think it's one that every serious Java programmer should own. However, I caution novice programmers that they might find some of the topics a bit difficult to understand.
This is the best book available on multi-threaded programming, but it's a little tough going in parts.
This book pushes the envelope of leading-edge development into a new realm. Seasoned object-oriented programmers will recognize this work as documenting the state of the art. The clarity of the examples let me believe I understood the nuances of this deeply complex subject. I hope Mr. Lea goes on to apply the design principles and patterns to other areas of distributed object development.
From: maclenna@ozone.uiowa.edu (Mark MacLennan)
Organization: Center for Global and Regional Environmental Research
Date: 23 Nov 1996 04:24:35 GMT
Newsgroups: comp.lang.java.programmer
Message-ID: <575ua3$k5u@flood.weeg.uiowa.edu>

> > Mark MacLennan wrote:
> > Hmm - you might want to re-read the threads stuff yet again to learn
> > that parallel programming and programming with threads are not
> > necessarily the same thing - there are some important and sometimes
> > subtle differences.

linden@positive.eng.sun.com (Peter van der Linden) replies:

> I think a lot of people use the terms "concurrent programming",
> "threads programming", "parallel programming", and "lightweight
> processes" more or less interchangeably.

Yes, you are right - many people do use these technical terms more or
less interchangeably, and often they do imply the same thing, but that
is not necessarily the case. Some people even use "C++" and
"object-oriented programming" more or less interchangeably too ;-)

> Why don't you help us out a bit by explaining what you think the
> important and subtle differences are?  Thanks,

Parallel programming is generally concerned with methods and designs
that specifically exploit multiple CPUs - often this involves a
particular configuration of processors (and possibly specialized
algorithms), along with attempting to parallelize problems that may not
obviously appear to be parallel. This parallelism implies tasks that are
performed simultaneously on different processors.

The use of threads tends to exploit problems that intrinsically can be
divided into multiple operations within a single program - and such
programs will typically execute on a single-processor computer (although
parallel computers are not necessarily excluded). Any parallelism is
usually at the option of the underlying system (i.e. the threads may or
may not run at the same time). There are problems that lend themselves
to multithreaded code (e.g. web browsers, certain user interfaces) that
are not normally associated with "parallel programming".

Actually, a far better summary of these terms (and others), along with
LOTS of references, can be found at the end of chapter 1 of Doug Lea's
"Concurrent Programming in Java" book. You can also infer why he chose
the term "concurrent programming" for the title of his book rather than
"parallel programming" :-)

[There are a handful of really good Java books out there among the Java
glut and this is one of them ...]

cheers,
MARK
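To make the distinction above concrete, here is a minimal Java sketch (the class name and the printed "steps" are just placeholders, not anything from the post): two logically independent activities run in one program, and whether they ever execute at the same instant is left to the scheduler - the program is concurrent either way, with or without multiple CPUs.

    // A minimal sketch: concurrency within a single program.
    // The background thread stands in for a long-running activity
    // (e.g. a browser fetching a page) while the main thread keeps going.
    public class ConcurrentFetch {
        public static void main(String[] args) throws InterruptedException {
            Thread background = new Thread(new Runnable() {
                public void run() {
                    for (int i = 0; i < 5; i++) {
                        System.out.println("background: step " + i);
                        try { Thread.sleep(100); } catch (InterruptedException e) { return; }
                    }
                }
            });
            background.start();

            // Meanwhile the "foreground" part of the program stays responsive.
            for (int i = 0; i < 5; i++) {
                System.out.println("foreground: step " + i);
                Thread.sleep(100);
            }
            background.join();  // wait for the background activity to finish
        }
    }

On a single processor the two loops simply interleave; on a multiprocessor they may truly run in parallel, but the program is written the same way in both cases.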
From: rmartin@oma.com (Robert C. Martin)
Organization: Object Mentor Inc.
Date: Mon, 11 Nov 1996 17:52:33 -0600
Newsgroups: comp.object

In article <55q3v3$r2@nnews.OLdham.GPsemi.Com>, Dave Whipp wrote:

> Bill Gooch wrote:
> > I think because it brings up a host of complications around
> > concurrency control. It's also basically harder (for most
> > people) to simulate highly asynchronous behavior mentally
> > than to deal with sequential programs.
>
> I don't think that it's a problem of "harder", it's more one of
> "different". Once you've used asynchronous simulators a bit,
> or built some hardware, it becomes natural to think in terms of
> asynchronous messages.

It may become natural, but it is still harder. The problems of
concurrent access can be daunting. For an excellent discussion of this
topic see Doug Lea's new book: "Concurrent Programming in Java". This
one is a keeper. Although the title mentions the word Java, the book is
really about concurrent programming and just happens to use Java as the
language that it explains things in. The concepts it presents are
universal. Read the book even if you aren't using Java.

> Even in an asynchronous model, causality still applies.

But it can be a *lot* harder to track. Indeed, it can be impossible.
Consider:

    int a = 1;
    int b = 2;
    void swapab() { int tmp = a; a = b; b = tmp; }
    void swapba() { int tmp = b; b = a; a = tmp; }

    /* in thread 1 */ swapab();
    /* in thread 2 */ swapba();

The values of a and b after the two threads complete might be any of
the following combinations:

     a   b
    -------
     1   2
     1   1
     2   2
     2   1

Which of these combinations results will be determined only by the
timing of the two threads; and that timing can be based upon outside
stimuli which are, for all intents and purposes, random. Thus, there can
be a considerable amount of non-determinism involved in concurrent
programming. Causality may exist, but that is little comfort when the
thread of causality cannot be determined.

> If you
> send a message to an object then it will process it at a time
> after you send it. Most states only emit a small number of
> messages before terminating, so it is still possible to trace
> "threads" of causality. Unless you write spaghetti code, an
> asynchronous model is no harder to follow than a synchronous one.

Actually it can be one hell of a lot harder. Consider the classic race
condition. Thread 1 sends a request to thread 2. Thread 1 waits an
appropriate amount of time and then decides to cancel the request
because it can't wait any longer; so it sends a cancel message to
thread 2. Thread 2 sends the response to the request just as thread 1
is cancelling it. Thus, thread 1 receives a response to a request that
it has just cancelled, and thread 2 receives a cancel for a request that
it has just serviced. This is a minor race condition.

Unless you are very careful to understand all the race conditions that
can exist between threads, you can get into some really horrible
difficulties. Consider this. Thread 1 needs resources A and B in order
to do its job. Thread 2 needs the same resources. Thread 1 acquires
resource A and is then preempted by thread 2. Thread 2 acquires
resource B and is then preempted by thread 1. Now thread 1 blocks
waiting for B and thread 2 blocks waiting for A. Deadlock.

Deadlock between two resources and two threads is relatively easy to
avoid. But when N threads and M resources are in play, it gets a *lot*
harder.

Don't minimize the issues of concurrent programming.
There are some very serious engineering concerns that need to be
addressed. Anyone rushing headlong into this without careful study and
consideration will find themselves in a world of hurt.

--
Robert C. Martin    | Design Consulting    | Training courses offered:
Object Mentor       | rmartin@oma.com      |   Object Oriented Design
14619 N Somerset Cr | Tel: (847) 918-1004  |   C++
Green Oaks IL 60048 | Fax: (847) 918-1023  | http://www.oma.com

"One of the great commandments of science is: 'Mistrust arguments from
authority.'" -- Carl Sagan
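The two-thread, two-resource deadlock described in the post above is easy to reproduce in Java. The following is a minimal sketch, not anything from the post itself: the class name, the sleep durations, and the two lock objects standing in for "resource A" and "resource B" are all illustrative placeholders. Each thread grabs one lock, pauses long enough for the other thread to grab the other lock, and then blocks forever trying to acquire the lock the other thread holds.

    // A minimal sketch of the deadlock scenario:
    // thread 1 holds A and waits for B; thread 2 holds B and waits for A.
    public class DeadlockDemo {
        static final Object resourceA = new Object();
        static final Object resourceB = new Object();

        public static void main(String[] args) {
            new Thread(new Runnable() {
                public void run() {
                    synchronized (resourceA) {        // thread 1 acquires A
                        pause(100);                   // give thread 2 time to acquire B
                        synchronized (resourceB) {    // blocks: B is held by thread 2
                            System.out.println("thread 1 got A and B");
                        }
                    }
                }
            }).start();

            new Thread(new Runnable() {
                public void run() {
                    synchronized (resourceB) {        // thread 2 acquires B
                        pause(100);                   // give thread 1 time to acquire A
                        synchronized (resourceA) {    // blocks: A is held by thread 1
                            System.out.println("thread 2 got B and A");
                        }
                    }
                }
            }).start();
            // With the pauses in place, this program almost always hangs.
        }

        static void pause(long ms) {
            try { Thread.sleep(ms); } catch (InterruptedException e) { }
        }
    }

The usual remedy, as the post implies, is discipline rather than luck: for example, have every thread acquire the locks in one agreed-upon order, so the circular wait can never form.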