Do you ever wish the GNU C++ compiler provided shorter syntax-error messages so as not to scare uni students away?
Of course, but it is not all GCC's fault. The fundamental problem is that C++98 provides no way for the programmer to directly and simply state a template's requirements on its argument types. That is a weakness of the language - not of a compiler - and can only be completely addressed through a language change, which will be part of C++0x.
I'm referring to "concepts" which will allow C++0x programmers to precisely specify the requirements of sets of template arguments and have those requirements checked at call points and definition points (in isolation) just like any other type check in the language. For details, see any of my papers on C++0x or "Concepts: Linguistic Support for Generic Programming in C++" by Doug Gregor et al (including me) from OOPSLA'06 (available from my publications page). An experimental implementation can be downloaded from Doug Gregor's home pages.
Until "concepts" are universally available, we can use "constraints classes" to dramatically improve checking; see my technical FAQ.
The STL is one of the few (if not the only) general purpose libraries for which programmers can actually see complexity guarantees. Why do you think this is?
The STL is - as usual - ahead of its time. It is hard work to provide the right guarantees and most library designers prefer to spend their efforts on more visible features. The complexity guarantees are basically one attempt among many to ensure quality.
In the last couple of years, we have seen distributed computing become more available to the average programmer. How will this affect C++?
That's hard to say, but before dealing with distributed programming, a language has to support concurrency and be able to deal with more than the conventional "flat/uniform" memory model. C++0x does exactly that. The memory model, the atomic types, and thread-local storage provide the basic guarantees needed to support a good threads library. In all, C++0x allows for the basic and efficient use of multi-cores. On top of that, we need higher-level concurrency models for easy and effective exploitation of concurrency in our applications. Language features such as "function objects" (available in C++98) and lambdas (a C++0x feature) will help with that, but we need to provide support beyond the basic "let a bunch of threads loose in a common address space" view of concurrency, which I consider necessary as infrastructure and the worst possible way of organizing concurrent applications.
As ever, the C++ approach is to provide efficient primitives and very general (and efficient) abstraction mechanisms, which are then used to build higher-level abstractions as libraries.
Of course you don't have to wait for C++0x to do concurrent programming in C++. People have been doing that for years and most of what the new standard offers related to concurrency is currently available in pre-standard forms.
Do you see this leading to the creation of a new generation of general purpose languages?
Many of the "scripting languages" provide facilities for managing state in a Web environment, and that is their real strength. Simple text manipulation is fairly easily matched by libraries, such as the new C++ regular expression library (available now from boost.org) but it is hard to conceive of a language that is both general-purpose and distributed. The root of that problem is that convenient distributed programming relies on simplification and specialization. A general-purpose language cannot just provide a single high-level distribution model.
I see no fundamental reason against a general-purpose language being augmented by basic facilities for distribution, however, and I (unsuccessfully) argued that C++0x should do exactly that. I think that eventually all major languages will provide some support for distribution through a combination of direct language support, run-time support, or libraries.