In order to automatically parallelize a program, it is necessary for us to know exactly which parts of the code read and modify which data.
The fact that both C and Fortran allow global variables makes it messy (but not impossible) to find out exactly which routines modify which variables. When a large program is split into separate compilation units (source files), the compiler must look through all of them to see which routines touch the global variables.
In C, aliasing is possible. That is, two seemingly distinct variables may actually share some of the same memory, so code that changes one variable may also change another, even though this is not apparent from the code itself. Aliasing cannot, in general, be predicted at compile time. This makes it effectively impossible to automatically parallelize C code without the user supplying hints to the compiler.
However, the main problem with existing parallelizing systems (as I see it) is that they parallelize at compile time. The dimensions of variables are usually unknown at compile time, which means that parallelization must rely on either the user telling the compiler and/or libraries the likely dimensions of the variables, or the code and libraries performing a number of dimension checks at run time.
If one chooses to hand-parallelize the code using MPI, PVM, or similar libraries, the problem of changes becomes even worse. The criteria for when parallelization pays off and when it doesn't are often reflected in the design of the program, not just in a set of easily configured parameters. (I know, because I have worked with a lot of code suffering from this.)