7. Known Causes of Trouble with GCC
This section describes known problems that affect users of GCC. Most of these are not GCC bugs per se--if they were, we would fix them. But the result for a user may be like the result of a bug.
Some of these problems are due to bugs in other software, some are missing features that are too much work to add, and some are places where people's opinions differ as to what is best.
7.1 Actual Bugs We Haven't Fixed Yet -- Bugs we will fix later.
7.2 Installation Problems -- Problems that manifest when you install GCC.
7.3 Cross-Compiler Problems -- Common problems of cross compiling with GCC.
7.4 Interoperation -- Problems using GCC with other compilers, and with certain linkers, assemblers and debuggers.
7.5 Problems Compiling Certain Programs -- Problems compiling certain programs.
7.6 Incompatibilities of GCC -- GCC is incompatible with traditional C.
7.7 Fixed Header Files -- GNU C uses corrected versions of system header files. This is necessary, but doesn't always work smoothly.
7.8 Standard Libraries -- GNU C uses the system C library, which might not be compliant with the ISO/ANSI C standard.
7.9 Disappointments and Misunderstandings -- Regrettable things we can't change, but not quite bugs.
7.10 Common Misunderstandings with GNU C++ -- Common misunderstandings with GNU C++.
7.11 Caveats of using protoize -- Things to watch out for when using protoize.
7.12 Certain Changes We Don't Want to Make -- Things we think are right, but some others disagree.
7.13 Warning Messages and Error Messages -- Which problems in your code get warnings, and which get errors.
7.1 Actual Bugs We Haven't Fixed Yet
The fixincludes script interacts badly with automounters; if the directory of system header files is automounted, it tends to be unmounted while fixincludes is running. This would seem to be a bug in the automounter. We don't know any good way to work around it.
The fixproto script will sometimes add prototypes for the sigsetjmp and siglongjmp functions that reference the jmp_buf type before that type is defined. To work around this, edit the offending file and place the typedef in front of the prototypes.
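As an illustration only (the exact typedef and prototypes vary from system to system), the repaired region of such a header would look something like this:

/* The system's own typedef for jmp_buf, moved ahead of the prototypes;
   the real definition differs between systems.  */
typedef int jmp_buf[16];

int sigsetjmp (jmp_buf, int);
void siglongjmp (jmp_buf, int);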
7.2 Installation Problems
This is a list of problems (and some apparent problems which don't really mean anything is wrong) that show up during installation of GNU CC.
In certain cases, defining environment variables such as CC can interfere with the functioning of make.

If you build GCC in a directory on a System V file system that doesn't support symbolic links, problems may occur in running fixincludes. These problems result in a failure to fix the declaration of size_t in `sys/types.h'. If you find that size_t is a signed type and that type mismatches occur, this could be the cause.

The solution is not to use such a directory for building GCC.
In previous versions of GCC, the gcc driver program looked for as and ld in various places; for example, in files beginning with `/usr/local/lib/gcc-'. GCC version 2 looks for them in the directory `/usr/local/lib/gcc-lib/target/version'.

Thus, to use a version of as or ld that is not the system default, for example gas or GNU ld, you must put them in that directory (or make links to them from that directory).
Some commands executed when making the compiler may fail (return a nonzero status) and be ignored by make. These failures, which are often due to files that were not found, are expected, and can safely be ignored.

Sometimes make recompiles parts of the compiler when installing the compiler. In one case, this was traced down to a bug in make. Either ignore the problem or switch to GNU Make.
If Purify is installed on your system, you may encounter errors while linking enquire, which is part of building GCC. The fix is to get rid of the file real-ld which purify installs--so that GCC won't try to use it.

On some systems, a reported workaround is to change the __GNU_LIBRARY__ conditional around line 31 to `#if 1'.
On some 386 systems, building GCC never finishes because enquire hangs due to a hardware problem in the motherboard--it reports floating point exceptions to the kernel incorrectly. You can install GCC except for `float.h' by patching out the command to run enquire. You may also be able to fix the problem for real by getting a replacement motherboard. This problem was observed in Revision E of the Micronics motherboard, and is fixed in Revision F. It has also been observed in the MYLEX MXA-33 motherboard.
If you encounter this problem, you may also want to consider removing the FPU from the socket during the compilation. Alternatively, if you are running SCO Unix, you can reboot and force the FPU to be ignored. To do this, type `hd(40)unix auto ignorefpu'.
Some 386 systems have a buggy kernel floating point emulator, which also prevents GCC from being built or used reliably. One of these systems is the Unix from Interactive Systems: 386/ix. On this system, an alternate emulator is provided, and it does work. To use it, execute this command as super-user:
ln /etc/emulator.rel1 /etc/emulator
and then reboot the system. (The default emulator file remains present under the name `emulator.dflt'.)
If you have such a problem on the SCO system, try using `/etc/emulator.att'.
Another system which has this problem is Esix. We don't know whether it has an alternate emulator that works.
On NetBSD 0.8, a similar problem manifests itself as these error messages:
enquire.c: In function `fprop':
enquire.c:2328: floating overflow
You may observe a crash in the program genflags or genoutput while building GCC. This is said to be due to a bug in sh. You can probably get around it by running genflags or genoutput manually and then retrying the make.
The solution is to compile the current version of GCC without `-g'. That makes a working compiler which you can use to recompile with `-g'.
To check whether an optional package is installed, use the pkginfo command. To add an optional package, use the pkgadd command. For further details, see the Solaris documentation.
For Solaris 2.0 and 2.1, GCC needs six packages: `SUNWarc', `SUNWbtool', `SUNWesu', `SUNWhea', `SUNWlibm', and `SUNWtoo'.
For Solaris 2.2, GCC needs an additional seventh package: `SUNWsprot'.
Trying to use the linker and other tools in `/usr/ucb' to install GCC has been observed to cause trouble; the fix is to remove `/usr/ucb' from your PATH.

The system assembler on some platforms has been reported to have problems with the floating point instruction add.d.
It would be nice to extend GAS to produce the gp tables, but they are optional, and there should not be a warning about their absence.
On some systems, the stage 1 build does not automatically find the header files corrected by fixincludes. This causes problems in building GCC. Once GCC is installed, the problems go away.
To work around this problem, when making the stage 1 compiler, specify this option to Make:
GCC_FOR_TARGET="./xgcc -B./ -I./include"
When making stage 2 and stage 3, specify this option:
CFLAGS="-g -I./include"
Users have also reported some problems with version 2.20 of the MIPS compiler tools that were shipped with RISC/os 4.x. The earlier version 2.11 seems to work fine.
The linker mishandles code that uses alloca when linking against shared libraries on RISC-OS 5.0 and DEC's OSF/1 systems. This is a bug in the linker that is supposed to be fixed in future revisions. To protect against this, GCC passes `-non_shared' to the linker unless you pass an explicit `-shared' or `-call_shared' switch.
ld fatal: failed to write symbol name something in strings table for file whatever
This probably indicates that the disk is full or your ULIMIT won't allow the file to be as large as it needs to be.
This problem can also result because the kernel parameter MAXUMEM
is too small. If so, you must regenerate the kernel and make the value
much larger. The default value is reported to be 1024; a value of 32768
is said to work. Smaller values may also work.
A message like this:

/usr/local/lib/bison.simple: In function `yyparse':
/usr/local/lib/bison.simple:625: virtual memory exhausted

also indicates a problem with disk space, ULIMIT, or MAXUMEM.
To solve this problem, reconfigure the kernel adding the following line to the configuration file:
MAXUMEM = 4096
_floatdisf
cc1: warning: `-g' option not supported on this version of GCC
cc1: warning: `-g1' option not supported on this version of GCC
./xgcc: Internal compiler error: program as got fatal signal 11
A patched version of the assembler is available by anonymous ftp from altdorf.ai.mit.edu as the file `archive/cph/hpux-8.0-assembler'. If you have HP software support, the patch can also be obtained directly from HP, as described in the following note:

This is the patched assembler, to patch SR#1653-010439, where the assembler aborts on floating point constants. The bug is not really in the assembler, but in the shared library version of the function "cvtnum(3c)". The bug on "cvtnum(3c)" is SR#4701-078451. Anyway, the attached assembler uses the archive library version of "cvtnum(3c)" and thus does not exhibit the bug.
This patch is also known as PHCO_4484.
On some systems, the fixproto shell script triggers a bug in the system shell. If you encounter this problem, upgrade your operating system or use BASH (the GNU shell) to run fixproto.
Some versions of the Pyramid C compiler reportedly cannot compile GCC; the failure shows up when compiling the function muldi3 in file `libgcc2.c'.
You may be able to succeed by getting GCC version 1, installing it, and using it to compile GCC version 2. The bug in the Pyramid C compiler does not seem to affect GCC version 1.
On the Intel Paragon (an i860 machine) running operating system version 1.0, you will get warnings or errors about redefinition of va_arg when you build GCC.
If this happens, then you need to link most programs with the library `iclib.a'. You must also modify `stdio.h' as follows: before the lines
#if defined(__i860__) && !defined(_VA_LIST)
#include <va_list.h>

insert the line

#if __PGC__

and after the lines

extern int vprintf(const char *, va_list );
extern int vsprintf(char *, const char *, va_list );
#endif

insert the line

#endif /* __PGC__ */
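Taken together, and assuming the file otherwise matches the fragments quoted above, the edited region of `stdio.h' ends up looking like this:

#if __PGC__
#if defined(__i860__) && !defined(_VA_LIST)
#include <va_list.h>
/* ... the system's own declarations ... */
extern int vprintf(const char *, va_list );
extern int vsprintf(char *, const char *, va_list );
#endif
#endif /* __PGC__ */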
These problems don't exist in operating system version 1.1.
On some systems, running fixproto fails with the message:

./fixproto: sh internal 1K buffer overflow
To fix this, change the first line of the fixproto script to look like:
#!/bin/ksh
7.3 Cross-Compiler Problems
You may run into problems with cross compilation on certain machines, for several reasons.
On some targets, floating point constants are written into the assembler output as strings of integer constants. The compiler writes these integer constants by examining the floating point value as an integer and printing that integer, because this is simple to write and independent of the details of the floating point representation. But this does not work if the compiler is running on a different machine with an incompatible floating point format, or even a different byte-ordering.
In addition, correct constant folding of floating point values requires representing them in the target machine's format. (The C standard does not quite require this, but in practice it is the only way to win.)
It is now possible to overcome these problems by defining macros such as REAL_VALUE_TYPE. But doing so is a substantial amount of work for each target machine.
See section 17.18 Cross Compilation and Floating Point.
7.4 Interoperation
This section lists various difficulties encountered in using GNU C or GNU C++ together with other compilers or with the assemblers, linkers, libraries and debuggers on certain systems.
You cannot link object files compiled by GNU C++ with object files compiled by most other C++ compilers, because GNU C++ uses a different name encoding ("mangling") for C++ symbols. This effect is intentional, to protect you from more subtle problems. Compilers differ as to many internal details of C++ implementation, including: how class instances are laid out, how multiple inheritance is implemented, and how virtual function calls are handled. If the name encoding were made the same, your programs would link against libraries provided from other compilers--but the programs would then crash when run. Incompatible libraries are then detected at link time, rather than at run time.
Many systems come with header files that won't work with GCC unless corrected by fixincludes. The corrected header files go in a new directory; GCC searches this directory before `/usr/include'. If you use `-I/usr/include', this tells GCC to search `/usr/include' earlier on, before the corrected headers. The result is that you get the uncorrected header files.
Instead, you should use these options (when compiling C programs):
-I/usr/local/lib/gcc-lib/target/version/include -I/usr/include
For C++ programs, GCC also uses a special directory that defines C++ interfaces to standard C subroutines. This directory is meant to be searched before other standard include directories, so that it takes precedence. If you are compiling C++ programs and specifying include directories explicitly, use this option first, then the two options above:
-I/usr/local/lib/g++-include
On the Sparc, GCC aligns every double on an 8-byte boundary, and it expects every double to be so aligned. The Sun compiler usually gives double values 8-byte alignment, with one exception: function arguments of type double may not be aligned. As a result, if a function compiled with Sun CC takes the address of an argument of type double and passes this pointer of type double * to a function compiled with GCC, dereferencing the pointer may cause a fatal signal.
One way to solve this problem is to compile your entire program with GNU CC. Another solution is to modify the function that is compiled with Sun CC to copy the argument into a local variable; local variables are always properly aligned. A third solution is to modify the function that uses the pointer to dereference it via the following function access_double instead of directly with `*':

inline double
access_double (double *unaligned_ptr)
{
  union d2i { double d; int i[2]; };
  union d2i *p = (union d2i *) unaligned_ptr;
  union d2i u;

  u.i[0] = p->i[0];
  u.i[1] = p->i[1];

  return u.d;
}
Storing into the pointer can be done likewise with the same union.
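For instance, a store counterpart (the name store_double is ours, not part of any library) could be written the same way:

inline void
store_double (double *unaligned_ptr, double value)
{
  union d2i { double d; int i[2]; };
  union d2i *p = (union d2i *) unaligned_ptr;
  union d2i u;

  u.d = value;
  p->i[0] = u.i[0];   /* copy the value word by word */
  p->i[1] = u.i[1];
}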
The malloc function in the `libmalloc.a' library on Sun systems may allocate memory that is only 4 byte aligned. Since GCC on the Sparc assumes that doubles are 8 byte aligned, this may result in a fatal signal if doubles are stored in memory allocated by the `libmalloc.a' library.

The solution is not to use the `libmalloc.a' library. Use instead malloc and related functions from `libc.a'; they do not have this problem.
If you see undefined symbols _dlclose, _dlsym or _dlopen when linking, compile and link against the file `mit/util/misc/dlsym.c' from the MIT version of X windows.
On HP-UX version 9.01, the HP compiler cc does not compile GCC correctly. We do not yet know why. However, GCC compiled on earlier HP-UX versions works properly on HP-UX 9.01 and can compile itself properly on 9.01.
On HP-UX, the stack cannot be unwound through functions that use alloca or variable-size arrays. This is because GCC doesn't generate HP-UX unwind descriptors for such functions. It may even be impossible to generate them.
You may see warnings such as:

(warning) Use of GR3 when frame >= 8192 may cause conflict.
These warnings are harmless and can be safely ignored.
IBM has produced a fixed version of the assembler. The upgraded assembler unfortunately was not included in any of the AIX 3.2 update PTF releases (3.2.2, 3.2.3, or 3.2.3e). Users of AIX 3.1 should request PTF U403044 from IBM and users of AIX 3.2 should request PTF U416277. See the file `README.RS6000' for more details on these updates.
You can test for the presence of a fixed assembler by using the command

as -u < /dev/null

If the command exits normally, the assembler fix is already installed. If the assembler complains that `-u' is an unknown flag, you need to order the fix.
A program fragment that declares a symbol extern, uses it, and later defines it static, like this:

extern int foo;
... foo ...
static int foo;

will cause the linker to report an undefined symbol foo.

Although this behavior differs from most other systems, it is not a bug because redefining an extern variable as static is undefined in ANSI C.
On some systems, the declaration of size_t in `sys/types.h' conflicts with other headers. You should change `sys/types.h' by adding these lines around the definition of size_t:

#ifndef _SIZE_T
#define _SIZE_T
actual typedef here
#endif
On Ultrix, the Fortran compiler expects registers 2 through 5 to be saved by function calls; GCC uses the same convention as the Ultrix C compiler, which does not save them. You can use these options to produce code compatible with the Fortran compiler:

-fcall-saved-r2 -fcall-saved-r3 -fcall-saved-r4 -fcall-saved-r5
-L/usr/local/lib/gcc-lib/we32k-att-sysv/2.8.1 -lgcc -lc_s
The first specifies where to find the library `libgcc.a' specified with the `-lgcc' option.
GCC does linking by invoking ld, just as cc does, and there is no reason why it should matter which compilation program you use to invoke ld. If someone tracks this problem down, it can probably be fixed easily.
The C library functions ecvt, fcvt and gcvt are buggy on some systems: given valid floating point numbers, they sometimes print `NaN'.
Or use the `-noasmopt' option when you compile GCC with itself, and then again when you compile your program. (This is a temporary kludge to turn off assembler optimization on Irix.) If this proves to be what you need, edit the assembler spec in the file `specs' so that it unconditionally passes `-O0' to the assembler, and never passes `-O2' or `-O3'.
7.5 Problems Compiling Certain Programs
Certain programs have problems compiling.
#ifdef __STDC__
#define NeedFunctionPrototypes 0
#endif
-traditional -Dvolatile=__volatile__ -I/usr/include/sun -I/usr/ucbinclude -fpcc-struct-return

most of which are unnecessary with GCC 2.4.5 and newer versions. You can make a properly working Perl by setting ccflags to `-fwritable-strings' (implied by the `-traditional' in the original options) and cppflags to empty in `config.sh', then typing `./doSH; make depend; make'.
When compiling certain programs, GCC itself may run short of memory because the system's own malloc manages memory poorly. You can prevent this problem by linking GCC with the GNU malloc (which thus replaces the malloc that comes with the system). GNU malloc is available as a separate package, and also in the file `src/gmalloc.c' in the GNU Emacs 19 distribution.
If you have installed GNU malloc as a separate library package, use this option when you relink GCC:
MALLOC=/usr/local/lib/libgmalloc.a
Alternatively, if you have compiled `gmalloc.c' from Emacs 19, copy the object file to `gmalloc.o' and use this option when you relink GCC:
MALLOC=gmalloc.o
7.6 Incompatibilities of GCC
There are several noteworthy incompatibilities between GNU C and most existing (non-ANSI) versions of C. The `-traditional' option eliminates many of these incompatibilities, but not all, by telling GNU C to behave like the other C compilers.
GCC normally makes string constants read-only, and if several identical-looking string constants are used, GCC stores only one copy of the string.

One consequence is that you cannot call mktemp with a string constant argument. The function mktemp always alters the string its argument points to.

Another consequence is that sscanf does not work on some systems when passed a string constant as its format control string or input. This is because sscanf incorrectly tries to write into the string constant. Likewise fscanf and scanf.
The best solution to these problems is to change the program to use char-array variables with initialization strings for these purposes instead of string constants. But if this is not possible, you can use the `-fwritable-strings' flag, which directs GCC to handle string constants the same way most C compilers do. `-traditional' also has this effect, among others.
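For example (the declaration of mktemp is only sketched here, since its header varies between systems), the commented-out call below may crash under the default settings, while the second form is safe:

extern char *mktemp (char *);   /* declaration sketched; see your system headers */

void
make_temp_name (void)
{
  char name[] = "/tmp/fooXXXXXX";   /* writable array copy of the string */

  /* mktemp ("/tmp/fooXXXXXX");        may fault: the constant can be read-only */
  mktemp (name);                   /* safe: mktemp scribbles on the array */
}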
GCC treats the constant expression -2147483648 as positive. This is because 2147483648 cannot fit in the type int, so (following the ANSI C rules) its data type is unsigned long int. Negating this value yields 2147483648 again.
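If you really want the most negative int, write the expression so that no single constant overflows; a sketch (assuming a 32-bit int):

#include <limits.h>

int most_negative1 = -2147483647 - 1;   /* computed without writing 2147483648 */
int most_negative2 = INT_MIN;           /* same value, taken from <limits.h> */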
GCC does not substitute macro arguments when they appear inside of string constants. For example, the following macro in GCC

#define foo(a) "a"

will produce output "a" regardless of what the argument a is.
The `-traditional' option directs GCC to handle such cases (among others) in the old-fashioned (non-ANSI) fashion.
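Under the ANSI rules that plain GCC follows, the way to get the text of the argument as a string is the `#' stringizing operator; a sketch (the macro name str is ours):

#define foo(a) "a"    /* in ANSI C this always expands to the literal string "a" */
#define str(a) #a     /* stringizes the actual argument */

const char *s = str (hello);   /* s points to "hello" */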
When you use setjmp and longjmp, the only automatic variables guaranteed to remain valid are those declared volatile. This is a consequence of automatic register allocation. Consider this function:

jmp_buf j;

foo ()
{
  int a, b;

  a = fun1 ();
  if (setjmp (j))
    return a;

  a = fun2 ();
  /* longjmp (j) may occur in fun3. */
  return a + fun3 ();
}
Here a may or may not be restored to its first value when the longjmp occurs. If a is allocated in a register, then its first value is restored; otherwise, it keeps the last value stored in it.
If you use the `-W' option with the `-O' option, you will get a warning when GCC thinks such a problem might be possible.
The `-traditional' option directs GNU C to put variables in the stack by default, rather than in registers, in functions that call setjmp. This results in the behavior found in traditional C compilers.
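Applying that to the example above, a hedged sketch (fun1, fun2 and fun3 are assumed to be defined elsewhere): declaring a volatile keeps it in memory, so the value stored by fun2 survives the longjmp:

#include <setjmp.h>

jmp_buf j;
int fun1 (void), fun2 (void), fun3 (void);

int
foo (void)
{
  volatile int a;

  a = fun1 ();
  if (setjmp (j))
    return a;            /* sees whatever was stored in a most recently */
  a = fun2 ();
  return a + fun3 ();    /* longjmp (j) may occur inside fun3 */
}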
GCC does not let you put a preprocessing directive such as `#define' in the middle of a macro call's arguments. For example, this fragment is rejected:

foobar (
#define luser
        hack)
ANSI C does not permit such a construct. It would make sense to support it when `-traditional' is used, but it is too much work to implement.
In GCC, an extern declaration that appears within a block applies only to that block. In some other C compilers, an extern declaration affects all the rest of the file even if it happens within a block.

The `-traditional' option directs GNU C to treat all extern declarations as global, like traditional compilers.
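A small sketch of the difference (the variable name is made up):

void
f (void)
{
  extern int counter;   /* ANSI C: this declaration is visible only inside f */
  counter++;
}

void
g (void)
{
  /* In ANSI C, counter is not declared here; a traditional compiler
     (or GCC with `-traditional') would still remember the declaration
     made inside f.  */
}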
In traditional C, you can combine long, etc., with a typedef name, as shown here:

typedef int foo;
typedef long foo bar;

In ANSI C, this is not allowed: long and other type modifiers require an explicit int. Because this criterion is expressed by Bison grammar rules rather than C code, the `-traditional' flag cannot alter it.
GCC complains about unterminated character constants inside of preprocessing conditionals that fail. Some programs have English comments enclosed in conditionals that are guaranteed to fail; if these comments contain apostrophes, GCC will probably report an error. For example, this code would produce an error:

#if 0
You can't expect this to work.
#endif
The best solution to such a problem is to put the text into an actual C comment delimited by `/*...*/'. However, `-traditional' suppresses these error messages.
Many user programs contain the declaration `long time ();'. In the past, the system header files on many systems did not actually declare time, so it did not matter what type your program declared it to return. But in systems with ANSI C headers, time is declared to return time_t, and if that is not the same as long, then `long time ();' is erroneous.

The solution is to change your program to use time_t as the return type of time.
When compiling functions that return type float, PCC converts it to a double. GCC actually returns a float. If you are concerned with PCC compatibility, you should declare your functions to return double; you might as well say what you mean.

When returning structures and unions, GCC's convention also differs from PCC's on many targets. The method used by GCC is as follows: a structure or union which is 1, 2, 4 or 8 bytes long is returned like a scalar. A structure or union with any other size is stored into an address supplied by the caller (usually in a special, fixed register, but on some machines it is passed on the stack). The machine-description macros STRUCT_VALUE and STRUCT_INCOMING_VALUE tell GCC where to pass this address.
By contrast, PCC on most target machines returns structures and unions of any size by copying the data into an area of static storage, and then returning the address of that storage as if it were a pointer value. The caller must copy the data from that memory area to the place where the value is wanted. GCC does not use this method because it is slower and nonreentrant.
On some newer machines, PCC uses a reentrant convention for all structure and union returning. GCC on most of these machines uses a compatible convention when returning structures and unions in memory, but still returns small structures and unions in registers.
You can tell GCC to use a compatible convention for all structure and union returning with the option `-fpcc-struct-return'.
A preprocessing token is a preprocessing number if it begins with a digit and is followed by letters, underscores, digits, periods and `e+', `e-', `E+', or `E-' character sequences.
To make the above program fragment valid, place whitespace in front of the minus sign. This whitespace will end the preprocessing number.
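For example (the constant is made up), `0x123e-1' is scanned as a single preprocessing number, because the `e-' sequence continues it, and is then rejected; whitespace separates it into three ordinary tokens:

/* int bad = 0x123e-1;      one preprocessing number: not a valid constant */
int good = 0x123e - 1;   /* three tokens: 0x123e, minus, 1 */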
7.7 Fixed Header Files
GCC needs to install corrected versions of some system header files. This is because most target systems have some header files that won't work with GCC unless they are changed. Some have bugs, some are incompatible with ANSI C, and some depend on special features of other compilers.
Installing GCC automatically creates and installs the fixed header files, by running a program called fixincludes (or for certain targets an alternative such as fixinc.svr4). Normally, you don't need to pay attention to this. But there are cases where it doesn't do the right thing automatically.
On SunOS 4, some system header files are reached through symbolic links that vary with the machine model. The programs that fix the header files do not understand this special way of using symbolic links; therefore, the directory of fixed header files is good only for the machine model used to build it.
In SunOS 4, only programs that look inside the kernel will notice the difference between machine models. Therefore, for most purposes, you need not be concerned about this.
It is possible to make separate sets of fixed header files for the different machine models, and arrange a structure of symbolic links so as to use the proper set, but you'll have to do this by hand.
On some systems, problems with the existing header files or tools cause the fixincludes script to fail.
This means you will encounter problems due to bugs in the system header files. It may be no comfort that they aren't GCC's fault, but it does mean that there's nothing for us to do about them.
7.8 Standard Libraries
GCC by itself attempts to be what the ISO/ANSI C standard calls a conforming freestanding implementation. This means all ANSI C language features are available, as well as the contents of `float.h', `limits.h', `stdarg.h', and `stddef.h'. The rest of the C library is supplied by the vendor of the operating system. If that C library doesn't conform to the C standards, then your programs might get warnings (especially when using `-Wall') that you don't expect.
For example, the sprintf function on SunOS 4.1.3 returns char * while the C standard says that sprintf returns an int. The fixincludes program could make the prototype for this function match the Standard, but that would be wrong, since the function will still return char *.

If you need a Standard compliant library, then you need to find one, as GCC does not provide one. The GNU C library (called glibc) has been ported to a number of operating systems, and provides ANSI/ISO, POSIX, BSD and System V compatibility. You could also ask your operating system vendor if newer libraries are available.
7.9 Disappointments and Misunderstandings
These problems are perhaps regrettable, but we don't know any practical way around them.
Certain local variables aren't recognized by debuggers when you compile with optimization. This occurs because sometimes GCC optimizes the variable out of existence. There is no way to tell the debugger how to compute the value such a variable "would have had", and it is not clear that would be desirable anyway. So GCC simply does not mention the eliminated variable when it writes debugging information.
You have to expect a certain amount of disagreement between the executable and your source code, when you use optimization.
GCC reports an error for code like this:

int foo (struct mumble *);

struct mumble { ... };

int foo (struct mumble *x)
{ ... }
This code really is erroneous, because the scope of struct mumble in the prototype is limited to the argument list containing it. It does not refer to the struct mumble defined with file scope immediately below--they are two unrelated types with similar names in different scopes.

But in the definition of foo, the file-scope type is used because that is available to be inherited. Thus, the definition and the prototype do not match, and you get an error.
This behavior may seem silly, but it's what the ANSI standard specifies.
It is easy enough for you to make your code work by moving the definition of struct mumble above the prototype. It's not worth being incompatible with ANSI C just to avoid an error for the example shown above.
Accesses to bitfields, even in volatile objects, may be implemented by reading or writing a larger unit of memory, so you cannot rely on the exact amount of memory that is touched. If you care about controlling the amount of memory that is accessed, use volatile but do not use bitfields.
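For instance, a memory-mapped status register might be read through a volatile scalar of the exact width you need (the address and layout here are invented):

#define DEVICE_STATUS (*(volatile unsigned int *) 0xFFFF0004)

unsigned int
device_ready (void)
{
  /* One full-width read of the register, then masking in the CPU;
     a bitfield would leave the access width up to the compiler.  */
  return DEVICE_STATUS & 0x1;
}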
If new system header files are installed, nothing automatically arranges to update the corrected header files. You will have to reinstall GCC to fix the new header files. More specifically, go to the build directory and delete the files `stmp-fixinc' and `stmp-headers', and the subdirectory include; then do `make install' again.
On some machines, tests of exact equality between floating point values can give paradoxical results. On 68000 and x86 systems, for instance, the floating point registers hold a few more bits of precision than fit in a double in memory. Compiled code moves values between memory and floating point registers at its convenience, and moving them into memory truncates them.
You can partially avoid this problem by using the `-ffloat-store' option (see section 2.8 Options That Control Optimization).
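A sketch of the kind of code that is affected: without `-ffloat-store', the recomputed quotient below may keep extra bits in a floating register while the stored one has been truncated to a double, so the comparison can fail:

int
quotient_is_stable (double x, double y)
{
  double q = x / y;     /* stored: rounded to double precision */
  return q == x / y;    /* may compare unequal when the right-hand side
                           is held in a wider register */
}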
If the code is rewritten to use the ANSI standard `stdarg.h' method of variable arguments, and the prototype is in scope at the time of the call, everything will work fine.
7.10 Common Misunderstandings with GNU C++
C++ is a complex language and an evolving one, and its standard definition (the ISO C++ standard) was only recently completed. As a result, your C++ compiler may occasionally surprise you, even when its behavior is correct. This section discusses some areas that frequently give rise to questions of this sort.
7.10.1 Declare and Define Static Members -- Static member declarations are not definitions.
7.10.2 Temporaries May Vanish Before You Expect -- Temporaries may vanish before you expect.
7.10.3 Implicit Copy-Assignment for Virtual Bases -- Copy assignment operators copy virtual bases twice.
7.10.1 Declare and Define Static Members
When a class has static data members, it is not enough to declare the static member; you must also define it. For example:
class Foo
{
  ...
  void method();
  static int bar;
};

This declaration only establishes that the class Foo has an int named Foo::bar, and a member function named Foo::method. But you still need to define both method and bar elsewhere. According to the draft ANSI standard, you must supply an initializer in one (and only one) source file, such as:

int Foo::bar = 0;

Other C++ compilers may not correctly implement the standard behavior. As a result, when you switch to g++ from one of these compilers, you may discover that a program that appeared to work correctly in fact does not conform to the standard: g++ reports as undefined symbols any static data members that lack definitions.
7.10.2 Temporaries May Vanish Before You Expect
It is dangerous to use pointers or references to portions of a temporary object. The compiler may very well delete the object before you expect it to, leaving a pointer to garbage. The most common place where this problem crops up is in classes like string classes, especially ones that define a conversion function to type char * or const char *--which is one reason why the standard string class requires you to call the c_str member function. However, any class that returns a pointer to some internal structure is potentially subject to this problem.

For example, a program may use a function strfunc that returns string objects, and another function charfunc that operates on pointers to char:

string strfunc ();
void charfunc (const char *);

void
f ()
{
  const char *p = strfunc().c_str();
  ...
  charfunc (p);
  ...
  charfunc (p);
}

In this situation, it may seem reasonable to save a pointer to the C string returned by the c_str member function and use that rather than call c_str repeatedly. However, the temporary string created by the call to strfunc is destroyed after p is initialized, at which point p is left pointing to freed memory.
Code like this may run successfully under some other compilers, particularly obsolete cfront-based compilers that delete temporaries along with normal local variables. However, the GNU C++ behavior is standard-conforming, so if your program depends on late destruction of temporaries it is not portable.
The safe way to write such code is to give the temporary a name, which forces it to remain until the end of the scope of the name. For example:
const string& tmp = strfunc ();
charfunc (tmp.c_str ());
7.10.3 Implicit Copy-Assignment for Virtual Bases
When a base class is virtual, only one subobject of the base class belongs to each full object. Also, the constructors and destructors are invoked only once, and called from the most-derived class. However, the behavior of such objects is unspecified when they are assigned. For example:

struct Base{
  char *name;
  Base(char *n) : name(strdup(n)){}
  Base& operator= (const Base& other){
    free (name);
    name = strdup (other.name);
    return *this;
  }
};

struct A:virtual Base{
  int val;
  A():Base("A"){}
};

struct B:virtual Base{
  int bval;
  B():Base("B"){}
};

struct Derived:public A, public B{
  Derived():Base("Derived"){}
};

void func(Derived &d1, Derived &d2)
{
  d1 = d2;
}
The C++ standard specifies that `Base::Base' is only called once when constructing or copy-constructing a Derived object. It is unspecified whether `Base::operator=' is called more than once when the implicit copy-assignment for Derived objects is invoked (as it is inside `func' in the example).
g++ implements the "intuitive" algorithm for copy-assignment: assign all direct bases, then assign all members. In that algorithm, the virtual base subobject can be encountered many times. In the example, copying proceeds in the following order: `val', `name' (via strdup), `bval', and `name' again.
If application code relies on copy-assignment, a user-defined copy-assignment operator removes any uncertainties. With such an operator, the application can define whether and how the virtual base subobject is assigned.
7.11 Caveats of using protoize
The conversion programs protoize and unprotoize can sometimes change a source file in a way that won't work unless you rearrange it.

protoize can insert references to a type name or type tag before the definition, or in a file where they are not defined.
If this happens, compiler error messages should show you where the new references are, so fixing the file by hand is straightforward.
There are some constructs that protoize cannot figure out. For example, it can't determine argument types for declaring a pointer-to-function variable; this you must do by hand. protoize inserts a comment containing `???' each time it finds such a variable; so you can find all such variables by searching for this string. ANSI C does not require declaring the argument types of pointer-to-function types.
unprotoize can easily introduce bugs. If the program relied on prototypes to bring about conversion of arguments, these conversions will not take place in the program without prototypes.

One case in which you can be sure unprotoize is safe is when you are removing prototypes that were made with protoize; if the program worked before without any prototypes, it will work again without them.
You can find all the places where this problem might occur by compiling the program with the `-Wconversion' option. It prints a warning whenever an argument is converted.
protoize cannot get the argument types for a function whose definition was not actually compiled due to preprocessing conditionals. When this happens, protoize changes nothing in regard to such a function. protoize tries to detect such instances and warn about them.

You can generally work around this problem by using protoize step by step, each time specifying a different set of `-D' options for compilation, until all of the functions have been converted. There is no automatic way to verify that you have got them all, however.
If you plan on converting source files which contain such code, it is recommended that you first make sure that each conditionally compiled region of source code which contains an alternative function header also contains at least one additional follower token (past the final right parenthesis of the function header). This should circumvent the problem.
unprotoize can become confused when trying to convert a function definition or declaration which contains a declaration for a pointer-to-function formal argument which has the same name as the function being defined or declared. We recommend that you avoid such choices of formal parameter names.
7.12 Certain Changes We Don't Want to Make
This section lists changes that people frequently request, but which we do not make because we think GCC is better without them.
Checking the number and type of arguments to a function which has an old-fashioned definition and no prototype. Such a feature would work only occasionally--only for calls that appear in the same file as the called function, following the definition. The only way to check all calls reliably is to add a prototype for the function. But adding a prototype eliminates the motivation for this feature. So the feature is not worthwhile.
Warning about using an expression whose type is signed as a shift count. Shift count operands are probably signed more often than unsigned. Warning about this would cause far more annoyance than good.
Warning about assigning a signed value to an unsigned variable. Such assignments must be very common; warning about them would cause more annoyance than good.
Warning about unreachable code. It's very common to have unreachable code in machine-generated programs. For example, this happens normally in some files of GNU C itself.
Warning when a non-void function value is ignored. Coming as I do from a Lisp background, I balk at the idea that there is something dangerous about discarding a value. There are functions that return values which some callers may find useful; it makes no sense to clutter the program with a cast to void whenever the value isn't useful.
Assuming (for optimization) that the address of an external symbol is never zero. This assumption is false on certain systems when `#pragma weak' is used.
Making `-fshort-enums' the default. This would cause storage layout to be incompatible with most other C compilers. And it doesn't seem very important, given that you can get the same result in other ways. The case where it matters most is when the enumeration-valued object is inside a structure, and in that case you can specify a field width explicitly.
The ANSI C standard leaves it up to the implementation whether a bitfield declared plain int is signed or not. This in effect creates two alternative dialects of C.
The GNU C compiler supports both dialects; you can specify the signed dialect with `-fsigned-bitfields' and the unsigned dialect with `-funsigned-bitfields'. However, this leaves open the question of which dialect to use by default.
Currently, the preferred dialect makes plain bitfields signed, because this is simplest. Since int is the same as signed int in every other context, it is cleanest for them to be the same in bitfields as well.
Some computer manufacturers have published Application Binary Interface standards which specify that plain bitfields should be unsigned. It is a mistake, however, to say anything about this issue in an ABI. This is because the handling of plain bitfields distinguishes two dialects of C. Both dialects are meaningful on every type of machine. Whether a particular object file was compiled using signed bitfields or unsigned is of no concern to other object files, even if they access the same bitfields in the same data structures.
A given program is written in one or the other of these two dialects. The program stands a chance to work on most any machine if it is compiled with the proper dialect. It is unlikely to work at all if compiled with the wrong dialect.
Many users appreciate the GNU C compiler because it provides an environment that is uniform across machines. These users would be inconvenienced if the compiler treated plain bitfields differently on certain machines.
Occasionally users write programs intended only for a particular machine type. On these occasions, the users would benefit if the GNU C compiler were to support by default the same dialect as the other compilers on that machine. But such applications are rare. And users writing a program to run on more than one type of machine cannot possibly benefit from this kind of compatibility.
This is why GCC does and will treat plain bitfields in the same fashion on all types of machines (by default).
There are some arguments for making bitfields unsigned by default on all machines. If, for example, this becomes a universal de facto standard, it would make sense for GCC to go along with it. This is something to be considered in the future.
(Of course, users strongly concerned about portability should indicate explicitly in each bitfield whether it is signed or not. In this way, they write programs which have the same meaning in both C dialects.)
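A sketch of that portable style:

struct flags
{
  signed int value : 4;    /* explicitly signed: range -8 .. 7 */
  unsigned int mask : 4;   /* explicitly unsigned: range 0 .. 15 */
};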
Undefining __STDC__ when `-ansi' is not used.
Currently, GCC defines __STDC__ as long as you don't use `-traditional'. This provides good results in practice.

Programmers normally use conditionals on __STDC__ to ask whether it is safe to use certain features of ANSI C, such as function prototypes or ANSI token concatenation. Since plain `gcc' supports all the features of ANSI C, the correct answer to these questions is "yes".
Some users try to use __STDC__ to check for the availability of certain library facilities. This is actually incorrect usage in an ANSI C program, because the ANSI C standard says that a conforming freestanding implementation should define __STDC__ even though it does not have the library facilities. `gcc -ansi -pedantic' is a conforming freestanding implementation, and it is therefore required to define __STDC__, even though it does not come with an ANSI C library.

Sometimes people say that defining __STDC__ in a compiler that does not completely conform to the ANSI C standard somehow violates the standard. This is illogical. The standard is a standard for compilers that claim to support ANSI C, such as `gcc -ansi'--not for other compilers such as plain `gcc'. Whatever the ANSI C standard says is relevant to the design of plain `gcc' without `-ansi' only for pragmatic reasons, not as a requirement.

GCC normally defines __STDC__ to be 1, and in addition defines __STRICT_ANSI__ if you specify the `-ansi' option.

On some hosts, system include files use a different convention, where __STDC__ is normally 0, but is 1 if the user specifies strict conformance to the C Standard. GCC follows the host convention when processing system include files, but when processing user files it follows the usual GNU C convention.
Undefining __STDC__ in C++.
Programs written to compile with C++-to-C translators get the value of __STDC__ that goes with the C compiler that is subsequently used. These programs must test __STDC__ to determine what kind of C preprocessor that compiler uses: whether they should concatenate tokens in the ANSI C fashion or in the traditional fashion.

These programs work properly with GNU C++ if __STDC__ is defined. They would not work otherwise.

In addition, many header files are written to provide prototypes in ANSI C but not in traditional C. Many of these header files can work without change in C++ provided __STDC__ is defined. If __STDC__ is not defined, they will all fail, and will all need to be changed to test explicitly for C++ as well.
Historically, GCC has not deleted "empty" loops under the assumption that the most likely reason you would put one in a program is to have a delay, so deleting them will not make real programs run any faster.
However, the rationale here is that optimization of a nonempty loop cannot produce an empty one, which holds for C but is not always the case for C++.
Moreover, with `-funroll-loops' small "empty" loops are already removed, so the current behavior is both sub-optimal and inconsistent and will change in the future.
It is never safe to depend on the order of evaluation of side effects. For example, a function call like this may very well behave differently from one compiler to another:
void func (int, int);
int i = 2;
func (i++, i++);

There is no guarantee (in either the C or the C++ standard language definitions) that the increments will be evaluated in any particular order. Either increment might happen first. func might get the arguments `2, 3', or it might get `3, 2', or even `2, 2'.
Strictly speaking, there is no prohibition in the ANSI C standard against allowing structures with volatile fields in registers, but it does not seem to make any sense and is probably not what you wanted to do. So the compiler will give an error message in this case.
7.13 Warning Messages and Error Messages
The GNU compiler can produce two kinds of diagnostics: errors and warnings. Each kind has a different purpose:
Errors report problems that make it impossible to compile your program; GCC reports errors with the source file name and line number where the problem is apparent.

Warnings may indicate danger points where you should check to make sure that your program really does what you intend; or the use of obsolete features; or the use of nonstandard features of GNU C or C++. Many warnings are issued only if you ask for them, with one of the `-W' options (for instance, `-Wall' requests a variety of useful warnings).
GCC always tries to compile your program if possible; it never gratuitously rejects a program whose meaning is clear merely because (for instance) it fails to conform to a standard. In some cases, however, the C and C++ standards specify that certain extensions are forbidden, and a diagnostic must be issued by a conforming compiler. The `-pedantic' option tells GCC to issue warnings in such cases; `-pedantic-errors' says to make them errors instead. This does not mean that all non-ANSI constructs get warnings or errors.
See section Options to Request or Suppress Warnings, for more detail on these and related command-line options.