This is the mail archive of the gdb@sources.redhat.com mailing list for the GDB project.



Re: alloca is bad?


On Thu, Nov 09, 2000 at 09:20:32PM -0500, Christopher Faylor wrote:
> A patch that I recently submitted to gdb-patches used the alloca ()
> function to allocate memory.  I've been told in private email that I
> mustn't use alloca because "stack corruption problems are harder to
> debug" than heap corruption problems.
> 
> I was surprised by this assertion and so I thought I'd ask for a
> consensus here.  Should the use of alloca be deprecated in gdb?
> 
> It is my assertion that the amount of bookkeeping and overhead required
> to use malloc in a way that is analogous with alloca essentially
> nullifies the "harder to debug" argument.  malloc requires a free and
> many times, in gdb context, the only way to guarantee a free is with the
> use of the cleanup function.  Any time you add the complexity of
> something like 'cleanup()' (or whatever other mechanism you use to
> ensure that what you malloc is automatically freed) you can't claim to
> have reduced debugging problems.  Speaking of free, with alloca you
> don't have memory leaks.  With malloc, you do.
> 
> If alloca is bad, then why are local arrays and pointers to local
> variables and parameters ok?
> 
> Inquiring minds...
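
For context, the bookkeeping being compared looks roughly like the sketch
below.  This is only a sketch, assuming gdb's xmalloc, xfree, make_cleanup,
and do_cleanups helpers (as declared in gdb's internal headers); the function
names and bodies are made up.

    /* alloca: the buffer goes away automatically when the function returns.  */
    void
    copy_on_stack (const char *name)
    {
      char *buf = alloca (strlen (name) + 1);
      strcpy (buf, name);
      /* ... use buf ... */
    }

    /* malloc: needs a cleanup so the buffer is freed even if something
       below calls error () and longjmps out before an explicit free.  */
    void
    copy_on_heap (const char *name)
    {
      char *buf = xmalloc (strlen (name) + 1);
      struct cleanup *old_chain = make_cleanup (xfree, buf);

      strcpy (buf, name);
      /* ... use buf ... */

      do_cleanups (old_chain);
    }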

It depends on many things.  For any allocation that might involve a large
number of items (say, more than a page's worth), it is better to use malloc
than alloca.  This is due to several factors (a sketch of a safer pattern
follows the list):

   1)	Stack space is usually more limited than data space.  On some systems
	(e.g., Windows) the limit is set at link time, while on others it can
	be set by the parent shell (though usually the parent can't raise the
	limit beyond its own limit).  Note that this affects large automatic
	arrays with static bounds as well.

   2)	alloca has no error return value, so you can't fail gracefully.  The
	compiler just bumps the stack pointer by the requested amount and
	assumes the stack is big enough, so an oversized request silently
	tramples whatever memory lies beyond the stack.

   3)	Large allocations of more than a page can confuse the OS's method of
	growing the stack, which expects the stack to be extended a page at a
	time, normally by a function prologue.

   4)	There are third-party tools, such as Electric Fence, that check for
	out-of-bounds pointers on malloc'd memory; you can plug them in to get
	that checking for heap allocations, but there is no equivalent for
	alloca.
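
One way to keep alloca's convenience for the common case while sending
anything big to the heap is a guard like the sketch below (the one-page
threshold and the function are made up; error handling is omitted apart from
the malloc check):

    #include <alloca.h>		/* or a compiler builtin on some systems */
    #include <stdlib.h>
    #include <string.h>

    #define BIG_REQUEST 4096	/* roughly a page; the threshold is arbitrary */

    void
    process_name (const char *name)
    {
      size_t len = strlen (name) + 1;
      int on_heap = (len > BIG_REQUEST);
      char *buf;

      if (on_heap)
	{
	  buf = malloc (len);
	  if (buf == NULL)
	    return;		/* malloc can report failure; alloca cannot */
	}
      else
	buf = alloca (len);

      memcpy (buf, name, len);
      /* ... use buf ... */

      if (on_heap)
	free (buf);
    }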

The FSF GCC maintainers have gone through and tried to eliminate every alloca
whose size is based on the number of pseudo registers, basic blocks, or other
quantities that vary with the input, and this certainly seems to have reduced
the number of people complaining that their Windows compiler just died without
warning.
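
The sort of change being described looks roughly like this (an illustrative
before/after only; max_reg_num, rtx, and xmalloc are GCC's, the surrounding
code is invented):

    /* Before: the size depends on how many pseudo registers the input
       program happens to create, so a big enough input blows the stack.  */
    rtx *reg_map = alloca (max_reg_num () * sizeof (rtx));

    /* After: the same allocation from the heap, released explicitly.  */
    rtx *reg_map = xmalloc (max_reg_num () * sizeof (rtx));
    /* ... use reg_map ... */
    free (reg_map);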

-- 
Michael Meissner, Red Hat, Inc.
PMB 198, 174 Littleton Road #3, Westford, Massachusetts 01886, USA
Work:	  meissner@redhat.com		phone: +1 978-486-9304
Non-work: meissner@spectacle-pond.org	fax:   +1 978-692-4482
