This is the mail archive of the gdb@sources.redhat.com mailing list for the GDB project.
Re: GDB to C++ issue: deletion
- From: André Pönitz <apoenitz at trolltech dot com>
- To: gdb at sources dot redhat dot com
- Date: Fri, 1 Aug 2008 10:54:20 +0200
- Subject: Re: GDB to C++ issue: deletion
- References: <200807312204.m6VM4JQM007611@tully.CS.Berkeley.EDU>
On Friday 01 August 2008 00:04:19 Paul Hilfinger wrote:
> Alpar wrote:
> > A good example for that was the debate on xfree vs. delete. One of the
> > goals of STL standard containers is that using them the programmer will
> > almost never have to use 'delete'.
> This reminded me of an issue that I'd be curious to see discussed.
> The STL container classes are certainly convenient for decreasing
> memory leaks with little fuss, but they do so internally with
> individual frees of allocated items. GDB currently does much of its
> allocation using a region-based style (in the form of obstacks). This
> reduces leakage at some expense in safety (although I have not seen
> many dangling-pointer problems triggered by premature obstack releases in
> GDB). Allegedly, GDB's use of obstacks significantly speeds up
> allocation and (of course) deletion. I have certainly seen programs
> in which the need to free data structures individually adds
> significant cost*. The STL container classes provide for allocators,
> but I have no experience in how well they could be made to work in
> duplicating GDB's current performance in this area.
> I'd be interested in hearing the thoughts of the C++ lobby.
0. It's the wrong question ;-) It should not be "How do I translate the
code in lines 30 through 42 from C to C++?" but "The C code in lines
30 through 42 solves problem X. How would I solve X in C++?"
(yes, custom allocators might be a solution to certain generic
allocation performance problems, but the solution more often lies in
choosing the right algorithm or container in the first place.)
1. The problem of finding alternatives does not exist ;-) If someone feels
- or, better, has proof - that good old C-style code provides the best
performance in some part of the code, and that part of the code
is on the critical path, well, then just leave that piece of code as it is.
Almost all C code is valid C++.
2. Even on this level, C++ might help to improve code readability and
maintenance by using e.g. helper classes with destructors etc. that
would translate to exactly the same machine code yet be more compact
on the source side.
3. Micro-optimizations. They are nice for improving the performance
of a piece of code _after_ one has come to the conclusion that a certain
approach is conceptually the best and already uses the theoretically best
(or at least reasonable) data structures for the job. It's not uncommon to
gain a factor of 2 to 10 just by doing micro-optimizations, gaining half a
percent at a time. So they are a valuable tool in the tool chest. However,
using them is a waste of resources if it's based on the wrong algorithm or
data structure.
Now, C code typically starts with linked lists as containers, improves
a bit on that, then applies micro-optimizations in the "user code",
and then gets stuck at this level, as refactoring micro-optimized
code is difficult, especially when the optimizations are inlined in the user
code and the algorithms/containers are tweaked away from "standard".
With C++ one typically starts at least with the algorithm or container
that promises good scalability. That's easy, because "they are there".
Switching later is also comparatively easy, sometimes it's even sufficient
just to replace a single line to switch, say, from a std::list to a std::vector
or such. So C++ code is much more likely to use the right container
for the task, and using the wrong container typically costs a factor of "N",
not of 2 or 10.
Incidentally, I think that the currently existing gdb (and other GNU core
tools) nicely demonstrate the problem from a user's perspective: It does
not scale well, and it has much worse performance than some competitors.
Of course, multi-platform abstraction has a price, but using e.g. quadratic
algorithms for an O(n) or O(n log n) task implies practical limits; see e.g.
http://sourceware.org/ml/binutils/2004-01/msg00476.html for an example.
Of course, every now and then a Guru-level coder shows up, replaces
one central trouble maker with something very cool, more efficient,
even more hand-crafted, that happens to work well without being
understood by anyone else. Then the Guru leaves, and the code becomes
a "no go" area for mere mortals...