This is the mail archive of the gdb@sources.redhat.com mailing list for the GDB project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: How big should bigcore.core be?


Mark Kettenis wrote:


If the latter is true, I really think we should disable this test by
default and only enable it on systems where we know we won't generate
ridiculously large files running the risk of crashing the machine.
For my work on GDB, I regularly use machines that are not mine, and
I'd really hate it if I crashed those.


As one data point, I can confirm from experience that bigcore can
crash an s390x-ibm-linux system hard (provided you run with memory
overcommit and no ulimits).

The cause of the crash is that the test allocates a couple of terabytes
of heap and then tries to core-dump them; the Linux kernel core-dump
routines do a full address-space page-table walk, which insists on
creating all *page tables* visited, if not yet present. For a couple of
terabytes of address space, you need a couple of gigabytes of page tables,
which are non-pageable kernel memory -- more real memory than my machine had ...
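The page-table arithmetic above can be sanity-checked with a quick back-of-the-envelope calculation; the numbers below assume x86-64-style 4 KiB pages and 512-entry (4 KiB) last-level page tables, which is in the right ballpark for s390x as well:

```python
# Rough estimate of last-level page-table overhead for a sparse
# virtual address range, assuming 4 KiB pages and 8-byte entries
# (so each 4 KiB page table maps 512 pages = 2 MiB of address space).

PAGE_SIZE = 4096                                 # bytes per page
ENTRIES_PER_TABLE = 512                          # 4096-byte table / 8-byte entries
SPAN_PER_TABLE = PAGE_SIZE * ENTRIES_PER_TABLE   # 2 MiB mapped per table

def page_table_bytes(virtual_bytes):
    """Memory consumed by the last-level page tables alone."""
    tables = -(-virtual_bytes // SPAN_PER_TABLE)  # ceiling division
    return tables * PAGE_SIZE

# A couple of terabytes of heap ...
heap = 2 * 2**40
print(page_table_bytes(heap) / 2**30, "GiB of page tables")  # -> 4.0 GiB
```

So touching every last-level table for 2 TiB of address space costs about 4 GiB of unswappable kernel memory, consistent with the crash described above (and this ignores the upper levels of the paging hierarchy, which add a little more).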

So it expands the page tables without actually allocating pages :-( This would also explain why, on amd64, it takes longer than I'd expect to write out a 3 MB physical (500 GB virtual) core file -> kernel bug.


Now, running with overcommit and no ulimits on a 64-bit Linux machine is something you shouldn't do anyway, but it's still
not nice to hit this problem just from running the gdb test
suite :-/
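For anyone who wants to check whether a box is exposed before running the test, a small Linux-only sketch (the function names here are mine, not anything from the test suite) that inspects the overcommit mode and the core-file ulimit could look like this:

```python
import resource

def describe_overcommit(mode):
    """Map /proc/sys/vm/overcommit_memory values to their meaning."""
    return {
        0: "heuristic overcommit (default)",
        1: "always overcommit",
        2: "strict accounting, no overcommit",
    }.get(mode, "unknown mode")

def check_linux_limits():
    # Read the kernel's overcommit policy (Linux-specific proc file).
    with open("/proc/sys/vm/overcommit_memory") as f:
        mode = int(f.read().strip())
    # RLIMIT_CORE bounds the size of core files this process may write.
    soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
    print("overcommit:", describe_overcommit(mode))
    print("core ulimit (soft):",
          "unlimited" if soft == resource.RLIM_INFINITY else soft)
```

Calling check_linux_limits() on a machine that reports "always overcommit" and an unlimited core ulimit is the dangerous combination described above.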

Andrew



