This is the mail archive of the gdb-prs@sourceware.org mailing list for the GDB project.



[Bug gdb/19828] 7.11 regression: non-stop gdb -p <process from a container>: internal error


https://sourceware.org/bugzilla/show_bug.cgi?id=19828

--- Comment #4 from cvs-commit at gcc dot gnu.org <cvs-commit at gcc dot gnu.org> ---
The master branch has been updated by Pedro Alves <palves@sourceware.org>:

https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=95e94c3f18aaf34fadcd9a2a882ffe6147b9acc3

commit 95e94c3f18aaf34fadcd9a2a882ffe6147b9acc3
Author: Pedro Alves <palves@redhat.com>
Date:   Tue May 24 14:47:56 2016 +0100

    [Linux] Read vDSO range from /proc/PID/task/PID/maps instead of /proc/PID/maps

    ... as it's _much_ faster.

    Hacking the gdb.threads/attach-many-short-lived-threads.exp test to
    spawn thousands of threads instead of dozens to stress and debug
    timeout problems with gdb.threads/attach-many-short-lived-threads.exp,
    I saw that GDB would spend several seconds just reading the
    /proc/PID/smaps file, to determine the vDSO mapping range.  GDB opens
    and reads the whole file just once, and caches the result, but even
    that is too slow.  For example, with almost 8000 threads:

     $ ls /proc/3518/task/ | wc -l
     7906

    reading the /proc/PID/smaps file grepping for "vdso" takes over 15
    seconds:

     $ time cat /proc/3518/smaps | grep vdso
     7ffdbafee000-7ffdbaff0000 r-xp 00000000 00:00 0                          [vdso]

     real    0m15.371s
     user    0m0.008s
     sys     0m15.017s

    Looking around the web for hints, I found a nice description of the
    issue here:

    http://backtrace.io/blog/blog/2014/11/12/large-thread-counts-and-slow-process-maps/

    The problem is that /proc/PID/smaps wants to show the mappings as
    being thread stack, and that has the kernel iterating over all threads
    in the thread group, for each mapping.

    The fix is to use the "maps"/"smaps" files under /proc/PID/task/PID/
    instead of the /proc/PID/ ones, as the former don't mark thread stacks
    for all threads.

    That alone drops the timing to the millisecond range on my machine:

     $ time cat /proc/3518/task/3518/smaps | grep vdso
     7ffdbafee000-7ffdbaff0000 r-xp 00000000 00:00 0                          [vdso]

     real    0m0.150s
     user    0m0.009s
     sys     0m0.084s

    And since we only need the vdso mapping's address range, we can use the
    "maps" file instead of "smaps", which is even cheaper:

    /proc/PID/task/PID/maps :

     $ time cat /proc/3518/task/3518/maps | grep vdso
     7ffdbafee000-7ffdbaff0000 r-xp 00000000 00:00 0                          [vdso]

     real    0m0.027s
     user    0m0.000s
     sys     0m0.017s

    gdb/ChangeLog:
    2016-05-24  Pedro Alves  <palves@redhat.com>

        PR gdb/19828
        * linux-tdep.c (find_mapping_size): Delete.
        (linux_vsyscall_range_raw): Rewrite reading from
        /proc/PID/task/PID/maps directly instead of using
        gdbarch_find_memory_regions.

-- 
You are receiving this mail because:
You are on the CC list for the bug.
