This is the mail archive of the gdb-prs@sourceware.org mailing list for the GDB project.


[Bug gdb/19828] 7.11 regression: non-stop gdb -p <process from a container>: internal error


https://sourceware.org/bugzilla/show_bug.cgi?id=19828

--- Comment #6 from cvs-commit at gcc dot gnu.org <cvs-commit at gcc dot gnu.org> ---
The master branch has been updated by Pedro Alves <palves@sourceware.org>:

https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=774113b02f41ded4d9ba4d18571ee5024312ad1b

commit 774113b02f41ded4d9ba4d18571ee5024312ad1b
Author: Pedro Alves <palves@redhat.com>
Date:   Tue May 24 14:47:57 2016 +0100

    [Linux] Optimize PID -> struct lwp_info lookup

    Hacking the gdb.threads/attach-many-short-lived-threads.exp test to
    spawn thousands of threads instead of dozens, and running gdb under
    perf, I saw that GDB was spending most of the time in find_lwp_pid:

       - captured_main
          - 93.61% catch_command_errors
             - 87.41% attach_command
                - 87.40% linux_nat_attach
                   - 87.40% linux_proc_attach_tgid_threads
                      - 82.38% attach_proc_task_lwp_callback
                         - 81.01% find_lwp_pid
                              5.30% ptid_get_lwp
                            + 0.10% ptid_lwp_p
                         + 0.64% add_thread
                         + 0.26% set_running
                         + 0.24% set_executing
                           0.12% ptid_get_lwp
                         + 0.01% ptrace
                         + 0.01% add_lwp

    attach_proc_task_lwp_callback is called once for each LWP that we
    attach to, found by listing the /proc/PID/task/ directory.  In turn,
    attach_proc_task_lwp_callback calls find_lwp_pid to check whether the
    LWP we're about to try to attach to is already known.  Since
    find_lwp_pid does a linear walk over the whole LWP list, this becomes
    quadratic.  We do the /proc/PID/task/ listing until we get two
    iterations in a row where we find no new threads.  So the second and
    following times we walk the /proc/PID/task/ dir, we're going to take
    an even worse find_lwp_pid hit.
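
    A minimal standalone sketch of that linear walk (illustrative names and
    types, not GDB's actual code, which keys on a ptid rather than a plain
    int):

        #include <stddef.h>

        struct lwp_info
        {
          int lwpid;              /* kernel thread id from /proc/PID/task/ */
          struct lwp_info *next;  /* singly-linked list of all known LWPs */
        };

        static struct lwp_info *lwp_list;

        /* O(n) lookup; calling this once per LWP being attached makes
           attaching to n threads O(n^2) overall.  */
        static struct lwp_info *
        find_lwp_by_lwpid (int lwpid)
        {
          struct lwp_info *lp;

          for (lp = lwp_list; lp != NULL; lp = lp->next)
            if (lp->lwpid == lwpid)
              return lp;
          return NULL;
        }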

    Fix this by adding a hash table keyed by LWP PID, for fast lookup.
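
    A sketch of what such a table can look like, assuming libiberty's
    hashtab.h (the helper names mirror the ChangeLog entries below, but
    this is an approximation, not the exact patch):

        #include "hashtab.h"    /* libiberty hash table */
        #include "libiberty.h"  /* xcalloc / xfree */

        struct lwp_info
        {
          int lwpid;
          struct lwp_info *prev, *next;
        };

        static htab_t lwp_lwpid_htab;

        /* Hash an lwp_info by its LWP id.  */
        static hashval_t
        lwp_info_hash (const void *ap)
        {
          return (hashval_t) ((const struct lwp_info *) ap)->lwpid;
        }

        /* Two entries are equal if they refer to the same LWP id.  */
        static int
        lwp_lwpid_htab_eq (const void *a, const void *b)
        {
          return (((const struct lwp_info *) a)->lwpid
                  == ((const struct lwp_info *) b)->lwpid);
        }

        static void
        lwp_lwpid_htab_create (void)
        {
          lwp_lwpid_htab = htab_create_alloc (100, lwp_info_hash,
                                              lwp_lwpid_htab_eq, NULL,
                                              xcalloc, xfree);
        }

        /* Insert LP, keyed by its LWP id.  */
        static void
        lwp_lwpid_htab_add_lwp (struct lwp_info *lp)
        {
          void **slot = htab_find_slot (lwp_lwpid_htab, lp, INSERT);

          *slot = lp;
        }

        /* Average-case constant-time lookup, replacing the linear walk.  */
        static struct lwp_info *
        find_lwp_by_lwpid (int lwpid)
        {
          struct lwp_info dummy;

          dummy.lwpid = lwpid;
          return (struct lwp_info *) htab_find (lwp_lwpid_htab, &dummy);
        }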

    The linked list embedded in the LWP structure itself is kept, and made
    a doubly-linked list, so that removals from that list are O(1).  An
    earlier version of this patch got rid of this list altogether, but
    that revealed hidden dependencies / assumptions on how the list is
    sorted.  For example, killing a process and then waiting for all the
    LWPs' statuses using iterate_over_lwps only works as is because the
    leader LWP is always last in the list.  So I thought it better to take
    an incremental approach and make this patch concern itself _only_ with
    the PID lookup optimization.
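
    For the O(1) removal, it is enough to give each entry a back pointer.
    Continuing the sketch above, along the lines of the ChangeLog's
    lwp_list_add/lwp_list_remove (again approximate, not the exact patch):

        /* Add LP to the front of the global list.  */
        static void
        lwp_list_add (struct lwp_info *lp)
        {
          lp->prev = NULL;
          lp->next = lwp_list;
          if (lwp_list != NULL)
            lwp_list->prev = lp;
          lwp_list = lp;
        }

        /* Unlink LP without walking the list to find its predecessor.  */
        static void
        lwp_list_remove (struct lwp_info *lp)
        {
          if (lp->prev != NULL)
            lp->prev->next = lp->next;
          else
            lwp_list = lp->next;
          if (lp->next != NULL)
            lp->next->prev = lp->prev;
        }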

    gdb/ChangeLog:
    2016-05-24  Pedro Alves  <palves@redhat.com>

        PR gdb/19828
        * linux-nat.c (lwp_lwpid_htab): New htab.
        (lwp_info_hash, lwp_lwpid_htab_eq, lwp_lwpid_htab_create)
        (lwp_lwpid_htab_add_lwp): New functions.
        (lwp_list): Tweak comment.
        (lwp_list_add, lwp_list_remove, lwp_lwpid_htab_remove_pid): New
        functions.
        (purge_lwp_list): Rewrite, using htab_traverse_noresize.
        (add_initial_lwp): Add lwp to htab too.  Use lwp_list_add.
        (delete_lwp): Use lwp_list_remove.  Remove htab too.
        (find_lwp_pid): Search in htab.
        (_initialize_linux_nat): Call lwp_lwpid_htab_create.
        * linux-nat.h (struct lwp_info) <prev>: New field.

-- 
You are receiving this mail because:
You are on the CC list for the bug.
