This is the mail archive of the glibc-bugs@sourceware.org mailing list for the glibc project.



[Bug libc/16291] feature request: provide simpler ways to compute stack and tls boundaries


https://sourceware.org/bugzilla/show_bug.cgi?id=16291

--- Comment #48 from Kostya Serebryany <konstantin.s.serebryany at gmail dot com> ---
(In reply to Rich Felker from comment #47)
> On Tue, Feb 04, 2014 at 02:18:14PM +0000, konstantin.s.serebryany at gmail
> dot com wrote:
> > Properly catching thread exit is a challenge by itself.
> > Today we are using yet another hack to catch thread exit -- I wonder
> > if you could suggest a better approach. 
> > I added a relevant section to the wiki page above.
> 
> I don't see the lack of a hook-based approach for DTLS destruction as
> a new problem, just another symptom of your lack of a good approach to
> catching thread exit. Your usage case is harder than the usual (which
> can be achieved simply by wrapping pthread_create to use a special
> start function that installs a cancellation handler then calls the
> real start function) because you want to catch the point where the
> thread is truly dead (all dtors having been called, DTLS freed, etc.)
> which is, formally, supposed to be invisible to application code (i.e.
> atomic with respect to thread exit).

Correct. While we are at it, do you have a comment about our
recursive pthread_setspecific hack?
I know it's bad, but I don't know how bad.
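For reference, the pthread_create-wrapping approach Rich describes in the quote above might look roughly like the sketch below. This is only an illustration, not anything glibc or the sanitizers actually ship: `wrapped_start`, `on_thread_exit` and `my_pthread_create` are made-up names, and the cleanup handler fires before TSD destructors and DTLS teardown, which is exactly the limitation under discussion.

```c
#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct start_args {
    void *(*real_start)(void *);
    void *real_arg;
};

/* Cleanup handler: runs on cancellation or, via pthread_cleanup_pop(1),
   on normal return -- but BEFORE TSD destructors and DTLS teardown,
   so the thread is not yet "truly dead" at this point. */
static void on_thread_exit(void *arg)
{
    (void)arg;
    fputs("thread exiting\n", stderr);
}

static void *wrapped_start(void *p)
{
    struct start_args a = *(struct start_args *)p;
    void *ret;
    free(p);
    pthread_cleanup_push(on_thread_exit, NULL);
    ret = a.real_start(a.real_arg);
    pthread_cleanup_pop(1); /* also run the handler on normal return */
    return ret;
}

/* Hypothetical interceptor: a tool would route pthread_create calls
   here (e.g. via symbol interposition) to install the special start
   function around the real one. */
int my_pthread_create(pthread_t *t, const pthread_attr_t *attr,
                      void *(*start)(void *), void *arg)
{
    struct start_args *a = malloc(sizeof *a);
    if (!a)
        return EAGAIN;
    a->real_start = start;
    a->real_arg = arg;
    int r = pthread_create(t, attr, wrapped_start, a);
    if (r != 0)
        free(a);
    return r;
}
```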

> 
> I'm not yet sure what the right approach to this is, but I'm skeptical
> of providing a public API to allow applications to observe a state
> that's not supposed to be observable.
> 
> > > or a call to dlclose. 
> > 
> > When dlclose happens in one thread, we need to do something with DTLS in
> > all threads, which is tricky, if at all possible, without knowing
> > how exactly glibc itself handles this case.
> > A hook-based approach will not have this problem.
> 
> If the TLS query API works correctly, you should not care how glibc
> implements it internally. You should just trust the results to be
> correct after dlclose returns, so that wrapping dlclose to call the
> real dlclose then re-query after it returns just works.

I frankly can't imagine an interface, invoked right before or right after
dlclose, that could iterate over DTLS in other threads unless those
threads are somehow blocked.

Admittedly, this is not our only problem with dlclose, but if we are to
implement a new and shiny interface, I'd prefer that it handle dlclose
correctly.
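For what it's worth, the single-threaded half of what Rich suggests, wrapping dlclose and re-querying afterwards, is easy enough to sketch. Here `refresh_tls_ranges` is a purely hypothetical placeholder for the proposed query API; iterating over DTLS in *other* threads is exactly what this sketch cannot do.

```c
#define _GNU_SOURCE
#include <dlfcn.h>

static int tls_ranges_refreshed;  /* hypothetical bookkeeping flag */

/* Hypothetical placeholder for the proposed query API: re-read the
   current thread's stack/TLS boundaries after the set of loaded
   modules has changed.  DTLS in other threads is not covered. */
static void refresh_tls_ranges(void)
{
    tls_ranges_refreshed = 1;
}

/* Interpose dlclose: call the real one, then re-query.  Note the race
   window discussed above: between real_dlclose returning and the
   re-query, the freed DTLS memory may already have been reused. */
int dlclose(void *handle)
{
    int (*real_dlclose)(void *) =
        (int (*)(void *))dlsym(RTLD_NEXT, "dlclose");
    int ret = real_dlclose(handle);
    refresh_tls_ranges();
    return ret;
}
```

Defining `dlclose` in the interposing object shadows libc's definition for PLT-routed calls, and `dlsym(RTLD_NEXT, ...)` finds the real one.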

> Of course this has a race window where the memory has already been
> freed but you don't know about that. I'm not sure if you care about

We do care. For msan, this race would mean a sporadic, non-reproducible
false positive, or possibly a crash.


> that, but if you do, I think the right approach is just to be wrapping
> mmap and malloc so that you can see if they allocate a range you
> thought belonged to something else, and if so, patch up your records.

We wrap malloc, and with glibc <= 2.18 we handled DTLS somewhat
satisfactorily, because DTLS was allocated by __libc_memalign called via
the PLT. We can wrap libc's mmap (msan and tsan actually do this already),
but libc itself calls mmap bypassing the PLT, so we cannot observe those
mmap calls. Unless, of course, we complicate things with one way or another
of catching all syscalls (ptrace, etc.), but that would create more
problems than it solves (e.g. it will not work in sandboxed environments).
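To make the gap concrete, the kind of mmap interposition msan/tsan rely on looks roughly like the sketch below (with `note_mapping` and `observed_bytes` as hypothetical bookkeeping, not real sanitizer names). It only sees calls routed through the PLT; glibc-internal direct calls to mmap, such as DTLS allocation in newer glibc, never reach it.

```c
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stddef.h>
#include <sys/mman.h>

static size_t observed_bytes;  /* hypothetical bookkeeping */

static void note_mapping(void *p, size_t len)
{
    /* A real tool would update its shadow/range records here. */
    if (p != MAP_FAILED)
        observed_bytes += len;
}

/* Interpose mmap.  Calls from the application (and from any libc code
   that goes through the PLT) land here; glibc-internal calls that
   bypass the PLT do not, which is the problem described above. */
void *mmap(void *addr, size_t len, int prot, int flags, int fd, off_t off)
{
    static void *(*real_mmap)(void *, size_t, int, int, int, off_t);
    if (!real_mmap)
        real_mmap = (void *(*)(void *, size_t, int, int, int, off_t))
                        dlsym(RTLD_NEXT, "mmap");
    void *p = real_mmap(addr, len, prot, flags, fd, off);
    note_mapping(p, len);
    return p;
}
```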

-- 
You are receiving this mail because:
You are on the CC list for the bug.

