This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



Re: [PATCH][BZ #13613] Allow a single-threaded process to cancel itself


On Wed, May 9, 2012 at 11:51 AM, Siddhesh Poyarekar
<siddhesh.poyarekar@gmail.com> wrote:
> On 9 May 2012 21:00, Carlos O'Donell <carlos@systemhalted.org> wrote:
>> * After calling pthread_cancel() *all* of the optimizations that could
>> have used SINGLE_THREAD_P are not available, not just those related to
>> cancellation.
>
> This should not make a difference, because a single thread cancelling
> itself means that the process will end after unwind. For any
> multi-threaded situation this does not make any difference since the
> value was already 1.

OK, you've convinced me that performance is not a good argument.
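
For the record, the case under discussion boils down to this (minimal
sketch, not part of the patch):

  /* Single-threaded self-cancellation: pthread_cancel only marks the
     cancellation as pending; with deferred cancellation the next
     cancellation point unwinds and the process ends, so nothing runs
     afterwards that could have used the SINGLE_THREAD_P fast paths.  */
  #include <pthread.h>

  int
  main (void)
  {
    pthread_cancel (pthread_self ());
    pthread_testcancel ();   /* Acts on the pending cancellation.  */
    return 0;                /* Not reached if cancellation works.  */
  }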

>> * It overloads multiple_threads with a new meaning i.e. "Is true if
>> either more than one thread is running or if the one thread called
>> pthread_cancel()", which is bad for maintainability.
>
> I agree. I think a union like:
>
>       union
>       {
>         int __multiple_threads;
>         int __enable_cancellation_points;
>       } cancellation;
>       #define multiple_threads cancellation.__multiple_threads
>       #define enable_cancellation_points cancellation.__enable_cancellation_points
>
> this should work. Let me check.

I like this better, along with a comment describing why the two share the
same variable; that way, if we split them apart some day, we'll know what
we need to do.
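
For instance, something along these lines (just a sketch -- the comment
wording and the exact placement are up to you):

  /* The two fields deliberately share storage: SINGLE_THREAD_P must see a
     nonzero value both when more than one thread is running and when a
     single-threaded process has cancelled itself and needs its
     cancellation points enabled.  If those two meanings ever have to
     diverge, split this union back into separate fields.  */
  union
  {
    int __multiple_threads;
    int __enable_cancellation_points;
  } cancellation;
  #define multiple_threads cancellation.__multiple_threads
  #define enable_cancellation_points cancellation.__enable_cancellation_points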

I'm a little worried that this path has never been enabled before, so could
you please include some more coverage in your testing:

* pthread_setcancelstate()

* pthread_setcanceltype()

* pthread_testcancel()

That should ensure we don't regress in the single-threaded case; a rough
sketch of the kind of test I have in mind follows.
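
Something along these lines would do (untested sketch, built as a
standalone program rather than wired into the nptl test harness):

  /* Exercise the cancellation interfaces in a process that never creates
     a second thread and then cancels itself.  */
  #include <pthread.h>
  #include <stdio.h>

  int
  main (void)
  {
    int old;

    /* Switching state and type must keep working when single-threaded.  */
    if (pthread_setcancelstate (PTHREAD_CANCEL_DISABLE, &old) != 0
        || pthread_setcanceltype (PTHREAD_CANCEL_DEFERRED, &old) != 0)
      {
        puts ("setting cancel state/type failed");
        return 1;
      }

    /* Cancellation is disabled, so the pending cancel must not fire here.  */
    if (pthread_cancel (pthread_self ()) != 0)
      {
        puts ("pthread_cancel failed");
        return 1;
      }
    pthread_testcancel ();

    /* Re-enable cancellation; the next cancellation point should act on
       the pending request and terminate the process via the cancellation
       unwind before the final puts is reached.  */
    if (pthread_setcancelstate (PTHREAD_CANCEL_ENABLE, &old) != 0)
      {
        puts ("re-enabling cancellation failed");
        return 1;
      }
    pthread_testcancel ();

    puts ("pthread_testcancel did not act on the pending cancellation");
    return 1;
  }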

Cheers,
Carlos.

