This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



Re: [PATCH] glibc: Remove CPU set size checking from affinity functions [BZ #19143]


On 03/10/2016 06:07 PM, Michael Kerrisk (man-pages) wrote:
> Hello Florian.
> 
> On 03/10/2016 12:20 PM, Florian Weimer wrote:
>> On 03/08/2016 08:42 PM, Michael Kerrisk (man-pages) wrote:
>>
>>>> One caveat is that sched_getaffinity can set bits beyond the requested
>>>> allocation size (in bits) because the kernel gets a padded CPU vector
>>>> and sees a few additional bits.  
>>>
>>> I'm not quite clear on this point. Does it get a padded CPU vector
>>> because CPU_ALLOC() might allocate a vector of size larger than the
>>> user requested?
>>
>> Yes, this is the problem, combined with CPU_ALLOC_SIZE returning the
>> larger size (which is unavoidable).
> 
> Thanks for the clarification. I added this paragraph:
> 
>        Be aware that CPU_ALLOC(3) may allocate a slightly larger CPU
>        set than requested (because CPU sets are implemented as bit
>        masks allocated in units of sizeof(long)).  Consequently,
>        sched_getaffinity() can set bits beyond the requested alloca-
>        tion size, because the kernel sees a few additional bits.
>        Therefore, the caller should iterate over the bits in the
>        returned set, counting those which are set, and stop upon
>        reaching the value returned by CPU_COUNT(3) (rather than iter-
>        ating over the number of bits requested to be allocated).

This looks reasonable, thanks.
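
For what it's worth, the iteration pattern that paragraph describes
would look something like this untested sketch (the request of 4 CPUs
is arbitrary, and the _S variants of the macros are used because the
set is dynamically allocated):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int
main (void)
{
  int requested = 4;   /* Arbitrary example request.  */
  cpu_set_t *set = CPU_ALLOC (requested);
  if (set == NULL)
    {
      perror ("CPU_ALLOC");
      return 1;
    }

  /* CPU_ALLOC_SIZE rounds up to multiples of sizeof (long), so the
     kernel may see (and set) a few bits beyond the requested
     allocation size.  */
  size_t size = CPU_ALLOC_SIZE (requested);

  if (sched_getaffinity (0, size, set) < 0)
    {
      perror ("sched_getaffinity");
      CPU_FREE (set);
      return 1;
    }

  /* Stop after CPU_COUNT_S set bits have been seen, rather than
     iterating over only the requested number of bits.  */
  int total = CPU_COUNT_S (size, set);
  int seen = 0;
  for (int cpu = 0; seen < total; cpu++)
    if (CPU_ISSET_S (cpu, size, set))
      {
        printf ("CPU %d is in the affinity mask\n", cpu);
        seen++;
      }

  CPU_FREE (set);
  return 0;
}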

Florian

