This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



Re: glibc 2.23 --- Starting soft/slush freeze



On 12-01-2016 12:49, Torvald Riegel wrote:
> On Tue, 2016-01-12 at 10:36 -0200, Adhemerval Zanella wrote:
>>
>> On 12-01-2016 10:25, Torvald Riegel wrote:
>>> On Mon, 2016-01-11 at 18:52 -0200, Adhemerval Zanella wrote:
>>>> Hi all,
>>>>
>>>> As stated in a previous message, we are now in soft/slushy freeze mode. 
>>>> Please do not commit new features other than the ones already reviewed.
>>>> If your new feature still needs review and was not listed in the release 
>>>> wiki as a blocker feature [2], it is unfortunately too late. Please defer 
>>>> to glibc 2.24 when it opens.
>>>
>>> The barrier patch has been reviewed by Paul Murphy and tested by him on
>>> Power.  Paul said he might prefer if someone else reviewed it too,
>>> though.  Can this be considered reviewed then?
>>
>> I will also check it on aarch64/arm today and I also would like to see it
>> on 2.23.
> 
> Thanks!
> 

I saw no regression on aarch64 or arm (v8). I will do another check on armv7
by the end of the week, although I do not foresee any issue since armv8 is ok.

>> The only bit I am not sure about is sparc, which you stated might
>> break on newer implementations. Do you think we can get the sparc parts
>> done and tested by the end of the week?
> 
> The only thing that makes sparc different is that pre-v9 sparc32 has no
> real support for atomics.  The atomic operations we have are supported,
> but CAS or other read-modify-write operations are only atomic within a
> process (the hardware only has a test-and-set, so things need to be
> lock-based essentially).
> 
> For all but pre-v9 sparc, I'd just use the new barrier implementation.
> For pre-v9 sparc, I could try to hack up a version that uses an internal
> lock for all the read-modify-write atomic ops used by the new barrier
> implementation.  But I can't test this.
> 
> Dave, could you take care of adapting to pre-v9 sparc?

Right, I also do not feel compelled to delay this feature because one very
specific architecture lacks support. I recall Dave saying he only builds
and tests sparc near the release date, so this will give him roughly a
month to either help adapt the code to pre-v9 sparc or help you with
testing. Any objections?
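For illustration, the lock-based emulation Torvald describes could be
sketched roughly like this (the names are illustrative, not glibc's actual
internals; `__atomic_test_and_set` compiles down to the byte test-and-set,
ldstub, that pre-v9 sparc does provide):

```c
#include <stdint.h>

/* Sketch: with only a hardware test-and-set available, a CAS has to be
   built on a spinlock, and is therefore atomic only within a single
   process, not across process-shared mappings.  */

static volatile unsigned char atomic_spinlock; /* 0 = free */

static void
spin_lock (volatile unsigned char *l)
{
  while (__atomic_test_and_set (l, __ATOMIC_ACQUIRE))
    ; /* Spin until the previous value was 0 (lock was free).  */
}

static void
spin_unlock (volatile unsigned char *l)
{
  __atomic_clear (l, __ATOMIC_RELEASE);
}

/* CAS-style emulation: returns the previous value of *MEM, and stores
   NEWVAL only if the previous value equals OLDVAL.  */
uint32_t
emulated_cas (uint32_t *mem, uint32_t oldval, uint32_t newval)
{
  spin_lock (&atomic_spinlock);
  uint32_t prev = *mem;
  if (prev == oldval)
    *mem = newval;
  spin_unlock (&atomic_spinlock);
  return prev;
}
```

This makes the limitation concrete: the spinlock lives in the process's own
memory, so two processes operating on the same shared mapping would each use
their own lock and lose atomicity.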

> 
> It also feels like we might need a better process to handle such pre-v9
> sparc issues.  It comes up whenever we change concurrent code that is
> supposed to work in a process-shared setting (it will come up for
> condvars again).  Any suggestions?
> 
> I think one relatively clean way to support pre-v9 sparc in the
> process-shared case would be if we could set/unset a lock to be used for
> subsequently executed atomic operations.  The set/unset would be a noop
> for all but pre-v9 sparc.  On pre-v9 sparc, we could put this address
> into a thread-local variable, and have the atomic RMW ops check whether
> such a lock is set or not; if it is, it will be used instead of one of
> the 64 per-process locks used for non-process-shared atomics.
> This way, we probably wouldn't need custom pre-v9 sparc code anymore.
> Performance may be somewhat slower, but it will be slower anyway due to
> having to use a lock in the first place instead of using a comparatively
> light-weight native atomic operation.
> Thoughts?

This seems a reasonable approach, especially the part about getting rid of
custom pre-v9 sparc code. I would also aim first for conformance and code
simplicity, and only later try to optimize this locking mechanism, if possible.
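A rough sketch of the proposal above, with purely hypothetical names (glibc
has no such API today): a thread-local pointer selects the lock used by
subsequently executed emulated RMW operations, falling back to one of the 64
per-process locks (chosen by hashing the object's address) when unset.

```c
#include <stddef.h>
#include <stdint.h>

#define NLOCKS 64

/* The 64 per-process locks used for non-process-shared atomics.  */
static volatile unsigned char process_locks[NLOCKS];

/* If non-NULL, subsequent emulated RMW ops on this thread use this lock,
   which the caller would place in the process-shared mapping itself.  */
static __thread volatile unsigned char *current_shared_lock;

void
atomic_set_shared_lock (volatile unsigned char *l)
{
  current_shared_lock = l;
}

void
atomic_unset_shared_lock (void)
{
  current_shared_lock = NULL;
}

static volatile unsigned char *
lock_for (void *mem)
{
  if (current_shared_lock != NULL)
    return current_shared_lock;
  /* Hash the address into the per-process lock array.  */
  return &process_locks[((uintptr_t) mem >> 4) % NLOCKS];
}

/* Emulated fetch-and-add, returning the previous value.  */
uint32_t
emulated_fetch_add (uint32_t *mem, uint32_t val)
{
  volatile unsigned char *l = lock_for (mem);
  while (__atomic_test_and_set (l, __ATOMIC_ACQUIRE))
    ; /* spin */
  uint32_t prev = *mem;
  *mem = prev + val;
  __atomic_clear (l, __ATOMIC_RELEASE);
  return prev;
}
```

On all other architectures the set/unset calls would compile to no-ops, so
generic concurrent code (barriers, condvars) could stay free of pre-v9 sparc
special cases, at the cost of one extra thread-local check per emulated op.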

> 
>>>
>>> Also, what about the list of desirable features?  I had assumed they are
>>> candidates for late commits too but will, unlike blockers, not block the
>>> release.
>>>
>>
>> I see only the new barrier and condvar implementations as the desirable
>> features for this release. From your previous messages I see that the condvar
>> is still showing some issues in your internal tests, so I take it we might
>> delay it to 2.24 (correct me if I am wrong).
> 
> I'm behind schedule on the condvar fix, but I could try to finish it
> this week too.  Would this make the new condvar acceptable for 2.23?
> I don't want to rush this, but I worry about delaying to 2.24, still not
> getting any reviews, and being in the same position when the 2.24
> release comes around.
> Are there volunteers who would be willing and able to review the new
> condvar provided the fix gets posted upstream end of this week?
> 

How confident are you in the condvar implementation? I recall you saying
you tested it on Fedora Rawhide, but that it showed some issues. I would
prefer not to rush it into 2.23 and instead work on getting it done in the
months after master opens again (so we have plenty of time for fixes before
the next release).

