This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



Re: [PATCH glibc 2/2] aarch64: handle STO_AARCH64_VARIANT_PCS


* Szabolcs Nagy:

>> The link editor changes are more concerning.  I expect us to backport
>> them because old linkers will silently create broken binaries.  That's
>> true for all three approaches, I think.  But given the long-standing
>> recommendation that you should build on the oldest distribution you want
>> to support, I'm not sure how successful such backporting efforts will be
>> in preventing broken binaries which only work superficially.  (Again,
>> CentOS could be a problem here.)
>
> old linker can silently create broken binaries
> (with unmarked SVE symbols), but the assembler
> would fail when it sees the .variant_pcs directive
> (which is emitted by the compiler to mark SVE
> functions)
>
> since in binutils the linker and assembler are
> in one package, i think it's less likely to get
> silent breakage: if you use a new compiler
> with old binutils you see an assembler error
> immediately.
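For context, the directive in question looks roughly like this (a sketch; the symbol name `sve_fn` is made up). An assembler that predates the directive rejects it with an error instead of silently dropping the marking:

```shell
# Sketch of what the compiler emits for an SVE function (the symbol
# name sve_fn is hypothetical).
cat > sve_fn.s <<'EOF'
	.text
	.globl	sve_fn
	.variant_pcs	sve_fn	// mark symbol as STO_AARCH64_VARIANT_PCS
sve_fn:
	ret
EOF
# new aarch64 binutils: `as sve_fn.s -o sve_fn.o` succeeds
# old binutils:         Error: unknown pseudo-op: `.variant_pcs'
```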

We have three linkers today (ld.gold, ld.bfd, LLVM lld), and it is not
unusual to have four of them installed (the system binutils plus a
toolchain module or software collection binutils).  So I'm afraid it is
common for software to be linked with a linker from a different
binutils version than the assembler that produced the objects.

With LLVM, there is also the matter of the assembler built into the
compiler (not sure if this matters for aarch64 yet).

> if your toolchain has its own new assembler but
> uses an old linker, then using -z now linking is
> a way to avoid breakage (assuming such situation
> can be detected, i didnt try any detection in gcc
> because of the asm issue).

I'm not concerned about the breakage and how end users can work around
it.  What worries me are broken binaries that appear to run correctly
today and break with a future dynamic linker update.  I consider them
time bombs.

> (i think new dynamic relocation would not fix
> the silent breakage with old linker unless new
> static relocs are introduced too for relocatable
> objects, which means many new relocs, potentially
> with non-trivial new asm syntax etc)

Yeah, this is how we have dealt with similar situations on other
architectures.  It was still not good enough because it turned out that
binutils strip had a bug, where it would replace unknown relocations
with 0 and *still* generate an output file. 8-(

>> Perhaps we should combine the loader backport of (1) or (2) with an SVE
>> trampoline that zaps the first vector argument register?  Just to make
>> the breakage obvious immediately, and not when we change vector register
>> usage in the dynamic linker five years from now, exposing a substantial
>> number of binaries as broken.  If that ever happens, we *will* have to
>> save the entire register file in the trampoline at that point, like we
>> do now on x86-64.
>
> note: for SVE the breakage is quite obvious: dynamic
> linkers save/restore the q0-q7 registers, which overlap
> with the z0-z7 SVE argument registers, and this zeros
> out the top bits before the call is entered (if the
> SVE registers are wider than 128 bits).

So incorrectly linked binaries will fail if they ever call the function?
I think that's not too bad then.  You still have to test on an SVE
machine, of course, but that's probably a good idea anyway if you are
building for SVE.

If you get a failure with glibc master for an incorrectly linked
program *today*, that's good news.

Thanks,
Florian

