This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.


Re: RISC-V glibc port, v5


On Thu, 25 Jan 2018 08:44:00 PST (-0800), joseph@codesourcery.com wrote:
On Wed, 24 Jan 2018, Palmer Dabbelt wrote:

This v5 patch set contains what we hope is a viable RISC-V port for inclusion
into glibc-2.27.  For sanity's sake, we'd like to only mark the rv64gc/lp64d
and rv64gc/lp64 ports as stable right now and leave the remaining ports a work
in progress -- the rv32 ports are blocked on Linux being a bit flaky, and the

Are those Linux problems being worked on?  When are fixes expected?
Would you expect to be able to add test results for glibc 2.27 for the
three RV32 configurations to the wiki page shortly after the release, even
if the kernel isn't ready yet?

We're working through them, but I'm not sure how long it will take.

I don't approve of the patch series having an inconsistent message about
the status of RV32 support.  Either RV32 is supported by the glibc port,
possibly with a caveat in README like those for i[4567]86-*-gnu and
hppa-*-linux-gnu (the latter of which caveats should be removed...),
referring to kernel issues, and is built by build-many-glibcs.py, etc., or
it's not, in which case you should submit a cleanly RV64-only port, no
RV32 sysdeps directories or conditionals, with a view to possibly adding
RV32 later (with a GLIBC_2.28 or later minimum symbol version for RV32 in
that case).

I think the best thing to do here would be to remove the RV32 ABI lists and target RV32 for 2.28.  I don't mind removing the rv32 code as well; I don't think it's that tightly coupled to the rv64 code.

rv64imac/lp64 port just flat out takes too long to run in simulation (or on an

In my experience, testing MIPS with QEMU takes about three hours per ABI.
That's doing cross testing, with the compilers running on a Haswell-based
Xeon and the MIPS programs running via SSH to QEMU system emulation
running on the same Haswell-based Xeon.  SSH connection sharing is used to
avoid SSH connection setup overhead for each test, but the build and
testing are not using parallel make (so such tests can be run in parallel
for many different configurations, depending on the number of cores you
have, without slowing each other down).

That time does not vary significantly between hard and soft float
libraries.  I would not expect RISC-V testing with QEMU to be
significantly slower than MIPS, or to have more dependence on whether hard
or soft float is being tested.  (QEMU of course is emulating target
floating point in any case - the difference is just between emulation in
native x86_64 code, versus emulation in target architecture integer code
which is then translated to x86_64 code by QEMU.)

OK, let me try to get a reasonable environment up locally and run the full test suite.
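(For reference, the connection sharing Joseph describes can be set up with an OpenSSH client config along these lines -- the host alias, address, and port below are just placeholders for wherever the QEMU system emulation's SSH server ends up listening:)

    # ~/.ssh/config -- reuse one master connection for every test invocation
    Host qemu-riscv
        HostName 127.0.0.1
        Port 2222
        ControlMaster auto
        ControlPath ~/.ssh/cm-%r@%h-%p
        ControlPersist 10m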

FPGA, which is even slower).  The rv64gc/lp64d port is the primary target of

(I've tested for MIPS on FPGAs in the past - as I recall, it was still
less than a day per ABI, though we've added a lot more tests since then.)

* Some additional WIP GCC patches.

Are those to go in GCC 8?  GCC 8 and GCC 7?

They'll be targeted for GCC 8, and from the looks of it they're bugfixes, so I anticipate they'll be backported to the 7 release branch as well.  Of course, we won't know for sure until the fixes are in the shape we want for GCC.

* The latest Linux kernel headers.

Meaning 4.15 release, due out hopefully this weekend, will be good enough?

4.15-rc8 has the relevant patch (adding __NR_riscv_flush_icache), so it'll be in the released tarball.
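(As a rough sketch only -- not code from the patch set -- the new call can be reached straight through syscall(2) once those headers are installed; this assumes the (start, end, flags) argument layout from the 4.15 uapi headers, with flags == 0 meaning flush on all harts:)

    #include <unistd.h>
    #include <sys/syscall.h>

    /* Flush the instruction cache for [start, end).  Returns 0 on
       success, or -1 if the installed headers predate the syscall.  */
    static int
    flush_icache_range (void *start, void *end)
    {
    #ifdef __NR_riscv_flush_icache
      return syscall (__NR_riscv_flush_icache, start, end, 0UL);
    #else
      return -1;
    #endif
    }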

Which leaves a total of 15 architecture-specific failures.  The test suite
results for rv64gc/lp64 are slightly better than this; Darius can provide links
to the full results on his website, as usual.

I take it these failures are still being worked on (possibly for fixes
after 2.27), since it's likely there are some remaining glibc or kernel
bugs to resolve as shown up by these failures?

Yep.  We're working through the additional failures.


   FAIL: sysvipc/test-sysvmsg
   FAIL: sysvipc/test-sysvsem
   FAIL: sysvipc/test-sysvshm

E.g. these look very much like there's a problem with SysV IPC in glibc or
the kernel.

Thanks, we'll take a look.
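(For anyone poking at this by hand, a minimal standalone sketch of the call sequence sysvipc/test-sysvmsg exercises -- not the actual glibc test -- looks something like the following; failures here usually point at the msgget/msgsnd/msgrcv/msgctl plumbing in glibc or the kernel:)

    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>

    struct msg { long mtype; char mtext[32]; };

    int
    main (void)
    {
      /* Create a private message queue, send and receive one message,
         then remove the queue.  */
      int id = msgget (IPC_PRIVATE, IPC_CREAT | 0600);
      if (id < 0)
        { perror ("msgget"); return 1; }

      struct msg m = { .mtype = 1 };
      strcpy (m.mtext, "ping");
      if (msgsnd (id, &m, sizeof m.mtext, 0) < 0)
        { perror ("msgsnd"); return 1; }

      struct msg r;
      if (msgrcv (id, &r, sizeof r.mtext, 1, 0) < 0)
        { perror ("msgrcv"); return 1; }
      printf ("received: %s\n", r.mtext);

      return msgctl (id, IPC_RMID, NULL) != 0;
    }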

