This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



Re: The future of static dlopen


On 12/20/2017 08:03 AM, Carlos O'Donell wrote:

We don't know if that is the best solution for what our users want.

* Allowing different dynamic loaders provides better isolation.
   - Would require a loader<->loader API.
   - Even better LD_AUDIT isolation.

* Allowing different dynamic loaders lets you load newer libraries
   than you can possibly support.
   - Load libraries in a chroot/container that may require a newer
     ld.so (so long as the new ld.so supports the loader<->loader API).

Your suggestion is the simplest solution, though: move any needed
features into the parent ld.so, and always ensure that your outer
process uses the latest ld.so.

It's not a suggestion; it's what the loader currently does (and I have written a test case to verify that it actually works, i.e. that symbols implemented by the loader have the same address on both sides of dlmopen).
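
A rough sketch of such a test might look like the following (this is not the actual glibc test case; it assumes an architecture where ld.so exports __tls_get_addr, e.g. x86_64, and needs -ldl on older glibc):

#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

int
main (void)
{
  /* Address of a loader-implemented symbol in the base namespace.  */
  void *outer = dlsym (RTLD_DEFAULT, "__tls_get_addr");

  /* Load libc into a fresh namespace; ld.so is pulled in as a
     dependency, but there is still only one loader mapping.  */
  void *handle = dlmopen (LM_ID_NEWLM, "libc.so.6", RTLD_NOW);
  if (handle == NULL)
    {
      fprintf (stderr, "dlmopen: %s\n", dlerror ());
      return EXIT_FAILURE;
    }
  void *inner = dlsym (handle, "__tls_get_addr");

  /* Both sides should see the very same loader symbol.  */
  printf ("outer=%p inner=%p\n", outer, inner);
  return (outer != NULL && outer == inner) ? EXIT_SUCCESS : EXIT_FAILURE;
}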

However, this works only for dlmopen.  For static dlopen, there is
no outer ld.so that can be shared.  Instead, a new inner ld.so is
loaded but not initialized, leading to bugs such as bug 20802
(getauxval not working after static dlopen).

There is an outer ld.so, but it's linked *into* the application.

It's code compiled from mostly the same sources in elf/, but I can assure you that at run time it is *not* anything resembling ld.so: it does not have a dynamic symbol table (so no interposition into libc), and it does not have its own link map entry.
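
To make "link map entry" concrete, here is a small illustrative program (not glibc-internal code) that walks the link map of the base namespace; in a normal dynamically linked process, ld.so shows up as an ordinary entry with its own name, which is exactly what the static-dlopen loader lacks:

#define _GNU_SOURCE
#include <dlfcn.h>
#include <link.h>
#include <stdio.h>

int
main (void)
{
  /* Obtain a handle for the main program and its link map.  */
  void *handle = dlopen (NULL, RTLD_NOW);
  struct link_map *map = NULL;
  if (handle == NULL || dlinfo (handle, RTLD_DI_LINKMAP, &map) != 0)
    {
      fprintf (stderr, "dlinfo: %s\n", dlerror ());
      return 1;
    }

  /* Rewind to the head of the list, then print every entry.  */
  while (map->l_prev != NULL)
    map = map->l_prev;
  for (; map != NULL; map = map->l_next)
    printf ("%s\n", map->l_name[0] != '\0' ? map->l_name : "(main program)");
  return 0;
}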

In fact, when the inner ld.so appears to work, it only does so
because it is bypassed.  For dlopen from the loaded DSOs, we have
two different mechanisms, __libc_register_dl_open_hook for libc and
__libc_register_dlfcn_hook for libdl, which install the non-ld.so
implementation of dlopen into the inner libc.  These hooks, when
active, completely replace the implementation.  Here's the example
for dlopen:

The design of these hooks is to bridge the static ld.so into the
inner dynamic namespace and achieve the same effect as dlmopen:
having just one dynamic loader.

The mechanisms are completely different. dlmopen works essentially the same as regular dynamic linking. For static dlopen, providing dynamic linker functionality requires that we write custom hooks or other mechanisms, and use them to override ld.so behavior. If we don't do that, loaded DSOs will use the uninitialized ld.so, which is unlikely to work.

void *
__dlopen (const char *file, int mode DL_CALLER_DECL)
{
# ifdef SHARED
  if (__glibc_unlikely (_dlfcn_hook != NULL))
    return _dlfcn_hook->dlopen (file, mode, DL_CALLER);
# endif
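
For readers outside glibc, here is a minimal, self-contained sketch of the same hook-table pattern (the names are illustrative, not glibc's internals): a table of function pointers is registered once, and the public wrapper dispatches through it whenever it is set, completely replacing the local implementation.

#include <stdio.h>
#include <stddef.h>

/* Hook table of function pointers (illustrative names only).  */
struct dl_hooks
{
  void *(*dlopen) (const char *file, int mode);
};

static const struct dl_hooks *active_hooks;  /* NULL until registered.  */

/* Called once by the component that provides the real implementation.  */
void
register_dl_hooks (const struct dl_hooks *hooks)
{
  active_hooks = hooks;
}

/* Public wrapper: once hooks are installed, the local fallback below
   is bypassed entirely, just like the glibc code above.  */
void *
my_dlopen (const char *file, int mode)
{
  if (active_hooks != NULL)
    return active_hooks->dlopen (file, mode);
  /* Local fallback (stands in for the uninitialized inner ld.so).  */
  return NULL;
}

/* Demo: the hooked implementation just reports that it was called.  */
static void *
hooked_dlopen (const char *file, int mode)
{
  printf ("hooked dlopen (%s, %d)\n", file, mode);
  return (void *) 1;
}

int
main (void)
{
  printf ("before hooks: %p\n", my_dlopen ("libm.so.6", 0));
  static const struct dl_hooks hooks = { hooked_dlopen };
  register_dl_hooks (&hooks);
  printf ("after hooks:  %p\n", my_dlopen ("libm.so.6", 0));
  return 0;
}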

This is not exactly harmless because there are still crash handlers
which call dlopen as part of the crash reporting procedure (to load
the libgcc unwinder).

What harm is caused by this? Could you expand on this a bit?

There are exploits which overwrite the hook pointers to achieve code execution. This was particularly attractive when we still called dlopen on heap corruption.

Should we remove support for static dlopen?  And use some other
mechanism to implement NSS for statically linked binaries?

Yes, I think we *could* remove support for static dlopen if you could
solve the NSS issues.

Okay, I'll post a patch to add a deprecation notice to NEWS.

It would be easiest to have a proxy process to handle these requests
for you... such a proxy process could be a proxy thread instead?
As you suggested earlier, have the kernel start a new tid, and map
into your VMA a new dynamic executable that you can access and call
into for services?

I would just add an option to /usr/bin/getent which causes it to enter co-process mode. It's not going to be extremely efficient (especially if we don't use a persistent subprocess), but it would be quite reliable, unlike what we have today.
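
A hedged sketch of the non-persistent variant mentioned above (the co-process option does not exist; this just uses the existing getent command line once per query): a statically linked program delegates a hosts lookup to /usr/bin/getent in a one-shot subprocess instead of relying on static dlopen.

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>

/* Look up HOST in the hosts database by running getent once and
   printing its raw output.  A real implementation would fork/exec
   getent with an argument vector instead of going through the shell,
   to avoid quoting problems with untrusted input.  */
static int
lookup_host (const char *host)
{
  char command[256];
  if (snprintf (command, sizeof command,
                "/usr/bin/getent hosts %s", host) >= (int) sizeof command)
    return -1;

  FILE *fp = popen (command, "r");
  if (fp == NULL)
    return -1;

  char line[512];
  while (fgets (line, sizeof line, fp) != NULL)
    fputs (line, stdout);

  return pclose (fp) == 0 ? 0 : -1;
}

int
main (void)
{
  return lookup_host ("localhost") == 0 ? 0 : 1;
}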

Thanks,
Florian

