This is the mail archive of the libc-help@sourceware.org mailing list for the glibc project.
On 24/03/15 03:53, Carlos O'Donell wrote:
>>> >> __have_o_nonblock
>>> >>   = (EXT(statp).nssocks[ns] == -1
>>> >>      && errno == EINVAL ? -1 : 1);
>> > And that is why it complains, probably.
>> >
>> > Please check the vanilla glibc code.
> The code is still there, but you still need to show what is happening.
> Why does helgrind think there is a race condition there, and does
> it make sense?

The reason helgrind is complaining is that __have_o_nonblock is a
global variable that is written without synchronization. If it is
overwritten by a different thread, the running thread will use
whatever value the "overwriting thread" stored. My guess is that this
affects the connection that reopen() makes, and can make non-blocking
behaviour a problem.

As for how to create a proof of concept, my guess is to have each
threaded function randomly choose 1 or 0: if 1, set non-blocking; if
0, don't. Then call getaddrinfo(), monitor whether it actually ends
up non-blocking or not, and confirm that it does what it is supposed
to do. However, I don't know how to do this. The only reference to
non-blocking getaddrinfo I can find is here:
http://wiki.treck.com/getaddrinfo#Non-blocking_Mode
which is not glibc.

I don't think I can do any more investigating, as I 1) don't know the
glibc code well enough, and 2) don't know helgrind/drd well enough.
I've just been reading
http://valgrind.org/docs/manual/drd-manual.html#drd-manual.data-races

Thanks,
--
Joshua Rogers <https://internot.info/>
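
[Editorial note: a minimal sketch of the pattern helgrind is reporting,
not glibc's actual code. Here `have_o_nonblock` is a hypothetical
stand-in for the internal __have_o_nonblock global; two threads write
it with plain stores and no synchronization, which is exactly the kind
of access helgrind flags as a possible data race.]

    /* race-model.c: plain, unsynchronized writes to a shared global,
       mirroring reopen()'s assignment to __have_o_nonblock.  */
    #include <pthread.h>
    #include <stdio.h>

    static int have_o_nonblock;    /* shared global, no lock or atomic */

    static void *probe (void *arg)
    {
      /* Each thread caches its own "probe result" with a plain write;
         there is no happens-before edge between the two writes.  */
      have_o_nonblock = (arg != NULL) ? 1 : -1;
      return NULL;
    }

    int main (void)
    {
      pthread_t t1, t2;
      pthread_create (&t1, NULL, probe, (void *) 1);
      pthread_create (&t2, NULL, probe, NULL);
      pthread_join (t1, NULL);
      pthread_join (t2, NULL);
      printf ("cached value: %d\n", have_o_nonblock);
      return 0;
    }

Built with `gcc -pthread race-model.c` and run under
`valgrind --tool=helgrind ./a.out`, this produces a "Possible data race
during write of size 4" report on have_o_nonblock, analogous to the
report against glibc's global.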
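
[Editorial note: a sketch of the proof-of-concept idea described in the
message above, under assumptions: the hostname "sourceware.org", port
"80", and thread count are arbitrary choices, not from the original
message. Several threads call getaddrinfo() concurrently so that the
resolver's internal write to __have_o_nonblock can happen in more than
one thread at once; running the program under helgrind or drd is the
intended way to surface the report.]

    /* getaddrinfo-stress.c: concurrent lookups to exercise the
       resolver path that assigns __have_o_nonblock.  */
    #include <netdb.h>
    #include <pthread.h>
    #include <string.h>
    #include <sys/socket.h>

    #define NTHREADS 8

    static void *resolve (void *arg)
    {
      (void) arg;
      struct addrinfo hints, *res = NULL;
      memset (&hints, 0, sizeof hints);
      hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6 */
      hints.ai_socktype = SOCK_STREAM;

      /* Each lookup goes through the resolver and may take the code
         path in reopen() that writes the shared global.  */
      if (getaddrinfo ("sourceware.org", "80", &hints, &res) == 0)
        freeaddrinfo (res);
      return NULL;
    }

    int main (void)
    {
      pthread_t t[NTHREADS];
      for (int i = 0; i < NTHREADS; i++)
        pthread_create (&t[i], NULL, resolve, NULL);
      for (int i = 0; i < NTHREADS; i++)
        pthread_join (t[i], NULL);
      return 0;
    }

Compile with `gcc -pthread getaddrinfo-stress.c -o stress` and run as
`valgrind --tool=helgrind ./stress` (or `--tool=drd`). Whether the race
fires depends on the lookups actually hitting the nameserver path, so
results may vary with the local resolver configuration.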