This is the mail archive of the ecos-discuss@sourceware.org mailing list for the eCos project.



Mutex Priority Inversion


Hello All,

I've finally found the cause of my netstack starvation: a nested mutex
priority inversion promoting a user-space thread above the network
delivery thread. I don't intentionally nest mutex calls, but a socket
send() made from within a user-land mutex can take a nested mutex lock
on splx inside the netstack. If this is interleaved by the network
alarm thread, the user thread ends up scheduled above the network
delivery thread until the outer mutex is released. Hence the TCP layer
fails to send ACKs, hence the EPIPE.

Discussed at length here:
http://sourceware.org/ml/ecos-discuss/2005-05/msg00349.html

Moral of the story: don't hold a lengthy mutex lock whilst making
netstack calls.

I'm attempting to prove this by disabling mutex priority inversion.
When I set CYGSEM_KERNEL_SYNCH_MUTEX_INVERSION_PROTOCOL_INHERIT to 0,
along with PROTOCOL_CEILING to 0, I expect the protocol to be
configured as NONE. Unfortunately the POSIX CDL then causes INHERIT
and CEILING to be inferred back to 1 due to a dependency. Disabling
the CDL option in POSIX that requires PROTOCOL_INHERIT &
PROTOCOL_CEILING makes ecosconfig happy, but the build then fails:
mutex.hxx doesn't define cyg_protocol unless
CYGSEM_KERNEL_SYNCH_MUTEX_INVERSION_PROTOCOL_DYNAMIC is enabled, and
that isn't enabled unless more than one protocol is configured.
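For reference, the settings I'm trying to get into ecos.ecc look something like the fragment below (option names as quoted above from the kernel CDL; whether ecosconfig accepts them as user values depends on the POSIX package's requires clauses, which is exactly the inference problem described):

```
cdl_option CYGSEM_KERNEL_SYNCH_MUTEX_INVERSION_PROTOCOL_INHERIT {
    user_value 0
};
cdl_option CYGSEM_KERNEL_SYNCH_MUTEX_INVERSION_PROTOCOL_CEILING {
    user_value 0
};
```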

Am I looking in the wrong place? Is there a much easier way to disable
priority inversion?

Any help as always is much appreciated.

TIA

Andrew.


--
Before posting, please read the FAQ: http://ecos.sourceware.org/fom/ecos
and search the list archive: http://ecos.sourceware.org/ml/ecos-discuss

