This is the mail archive of the
binutils@sources.redhat.com
mailing list for the binutils project.
Re: PATCH: Fix ll/sc for mips (take 3)
- From: Ralf Baechle <ralf at oss dot sgi dot com>
- To: "H . J . Lu" <hjl at lucon dot org>
- Cc: Dominic Sweetman <dom at algor dot co dot uk>, GNU C Library <libc-alpha at sources dot redhat dot com>, linux-mips at oss dot sgi dot com, binutils at sources dot redhat dot com
- Date: Wed, 6 Feb 2002 11:32:59 +0100
- Subject: Re: PATCH: Fix ll/sc for mips (take 3)
- References: <20020202120354.A1522@lucon.org> <mailpost.1012680250.7159@news-sj1-1> <yov5ofj65elj.fsf@broadcom.com> <15454.22661.855423.532827@gladsmuir.algor.co.uk> <20020204083115.C13384@lucon.org> <15454.47823.837119.847975@gladsmuir.algor.co.uk> <20020204172857.A22337@lucon.org> <20020204215804.A2095@nevyn.them.org> <20020205113017.A6144@lucon.org> <20020205135407.A8309@lucon.org>
On Tue, Feb 05, 2002 at 01:54:07PM -0800, H . J . Lu wrote:
> __asm__ __volatile__
> ("/* Inline compare & swap */\n"
> "1:\n\t"
> "ll %1,%5\n\t"
> "move %0,$0\n\t"
> "bne %1,%3,2f\n\t"
> "move %0,%4\n\t"
> "sc %0,%2\n\t"
> "beqz %0,1b\n\t"
> "2:\n\t"
> "/* End compare & swap */"
> : "=&r" (ret), "=&r" (temp), "=m" (*p)
> : "r" (oldval), "r" (newval), "m" (*p)
> : "memory");
>
> The assembler will do
>
> 0xd724 <__pthread_alt_lock+212>: ll v1,0(s1)
> 0xd728 <__pthread_alt_lock+216>: move a1,zero
> 0xd72c <__pthread_alt_lock+220>: bne v1,s0,0xd744 <__pthread_alt_lock+244>
> 0xd730 <__pthread_alt_lock+224>: nop
> 0xd734 <__pthread_alt_lock+228>: move a1,v0
> 0xd738 <__pthread_alt_lock+232>: sc a1,0(s1)
> 0xd73c <__pthread_alt_lock+236>: beqz a1,0xd724 <__pthread_alt_lock+212>
> 0xd740 <__pthread_alt_lock+240>: nop
>
> There is an extra "nop" in the delay slot. I don't think gas is smart
> enough to fill the delay slot. I will put back those ".set noreorder"s.
The solution is to move the move instruction in front of the branch
instruction. The assembler will then move it into the delay slot:
__asm__ __volatile__
("/* Inline compare & swap */\n"
"1:\n\t"
"ll %1,%5\n\t"
"move %0,$0\n\t"
"move %0,%4\n\t"
"bne %1,%3,2f\n\t"
"sc %0,%2\n\t"
"beqz %0,1b\n\t"
"2:\n\t"
"/* End compare & swap */"
: "=&r" (ret), "=&r" (temp), "=m" (*p)
: "r" (oldval), "r" (newval), "m" (*p)
: "memory");
Also, this function looks like a good candidate for inlining (is it actually
inlined? I haven't checked ...). Depending on its use, the address of
*p is loaded twice from the GOT, so changing the code to:
__asm__ __volatile__
("/* Inline compare & swap */\n"
"1:\n\t"
"ll %1,(%5)\n\t"
"move %0,$0\n\t"
"move %0,%4\n\t"
"bne %1,%3,2f\n\t"
"sc %0,(%2)\n\t"
"beqz %0,1b\n\t"
"2:\n\t"
"/* End compare & swap */"
: "=&r" (ret), "=&r" (temp), "=r" (p)
: "r" (oldval), "r" (newval), "r" (p)
: "memory");
will avoid paying that PIC bloat twice and gets you around gas's
inefficiency of inserting too many nops into PIC code.
Ralf