


Re: [PATCHv2] aarch64: detect atomic sequences like other ll/sc architectures


Hi,


On 27 March 2014 01:51, Kyle McMartin <kmcmarti@redhat.com> wrote:

> +  /* Look for a Load Exclusive instruction which begins the sequence.  */
> +  if (!decode_masked_match (insn, 0x3fc00000, 0x08400000))
> +    return 0;

Are you sure these masks and patterns are accurate? It looks to me as
if this mask excludes many of the load exclusive instructions and
includes part of the unallocated encoding space. There are several
different encodings to match here, covering ld[a]xr{b,h,} and
ld[a]xp.  The masks and patterns will be something like:

0xbfff7c00 0x085f7c00   (ld[a]xrb, ld[a]xrh)
0xbfff7c00 0x885f7c00   (ld[a]xr, 32- and 64-bit)
0xbfff0000 0x887f0000   (ld[a]xp)
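
Roughly, and only as a sketch (assuming decode_masked_match() is the
usual (insn & mask) == pattern helper, and with an illustrative helper
name), those three encodings would fold into a single predicate like:

  /* Sketch only: matches the load exclusive forms listed above.  */
  static int
  aarch64_insn_is_load_exclusive (uint32_t insn)
  {
    return (decode_masked_match (insn, 0xbfff7c00, 0x085f7c00)     /* ld[a]xrb, ld[a]xrh */
            || decode_masked_match (insn, 0xbfff7c00, 0x885f7c00)  /* ld[a]xr */
            || decode_masked_match (insn, 0xbfff0000, 0x887f0000)); /* ld[a]xp */
  }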

> +      if (decode_masked_match (insn, 0x3fc00000, 0x08000000))

This mask and pattern for the matching store exclusive also look wrong.

> +  /* Test that we can step over ldxr/stxr. This sequence should step from
> +     ldxr to the following __asm __volatile.  */
> +  __asm __volatile ("1:     ldxr    %0,%2\n"                             \
> +                    "       cmp     %0,#1\n"                             \
> +                    "       b.eq    out\n"                               \
> +                    "       add     %0,%0,1\n"                           \
> +                    "       stxr    %w1,%0,%2\n"                         \
> +                    "       cbnz    %w1,1b"                              \
> +                    : "=&r" (tmp), "=&r" (cond), "+Q" (dword)            \
> +                    : : "memory");
> +
> +  /* This sequence should take the conditional branch and step from ldxr
> +     to the return dword line.  */
> +  __asm __volatile ("1:     ldxr    %0,%2\n"                             \
> +                    "       cmp     %0,#1\n"                             \
> +                    "       b.eq    out\n"                               \
> +                    "       add     %0,%0,1\n"                           \
> +                    "       stxr    %w1,%0,%2\n"                         \
> +                    "       cbnz    %w1,1b\n"                            \
> +                    : "=&r" (tmp), "=&r" (cond), "+Q" (dword)            \
> +                    : : "memory");
> +
> +  dword = -1;
> +__asm __volatile ("out:\n");
> +  return dword;
> +}

How about testing at least one instruction from each group of
load/store exclusives?
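
Untested, and just to sketch the shape of it (variable and function
names are illustrative, not taken from your patch), that extra
coverage could look much like the existing sequences:

/* Sketch only: exercise one ll/sc pair from each encoding group
   (byte, halfword, acquire/release doubleword, and pair).  */
static unsigned long
test_exclusive_variants (void)
{
  unsigned char byte = 0;
  unsigned short half = 0;
  unsigned long word = 0;
  /* ldxp/stxp need the pair naturally aligned.  */
  unsigned long pair[2] __attribute__ ((aligned (16))) = { 0, 0 };
  unsigned long tmp, tmp2;
  unsigned int status;

  /* ldxrb/stxrb: byte exclusives.  */
  __asm __volatile ("1:     ldxrb   %w0,%2\n"
                    "       add     %w0,%w0,#1\n"
                    "       stxrb   %w1,%w0,%2\n"
                    "       cbnz    %w1,1b"
                    : "=&r" (tmp), "=&r" (status), "+Q" (byte)
                    : : "memory");

  /* ldxrh/stxrh: halfword exclusives.  */
  __asm __volatile ("1:     ldxrh   %w0,%2\n"
                    "       add     %w0,%w0,#1\n"
                    "       stxrh   %w1,%w0,%2\n"
                    "       cbnz    %w1,1b"
                    : "=&r" (tmp), "=&r" (status), "+Q" (half)
                    : : "memory");

  /* ldaxr/stlxr: acquire/release doubleword exclusives.  */
  __asm __volatile ("1:     ldaxr   %0,%2\n"
                    "       add     %0,%0,#1\n"
                    "       stlxr   %w1,%0,%2\n"
                    "       cbnz    %w1,1b"
                    : "=&r" (tmp), "=&r" (status), "+Q" (word)
                    : : "memory");

  /* ldxp/stxp: exclusive pair of doublewords.  */
  __asm __volatile ("1:     ldxp    %0,%1,%3\n"
                    "       add     %0,%0,#1\n"
                    "       stxp    %w2,%0,%1,%3\n"
                    "       cbnz    %w2,1b"
                    : "=&r" (tmp), "=&r" (tmp2), "=&r" (status), "+Q" (pair[0])
                    : : "memory");

  return byte + half + word + pair[0];
}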

Cheers
/Marcus

