This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.
Re: [PATCH] ppc: Add Power ISA 3.0/POWER9 instructions record support
- From: "Ulrich Weigand" <uweigand@de.ibm.com>
- To: emachado@linux.vnet.ibm.com
- Cc: gdb-patches@sourceware.org
- Date: Tue, 20 Sep 2016 17:06:08 +0200 (CEST)
- Subject: Re: [PATCH] ppc: Add Power ISA 3.0/POWER9 instructions record support
Edjunior Barbosa Machado wrote:
>+ switch(ext & 0x01f)
Space before '('.
>+ {
>+ case 2: /* Add PC Immediate Shifted */
>+ case 82: /* Return From System Call Vectored */
>+ record_full_arch_list_add_reg (regcache, tdep->ppc_ps_regnum);
>+ return 0;
This is a privileged instruction, which we don't need to track here.
>+ case 370: /* Stop */
> /* Do nothing. */
Likewise.
>+ case 890: /* Extend-Sign Word and Shift Left Immediate (445) */
>+ case 890 | 1: /* Extend-Sign Word and Shift Left Immediate (445) */
>+ if (PPC_RC (insn))
>+ record_full_arch_list_add_reg (regcache, tdep->ppc_cr_regnum);
>+ /* FALL-THROUGH */
> /* These only write to RA. */
Move the above down to the block starting with
/* These write RA. Update CR if RC is set. */
>+ case 902: /* Paste */
This one is a bit interesting, since according to the ISA, the update of
the memory block happens asynchronously. But I guess by the time the
debugger gets involved, this will probably have happened ...
>+ case 397: /* Store VSX Vector with Length */
>+ case 429: /* Store VSX Vector Left-justified with Length */
>+ if (PPC_RA (insn) != 0)
>+ regcache_raw_read_unsigned (regcache,
>+ tdep->ppc_gp0_regnum + PPC_RA (insn), &ea);
>+ regcache_raw_read_unsigned (regcache,
>+ tdep->ppc_gp0_regnum + PPC_RB (insn), &rb);
>+ nb = rb & 0xff;
>+ if (nb != 0)
>+ record_full_arch_list_add_mem (ea, nb);
If I'm reading the ISA correctly, there are never more than 16 bytes stored.
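To illustrate the clamping I mean, here's a minimal standalone sketch (the helper name is made up; note also that the ISA takes the length from bits 0:7 of RB, i.e. the most-significant byte in big-endian bit numbering, which is worth double-checking against the `rb & 0xff` above):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper sketching the point above: for stxvl/stxvll the
   ISA takes the length from bits 0:7 of RB (the most-significant byte
   in big-endian bit numbering) and stores at most 16 bytes, the size
   of one vector register.  The record code should clamp accordingly.  */
static unsigned int
stxvl_record_length (uint64_t rb)
{
  unsigned int nb = (unsigned int) (rb >> 56) & 0xff;
  return nb > 16 ? 16 : nb;
}
```

record_full_arch_list_add_mem would then be called with this clamped length (and skipped entirely when it is zero).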
>+ case 774: /* Copy */
>+ case 838: /* CP_Abort */
These are also interesting. They manipulate the state of the internal 128-byte
copy/paste buffer, so in order to reverse past those instructions, we'd
really have to restore that state. But I don't think this is even possible
with current kernel support (or even hardware support)? You should verify
with kernel people what to do about this. (In fact, maybe even
single-stepping across a copy will not work correctly, since the
single-step intercept kernel entry point may call cp_abort ...)
>+ case 27: /* VSX Scalar Compare Not Equal Double-Precision */
This was removed from the final ISA 3.0B.
>+ case 91: /* VSX Vector Compare Not Equal Single-Precision */
>+ case 123: /* VSX Vector Compare Not Equal Double-Precision */
Those were removed from the final ISA 3.0B as well.
>+ case 347:
>+ switch (PPC_FIELD(insn, 11, 5))
Space before '('.
>+ case 475:
>+ switch (PPC_FIELD(insn, 11, 5))
Space before '('.
>+ case 804:
>+ switch (PPC_FIELD(insn, 11, 5))
Space before '('.
>+ {
>+ case 0: /* VSX Scalar Absolute Quad-Precision */
>+ ppc_record_vsr (regcache, tdep, PPC_XT (insn));
This actually uses PPC_VRT (insn) + 32, just like the others.
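To spell that out, a self-contained sketch (ppc_field here mirrors what I assume the PPC_FIELD macro does, extracting LEN bits starting at big-endian bit FROM):

```c
#include <assert.h>
#include <stdint.h>

/* Extract LEN bits starting at big-endian bit FROM of a 32-bit
   instruction word, mirroring (I assume) the PPC_FIELD macro.  */
static uint32_t
ppc_field (uint32_t insn, int from, int len)
{
  return (insn >> (32 - from - len)) & ((1u << len) - 1);
}

/* The quad-precision scalar ops encode their target as a VR number in
   bits 6:10 (PPC_VRT); VR n aliases VSR n + 32 in the unified VSX
   register file, hence PPC_VRT (insn) + 32 rather than PPC_XT.  */
static int
xsabsqp_target_vsr (uint32_t insn)
{
  return (int) ppc_field (insn, 6, 5) + 32;
}
```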
>+ case 836:
>+ switch (PPC_FIELD(insn, 11, 5))
Space before '('.
> case 17: /* System call */
>- if (PPC_LEV (insn) != 0)
>+ /* System Call Vectored */
>+ if ((PPC_LEV (insn) != 0) && ((insn & 0x2) != 2) && ((insn & 0x3) != 1))
> goto UNKNOWN_OP;
>
> if (tdep->ppc_syscall_record != NULL)
I think Linux doesn't support using scv in user space -- you should check
with the kernel folks. If it doesn't, process record shouldn't accept it.
But if it actually *does* support scv, then you'll have to update the
syscall_record callback to describe how it is used. For one, with scv
the restriction that PPC_LEV (insn) must be 0 no longer applies ...
> printf_unfiltered (_("no syscall record support\n"));
> return -1;
> }
>+
>+ record_full_arch_list_add_reg (regcache, tdep->ppc_ps_regnum);
>+ if ((insn & 0x3) != 1)
>+ {
>+ record_full_arch_list_add_reg (regcache, tdep->ppc_cr_regnum);
>+ record_full_arch_list_add_reg (regcache, tdep->ppc_lr_regnum);
>+ }
Why is that here? That should be done by the syscall_record callback
if appropriate, since it is platform-specific.
>- case 57: /* Load Floating-Point Double Pair */
>- if (PPC_FIELD (insn, 30, 2) != 0)
>- goto UNKNOWN_OP;
>- tmp = tdep->ppc_fp0_regnum + (PPC_RT (insn) & ~1);
>- record_full_arch_list_add_reg (regcache, tmp);
>- record_full_arch_list_add_reg (regcache, tmp + 1);
>+ case 57:
>+ /* Load Floating-Point Double Pair */
>+ if ((insn & 0x3) == 0)
Note you still need to go to UNKNOWN_OP if (insn & 0x3) == 1.
Maybe a switch on (insn & 0x3) would be nicer?
>+ {
>+ tmp = tdep->ppc_fp0_regnum + (PPC_RT (insn) & ~1);
>+ record_full_arch_list_add_reg (regcache, tmp);
>+ record_full_arch_list_add_reg (regcache, tmp + 1);
>+ return 0;
You shouldn't do a "return 0" here, but rather fall through to the end
of the function, so that the PC gets recorded.
>+ }
>+
>+ /* Load VSX Scalar Doubleword */
>+ /* Load VSX Scalar Single */
>+ if (((insn & 0x3) == 2)
>+ || ((insn & 0x3) == 3))
>+ {
>+ ppc_record_vsr (regcache, tdep, PPC_VRT (insn) + 32);
>+ return 0;
Likewise.
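Something along these lines, i.e. a standalone sketch of the dispatch only (the real cases would do the record calls and then break, falling through to the common exit that records the PC):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of switching on the low two bits of an opcode-57 instruction:
   subcase 1 is reserved and must go to UNKNOWN_OP, and neither handled
   subcase returns early, so the caller still records the PC.  */
enum op57_action
{
  OP57_LFDP,	/* Load Floating-Point Double Pair */
  OP57_LXSD,	/* Load VSX Scalar Doubleword / Single */
  OP57_UNKNOWN	/* reserved: goto UNKNOWN_OP */
};

static enum op57_action
op57_classify (uint32_t insn)
{
  switch (insn & 0x3)
    {
    case 0:	/* Load Floating-Point Double Pair */
      return OP57_LFDP;
    case 2:	/* Load VSX Scalar Doubleword */
    case 3:	/* Load VSX Scalar Single */
      return OP57_LXSD;
    default:	/* 1 is reserved */
      return OP57_UNKNOWN;
    }
}
```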
The following late additions to ISA 3.0B are still missing (from opcodes):
{"addex", ZRC(31,170,0), Z2_MASK, POWER9, 0, {RT, RA, RB, CY}},
{"vmsumudm", VXA(4, 35), VXA_MASK, PPCVEC3, 0, {VD, VA, VB, VC}},
{"mffsce", XMMF(63,583,0,1), XMMF_MASK|RB_MASK, POWER9, PPCVLE, {FRT}},
{"mffscdrn", XMMF(63,583,2,4), XMMF_MASK, POWER9, PPCVLE, {FRT, FRB}},
{"mffscdrni", XMMF(63,583,2,5), XMMF_MASK|(3<<14), POWER9, PPCVLE, {FRT, DRM}},
{"mffscrn", XMMF(63,583,2,6), XMMF_MASK, POWER9, PPCVLE, {FRT, FRB}},
{"mffscrni", XMMF(63,583,2,7), XMMF_MASK|(7<<13), POWER9, PPCVLE, {FRT, RM}},
{"mffsl", XMMF(63,583,3,0), XMMF_MASK|RB_MASK, POWER9, PPCVLE, {FRT}},
Bye,
Ulrich
--
Dr. Ulrich Weigand
GNU/Linux compilers and toolchain
Ulrich.Weigand@de.ibm.com