This is the mail archive of the binutils@sourceware.org mailing list for the binutils project.



Re: [RFC PATCH, binutils, ARM 1/11, ping] Refactor Cortex-A8 erratum workaround in preparation


Ping?

On Wednesday 23 December 2015 15:54:21 Thomas Preud'homme wrote:
> Better when the patch is there.
> 
> > From: binutils-owner@sourceware.org [mailto:binutils-
> > owner@sourceware.org] On Behalf Of Thomas Preud'homme
> > Sent: Wednesday, December 23, 2015 3:52 PM
> > 
> > Hi,
> > 
> > [Posting patch series as RFC]
> > 
> > This patch is part of a patch series to add support for the ARMv8-M security
> > extension [1] to GNU ld.  This specific patch refactors the Cortex-A8 erratum
> > workaround code to handle Thumb code relocations in stubs in the same way
> > as other relocations in stubs, as needed by subsequent patches.
> > 
> > Contrary to other types of veneers, those for the Cortex-A8 erratum
> > workaround are generated for branches without relocations against
> > them (see the comment below "if (found && found->non_a8_stub)" in the
> > cortex_a8_erratum_scan () function).  Because of that, the related code
> > needs to patch the source of the branch itself rather than rely on
> > elf32_arm_final_link_relocate being called to relocate it.
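> > 
> > (To make "patching the source of the branch" concrete, here is a minimal
> > sketch excerpted from make_branch_to_a8_stub () in the diff below; the
> > re-encoding of branch_insn is elided:
> > 
> >   branch_offset = veneer_entry_loc - veneered_insn_loc - 4;
> >   /* branch_insn is re-encoded as a Thumb branch to the veneer using
> >      branch_offset, then written back over the original instruction,
> >      loc being the offset of the branch within its section: */
> >   bfd_put_16 (abfd, (branch_insn >> 16) & 0xffff, &contents[loc]);
> >   bfd_put_16 (abfd, branch_insn & 0xffff, &contents[loc + 2]);
> > 
> > No relocation exists against that branch, so nothing else would fix it up.)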
> > 
> > Currently, this is done by storing the offset of the source of the branch
> > from the beginning of its section in struct elf32_arm_stub_hash_entry's
> > target_value field, rather than the offset of the target of the branch.
> > arm_build_one_stub () then computes the target offset from that source
> > offset and the distance between the source and target of the branch,
> > which is stored in a new target_addend field of struct
> > elf32_arm_stub_hash_entry.  That distance is taken from the branch
> > immediate and thus already includes the addend.  Because of that, the
> > relocation addend for the veneer must not be used when computing the
> > target offset, which led to the special-case if in the
> > arm_build_one_stub () function (see the difference in the computation of
> > points_to).
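> > 
> > (Concretely, in the arm_build_one_stub () hunk below, the Thumb-relocation
> > branch currently computes
> > 
> >   points_to = sym_value + stub_entry->target_addend;
> > 
> > with sym_value derived from a target_value that actually holds the source
> > offset, whereas after the patch every stub uniformly computes
> > 
> >   points_to = sym_value + template_sequence[stub_reloc_idx[i]].reloc_addend;
> > 
> > with target_value holding the offset of the branch target.)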
> > 
> > While all this works, this special casing makes the code less maintainable
> > and more difficult to understand (especially since target_value has a
> > different meaning in that case).  This patch aims to improve this.
> > 
> > 
> > [1] Software requirements for ARMv8-M security extension are
> > described in document ARM-ECM-0359818 [2]
> > [2] Available on http://infocenter.arm.com in Developer guides and
> > articles > Software development > ARMv8-M Security Extensions:
> > Requirements on Development Tools
> > 
> > ChangeLog entry is as follows:
> > 
> > 
> > *** bfd/ChangeLog ***
> > 
> > 2015-08-07  Thomas Preud'homme  <thomas.preudhomme@arm.com>
> > 
> >         * elf32-arm.c (struct elf32_arm_stub_hash_entry): Delete target_addend
> >         field and add source_value.
> >         (struct a8_erratum_fix): Delete addend field and add
> >         target_offset.
> >         (stub_hash_newfunc): Initialize source_value field and remove
> >         initialization for target_addend.
> >         (arm_build_one_stub): Stop special casing Thumb relocations: promote
> >         the else to being always executed, moving the
> >         arm_stub_a8_veneer_b_cond specific code in it.  Remove
> >         stub_entry->target_addend from points_to computation.
> >         (cortex_a8_erratum_scan): Store in a8_erratum_fix structure the offset
> >         to target symbol from start of section rather than the offset from the
> >         stub address.
> >         (elf32_arm_size_stubs): Set stub_entry's source_value and target_value
> >         fields from struct a8_erratum_fix's offset and target_offset
> >         respectively.
> >         (make_branch_to_a8_stub): Rename target variable to loc.  Compute
> >         veneered_insn_loc and loc using stub_entry's source_value.
> 
> diff --git a/bfd/elf32-arm.c b/bfd/elf32-arm.c
> index 2f1b212..77082a7 100644
> --- a/bfd/elf32-arm.c
> +++ b/bfd/elf32-arm.c
> @@ -2608,8 +2608,12 @@ struct elf32_arm_stub_hash_entry
>    bfd_vma target_value;
>    asection *target_section;
> 
> -  /* Offset to apply to relocation referencing target_value.  */
> -  bfd_vma target_addend;
> +  /* Same as above but for the source of the branch to the stub.  Used for
> +     Cortex-A8 erratum workaround to patch it to branch to the stub.  As
> +     such, source section does not need to be recorded since Cortex-A8 erratum
> +     workaround stubs are only generated when both source and target are in the
> +     same section.  */
> +  bfd_vma source_value;
> 
>    /* The instruction which caused this stub to be generated (only valid for
>       Cortex-A8 erratum workaround stubs at present).  */
> @@ -2777,7 +2781,7 @@ struct a8_erratum_fix
>    bfd *input_bfd;
>    asection *section;
>    bfd_vma offset;
> -  bfd_vma addend;
> +  bfd_vma target_offset;
>    unsigned long orig_insn;
>    char *stub_name;
>    enum elf32_arm_stub_type stub_type;
> @@ -3354,9 +3358,9 @@ stub_hash_newfunc (struct bfd_hash_entry *entry,
>        eh = (struct elf32_arm_stub_hash_entry *) entry;
>        eh->stub_sec = NULL;
>        eh->stub_offset = 0;
> +      eh->source_value = 0;
>        eh->target_value = 0;
>        eh->target_section = NULL;
> -      eh->target_addend = 0;
>        eh->orig_insn = 0;
>        eh->stub_type = arm_stub_none;
>        eh->stub_size = 0;
> @@ -4385,65 +4389,36 @@ arm_build_one_stub (struct bfd_hash_entry *gen_entry,
>    BFD_ASSERT (nrelocs != 0 && nrelocs <= MAXRELOCS);
> 
>    for (i = 0; i < nrelocs; i++)
> -    if (template_sequence[stub_reloc_idx[i]].r_type == R_ARM_THM_JUMP24
> -	|| template_sequence[stub_reloc_idx[i]].r_type == R_ARM_THM_JUMP19
> -	|| template_sequence[stub_reloc_idx[i]].r_type == R_ARM_THM_CALL
> -	|| template_sequence[stub_reloc_idx[i]].r_type == R_ARM_THM_XPC22)
> -      {
> -	Elf_Internal_Rela rel;
> -	bfd_boolean unresolved_reloc;
> -	char *error_message;
> -	enum arm_st_branch_type branch_type
> -	  = (template_sequence[stub_reloc_idx[i]].r_type != R_ARM_THM_XPC22
> -	     ? ST_BRANCH_TO_THUMB : ST_BRANCH_TO_ARM);
> -	bfd_vma points_to = sym_value + stub_entry->target_addend;
> -
> -	rel.r_offset = stub_entry->stub_offset + stub_reloc_offset[i];
> -	rel.r_info = ELF32_R_INFO (0,
> -				   template_sequence[stub_reloc_idx[i]].r_type);
> -	rel.r_addend = template_sequence[stub_reloc_idx[i]].reloc_addend;
> -
> -	if (stub_entry->stub_type == arm_stub_a8_veneer_b_cond && i == 0)
> -	  /* The first relocation in the elf32_arm_stub_a8_veneer_b_cond[]
> -	     template should refer back to the instruction after the original
> -	     branch.  */
> -	  points_to = sym_value;
> -
> -	/* There may be unintended consequences if this is not true.  */
> -	BFD_ASSERT (stub_entry->h == NULL);
> -
> -	/* Note: _bfd_final_link_relocate doesn't handle these relocations
> -	   properly.  We should probably use this function unconditionally,
> -	   rather than only for certain relocations listed in the enclosing
> -	   conditional, for the sake of consistency.  */
> -	elf32_arm_final_link_relocate (elf32_arm_howto_from_type
> -	    (template_sequence[stub_reloc_idx[i]].r_type),
> -	  stub_bfd, info->output_bfd, stub_sec, stub_sec->contents, &rel,
> -	  points_to, info, stub_entry->target_section, "", STT_FUNC,
> -	  branch_type, (struct elf_link_hash_entry *) stub_entry->h,
> -	  &unresolved_reloc, &error_message);
> -      }
> -    else
> -      {
> -	Elf_Internal_Rela rel;
> -	bfd_boolean unresolved_reloc;
> -	char *error_message;
> -	bfd_vma points_to = sym_value + stub_entry->target_addend
> -	  + template_sequence[stub_reloc_idx[i]].reloc_addend;
> -
> -	rel.r_offset = stub_entry->stub_offset + stub_reloc_offset[i];
> -	rel.r_info = ELF32_R_INFO (0,
> -				   template_sequence[stub_reloc_idx[i]].r_type);
> -	rel.r_addend = 0;
> -
> -	elf32_arm_final_link_relocate (elf32_arm_howto_from_type
> -	    (template_sequence[stub_reloc_idx[i]].r_type),
> -	  stub_bfd, info->output_bfd, stub_sec, stub_sec->contents, &rel,
> -	  points_to, info, stub_entry->target_section, "", STT_FUNC,
> -	  stub_entry->branch_type,
> -	  (struct elf_link_hash_entry *) stub_entry->h, &unresolved_reloc,
> -	  &error_message);
> -      }
> +    {
> +      Elf_Internal_Rela rel;
> +      bfd_boolean unresolved_reloc;
> +      char *error_message;
> +      bfd_vma points_to =
> +	sym_value + template_sequence[stub_reloc_idx[i]].reloc_addend;
> +
> +      rel.r_offset = stub_entry->stub_offset + stub_reloc_offset[i];
> +      rel.r_info = ELF32_R_INFO (0,
> +				 template_sequence[stub_reloc_idx[i]].r_type);
> +      rel.r_addend = 0;
> +
> +      if (stub_entry->stub_type == arm_stub_a8_veneer_b_cond && i == 0)
> +	/* The first relocation in the elf32_arm_stub_a8_veneer_b_cond[]
> +	   template should refer back to the instruction after the original
> +	   branch.  We use target_section as Cortex-A8 erratum workaround stubs
> +	   are only generated when both source and target are in the same
> +	   section.  */
> +	points_to = stub_entry->target_section->output_section->vma
> +		    + stub_entry->target_section->output_offset
> +		    + stub_entry->source_value;
> +
> +      elf32_arm_final_link_relocate (elf32_arm_howto_from_type
> +	  (template_sequence[stub_reloc_idx[i]].r_type),
> +	   stub_bfd, info->output_bfd, stub_sec, stub_sec->contents, &rel,
> +	   points_to, info, stub_entry->target_section, "", STT_FUNC,
> +	   stub_entry->branch_type,
> +	   (struct elf_link_hash_entry *) stub_entry->h, &unresolved_reloc,
> +	   &error_message);
> +    }
> 
>    return TRUE;
>  #undef MAXRELOCS
> @@ -5036,7 +5011,8 @@ cortex_a8_erratum_scan (bfd *input_bfd,
>  			  a8_fixes[num_a8_fixes].input_bfd = input_bfd;
>  			  a8_fixes[num_a8_fixes].section = section;
>  			  a8_fixes[num_a8_fixes].offset = i;
> -			  a8_fixes[num_a8_fixes].addend = offset;
> +			  a8_fixes[num_a8_fixes].target_offset =
> +			    target - base_vma;
>  			  a8_fixes[num_a8_fixes].orig_insn = insn;
>  			  a8_fixes[num_a8_fixes].stub_name = stub_name;
>  			  a8_fixes[num_a8_fixes].stub_type = stub_type;
> @@ -5609,9 +5585,9 @@ elf32_arm_size_stubs (bfd *output_bfd,
>  	  stub_entry->stub_offset = 0;
>  	  stub_entry->id_sec = link_sec;
>  	  stub_entry->stub_type = a8_fixes[i].stub_type;
> +	  stub_entry->source_value = a8_fixes[i].offset;
>  	  stub_entry->target_section = a8_fixes[i].section;
> -	  stub_entry->target_value = a8_fixes[i].offset;
> -	  stub_entry->target_addend = a8_fixes[i].addend;
> +	  stub_entry->target_value = a8_fixes[i].target_offset;
>  	  stub_entry->orig_insn = a8_fixes[i].orig_insn;
>  	  stub_entry->branch_type = a8_fixes[i].branch_type;
> 
> @@ -16064,7 +16040,7 @@ make_branch_to_a8_stub (struct bfd_hash_entry *gen_entry,
>    bfd_vma veneered_insn_loc, veneer_entry_loc;
>    bfd_signed_vma branch_offset;
>    bfd *abfd;
> -  unsigned int target;
> +  unsigned int loc;
> 
>    stub_entry = (struct elf32_arm_stub_hash_entry *) gen_entry;
>    data = (struct a8_branch_to_stub_data *) in_arg;
> @@ -16075,9 +16051,11 @@ make_branch_to_a8_stub (struct bfd_hash_entry *gen_entry,
> 
>    contents = data->contents;
> 
> +  /* We use target_section as Cortex-A8 erratum workaround stubs are only
> +     generated when both source and target are in the same section.  */
>    veneered_insn_loc = stub_entry->target_section->output_section->vma
>  		      + stub_entry->target_section->output_offset
> -		      + stub_entry->target_value;
> +		      + stub_entry->source_value;
> 
>    veneer_entry_loc = stub_entry->stub_sec->output_section->vma
>  		     + stub_entry->stub_sec->output_offset
> @@ -16089,7 +16067,7 @@ make_branch_to_a8_stub (struct bfd_hash_entry *gen_entry,
>    branch_offset = veneer_entry_loc - veneered_insn_loc - 4;
> 
>    abfd = stub_entry->target_section->owner;
> -  target = stub_entry->target_value;
> +  loc = stub_entry->source_value;
> 
>    /* We attempt to avoid this condition by setting stubs_always_after_branch
>       in elf32_arm_size_stubs if we've enabled the Cortex-A8 erratum workaround.
> @@ -16150,8 +16128,8 @@ make_branch_to_a8_stub (struct bfd_hash_entry *gen_entry,
>        return FALSE;
>      }
> 
> -  bfd_put_16 (abfd, (branch_insn >> 16) & 0xffff, &contents[target]);
> -  bfd_put_16 (abfd, branch_insn & 0xffff, &contents[target + 2]);
> +  bfd_put_16 (abfd, (branch_insn >> 16) & 0xffff, &contents[loc]);
> +  bfd_put_16 (abfd, branch_insn & 0xffff, &contents[loc + 2]);
> 
>    return TRUE;
>  }
> 
> > The patch doesn't show any regression when running the binutils-gdb
> > testsuite for the arm-none-eabi target.
> > 
> > Any comments?
> > 
> > Best regards,
> > 
> > Thomas

