Re: [2/3] [PATCH] value_optimized_out and value_fetch_lazy


On 07/01/2013 07:06 PM, Andrew Burgess wrote:
> On 19/06/2013 4:50 PM, Pedro Alves wrote:
>> Can you explain a little better the value_primitive_field
>> changes?  Is that code motion mainly an optimization that could
>> be done separately, or is it necessary for correctness?  IOW, is
>> that something that could be split out too?
> 
> The changes in value_primitive_field are functional.  I add some
> nested blocks, so the churn looks worse than it really is.
> 
> In the old world, the code in value_primitive_field looks roughly
> like this:
> 
>   if (value_optimized_out (arg1))
>     v = allocate_optimized_out_value (type);    /* CASE 1 */
>   else if (TYPE_FIELD_BITSIZE (arg_type, fieldno))
>     /* CASE 2 */
>   else if (fieldno < TYPE_N_BASECLASSES (arg_type))
>     /* CASE 3 */
>   else
>     /* CASE 4 */
> 
> Now, consider that ARG1 to value_primitive_field has the following
> properties: (a) optimized out but not yet marked optimized out,
> (b) lval_memory, and (c) lazy.  We'll not hit "CASE 1", but if we
> hit "CASE 2" then we will return another lazy value, and the same
> goes for "CASE 3" and "CASE 4".  However, with my change we decide
> at the first call to value_optimized_out that the value is gone,
> and so land in "CASE 1".
> 
> Also note, in the old world we initially call value_optimized_out,
> which might return false but really mean true once we've done a
> value_fetch_lazy.  Later, in "CASE 3" and "CASE 4", we call
> value_fetch_lazy, but then fail to take any special action if the
> value has now become optimized out.
> 
> When I initially wrote my patch I did not change
> value_primitive_field, and some tests regressed; the new output was
> less helpful than the current behaviour.

Alright, I tried that out myself, and I see it now:

@@ -8106 +8106 @@
-PASS: gdb.base/exprs.exp: print null_t_struct && null_t_struct->v_int_member == 0
+FAIL: gdb.base/exprs.exp: print null_t_struct && null_t_struct->v_int_member == 0
@@ -9291 +9291 @@
-PASS: gdb.base/longest-types.exp: print &f->buf
+FAIL: gdb.base/longest-types.exp: print &f->buf (timeout)
@@ -10975,2 +10975,2 @@
-PASS: gdb.base/ptype.exp: ptype v_t_struct_p.v_float_member
-PASS: gdb.base/ptype.exp: ptype v_t_struct_p->v_float_member
+FAIL: gdb.base/ptype.exp: ptype v_t_struct_p.v_float_member
+FAIL: gdb.base/ptype.exp: ptype v_t_struct_p->v_float_member
@@ -16927,2 +16927,2 @@
-PASS: gdb.cp/inherit.exp: ptype g_B.A::a
-PASS: gdb.cp/inherit.exp: ptype g_B.A::x
+FAIL: gdb.cp/inherit.exp: ptype g_B.A::a
+FAIL: gdb.cp/inherit.exp: ptype g_B.A::x
@@ -16931,2 +16931,2 @@
-PASS: gdb.cp/inherit.exp: ptype g_C.A::a
-PASS: gdb.cp/inherit.exp: ptype g_C.A::x
+FAIL: gdb.cp/inherit.exp: ptype g_C.A::a
+FAIL: gdb.cp/inherit.exp: ptype g_C.A::x

The FAILs are of the form:

(gdb) PASS: gdb.base/exprs.exp: print & (void) v_char
print null_t_struct && null_t_struct->v_int_member == 0
Cannot access memory at address 0x0
(gdb) FAIL: gdb.base/exprs.exp: print null_t_struct && null_t_struct->v_int_member == 0
...
(gdb) PASS: gdb.base/ptype.exp: ptype v_struct1->v_float_member
ptype v_t_struct_p.v_float_member
Cannot access memory at address 0x0
(gdb) FAIL: gdb.base/ptype.exp: ptype v_t_struct_p.v_float_member

> 
> My patch then does two things.  First, I add the extra checks for
> whether a value has become optimized out after we call
> value_fetch_lazy; this is needed for correctness.  As a result of
> this first change, only "CASE 2" does not handle optimized-out
> values, so I add code into "CASE 2" to handle them too.  Now I no
> longer need the initial check for optimized-out values; I can leave
> the values as lazy for as long as possible, only fetching them if
> they are lazy and lval_register.  This restores the original
> behaviour but, I believe, is more correct.

Here's that part of the patch with "diff -w" (ignore all whitespace):

diff --git c/gdb/value.c w/gdb/value.c
index e5cf1d7..970fb66 100644
--- c/gdb/value.c
+++ w/gdb/value.c
@@ -2631,9 +2631,7 @@ value_primitive_field (struct value *arg1, int offset,
      description correctly.  */
   check_typedef (type);

-  if (value_optimized_out (arg1))
-    v = allocate_optimized_out_value (type);
-  else if (TYPE_FIELD_BITSIZE (arg_type, fieldno))
+  if (TYPE_FIELD_BITSIZE (arg_type, fieldno))
     {
       /* Handle packed fields.

@@ -2647,6 +2645,10 @@ value_primitive_field (struct value *arg1, int offset,
       int bitpos = TYPE_FIELD_BITPOS (arg_type, fieldno);
       int container_bitsize = TYPE_LENGTH (type) * 8;

+      if (arg1->optimized_out)
+	v = allocate_optimized_out_value (type);
+      else
+	{
 	  v = allocate_value_lazy (type);
 	  v->bitsize = TYPE_FIELD_BITSIZE (arg_type, fieldno);
 	  if ((bitpos % container_bitsize) + v->bitsize <= container_bitsize
@@ -2661,6 +2663,7 @@ value_primitive_field (struct value *arg1, int offset,
 	  if (!value_lazy (arg1))
 	    value_fetch_lazy (v);
 	}
+    }
   else if (fieldno < TYPE_N_BASECLASSES (arg_type))
     {
       /* This field is actually a base subobject, so preserve the
@@ -2672,6 +2675,10 @@ value_primitive_field (struct value *arg1, int offset,
       if (VALUE_LVAL (arg1) == lval_register && value_lazy (arg1))
 	value_fetch_lazy (arg1);

+      if (arg1->optimized_out)
+	v = allocate_optimized_out_value (type);
+      else
+	{
 	  /* We special case virtual inheritance here because this
 	     requires access to the contents, which we would rather avoid
 	     for references to ordinary fields of unavailable values.  */
@@ -2696,6 +2703,7 @@ value_primitive_field (struct value *arg1, int offset,
 	  v->offset = value_offset (arg1);
 	  v->embedded_offset = offset + value_embedded_offset (arg1) + boffset;
 	}
+    }
   else
     {
       /* Plain old data member */
@@ -2705,7 +2713,9 @@ value_primitive_field (struct value *arg1, int offset,
       if (VALUE_LVAL (arg1) == lval_register && value_lazy (arg1))
 	value_fetch_lazy (arg1);

-      if (value_lazy (arg1))
+      if (arg1->optimized_out)
+	v = allocate_optimized_out_value (type);
+      else if (value_lazy (arg1))
 	v = allocate_value_lazy (type);
       else
 	{


Makes sense to me now.
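
Paraphrasing your diff in the same style as the pseudo-code quoted
above (a sketch only, not the literal code), the shape is now:

  if (TYPE_FIELD_BITSIZE (arg_type, fieldno))
    {
      /* CASE 2 -- packed field.  */
      if (arg1->optimized_out)
        v = allocate_optimized_out_value (type);
      else
        /* ... build the (possibly lazy) bitfield value as before ... */
    }
  else if (fieldno < TYPE_N_BASECLASSES (arg_type))
    {
      /* CASE 3 -- base subobject.  Fetch lazy lval_register values
         first, so the optimized-out check below means something.  */
      if (VALUE_LVAL (arg1) == lval_register && value_lazy (arg1))
        value_fetch_lazy (arg1);

      if (arg1->optimized_out)
        v = allocate_optimized_out_value (type);
      else
        /* ... as before ... */
    }
  else
    {
      /* CASE 4 -- plain data member.  Same ordering: fetch, then
         check the flag.  */
      if (VALUE_LVAL (arg1) == lval_register && value_lazy (arg1))
        value_fetch_lazy (arg1);

      if (arg1->optimized_out)
        v = allocate_optimized_out_value (type);
      else if (value_lazy (arg1))
        v = allocate_value_lazy (type);
      else
        /* ... as before ... */
    }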


> Here's the improved patch, ok to apply?

OK, with ...

> gdb/ChangeLog
> 
> 2013-07-01  Andrew Burgess  <aburgess@broadcom.com>
> 
> 	* stack.c (read_frame_arg): No longer need to fetch lazy values,
> 	checking for optimized out will ensure lazy values are loaded.

Write:

 	* stack.c (read_frame_arg): No longer fetch lazy values.

> 	* value.c (value_optimized_out): If the value is not already marked
> 	optimized out, and is lazy then fetch it so we can know for sure
> 	if the value is optimized out.

Write:

 	* value.c (value_optimized_out): If the value is not already marked
 	optimized out, and is lazy then fetch it.

and put the "so we can know for sure if the value is optimized out." comment
in the sources.
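
I.e., something roughly like this in value.c (field names as in your
patch; exact wording up to you):

  int
  value_optimized_out (struct value *value)
  {
    /* We can only know for sure whether the value is optimized out
       once we have tried to fetch it.  */
    if (!value->optimized_out && value->lazy)
      value_fetch_lazy (value);

    return value->optimized_out;
  }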

> 	(value_primitive_field): Move optimized out check later to later in
> 	the function after we have loaded any lazy values.

"later to later" sounds like a later too much.  It'd be great to have a
comment in the sources about this detail.
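
E.g., a comment along these lines near the top of
value_primitive_field (just a suggestion for the wording):

  /* Whether ARG1 is optimized out can only be known for sure once any
     lazy value has been fetched, so the optimized-out checks are done
     per case below, after the value_fetch_lazy calls, rather than
     once up front.  */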

> 	(value_fetch_lazy): Use optimized out flag directly rather than
> 	calling optimized_out method to avoid triggering recursion.

Write:

	(value_fetch_lazy): Use optimized out flag directly rather than
 	calling optimized_out method.

and put the "to avoid triggering recursion." comment in the sources.
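
I.e., right where the flag is tested in value_fetch_lazy, something
like:

  /* Check val->optimized_out directly rather than calling
     value_optimized_out: the latter may itself fetch a lazy value,
     recursing straight back into value_fetch_lazy.  */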

[ ChangeLogs mention the "what"s; the "why"s go into sources and commit logs. ]

Thanks,
-- 
Pedro Alves

