PR target/20150
* msp430-dis.c (msp430dis_read_two_bytes): New function.
(msp430dis_opcode_unsigned): New function.
(msp430dis_opcode_signed): New function.
(msp430_singleoperand): Use the new opcode reading functions.
Only disassemble bytes if they were successfully read.
(msp430_doubleoperand): Likewise.
(msp430_branchinstr): Likewise.
(msp430x_callx_instr): Likewise.
(print_insn_msp430): Check that it is safe to read bytes before
attempting disassembly. Use the new opcode reading functions.
When evaluating an expression with EVAL_AVOID_SIDE_EFFECTS, if the value
we return is forced to be not_lval, then GDB will be unable to take the
address of the returned value.
Instead, we should properly initialise the LVAL of the returned value.
This commit builds on two previous commits 2520f728b7 (Forward
VALUE_LVAL when avoiding side effects for STRUCTOP_STRUCT) and
ac775bf4d3 (gdb: Forward VALUE_LVAL when avoiding side effects for
STRUCTOP_PTR), which in turn build on ac1ca910d7 (Fixes for PR
exp/15364).
This commit is currently untested due to my lack of access to an OpenCL
compiler; however, it follows the same pattern as the first two commits
mentioned above, and so I believe that it is correct.
gdb/ChangeLog:
* opencl-lang.c (evaluate_subexp_opencl): If
EVAL_AVOID_SIDE_EFFECTS mode, forward the VALUE_LVAL attribute to
the returned value in the STRUCTOP_STRUCT case.
Assume that we have a C program like this:
struct foo_type
{
  int var;
} foo;

struct foo_type *foo_ptr = &foo;

int
main ()
{
  return foo_ptr->var;
}
Then GDB should be able to evaluate the following; however, it currently
does not:
(gdb) start
...
(gdb) whatis &(foo_ptr->var)
Attempt to take address of value not located in memory.
The problem is that in EVAL_AVOID_SIDE_EFFECTS mode,
eval.c:evaluate_subexp_standard always returns a not_lval value as the
result for a STRUCTOP_PTR operation. As a consequence, the rest of
the code believes that one cannot take the address of the returned
value.
This patch fixes STRUCTOP_PTR handling so that the VALUE_LVAL
attribute for the returned value is properly initialized. After this
change, the above session becomes:
(gdb) start
...
(gdb) whatis &(foo_ptr->var)
type = int *
This commit is largely the same as commit 2520f728b7 (Forward
VALUE_LVAL when avoiding side effects for STRUCTOP_STRUCT) but applied
to STRUCTOP_PTR rather than STRUCTOP_STRUCT. Both of these commits are
building on top of commit ac1ca910d7 (Fixes for PR exp/15364).
gdb/ChangeLog:
* eval.c (evaluate_subexp_standard): If EVAL_AVOID_SIDE_EFFECTS
mode, forward the VALUE_LVAL attribute to the returned value in
the STRUCTOP_PTR case.
gdb/testsuite/ChangeLog:
* gdb.base/whatis.c: Extend the test case.
* gdb.base/whatis.exp: Add additional tests.
The noop pattern contains values between 128 and 256, which fit in an
unsigned char but not in a signed char, so we should explicitly use
unsigned char rather than rely on how these values are converted to
signed char.
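As a standalone illustration (not part of the patch, and using an
arbitrary byte value rather than the real Meta NOP encoding), the
difference looks like this:

#include <stdio.h>

int
main (void)
{
  /* 0xA0 (160) fits in an unsigned char (0..255) but not in a signed
     char (-128..127); converting it to signed char gives an
     implementation-defined result.  */
  unsigned char u = 0xA0;
  signed char s = (signed char) 0xA0;

  printf ("unsigned char: %u, signed char: %d\n", (unsigned int) u, (int) s);
  return 0;
}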
gas/ChangeLog:
2016-05-26 Trevor Saunders <tbsaunde+binutils@tbsaunde.org>
* config/tc-metag.c (metag_handle_align): Make the type of noop
unsigned char.
gas/ChangeLog:
2016-05-26 Trevor Saunders <tbsaunde+binutils@tbsaunde.org>
* config/tc-rx.c (md_convert_frag): Make the type of reloc_type
bfd_reloc_code_real_type.
Upon a `bfd_reloc_outofrange' error continue processing so that any
further issues are also reported, similarly to how `bfd_reloc_overflow'
is handled. Adjust message formatting accordingly, using `%X' to abort
processing at conclusion.
Reduce the number of test cases by grouping relocations whose handling
can now be verified together with a single source and dump.
bfd/
* elfxx-mips.c (_bfd_mips_elf_relocate_section)
<bfd_reloc_outofrange>: Use the `%X%H' rather than `%C' format
for message. Continue processing rather than returning failure.
ld/
* testsuite/ld-mips-elf/unaligned-jalx-0.d: Fold
`unaligned-jalx-2' here.
* testsuite/ld-mips-elf/unaligned-jalx-mips16-0.d: Fold
`unaligned-jalx-mips16-2' here.
* testsuite/ld-mips-elf/unaligned-jalx-micromips-0.d: Fold
`unaligned-jalx-micromips-2' here.
* testsuite/ld-mips-elf/unaligned-jalx-0.s: Update accordingly.
* testsuite/ld-mips-elf/unaligned-jalx-1.d: Update error
message.
* testsuite/ld-mips-elf/unaligned-jalx-mips16-1.d: Likewise.
* testsuite/ld-mips-elf/unaligned-jalx-micromips-1.d: Likewise.
* testsuite/ld-mips-elf/unaligned-jalx-2.d: Remove test.
* testsuite/ld-mips-elf/unaligned-jalx-mips16-2.d: Remove test.
* testsuite/ld-mips-elf/unaligned-jalx-micromips-2.d: Remove
test.
* testsuite/ld-mips-elf/unaligned-jalx-2.s: Remove test source.
* testsuite/ld-mips-elf/unaligned-lwpc-0.d: Fold
`unaligned-lwpc-3' here.
* testsuite/ld-mips-elf/unaligned-lwpc-0.s: Update accordingly.
* testsuite/ld-mips-elf/unaligned-lwpc-1.d: Fold
`unaligned-lwpc-2' here.
* testsuite/ld-mips-elf/unaligned-lwpc-1.s: Update accordingly.
* testsuite/ld-mips-elf/unaligned-lwpc-2.d: Remove test.
* testsuite/ld-mips-elf/unaligned-lwpc-2.s: Remove test source.
* testsuite/ld-mips-elf/unaligned-lwpc-3.d: Remove test.
* testsuite/ld-mips-elf/unaligned-lwpc-3.s: Remove test source.
* testsuite/ld-mips-elf/unaligned-ldpc-0.d: Fold
`unaligned-ldpc-4' here.
* testsuite/ld-mips-elf/unaligned-ldpc-0.s: Update accordingly.
* testsuite/ld-mips-elf/unaligned-ldpc-1.d: Update error
message. Fold `unaligned-ldpc-2' and `unaligned-ldpc-3' here.
* testsuite/ld-mips-elf/unaligned-ldpc-1.s: Update accordingly.
* testsuite/ld-mips-elf/unaligned-ldpc-2.d: Remove test.
* testsuite/ld-mips-elf/unaligned-ldpc-2.s: Remove test source.
* testsuite/ld-mips-elf/unaligned-ldpc-3.d: Remove test.
* testsuite/ld-mips-elf/unaligned-ldpc-3.s: Remove test source.
* testsuite/ld-mips-elf/unaligned-ldpc-4.d: Remove test.
* testsuite/ld-mips-elf/unaligned-ldpc-4.s: Remove test source.
* testsuite/ld-mips-elf/mips-elf.exp: Delete removed tests.
The AVX512VL bit alone isn't sufficient to select a 128-bit or 256-bit
AVX512 instruction. We must match another AVX512 bit.
PR gas/20140
* config/tc-i386.c (cpu_flags_match): Require another match
for AVX512VL.
* testsuite/gas/i386/i386.exp: Run avx512vl-1, avx512vl-2,
x86-64-avx512vl-1 and x86-64-avx512vl-2.
* testsuite/gas/i386/avx512vl-1.l: New file.
* testsuite/gas/i386/avx512vl-1.s: Likewise.
* testsuite/gas/i386/avx512vl-2.l: Likewise.
* testsuite/gas/i386/avx512vl-2.s: Likewise.
* testsuite/gas/i386/x86-64-avx512vl-1.l: Likewise.
* testsuite/gas/i386/x86-64-avx512vl-1.s: Likewise.
* testsuite/gas/i386/x86-64-avx512vl-2.l: Likewise.
* testsuite/gas/i386/x86-64-avx512vl-2.s: Likewise.
A `bfd_reloc_outofrange' condition from `mips_elf_calculate_relocation'
currently triggers the warning callback, which in the case of LD prints
messages like:
foo.o: In function `foo':
(.text+0x0): warning: JALX to a non-word-aligned address
or:
foo.o: In function `foo':
(.text+0x0): warning: PC-relative load from unaligned address
and nothing else, which suggests this is a benign condition and the link
has otherwise run successfully to completion. This is, however, not the
case: the link terminates right away with no further messages and no
output produced.
Use the general error or warning info callback then, preserving the
message format. Also set a BFD error condition so that a failure is
unambiguously reported. Complement the change with a set of suitable
test suite additions.
bfd/
* elfxx-mips.c (_bfd_mips_elf_relocate_section)
<bfd_reloc_outofrange>: Call `->einfo' rather than `->warning'.
Call `bfd_set_error'.
ld/
* testsuite/ld-mips-elf/unaligned-jalx-0.d: New test.
* testsuite/ld-mips-elf/unaligned-jalx-1.d: New test.
* testsuite/ld-mips-elf/unaligned-jalx-2.d: New test.
* testsuite/ld-mips-elf/unaligned-jalx-mips16-0.d: New test.
* testsuite/ld-mips-elf/unaligned-jalx-mips16-1.d: New test.
* testsuite/ld-mips-elf/unaligned-jalx-mips16-2.d: New test.
* testsuite/ld-mips-elf/unaligned-jalx-micromips-0.d: New test.
* testsuite/ld-mips-elf/unaligned-jalx-micromips-1.d: New test.
* testsuite/ld-mips-elf/unaligned-jalx-micromips-2.d: New test.
* testsuite/ld-mips-elf/unaligned-lwpc-0.d: New test.
* testsuite/ld-mips-elf/unaligned-lwpc-1.d: New test.
* testsuite/ld-mips-elf/unaligned-lwpc-2.d: New test.
* testsuite/ld-mips-elf/unaligned-lwpc-3.d: New test.
* testsuite/ld-mips-elf/unaligned-ldpc-0.d: New test.
* testsuite/ld-mips-elf/unaligned-ldpc-1.d: New test.
* testsuite/ld-mips-elf/unaligned-ldpc-2.d: New test.
* testsuite/ld-mips-elf/unaligned-ldpc-3.d: New test.
* testsuite/ld-mips-elf/unaligned-ldpc-4.d: New test.
* testsuite/ld-mips-elf/unaligned-jalx-0.s: New test source.
* testsuite/ld-mips-elf/unaligned-jalx-1.s: New test source.
* testsuite/ld-mips-elf/unaligned-jalx-2.s: New test source.
* testsuite/ld-mips-elf/unaligned-insn.s: New test source.
* testsuite/ld-mips-elf/unaligned-lwpc-0.s: New test source.
* testsuite/ld-mips-elf/unaligned-lwpc-1.s: New test source.
* testsuite/ld-mips-elf/unaligned-lwpc-2.s: New test source.
* testsuite/ld-mips-elf/unaligned-lwpc-3.s: New test source.
* testsuite/ld-mips-elf/unaligned-ldpc-0.s: New test source.
* testsuite/ld-mips-elf/unaligned-ldpc-1.s: New test source.
* testsuite/ld-mips-elf/unaligned-ldpc-2.s: New test source.
* testsuite/ld-mips-elf/unaligned-ldpc-3.s: New test source.
* testsuite/ld-mips-elf/unaligned-ldpc-4.s: New test source.
* testsuite/ld-mips-elf/unaligned-syms.s: New test source.
* testsuite/ld-mips-elf/mips-elf.exp: Run the new tests.
Add all AVX512 bits to CPU_ANY_AVX_FLAGS.
* i386-gen.c (cpu_flag_init): Add CpuVREX to CPU_AVX512DQ_FLAGS,
CPU_AVX512BW_FLAGS, CPU_AVX512VL_FLAGS, CPU_AVX512IFMA_FLAGS
and CPU_AVX512VBMI_FLAGS. Add CpuAVX512DQ, CpuAVX512BW,
CpuAVX512VL, CpuAVX512IFMA and CpuAVX512VBMI to
CPU_ANY_AVX_FLAGS.
* i386-init.h: Regenerated.
Since the existing ld and gold support 64-bit (MIPS) ELF archives, we
can use the 64-bit (MIPS) ELF archive format for 64-bit archives. Since
the plugin target is used to create archives in plugin-enabled ar, we
need a way to enable 64-bit archives in the plugin target. This patch
adds --enable-64-bit-archive to bfd to force 64-bit archives in ar and
ranlib. Since both 64-bit MIPS and s390 ELF targets currently use
64-bit archives, 64-bit archives are enabled by default for them.
A 64-bit archive is also generated automatically if the archive is too big.
Tested on Linux/x86 and Linux/x86-64 with existing ld and gold.
bfd/
PR binutils/14625
* archive.c (bfd_slurp_armap): Replace
bfd_elf64_archive_slurp_armap with
_bfd_archive_64_bit_slurp_armap.
(bsd_write_armap): Call _bfd_archive_64_bit_write_armap if
BFD64 is defined and the archive is too big.
(coff_write_armap): Likewise.
* archive64.c (bfd_elf64_archive_slurp_armap): Renamed to ...
(_bfd_archive_64_bit_slurp_armap): This.
(bfd_elf64_archive_write_armap): Renamed to ...
(_bfd_archive_64_bit_write_armap): This.
* configure.ac: Add --enable-64-bit-archive.
(want_64_bit_archive): New. Set to true by default for 64-bit
MIPS and s390 ELF targets.
(USE_64_BIT_ARCHIVE): New AC_DEFINE.
* config.in: Regenerated.
* configure: Likewise.
* elf64-mips.c (bfd_elf64_archive_functions): Removed.
(bfd_elf64_archive_slurp_armap): Likewise.
(bfd_elf64_archive_write_armap): Likewise.
(bfd_elf64_archive_slurp_extended_name_table): Likewise.
(bfd_elf64_archive_construct_extended_name_table): Likewise.
(bfd_elf64_archive_truncate_arname): Likewise.
(bfd_elf64_archive_read_ar_hdr): Likewise.
(bfd_elf64_archive_write_ar_hdr): Likewise.
(bfd_elf64_archive_openr_next_archived_file): Likewise.
(bfd_elf64_archive_get_elt_at_index): Likewise.
(bfd_elf64_archive_generic_stat_arch_elt): Likewise.
(bfd_elf64_archive_update_armap_timestamp): Likewise.
* elf64-s390.c (bfd_elf64_archive_functions): Removed.
(bfd_elf64_archive_slurp_armap): Likewise.
(bfd_elf64_archive_write_armap): Likewise.
(bfd_elf64_archive_slurp_extended_name_table): Likewise.
(bfd_elf64_archive_construct_extended_name_table): Likewise.
(bfd_elf64_archive_truncate_arname): Likewise.
(bfd_elf64_archive_read_ar_hdr): Likewise.
(bfd_elf64_archive_write_ar_hdr): Likewise.
(bfd_elf64_archive_openr_next_archived_file): Likewise.
(bfd_elf64_archive_get_elt_at_index): Likewise.
(bfd_elf64_archive_generic_stat_arch_elt): Likewise.
(bfd_elf64_archive_update_armap_timestamp): Likewise.
* elfxx-target.h (TARGET_BIG_SYM): Use _bfd_archive_64_bit on
BFD_JUMP_TABLE_ARCHIVE if USE_64_BIT_ARCHIVE is defined and
bfd_elfNN_archive_functions isn't defined.
(TARGET_LITTLE_SYM): Likewise.
* libbfd-in.h (_bfd_archive_64_bit_slurp_armap): New prototype.
(_bfd_archive_64_bit_write_armap): Likewise.
(_bfd_archive_64_bit_slurp_extended_name_table): New macro.
(_bfd_archive_64_bit_construct_extended_name_table): Likewise.
(_bfd_archive_64_bit_truncate_arname): Likewise.
(_bfd_archive_64_bit_read_ar_hdr): Likewise.
(_bfd_archive_64_bit_write_ar_hdr): Likewise.
(_bfd_archive_64_bit_openr_next_archived_file): Likewise.
(_bfd_archive_64_bit_get_elt_at_index): Likewise.
(_bfd_archive_64_bit_generic_stat_arch_elt): Likewise.
(_bfd_archive_64_bit_update_armap_timestamp): Likewise.
* libbfd.h: Regenerated.
* plugin.c (plugin_vec): Use _bfd_archive_64_bit on
BFD_JUMP_TABLE_ARCHIVE if USE_64_BIT_ARCHIVE is defined.
binutils/
PR binutils/14625
* NEWS: Mention --enable-64-bit-archive.
During the archive rescan to resolve symbol references for files added
by LTO, the linker add_archive_element callback is called to check
whether an archive element should be added. After all IR symbols have
been claimed, the linker won't claim new IR symbols and shouldn't add
the LTO archive element. This patch updates the linker
add_archive_element callback to return FALSE when seeing an LTO archive
element during the rescan and changes the ELF linker to skip such
archive elements.
bfd/
PR ld/20103
* cofflink.c (coff_link_check_archive_element): Return TRUE if
linker add_archive_element callback returns FALSE.
* ecoff.c (ecoff_link_check_archive_element): Likewise.
* elf64-ia64-vms.c (elf64_vms_link_add_archive_symbols): Skip
archive element if linker add_archive_element callback returns
FALSE.
* elflink.c (elf_link_add_archive_symbols): Likewise.
* pdp11.c (aout_link_check_ar_symbols): Likewise.
* vms-alpha.c (alpha_vms_link_add_archive_symbols): Likewise.
* xcofflink.c (xcoff_link_check_dynamic_ar_symbols): Likewise.
(xcoff_link_check_ar_symbols): Likewise.
ld/
PR ld/20103
* ldmain.c (add_archive_element): Don't claim new IR symbols
after all IR symbols have been claimed.
* plugin.c (plugin_call_claim_file): Remove no_more_claiming
check.
* testsuite/ld-plugin/lto.exp (pr20103): New proc.
Run PR ld/20103 tests.
* testsuite/ld-plugin/pr20103a.c: New file.
* testsuite/ld-plugin/pr20103b.c: Likewise.
* testsuite/ld-plugin/pr20103c.c: Likewise.
Ulrich pointed out that an earlier patch had misspelled
HAVE_LIBPYTHON2_4, adding an extra "_". This caused a build failure.
This patch fixes the bug.
2016-05-25 Tom Tromey <tom@tromey.com>
* python/py-value.c (value_object_as_number): Use correct spelling
of HAVE_LIBPYTHON2_4.
PR target/2006764
* config/tc-arm.c (move_or_literal_pool): Only generate a VMOV.I64
instruction if supported by the currently selected fpu variant.
* testsuite/gas/arm/vfpv3-ldr_immediate.s: Add test of this PR.
* testsuite/gas/arm/vfpv3-ldr_immediate.d: Update expected disassembly.
Variable "show" was hardcoded to zero for pointer and reference types.
This implementation didn't allow a correct "whatis" print
for those types and results in same output for "ptype" and "whatis".
Before:
(gdb) whatis t3p
type = PTR TO -> ( Type t3
integer(kind=4) :: t3_i
Type t2 :: t2_n
End Type t3 )
After:
(gdb) whatis t3p
type = PTR TO -> ( Type t3 )
2016-05-25 Bernhard Heckel <bernhard.heckel@intel.com>
gdb/Changelog:
* f-typeprint.c (f_type_print_base): Replace 0 by show.
gdb/testsuite/Changelog:
* gdb.fortran/type.f90: Add pointer variable.
* gdb.fortran/whatis_type.exp: Add whatis/ptype of pointers.
As a result of printing only the outer elements of nested structures,
some testcases have to be added to check for corner cases with VLAs.
2016-05-25 Bernhard Heckel <bernhard.heckel@intel.com>
gdb/testsuite/Changelog:
* gdb.fortran/vla-type.exp: Access elements in nested structs.
According to the typeprint's description, the level of detail is
decreased by one for the typeprint of elements of a structure.
Before:
(gdb) ptype t3v
type = Type t3
integer(kind=4) :: t3_i
Type t2
integer(kind=4) :: t2_i
Type t1
integer(kind=4) :: t1_i
real(kind=4) :: t1_r
End Type t1 :: t1_n
End Type t2 :: t2_n
End Type t3
After:
(gdb) ptype t3v
type = Type t3
integer(kind=4) :: t3_i
Type t2 :: t2_n
End Type t3
2016-05-25 Bernhard Heckel <bernhard.heckel@intel.com>
gdb/Changelog:
* f-typeprint.c (f_type_print_base): Decrease show by one.
gdb/testsuite/Changelog:
* gdb.fortran/type.f90: Add nested structures.
* gdb.fortran/whatis_type.exp: Whatis/ptype nested structures.
* gdb.fortran/derived-type.exp: Adapt expected output.
* gdb.fortran/vla-type.exp: Adapt expected output.
According to the typeprint's description, elements of a structure
should not be printed when show is < 1.
This variable is also used to distinguish the level of detail
between "ptype" and "whatis" expressions.
Before:
(gdb) whatis t1v
type = Type t1
integer(kind=4) :: t1_i
real(kind=4) :: t1_r
End Type t1
After:
(gdb) whatis t1v
type = Type t1
2016-05-25 Bernhard Heckel <bernhard.heckel@intel.com>
gdb/Changelog:
* f-typeprint.c (f_type_print_base): Don't print fields when show < 0.
gdb/testsuite/Changelog:
* gdb.fortran/whatis_type.exp: Adapt expected output.
The level of indentation was not properly handled when printing
the elements' type names.
Before:
type = Type t1
integer(kind=4) :: var_1
integer(kind=4) :: var_2
End Type t1
After:
type = Type t1
integer(kind=4) :: var_1
integer(kind=4) :: var_2
End Type t1
2016-05-25 Bernhard Heckel <bernhard.heckel@intel.com>
gdb/Changelog:
* f-typeprint.c (f_type_print_base): Take print level into account.
gdb/testsuite/Changelog:
* gdb.fortran/print_type.exp: Fix expected output.
* gdb.fortran/whatis_type.exp: Fix expected output.
This patch fixes PR python/17386.
The bug is that gdb.Value does not implement the Python __index__
method. This method is needed to convert a Python object to an index
and is used by various operations in Python, such as indexing an
array.
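As an illustration (a hypothetical session, not taken from the patch or
its testsuite), with __index__ in place a gdb.Value can be used wherever
Python expects an integer index:
(gdb) python print ([10, 20, 30][gdb.Value (1)])
20
Without it, Python raises a TypeError along the lines of "list indices
must be integers".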
The fix is to implement the nb_index method for gdb.Value.
nb_index was added in Python 2.5. I don't have a good way to test
Python 2.4, but I made an attempt to accommodate it.
I chose to use valpy_long in all cases because this simplifies porting
to Python 3, and because there didn't seem to be any harm.
Built and regtested on x86-64 Fedora 23.
2016-05-24 Tom Tromey <tom@tromey.com>
PR python/17386:
* python/py-value.c (value_object_as_number): Add
nb_inplace_floor_divide, nb_inplace_true_divide, nb_index.
2016-05-24 Tom Tromey <tom@tromey.com>
PR python/17386:
* gdb.python/py-value.exp (test_value_numeric_ops): Add tests that
use value as an index.
Python 2's PyNumberMethods has nb_inplace_divide, but Python 3 does
not. This patch adds it for Python 2.
This buglet didn't cause much fallout because the only non-NULL entry
in value_object_as_number after this is for valpy_divide; and the
missing slot caused it to slide up to nb_floor_divide (where
nb_true_divide was intended).
2016-05-24 Tom Tromey <tom@tromey.com>
* python/py-value.c (value_object_as_number): Add
nb_inplace_divide for Python 2.
PR python/17981 notes that gdb.breakpoints() returns None when there
are no breakpoints; whereas an empty list or tuple would be more in
keeping with Python and the documentation.
This patch fixes the bug by changing the no-breakpoint return to make
an empty tuple.
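For illustration (a hypothetical session, not taken from the patch),
with no breakpoints set the call now looks like:
(gdb) python print (gdb.breakpoints ())
()
Previously the same command printed "None".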
Built and regtested on x86-64 Fedora 23.
2016-05-23 Tom Tromey <tom@tromey.com>
PR python/17981:
* python/py-breakpoint.c (gdbpy_breakpoints): Return a new tuple
when there are no breakpoints.
2016-05-23 Tom Tromey <tom@tromey.com>
* python.texi (Basic Python): Document gdb.breakpoints return.
2016-05-23 Tom Tromey <tom@tromey.com>
PR python/17981:
* gdb.python/py-breakpoint.exp (test_bkpt_basic): Add test for
no-breakpoint case.
PR gdb/19194 points out a typo in the documentation. I'm checking
this in as obvious.
2016-05-24 Tom Tromey <tom@tromey.com>
PR gdb/19194:
* gdb.texinfo (gdb man): Fix typo.
When GDB attaches to a process, it looks at the /proc/PID/task/ dir
for all clone threads of that process, and attaches to each of them.
Usually, if there is more than one clone thread, it means the program
is multi-threaded and linked with pthreads. Thus when GDB soon after
attaching finds and loads a libthread_db matching the process, it'll
add a thread to the thread list for each of the initially found
lower-level LWPs.
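(For reference, that /proc/PID/task/ listing can be reproduced with a
small standalone program such as the sketch below; it only illustrates
the interface and is not GDB's code.)

#include <dirent.h>
#include <stdio.h>

/* List the LWPs (clone threads) of a process by reading the
   /proc/PID/task/ directory, the same interface GDB's attach code
   scans.  Usage: ./list-lwps [PID]  */

int
main (int argc, char **argv)
{
  char path[64];
  DIR *dir;
  struct dirent *ent;

  snprintf (path, sizeof path, "/proc/%s/task",
            argc > 1 ? argv[1] : "self");
  dir = opendir (path);
  if (dir == NULL)
    {
      perror (path);
      return 1;
    }

  while ((ent = readdir (dir)) != NULL)
    if (ent->d_name[0] != '.')
      printf ("LWP %s\n", ent->d_name);

  closedir (dir);
  return 0;
}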
If, however, GDB fails to find/load a matching libthread_db, nothing
is adding the LWPs to the thread list. And because of that, "detach"
hits an internal error:
(gdb) PASS: gdb.threads/clone-attach-detach.exp: fg attach 1: attach
info threads
Id Target Id Frame
* 1 LWP 6891 "clone-attach-de" 0x00007f87e5fd0790 in __nanosleep_nocancel () at ../sysdeps/unix/syscall-template.S:84
(gdb) FAIL: gdb.threads/clone-attach-detach.exp: fg attach 1: info threads shows two LWPs
detach
.../src/gdb/thread.c:1010: internal-error: is_executing: Assertion `tp' failed.
A problem internal to GDB has been detected,
further debugging may prove unreliable.
Quit this debugging session? (y or n)
FAIL: gdb.threads/clone-attach-detach.exp: fg attach 1: detach (GDB internal error)
From here:
...
#8 0x00000000007ba7cc in internal_error (file=0x98ea68 ".../src/gdb/thread.c", line=1010, fmt=0x98ea30 "%s: Assertion `%s' failed.")
at .../src/gdb/common/errors.c:55
#9 0x000000000064bb83 in is_executing (ptid=...) at .../src/gdb/thread.c:1010
#10 0x00000000004c23bb in get_pending_status (lp=0x12c5cc0, status=0x7fffffffdc0c) at .../src/gdb/linux-nat.c:1235
#11 0x00000000004c2738 in detach_callback (lp=0x12c5cc0, data=0x0) at .../src/gdb/linux-nat.c:1317
#12 0x00000000004c1a2a in iterate_over_lwps (filter=..., callback=0x4c2599 <detach_callback>, data=0x0) at .../src/gdb/linux-nat.c:899
#13 0x00000000004c295c in linux_nat_detach (ops=0xe7bd30, args=0x0, from_tty=1) at .../src/gdb/linux-nat.c:1358
#14 0x000000000068284d in delegate_detach (self=0xe7bd30, arg1=0x0, arg2=1) at .../src/gdb/target-delegates.c:34
#15 0x0000000000694141 in target_detach (args=0x0, from_tty=1) at .../src/gdb/target.c:2241
#16 0x0000000000630582 in detach_command (args=0x0, from_tty=1) at .../src/gdb/infcmd.c:2975
...
Tested on x86-64 Fedora 23. Also confirmed the test passes against
gdbserver with "maint set target-non-stop".
gdb/ChangeLog:
2016-05-24 Pedro Alves <palves@redhat.com>
PR gdb/19828
* linux-nat.c (attach_proc_task_lwp_callback): Mark the lwp
resumed, and add the thread to GDB's thread list.
testsuite/ChangeLog:
2016-05-24 Pedro Alves <palves@redhat.com>
PR gdb/19828
* gdb.threads/clone-attach-detach.c: New file.
* gdb.threads/clone-attach-detach.exp: New file.
Working on the fix for gdb/19828, I saw
gdb.threads/attach-many-short-lived-threads.exp fail once in an
unusual way. Unfortunately I didn't keep debug logs, but it's an
issue similar to one that was fixed in remote.c a while ago --
linux-nat.c was not fetching the pending status from the right place.
gdb/ChangeLog:
2016-05-24 Pedro Alves <palves@redhat.com>
PR gdb/19828
* linux-nat.c (get_pending_status): If the thread reported the
event to the core and it's pending, use the pending status signal
number.
Hacking the gdb.threads/attach-many-short-lived-threads.exp test to
spawn thousands of threads instead of dozens, and running gdb under
perf, I saw that GDB was spending most of the time in find_lwp_pid:
- captured_main
- 93.61% catch_command_errors
- 87.41% attach_command
- 87.40% linux_nat_attach
- 87.40% linux_proc_attach_tgid_threads
- 82.38% attach_proc_task_lwp_callback
- 81.01% find_lwp_pid
5.30% ptid_get_lwp
+ 0.10% ptid_lwp_p
+ 0.64% add_thread
+ 0.26% set_running
+ 0.24% set_executing
0.12% ptid_get_lwp
+ 0.01% ptrace
+ 0.01% add_lwp
attach_proc_task_lwp_callback is called once for each LWP that we
attach to, found by listing the /proc/PID/task/ directory. In turn,
attach_proc_task_lwp_callback calls find_lwp_pid to check whether the
LWP we're about to try to attach to is already known. Since
find_lwp_pid does a linear walk over the whole LWP list, this becomes
quadratic. We do the /proc/PID/task/ listing until we get two
iterations in a row where we found no new threads. So the second and
following times we walk the /proc/PID/task/ dir, we're going to take
an even worse find_lwp_pid hit.
Fix this by adding a hash table keyed by LWP PID, for fast lookup.
The linked list embedded in the LWP structure itself is kept, and made
a double-linked list, so that removals from that list are O(1). An
earlier version of this patch got rid of this list altogether, but
that revealed hidden dependencies / assumptions on how the list is
sorted. For example, killing a process and then waiting for all the
LWPs status using iterate_over_lwps only works as is because the
leader LWP is always last in the list. So I thought it better to take
an incremental approach and make this patch concern itself _only_ with
the PID lookup optimization.
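As a rough standalone illustration of the idea (a simplified sketch,
not GDB's lwp_info or the libiberty htab actually used), hashing on the
LWP PID makes each lookup O(1) on average:

#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-in for GDB's lwp_info, keyed by the LWP's pid.  */
struct demo_lwp
{
  int lwpid;
  struct demo_lwp *next;   /* hash-bucket chain */
};

#define NBUCKETS 4096

static struct demo_lwp *buckets[NBUCKETS];

static unsigned int
hash_lwpid (int lwpid)
{
  return (unsigned int) lwpid % NBUCKETS;
}

/* O(1) insertion at the head of the bucket chain.  */
static void
add_lwp (int lwpid)
{
  struct demo_lwp *lp = malloc (sizeof *lp);
  if (lp == NULL)
    abort ();
  lp->lwpid = lwpid;
  lp->next = buckets[hash_lwpid (lwpid)];
  buckets[hash_lwpid (lwpid)] = lp;
}

/* O(1) average-case lookup, instead of walking one long linked list.  */
static struct demo_lwp *
find_lwp (int lwpid)
{
  struct demo_lwp *lp;

  for (lp = buckets[hash_lwpid (lwpid)]; lp != NULL; lp = lp->next)
    if (lp->lwpid == lwpid)
      return lp;
  return NULL;
}

int
main (void)
{
  int i;

  for (i = 1; i <= 8000; i++)
    add_lwp (i);
  printf ("found LWP 7906: %s\n", find_lwp (7906) ? "yes" : "no");
  return 0;
}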
gdb/ChangeLog:
2016-05-24 Pedro Alves <palves@redhat.com>
PR gdb/19828
* linux-nat.c (lwp_lwpid_htab): New htab.
(lwp_info_hash, lwp_lwpid_htab_eq, lwp_lwpid_htab_create)
(lwp_lwpid_htab_add_lwp): New functions.
(lwp_list): Tweak comment.
(lwp_list_add, lwp_list_remove, lwp_lwpid_htab_remove_pid): New
functions.
(purge_lwp_list): Rewrite, using htab_traverse_noresize.
(add_initial_lwp): Add lwp to htab too. Use lwp_list_add.
(delete_lwp): Use lwp_list_remove. Remove htab too.
(find_lwp_pid): Search in htab.
(_initialize_linux_nat): Call lwp_lwpid_htab_create.
* linux-nat.h (struct lwp_info) <prev>: New field.
Hacking the gdb.threads/attach-many-short-lived-threads.exp test to
spawn thousands of threads instead of dozens, I saw GDB having trouble
keeping up with threads being spawned too fast, when it tried to stop
them all. This was because while gdb is doing that, it updates the
thread list to make sure no new thread has sneaked in that might need
to be paused. It does this a few times until it sees no-new-threads
twice in a row. The thread listing update itself is not that
expensive; however, in the Linux backend, updating the thread list
calls linux_common_core_of_thread for each LWP to record the core on
which each LWP was last seen running. That opens/reads/closes a /proc
file per LWP, which becomes expensive when you need to do it for
thousands of LWPs.
In this use case, perf shows gdb spending 44% of its time in
linux_common_core_of_thread, in the stop_all_threads ->
update_thread_list path.
This patch simply makes linux_common_core_of_thread avoid updating the
core the thread is bound to if the thread hasn't run since the last
time we updated that info. This makes linux_common_core_of_thread
disappear into the noise in the perf report.
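For context, the per-LWP cost comes from reads like the standalone
sketch below (an illustration, not GDB's code), which fetches the
"processor" field of a stat file as described in proc(5); the patch
simply skips this when the LWP has not run since the last update.

#include <stdio.h>
#include <string.h>

/* Print the core the calling thread last ran on, read from
   /proc/self/stat.  Illustrative only; GDB reads the per-LWP
   /proc/PID/task/LWP/stat file.  */

int
main (void)
{
  char buf[4096];
  FILE *f = fopen ("/proc/self/stat", "r");
  size_t n;
  char *p;
  int field;

  if (f == NULL)
    return 1;
  n = fread (buf, 1, sizeof buf - 1, f);
  fclose (f);
  buf[n] = '\0';

  /* The comm field may contain spaces, so count fields starting after
     the last ')'.  Field 3 is the state; "processor" is field 39 (see
     proc(5)).  */
  p = strrchr (buf, ')');
  if (p == NULL)
    return 1;
  p = strtok (p + 1, " ");
  for (field = 3; p != NULL && field < 39; field++)
    p = strtok (NULL, " ");
  if (p != NULL)
    printf ("last ran on core %s\n", p);
  return 0;
}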
gdb/ChangeLog:
2016-05-24 Pedro Alves <palves@redhat.com>
PR gdb/19828
* linux-nat.c (linux_resume_one_lwp_throw): Clear the LWP's core
field.
(linux_nat_update_thread_list): Don't fetch the core if already
known.
... as it's _much_ faster.
Hacking the gdb.threads/attach-many-short-lived-threads.exp test to
spawn thousands of threads instead of dozens to stress and debug
timeout problems with gdb.threads/attach-many-short-lived-threads.exp,
I saw that GDB would spend several seconds just reading the
/proc/PID/smaps file, to determine the vDSO mapping range. GDB opens
and reads the whole file just once, and caches the result, but even
that is too slow. For example, with almost 8000 threads:
$ ls /proc/3518/task/ | wc -l
7906
reading the /proc/PID/smaps file, grepping for "vdso", takes over 15
seconds:
$ time cat /proc/3518/smaps | grep vdso
7ffdbafee000-7ffdbaff0000 r-xp 00000000 00:00 0 [vdso]
real 0m15.371s
user 0m0.008s
sys 0m15.017s
Looking around the web for hints, I found a nice description of the
issue here:
http://backtrace.io/blog/blog/2014/11/12/large-thread-counts-and-slow-process-maps/
The problem is that /proc/PID/smaps wants to show the mappings as
being thread stack, and that has the kernel iterating over all threads
in the thread group, for each mapping.
The fix is to use the "smaps" file under /proc/PID/task/PID/ instead of
the /proc/PID/ one, as the former doesn't mark thread stacks for all
threads.
That alone drops the timing to the millisecond range on my machine:
$ time cat /proc/3518/task/3518/smaps | grep vdso
7ffdbafee000-7ffdbaff0000 r-xp 00000000 00:00 0 [vdso]
real 0m0.150s
user 0m0.009s
sys 0m0.084s
And since we only need the vdso mapping's address range, we can use the
"maps" file instead of "smaps", and it's even cheaper:
/proc/PID/task/PID/maps :
$ time cat /proc/3518/task/3518/maps | grep vdso
7ffdbafee000-7ffdbaff0000 r-xp 00000000 00:00 0 [vdso]
real 0m0.027s
user 0m0.000s
sys 0m0.017s
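In code, the lookup amounts to scanning that per-task maps file for the
"[vdso]" line, as in this standalone sketch (an illustration, not the
actual linux_vsyscall_range_raw implementation):

#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Print the [vdso] mapping range of the calling process by scanning
   /proc/PID/task/PID/maps.  For the main thread the TID equals the
   PID.  */

int
main (void)
{
  char path[64], line[1024];
  FILE *f;

  snprintf (path, sizeof path, "/proc/%ld/task/%ld/maps",
            (long) getpid (), (long) getpid ());
  f = fopen (path, "r");
  if (f == NULL)
    return 1;

  while (fgets (line, sizeof line, f) != NULL)
    if (strstr (line, "[vdso]") != NULL)
      {
        /* The line starts with "start-end perms ...".  */
        fputs (line, stdout);
        break;
      }

  fclose (f);
  return 0;
}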
gdb/ChangeLog:
2016-05-24 Pedro Alves <palves@redhat.com>
PR gdb/19828
* linux-tdep.c (find_mapping_size): Delete.
(linux_vsyscall_range_raw): Rewrite reading from
/proc/PID/task/PID/maps directly instead of using
gdbarch_find_memory_regions.
A following patch (fix for gdb/19828) makes linux-nat.c add threads to
GDB's thread list earlier in the "attach" sequence, and that causes a
surprising regression on
gdb.threads/attach-many-short-lived-threads.exp on my machine. The
extra "thread x exited" handling and traffic slows down that test
enough that GDB core has trouble keeping up with new threads that are
spawned while trying to stop existing ones.
I saw the exact same issue with remote/gdbserver a while ago and fixed
it in 65706a29ba (Remote thread create/exit events), so part of the
fix here is the same -- add support for thread create events to
gdb/linux-nat.c. infrun.c:stop_all_threads enables those events when
it tries to stop threads, which ensures that new threads never get a
chance to themselves start new threads, thus fixing the race.
gdb/
2016-05-24 Pedro Alves <palves@redhat.com>
PR gdb/19828
* linux-nat.c (report_thread_events): New global.
(linux_handle_extended_wait): Report
TARGET_WAITKIND_THREAD_CREATED if thread event reporting is
enabled.
(wait_lwp, linux_nat_filter_event): Report all thread exits if
thread event reporting is enabled. Remove comment.
(filter_exit_event): New function.
(linux_nat_wait_1): Use it.
(linux_nat_thread_events): New function.
(linux_nat_add_target): Install it as target_thread_events method.
Do not convert jump relocs against local MIPS16 or microMIPS symbols to
refer to a section symbol instead even on RELA targets, as it makes it
impossible for the linker to make a JAL to JALX conversion based on ISA
symbol annotation, breaking regular and compressed MIPS interlinking.
gas/
* config/tc-mips.c (mips_fix_adjustable): Also return 0 for
jump relocations against MIPS16 or microMIPS symbols on RELA
targets.
* testsuite/gas/mips/jalx-local.d: New test.
* testsuite/gas/mips/jalx-local-n32.d: New test.
* testsuite/gas/mips/jalx-local-n64.d: New test.
* testsuite/gas/mips/jalx-local.s: New test source.
* testsuite/gas/mips/mips.exp: Run the new tests.
ld/
* testsuite/ld-mips-elf/jalx-local.d: New test.
* testsuite/ld-mips-elf/jalx-local-n32.d: New test.
* testsuite/ld-mips-elf/jalx-local-n64.d: New test.
* testsuite/ld-mips-elf/mips-elf.exp: Run the new tests.
With the code refactoring made in commit b886a2ab0d and the addition of
`calculate_reloc' and a separate test for TLS relocs against constants
made there, the preexisting fall-through from the TLS reloc switch case
has effectively become a dead execution path. This is because the call
to `calculate_reloc' present there is only made if `fixP->fx_done' is
true, which can only be the case if `fixP->fx_addsy' is NULL, which in
turn has already triggered the TLS reloc test and made execution break
out of the switch statement.
Remove the fall-through then and reshape code accordingly.
gas/
* config/tc-mips.c (md_apply_fix)
<BFD_RELOC_MIPS16_TLS_TPREL_LO16>: Remove fall-through, adjust
code accordingly.
It always returns an element of the enum operatorT, so it should be clearer to
make that the return type.
gas/ChangeLog:
2016-05-24 Trevor Saunders <tbsaunde+binutils@tbsaunde.org>
* config/tc-xtensa.c (struct suffix_reloc_map): Change type of field
operator to operatorT.
(map_suffix_reloc_to_operator): Change return type to operatorT.
Nothing ever assigns to ft32_target_format, so it's always null, which means
the default bfd target format is used. It looks like ft32 only has one target
format, so we can just define TARGET_FORMAT to be that literal string.
gas/ChangeLog:
2016-05-24 Trevor Saunders <tbsaunde+binutils@tbsaunde.org>
* config/tc-ft32.h (DEFAULT_TARGET_FORMAT): Remove.
(ft32_target_format): Likewise.
(TARGET_FORMAT): Adjust.
They only hold values from the op_err enum, so it should be clearer to give
them the enum type.
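A generic illustration of the idea (hypothetical names, not the actual
cr16/crx code):

#include <stdio.h>

/* Hypothetical error enumeration standing in for op_err.  */
enum demo_op_err { DEMO_OP_LEGAL, DEMO_OP_OUT_OF_RANGE };

/* Returning the enum type documents the possible results better than a
   plain int would.  */
static enum demo_op_err
demo_check_range (long value, long min, long max)
{
  return (value < min || value > max) ? DEMO_OP_OUT_OF_RANGE : DEMO_OP_LEGAL;
}

int
main (void)
{
  printf ("%d\n", (int) demo_check_range (300, -128, 127));
  return 0;
}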
gas/ChangeLog:
2016-05-24 Trevor Saunders <tbsaunde+binutils@tbsaunde.org>
* config/tc-cr16.c (check_range): Make type of retval op_err.
* config/tc-crx.c: Likewise.