old-cross-binutils/gdb/linux-nat.h
Pedro Alves 2db9a4275c GNU/Linux: Stop using libthread_db/td_ta_thr_iter
TL;DR - GDB can hang if something refreshes the thread list out of the
target while the target is running.  GDB hangs inside td_ta_thr_iter.
The fix is to not use that libthread_db function anymore.

Long version:

Running the testsuite against my all-stop-on-top-of-non-stop series is
still exposing latent non-stop bugs.

I was originally seeing this with the multi-create.exp test, back when
we were still using libthread_db thread event breakpoints.  The
all-stop-on-top-of-non-stop series forces a thread list refresh each
time GDB needs to start stepping over a breakpoint (to pause all
threads).  That test hits the thread event breakpoint often, resulting
in a bunch of step-over operations, thus a bunch of thread list
refreshes while some threads in the target are running.

The commit adds a real non-stop mode test, based on multi-create.exp,
that triggers the issue by doing an explicit "info threads" when a
breakpoint is hit.  IOW, it does the same things the as-ns series was
doing when testing multi-create.exp.

The bug is a race, so it unfortunately takes several runs for the test
to trigger it.  In fact, even when setting the test running in a loop,
it sometimes takes several minutes for it to trigger for me.

The race is related to libthread_db's td_ta_thr_iter.  This is
libthread_db's entry point for walking the thread list of the
inferior.

Sometimes, when GDB refreshes the thread list from the target,
libthread_db's td_ta_thr_iter can somehow see glibc's thread list as a
cycle, and get stuck in an infinite loop.

The issue is that when a thread exits, its thread control structure in
glibc is moved from a "used" list to a "cache" list.  These lists are
simply circular linked lists where the "next/prev" pointers are
embedded in the thread control structure itself.  The "next" pointer
of the last element of the list points back to the list's sentinel
"head".  There's only one set of "next/prev" pointers for both lists;
thus a thread can only be in one of the lists at a time, not in both
simultaneously.
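
In (simplified) C terms, the layout looks like the following sketch.
The names follow glibc's, but the irrelevant fields are elided and
these are not the real declarations:

  typedef struct list_head
  {
    struct list_head *next;
    struct list_head *prev;
  } list_t;

  struct pthread
  {
    ...
    list_t list;   /* Links the descriptor into stack_used OR
                      stack_cache, never both at once.  */
    ...
  };

  /* The list sentinels; an empty list's head points back at itself.  */
  list_t stack_used;
  list_t stack_cache;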

So when thread C exits, simplifying, the following happens.  A-C are
threads.  stack_used and stack_cache are the lists' heads.

Before:

  stack_used -> A -> B -> C -> (&stack_used)
  stack_cache -> (&stack_cache)

After:

  stack_used -> A -> B -> (&stack_used)
  stack_cache -> C -> (&stack_cache)

td_ta_thr_iter starts iterating at the list head's next, and iterates
until it sees a thread whose next pointer points back to the list's
head.  Thus, in the before case above, C's next points to stack_used,
indicating end of list.  In the same case, the stack_cache list is
empty.

For each thread being iterated, td_ta_thr_iter reads the whole thread
object out of the inferior.  This includes the thread's "next"
pointer.

In the scenario above, it may happen that td_ta_thr_iter is iterating
thread B and has already read B's thread structure just before thread
C exits and its control structure moves to the cached list.

Now, recall that td_ta_thr_iter is running in the context of GDB, and
there's no locking between GDB and the inferior.  From its local copy
of B, td_ta_thr_iter believes that the next thread after B is thread
C, so it happily continues iterating to C, a thread that has already
exited, and is now in the stack cache list.

After iterating C, td_ta_thr_iter finds the stack_cache head.
Because that head is not stack_used, td_ta_thr_iter assumes it's just
another thread.  After this, unless the reverse race triggers, GDB
gets stuck in td_ta_thr_iter forever, walking the stack_cache list,
as no thread in that list has a next pointer that points back to
stack_used (the terminating condition).
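
The following standalone program simulates the race deterministically
(hypothetical names; this is not libthread_db's code).  It snapshots B
while C is still on stack_used, moves C to stack_cache, and then
resumes the walk from the stale snapshot, terminating only upon seeing
a pointer back to &stack_used, the way td_ta_thr_iter effectively
does.  The iteration count is capped just so the demo exits:

  #include <stdio.h>

  struct node { const char *name; struct node *next; };

  int
  main (void)
  {
    struct node stack_used = { "stack_used head", NULL };
    struct node stack_cache = { "stack_cache head", NULL };
    struct node a = { "A", NULL }, b = { "B", NULL }, c = { "C", NULL };
    struct node b_copy, *iter;
    int i;

    /* Before: stack_used -> A -> B -> C -> (&stack_used).  */
    stack_used.next = &a; a.next = &b; b.next = &c; c.next = &stack_used;
    stack_cache.next = &stack_cache;

    /* GDB reads B out of the inferior; the copy still says next == &c.  */
    b_copy = b;

    /* Thread C exits: glibc moves it to the cache list.  */
    b.next = &stack_used;
    stack_cache.next = &c;
    c.next = &stack_cache;

    /* Resume iterating from the stale copy.  */
    for (iter = b_copy.next, i = 0; i < 8; i++)
      {
        printf ("visiting %s\n", iter->name);
        if (iter->next == &stack_used)  /* Never true from here on.  */
          break;
        iter = iter->next;
      }
    return 0;
  }

The output cycles between "visiting C" and "visiting stack_cache
head" until the cap kicks in; without the cap, that's the hang.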

Before fully understanding the issue, I tried adding cycle detection
to GDB's td_ta_thr_iter callback.  However, td_ta_thr_iter skips
calling the callback in some cases, which means that it's possible
that the callback isn't called at all, making it impossible for GDB to
break the loop.  I did manage to get GDB stuck in that state more than
once.

Fortunately, we can avoid the issue altogether.  We don't really need
td_ta_thr_iter for live debugging nowadays, given PTRACE_EVENT_CLONE.
We already know how to map an LWP id to a thread id without iterating
(thread_from_lwp), so use that more.
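
As a sketch of that direction (with a hypothetical callback name;
this is not the commit's code), enumerating a process's LWPs reduces
to listing the kernel's own view under /proc/PID/task:

  #include <dirent.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/types.h>

  /* Call NOTICE_LWP once per LWP of process PID, as listed by the
     kernel under /proc/PID/task.  */

  static void
  for_each_lwp (pid_t pid, void (*notice_lwp) (pid_t pid, pid_t lwp))
  {
    char path[64];
    DIR *dir;
    struct dirent *e;

    snprintf (path, sizeof path, "/proc/%d/task", (int) pid);
    dir = opendir (path);
    if (dir == NULL)
      return;  /* No /proc task list; the caller must fall back.  */

    while ((e = readdir (dir)) != NULL)
      if (e->d_name[0] != '.')  /* Skip "." and "..".  */
        notice_lwp (pid, (pid_t) atoi (e->d_name));

    closedir (dir);
  }

In the commit itself, the per-LWP step is thread_from_lwp, and the
/proc check is linux_proc_task_list_dir_exists (see the ChangeLog
below).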

gdb/ChangeLog:
2015-02-20  Pedro Alves  <palves@redhat.com>

	* linux-nat.c (linux_handle_extended_wait): Call
	thread_db_notice_clone whenever a new clone LWP is detected.
	(linux_stop_and_wait_all_lwps, linux_unstop_all_lwps): New
	functions.
	* linux-nat.h (thread_db_attach_lwp): Delete declaration.
	(thread_db_notice_clone, linux_stop_and_wait_all_lwps)
	(linux_unstop_all_lwps): Declare.
	* linux-thread-db.c (struct thread_get_info_inout): Delete.
	(thread_get_info_callback): Delete.
	(thread_from_lwp): Use td_thr_get_info and record_thread.
	(thread_db_attach_lwp): Delete.
	(thread_db_notice_clone): New function.
	(try_thread_db_load_1): If /proc is mounted and shows the
	process's task list, walk over all LWPs and call thread_from_lwp
	instead of relying on td_ta_thr_iter.
	(attach_thread): Don't call check_thread_signals here.  Split the
	tail part of the function (which adds the thread to the core GDB
	thread list) to ...
	(record_thread): ... this function.  Call check_thread_signals
	here.
	(thread_db_wait): Don't call thread_db_find_new_threads_1.  Always
	call thread_from_lwp.
	(thread_db_update_thread_list): Rename to ...
	(thread_db_update_thread_list_org): ... this.
	(thread_db_update_thread_list): New function.
	(thread_db_find_thread_from_tid): Delete.
	(thread_db_get_ada_task_ptid): Simplify.
	* nat/linux-procfs.c: Include <sys/stat.h>.
	(linux_proc_task_list_dir_exists): New function.
	* nat/linux-procfs.h (linux_proc_task_list_dir_exists): Declare.

gdb/gdbserver/ChangeLog:
2015-02-20  Pedro Alves  <palves@redhat.com>

	* thread-db.c: Include "nat/linux-procfs.h".
	(thread_db_init): Skip listing new threads if the kernel supports
	PTRACE_EVENT_CLONE and /proc/PID/task/ is accessible.

gdb/testsuite/ChangeLog:
2015-02-20  Pedro Alves  <palves@redhat.com>

	* gdb.threads/multi-create-ns-info-thr.exp: New file.
2015-02-20 21:40:31 +00:00

/* Native debugging support for GNU/Linux (LWP layer).

   Copyright (C) 2000-2015 Free Software Foundation, Inc.

   This file is part of GDB.

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 3 of the License, or
   (at your option) any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
#include "target.h"

#include <signal.h>

struct arch_lwp_info;

/* Reasons an LWP last stopped.  */

enum lwp_stop_reason
{
  /* Either not stopped, or stopped for a reason that doesn't require
     special tracking.  */
  LWP_STOPPED_BY_NO_REASON,

  /* Stopped by a software breakpoint.  */
  LWP_STOPPED_BY_SW_BREAKPOINT,

  /* Stopped by a hardware breakpoint.  */
  LWP_STOPPED_BY_HW_BREAKPOINT,

  /* Stopped by a watchpoint.  */
  LWP_STOPPED_BY_WATCHPOINT
};
/* Structure describing an LWP.  This is public only for the purposes
   of ALL_LWPS; target-specific code should generally not access it
   directly.  */

struct lwp_info
{
  /* The process id of the LWP.  This is a combination of the LWP id
     and overall process id.  */
  ptid_t ptid;

  /* If this flag is set, we need to set the event request flags the
     next time we see this LWP stop.  */
  int must_set_ptrace_flags;

  /* Non-zero if this LWP is cloned.  In this context "cloned" means
     that the LWP is reporting to its parent using a signal other than
     SIGCHLD.  */
  int cloned;

  /* Non-zero if we sent this LWP a SIGSTOP (but the LWP didn't report
     it back yet).  */
  int signalled;

  /* Non-zero if this LWP is stopped.  */
  int stopped;

  /* Non-zero if this LWP will be/has been resumed.  Note that an LWP
     can be marked both as stopped and resumed at the same time.  This
     happens if we try to resume an LWP that has a wait status
     pending.  We shouldn't let the LWP run until that wait status has
     been processed, but we should not report that wait status if GDB
     didn't try to let the LWP run.  */
  int resumed;

  /* The last resume GDB requested on this thread.  */
  enum resume_kind last_resume_kind;

  /* If non-zero, a pending wait status.  */
  int status;

  /* When 'stopped' is set, this is where the lwp last stopped, with
     decr_pc_after_break already accounted for.  If the LWP is
     running, and stepping, this is the address at which the lwp was
     resumed (that is, it's the previous stop PC).  If the LWP is
     running and not stepping, this is 0.  */
  CORE_ADDR stop_pc;

  /* Non-zero if we were stepping this LWP.  */
  int step;

  /* The reason the LWP last stopped, if we need to track it
     (breakpoint, watchpoint, etc.)  */
  enum lwp_stop_reason stop_reason;

  /* On architectures where it is possible to know the data address of
     a triggered watchpoint, STOPPED_DATA_ADDRESS_P is non-zero, and
     STOPPED_DATA_ADDRESS contains such data address.  Otherwise,
     STOPPED_DATA_ADDRESS_P is false, and STOPPED_DATA_ADDRESS is
     undefined.  Only valid if STOPPED_BY_WATCHPOINT is true.  */
  int stopped_data_address_p;
  CORE_ADDR stopped_data_address;

  /* Non-zero if we expect a duplicated SIGINT.  */
  int ignore_sigint;

  /* If WAITSTATUS->KIND != TARGET_WAITKIND_SPURIOUS, the waitstatus
     for this LWP's last event.  This may correspond to STATUS above,
     or to a local variable in lin_lwp_wait.  */
  struct target_waitstatus waitstatus;

  /* Signals whether we are in a SYSCALL_ENTRY or in a SYSCALL_RETURN
     event.  Values:
     - TARGET_WAITKIND_SYSCALL_ENTRY
     - TARGET_WAITKIND_SYSCALL_RETURN  */
  int syscall_state;

  /* The processor core this LWP was last seen on.  */
  int core;

  /* Arch-specific additions.  */
  struct arch_lwp_info *arch_private;

  /* Next LWP in list.  */
  struct lwp_info *next;
};
/* The global list of LWPs, for ALL_LWPS.  Unlike the threads list,
   there is always at least one LWP on the list while the GNU/Linux
   native target is active.  */
extern struct lwp_info *lwp_list;

/* Iterate over each active thread (light-weight process).  */
#define ALL_LWPS(LP) \
  for ((LP) = lwp_list; \
       (LP) != NULL; \
       (LP) = (LP)->next)
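
/* Example (illustrative only, not part of the original header):
   stopping every LWP that isn't stopped yet.

     struct lwp_info *lp;

     ALL_LWPS (lp)
       if (!lp->stopped)
         linux_stop_lwp (lp);
*/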
/* Attempt to initialize libthread_db.  */
void check_for_thread_db (void);

/* Called from the LWP layer to inform the thread_db layer that PARENT
   spawned CHILD.  Both LWPs are currently stopped.  This function
   does whatever is required to have the child LWP under the
   thread_db's control --- e.g., enabling event reporting.  Returns
   true on success, false if the process isn't using libpthread.  */
extern int thread_db_notice_clone (ptid_t parent, ptid_t child);

/* Return the set of signals used by the threads library.  */
extern void lin_thread_get_thread_signals (sigset_t *mask);

/* Find process PID's pending signal set from /proc/pid/status.  */
void linux_proc_pending_signals (int pid, sigset_t *pending,
				 sigset_t *blocked, sigset_t *ignored);

extern int lin_lwp_attach_lwp (ptid_t ptid);

extern void linux_stop_lwp (struct lwp_info *lwp);

/* Stop all LWPs, synchronously.  (Any events that trigger while LWPs
   are being stopped are left pending.)  */
extern void linux_stop_and_wait_all_lwps (void);

/* Set resumed LWPs running again, as they were before being stopped
   with linux_stop_and_wait_all_lwps.  (LWPS with pending events are
   left stopped.)  */
extern void linux_unstop_all_lwps (void);
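
/* Example (illustrative only): the two functions above bracket work
   that needs all LWPs quiescent, e.g.:

     linux_stop_and_wait_all_lwps ();
     ... walk the inferior's thread list ...
     linux_unstop_all_lwps ();
*/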
/* Iterator function for lin-lwp's lwp list.  */
struct lwp_info *iterate_over_lwps (ptid_t filter,
				    int (*callback) (struct lwp_info *,
						     void *),
				    void *data);
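
/* Example (illustrative only; minus_one_ptid is GDB's "match
   anything" wildcard): counting stopped LWPs.  A non-zero return from
   the callback ends the iteration and makes iterate_over_lwps return
   that LWP; returning zero keeps going.

     static int
     count_stopped_callback (struct lwp_info *lp, void *data)
     {
       int *count = (int *) data;

       if (lp->stopped)
         (*count)++;
       return 0;
     }

     int n = 0;
     iterate_over_lwps (minus_one_ptid, count_stopped_callback, &n);
*/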
/* Create a prototype generic GNU/Linux target.  The client can
   override it with local methods.  */
struct target_ops *linux_target (void);

/* Create a generic GNU/Linux target using traditional
   ptrace register access.  */
struct target_ops *
linux_trad_target (CORE_ADDR (*register_u_offset)(struct gdbarch *, int, int));

/* Register the customized GNU/Linux target.  This should be used
   instead of calling add_target directly.  */
void linux_nat_add_target (struct target_ops *);

/* Register a method to call whenever a new thread is attached.  */
void linux_nat_set_new_thread (struct target_ops *, void (*) (struct lwp_info *));

/* Register a method to call whenever a new fork is attached.  */
typedef void (linux_nat_new_fork_ftype) (struct lwp_info *parent,
					 pid_t child_pid);
void linux_nat_set_new_fork (struct target_ops *ops,
			     linux_nat_new_fork_ftype *fn);

/* Register a method to call whenever a process is killed or
   detached.  */
typedef void (linux_nat_forget_process_ftype) (pid_t pid);
void linux_nat_set_forget_process (struct target_ops *ops,
				   linux_nat_forget_process_ftype *fn);

/* Call the method registered with the function above.  PID is the
   process to forget about.  */
void linux_nat_forget_process (pid_t pid);

/* Register a method that converts a siginfo object between the layout
   that ptrace returns, and the layout in the architecture of the
   inferior.  */
void linux_nat_set_siginfo_fixup (struct target_ops *,
				  int (*) (siginfo_t *,
					   gdb_byte *,
					   int));

/* Register a method to call prior to resuming a thread.  */
void linux_nat_set_prepare_to_resume (struct target_ops *,
				      void (*) (struct lwp_info *));

/* Update linux-nat internal state when changing from one fork
   to another.  */
void linux_nat_switch_fork (ptid_t new_ptid);

/* Store the saved siginfo associated with PTID in *SIGINFO.
   Return 1 if it was retrieved successfully, 0 otherwise (*SIGINFO is
   uninitialized in such case).  */
int linux_nat_get_siginfo (ptid_t ptid, siginfo_t *siginfo);

/* Set alternative SIGTRAP-like events recognizer.  */
void linux_nat_set_status_is_event (struct target_ops *t,
				    int (*status_is_event) (int status));