\input texinfo @c -*- texinfo -*-
@setfilename gdbint.info
@include gdb-cfg.texi
@dircategory Software development
@direntry
* Gdb-Internals: (gdbint). The GNU debugger's internals.
@end direntry
@ifinfo
This file documents the internals of the GNU debugger @value{GDBN}.
Copyright (C) 1990, 1991, 1992, 1993, 1994, 1996, 1998, 1999, 2000, 2001,
2002, 2003, 2004, 2005, 2006
Free Software Foundation, Inc.
Contributed by Cygnus Solutions. Written by John Gilmore.
Second Edition by Stan Shebs.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.1 or
any later version published by the Free Software Foundation; with no
Invariant Sections, with no Front-Cover Texts, and with no Back-Cover
Texts. A copy of the license is included in the section entitled ``GNU
Free Documentation License''.
@end ifinfo
@setchapternewpage off
@settitle @value{GDBN} Internals
@syncodeindex fn cp
@syncodeindex vr cp
@titlepage
@title @value{GDBN} Internals
@subtitle{A guide to the internals of the GNU debugger}
@author John Gilmore
@author Cygnus Solutions
@author Second Edition:
@author Stan Shebs
@author Cygnus Solutions
@page
@tex
\def\$#1${{#1}} % Kluge: collect RCS revision info without $...$
\xdef\manvers{\$Revision$} % For use in headers, footers too
{\parskip=0pt
\hfill Cygnus Solutions\par
\hfill \manvers\par
\hfill \TeX{}info \texinfoversion\par
}
@end tex
@vskip 0pt plus 1filll
Copyright @copyright{} 1990,1991,1992,1993,1994,1996,1998,1999,2000,2001,
2002, 2003, 2004, 2005, 2006 Free Software Foundation, Inc.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.1 or
any later version published by the Free Software Foundation; with no
Invariant Sections, with no Front-Cover Texts, and with no Back-Cover
Texts. A copy of the license is included in the section entitled ``GNU
Free Documentation License''.
@end titlepage
@contents
@node Top
@c Perhaps this should be the title of the document (but only for info,
@c not for TeX). Existing GNU manuals seem inconsistent on this point.
@top Scope of this Document
This document describes the internals of the GNU debugger,
@value{GDBN}. It includes a description of @value{GDBN}'s key
algorithms and operations, as well as the mechanisms that adapt
@value{GDBN} to specific hosts and targets.
@menu
* Requirements::
* Overall Structure::
* Algorithms::
* User Interface::
* libgdb::
* Symbol Handling::
* Language Support::
* Host Definition::
* Target Architecture Definition::
* Target Descriptions::
* Target Vector Definition::
* Native Debugging::
* Support Libraries::
* Coding::
* Porting GDB::
* Versions and Branches::
* Start of New Year Procedure::
* Releasing GDB::
* Testsuite::
* Hints::
* GDB Observers:: @value{GDBN} Currently available observers
* GNU Free Documentation License:: The license for this documentation
* Index::
@end menu
@node Requirements
@chapter Requirements
@cindex requirements for @value{GDBN}
Before diving into the internals, you should understand the formal
requirements and other expectations for @value{GDBN}. Although some
of these may seem obvious, there have been proposals for @value{GDBN}
that have run counter to these requirements.
First of all, @value{GDBN} is a debugger. It's not designed to be a
front panel for embedded systems. It's not a text editor. It's not a
shell. It's not a programming environment.
@value{GDBN} is an interactive tool. Although a batch mode is
available, @value{GDBN}'s primary role is to interact with a human
programmer.
@value{GDBN} should be responsive to the user. A programmer hot on
the trail of a nasty bug, and operating under a looming deadline, is
going to be very impatient of everything, including the response time
to debugger commands.
@value{GDBN} should be relatively permissive, such as for expressions.
While the compiler should be picky (or have the option to be made
picky), since source code lives for a long time usually, the
programmer doing debugging shouldn't be spending time figuring out
how to mollify the debugger.
@value{GDBN} will be called upon to deal with really large programs.
Executable sizes of 50 to 100 megabytes occur regularly, and we've
heard reports of programs approaching 1 gigabyte in size.
@value{GDBN} should be able to run everywhere. No other debugger is
available for even half as many configurations as @value{GDBN}
supports.
@node Overall Structure
@chapter Overall Structure
@value{GDBN} consists of three major subsystems: user interface,
symbol handling (the @dfn{symbol side}), and target system handling (the
@dfn{target side}).
The user interface consists of several actual interfaces, plus
supporting code.
The symbol side consists of object file readers, debugging info
interpreters, symbol table management, source language expression
parsing, and type and value printing.
The target side consists of execution control, stack frame analysis, and
physical target manipulation.
The target side/symbol side division is not formal, and there are a
number of exceptions. For instance, core file support involves symbolic
elements (the basic core file reader is in BFD) and target elements (it
supplies the contents of memory and the values of registers). Instead,
this division is useful for understanding how the minor subsystems
should fit together.
@section The Symbol Side
The symbolic side of @value{GDBN} can be thought of as ``everything
you can do in @value{GDBN} without having a live program running''.
For instance, you can look at the types of variables, and evaluate
many kinds of expressions.
@section The Target Side
The target side of @value{GDBN} is the ``bits and bytes manipulator''.
Although it may make reference to symbolic info here and there, most
of the target side will run with only a stripped executable
available---or even no executable at all, in remote debugging cases.
Operations such as disassembly, stack frame crawls, and register
display are able to work with no symbolic info at all. In some cases,
such as disassembly, @value{GDBN} will use symbolic info to present addresses
relative to symbols rather than as raw numbers, but it will work either
way.
@section Configurations
@cindex host
@cindex target
@dfn{Host} refers to attributes of the system where @value{GDBN} runs.
@dfn{Target} refers to the system where the program being debugged
executes. In most cases they are the same machine, in which case a
third type of attributes, @dfn{Native} attributes, comes into play.
Defines and include files needed to build on the host are host support.
Examples are tty support, system defined types, host byte order, host
float format.
Defines and information needed to handle the target format are target
dependent. Examples are the stack frame format, instruction set,
breakpoint instruction, registers, and how to set up and tear down the stack
to call a function.
Information that is only needed when the host and target are the same,
is native dependent. One example is Unix child process support; if the
host and target are not the same, doing a fork to start the target
process is a bad idea. The various macros needed for finding the
registers in the @code{upage}, running @code{ptrace}, and such are all
in the native-dependent files.
Another example of native-dependent code is support for features that
are really part of the target environment, but which require
@code{#include} files that are only available on the host system. Core
file handling and @code{setjmp} handling are two common cases.
When you want to make @value{GDBN} work ``native'' on a particular machine, you
have to include all three kinds of information.
@section Source Tree Structure
@cindex @value{GDBN} source tree structure
The @value{GDBN} source directory has a mostly flat structure---there
are only a few subdirectories. A file's name usually gives a hint as
to what it does; for example, @file{stabsread.c} reads stabs,
@file{dwarf2read.c} reads @sc{dwarf} 2, etc.
Files that are related to some common task have names that share
common substrings. For example, @file{*-thread.c} files deal with
debugging threads on various platforms; @file{*read.c} files deal with
reading various kinds of symbol and object files; @file{inf*.c} files
deal with direct control of the @dfn{inferior program} (@value{GDBN}
parlance for the program being debugged).
There are several dozen files in the @file{*-tdep.c} family.
@samp{tdep} stands for @dfn{target-dependent code}---each of these
files implements debug support for a specific target architecture
(sparc, mips, etc). Usually, only one of these will be used in a
specific @value{GDBN} configuration (sometimes two, closely related).
Similarly, there are many @file{*-nat.c} files, each one for native
debugging on a specific system (e.g., @file{sparc-linux-nat.c} is for
native debugging of Sparc machines running the Linux kernel).
The few subdirectories of the source tree are:
@table @file
@item cli
Code that implements @dfn{CLI}, the @value{GDBN} Command-Line
Interpreter. @xref{User Interface, Command Interpreter}.
@item gdbserver
Code for the @value{GDBN} remote server.
@item gdbtk
Code for Insight, the @value{GDBN} TK-based GUI front-end.
@item mi
Code for @dfn{GDB/MI}, the @value{GDBN} Machine Interface interpreter.
@item signals
Target signal translation code.
@item tui
Code for @dfn{TUI}, the @value{GDBN} Text-mode full-screen User
Interface. @xref{User Interface, TUI}.
@end table
@node Algorithms
@chapter Algorithms
@cindex algorithms
@value{GDBN} uses a number of debugging-specific algorithms. They are
often not very complicated, but get lost in the thicket of special
cases and real-world issues. This chapter describes the basic
algorithms and mentions some of the specific target definitions that
they use.
@section Frames
@cindex frame
@cindex call stack frame
A frame is a construct that @value{GDBN} uses to keep track of calling
and called functions.
@cindex frame, unwind
@value{GDBN}'s frame model, a fresh design, was implemented with the
need to support @sc{dwarf}'s Call Frame Information in mind. In fact,
the term ``unwind'' is taken directly from that specification.
Developers wishing to learn more about unwinders, are encouraged to
read the @sc{dwarf} specification.
@findex frame_register_unwind
@findex get_frame_register
@value{GDBN}'s model is that you find a frame's registers by
``unwinding'' them from the next younger frame. That is,
@code{get_frame_register}, which returns the value of a register in
frame #1 (the next-to-youngest frame), is implemented by calling
@code{frame_register_unwind} on frame #0 (the youngest frame). But then the
obvious question is: how do you access the registers of the youngest
frame itself?
@cindex sentinel frame
@findex get_frame_type
@vindex SENTINEL_FRAME
To answer this question, GDB has the @dfn{sentinel} frame, the
``-1st'' frame. Unwinding registers from the sentinel frame gives you
the current values of the youngest real frame's registers. If @var{f}
is a sentinel frame, then @code{get_frame_type (@var{f}) ==
SENTINEL_FRAME}.
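As a conceptual sketch of this model (the real prototypes in
@file{frame.h} take more arguments, and @code{next_younger_frame} is
invented here purely for illustration), reading a register out of a
frame amounts to asking the next younger frame to unwind it:
@smallexample
/* Conceptual sketch only; this is not the real frame.h interface.  */
static void
read_frame_register (struct frame_info *this_frame, int regnum,
                     gdb_byte *buf)
@{
  /* The values the registers held while THIS_FRAME was executing are
     found by unwinding them from the next younger frame.  For frame
     #0 that younger frame is the sentinel frame, whose unwinder just
     reports the live register values.  */
  struct frame_info *younger = next_younger_frame (this_frame);
  frame_register_unwind (younger, regnum, buf);
@}
@end smallexample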
@section Prologue Analysis
@cindex prologue analysis
@cindex call frame information
@cindex CFI (call frame information)
To produce a backtrace and allow the user to manipulate older frames'
variables and arguments, @value{GDBN} needs to find the base addresses
of older frames, and discover where those frames' registers have been
saved. Since a frame's ``callee-saves'' registers get saved by
younger frames if and when they're reused, a frame's registers may be
scattered unpredictably across younger frames. This means that
changing the value of a register-allocated variable in an older frame
may actually entail writing to a save slot in some younger frame.
Modern versions of GCC emit Dwarf call frame information (``CFI''),
which describes how to find frame base addresses and saved registers.
But CFI is not always available, so as a fallback @value{GDBN} uses a
technique called @dfn{prologue analysis} to find frame sizes and saved
registers. A prologue analyzer disassembles the function's machine
code starting from its entry point, and looks for instructions that
allocate frame space, save the stack pointer in a frame pointer
register, save registers, and so on. Obviously, this can't be done
accurately in general, but it's tractable to do well enough to be very
helpful. Prologue analysis predates the GNU toolchain's support for
CFI; at one time, prologue analysis was the only mechanism
@value{GDBN} used for stack unwinding at all, when the function
calling conventions didn't specify a fixed frame layout.
In the olden days, function prologues were generated by hand-written,
target-specific code in GCC, and treated as opaque and untouchable by
optimizers. Looking at this code, it was usually straightforward to
write a prologue analyzer for @value{GDBN} that would accurately
understand all the prologues GCC would generate. However, over time
GCC became more aggressive about instruction scheduling, and began to
understand more about the semantics of the prologue instructions
themselves; in response, @value{GDBN}'s analyzers became more complex
and fragile. Keeping the prologue analyzers working as GCC (and the
instruction sets themselves) evolved became a substantial task.
@cindex @file{prologue-value.c}
@cindex abstract interpretation of function prologues
@cindex pseudo-evaluation of function prologues
To try to address this problem, the code in @file{prologue-value.h}
and @file{prologue-value.c} provides a general framework for writing
prologue analyzers that are simpler and more robust than ad-hoc
analyzers. When we analyze a prologue using the prologue-value
framework, we're really doing ``abstract interpretation'' or
``pseudo-evaluation'': running the function's code in simulation, but
using conservative approximations of the values registers and memory
would hold when the code actually runs. For example, if our function
starts with the instruction:
@example
addi r1, 42 # add 42 to r1
@end example
@noindent
we don't know exactly what value will be in @code{r1} after executing
this instruction, but we do know it'll be 42 greater than its original
value.
If we then see an instruction like:
@example
addi r1, 22 # add 22 to r1
@end example
@noindent
we still don't know what @code{r1}'s value is, but again, we can say
it is now 64 greater than its original value.
If the next instruction were:
@example
mov r2, r1 # set r2 to r1's value
@end example
@noindent
then we can say that @code{r2}'s value is now the original value of
@code{r1} plus 64.
It's common for prologues to save registers on the stack, so we'll
need to track the values of stack frame slots, as well as the
registers. So after an instruction like this:
@example
mov (fp+4), r2 # store r2 in the stack slot at fp+4
@end example
@noindent
then we'd know that the stack slot four bytes above the frame pointer
holds the original value of @code{r1} plus 64.
And so on.
Of course, this can only go so far before it gets unreasonable. If we
wanted to be able to say anything about the value of @code{r1} after
the instruction:
@example
xor r1, r3 # exclusive-or r1 and r3, place result in r1
@end example
@noindent
then things would get pretty complex. But remember, we're just doing
a conservative approximation; if exclusive-or instructions aren't
relevant to prologues, we can just say @code{r1}'s value is now
``unknown''. We can ignore things that are too complex, if that loss of
information is acceptable for our application.
So when we say ``conservative approximation'' here, what we mean is an
approximation that is either accurate, or marked ``unknown'', but
never inaccurate.
Using this framework, a prologue analyzer is simply an interpreter for
machine code, but one that uses conservative approximations for the
contents of registers and memory instead of actual values. Starting
from the function's entry point, you simulate instructions up to the
current PC, or an instruction that you don't know how to simulate.
Now you can examine the state of the registers and stack slots you've
kept track of.
@itemize @bullet
@item
To see how large your stack frame is, just check the value of the
stack pointer register; if it's the original value of the SP
minus a constant, then that constant is the stack frame's size.
If the SP's value has been marked as ``unknown'', then that means
the prologue has done something too complex for us to track, and
we don't know the frame size.
@item
To see where we've saved the previous frame's registers, we just
search the values we've tracked --- stack slots, usually, but
registers, too, if you want --- for something equal to the register's
original value. If the calling conventions suggest a standard place
to save a given register, then we can check there first, but really,
anything that will get us back the original value will probably work.
@end itemize
This does take some work. But prologue analyzers aren't
quick-and-simple pattern matching to recognize a few fixed prologue
forms any more; they're big, hairy functions. Along with inferior
function calls, prologue analysis accounts for a substantial portion
of the time needed to stabilize a @value{GDBN} port. So it's
worthwhile to look for an approach that will be easier to understand
and maintain. In the approach described above:
@itemize @bullet
@item
It's easier to see that the analyzer is correct: you just see
whether the analyzer properly (albeit conservatively) simulates
the effect of each instruction.
@item
It's easier to extend the analyzer: you can add support for new
instructions, and know that you haven't broken anything that
wasn't already broken before.
@item
It's orthogonal: to gather new information, you don't need to
complicate the code for each instruction. As long as your domain
of conservative values is already detailed enough to tell you
what you need, then all the existing instruction simulations are
already gathering the right data for you.
@end itemize
The file @file{prologue-value.h} contains detailed comments explaining
the framework and how to use it.
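To give a flavor of the abstraction (the real interface in
@file{prologue-value.h} differs in names and detail, so treat this as
a sketch), an abstract value can be represented as ``unknown'', ``a
constant'', or ``register @var{r}'s original value plus a constant'':
@smallexample
/* Sketch only; see prologue-value.h for the real interface.  */
struct abs_value
@{
  enum @{ AV_UNKNOWN, AV_CONSTANT, AV_REGISTER @} kind;
  int reg;        /* Which register, when kind == AV_REGISTER.  */
  CORE_ADDR k;    /* The constant, or the offset added to REG.  */
@};

/* Simulating "addi rN, K" just folds K into the offset, whatever the
   original value of rN turns out to be at run time.  */
static struct abs_value
abs_add_constant (struct abs_value v, CORE_ADDR k)
@{
  if (v.kind == AV_UNKNOWN)
    return v;                   /* Unknown stays unknown.  */
  v.k += k;
  return v;
@}
@end smallexample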
@section Breakpoint Handling
@cindex breakpoints
In general, a breakpoint is a user-designated location in the program
where the user wants to regain control if program execution ever reaches
that location.
There are two main ways to implement breakpoints; either as ``hardware''
breakpoints or as ``software'' breakpoints.
@cindex hardware breakpoints
@cindex program counter
Hardware breakpoints are sometimes available as a builtin debugging
feature with some chips. Typically these work by having dedicated
registers into which the breakpoint address may be stored. If the PC
(shorthand for @dfn{program counter}) ever matches a value in a
breakpoint register, the CPU raises an exception and reports it to
@value{GDBN}.
Another possibility is when an emulator is in use; many emulators
include circuitry that watches the address lines coming out from the
processor, and forces it to stop if the address matches a breakpoint's
address.
A third possibility is that the target already has the ability to do
breakpoints somehow; for instance, a ROM monitor may do its own
software breakpoints. So although these are not literally ``hardware
breakpoints'', from @value{GDBN}'s point of view they work the same;
@value{GDBN} need not do anything more than set the breakpoint and wait
for something to happen.
Since they depend on hardware resources, hardware breakpoints may be
limited in number; when the user asks for more, @value{GDBN} will
start trying to set software breakpoints. (On some architectures,
notably the 32-bit x86 platforms, @value{GDBN} cannot always know
whether there are enough hardware resources to insert all the hardware
breakpoints and watchpoints. On those platforms, @value{GDBN} prints
an error message only when the program being debugged is continued.)
@cindex software breakpoints
Software breakpoints require @value{GDBN} to do somewhat more work.
The basic theory is that @value{GDBN} will replace a program
instruction with a trap, illegal divide, or some other instruction
that will cause an exception, and then when it's encountered,
@value{GDBN} will take the exception and stop the program. When the
user says to continue, @value{GDBN} will restore the original
instruction, single-step, re-insert the trap, and continue on.
Since it literally overwrites the program being tested, the program area
must be writable, so this technique won't work on programs in ROM. It
can also distort the behavior of programs that examine themselves,
although such a situation would be highly unusual.
Also, the software breakpoint instruction should be the smallest size of
instruction, so it doesn't overwrite an instruction that might be a jump
target, and cause disaster when the program jumps into the middle of the
breakpoint instruction. (Strictly speaking, the breakpoint must be no
larger than the smallest interval between instructions that may be jump
targets; perhaps there is an architecture where only even-numbered
instructions may be jumped to.) Note that it's possible for an instruction
set not to have any instructions usable for a software breakpoint,
although in practice only the ARC has failed to define such an
instruction.
@findex BREAKPOINT
The basic definition of the software breakpoint is the macro
@code{BREAKPOINT}.
Basic breakpoint object handling is in @file{breakpoint.c}. However,
much of the interesting breakpoint action is in @file{infrun.c}.
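The continue-over-a-software-breakpoint sequence described earlier can
be sketched as follows; every function name here is invented for
illustration, and the real logic in @file{infrun.c} has to cope with
threads, signals, and many other complications:
@smallexample
/* Sketch only; not the actual infrun.c code.  */
static void
continue_over_sw_breakpoint (struct breakpoint *b)
@{
  remove_sw_breakpoint (b);     /* Put the original instruction back.  */
  single_step_inferior ();      /* Execute just that one instruction.  */
  insert_sw_breakpoint (b);     /* Re-plant the trap instruction.  */
  resume_inferior ();           /* Let the program run on.  */
@}
@end smallexample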
@table @code
@cindex insert or remove software breakpoint
@findex target_remove_breakpoint
@findex target_insert_breakpoint
@item target_remove_breakpoint (@var{bp_tgt})
@itemx target_insert_breakpoint (@var{bp_tgt})
Insert or remove a software breakpoint at address
@code{@var{bp_tgt}->placed_address}. Returns zero for success,
non-zero for failure. On input, @var{bp_tgt} contains the address of the
breakpoint, and is otherwise initialized to zero. The fields of the
@code{struct bp_target_info} pointed to by @var{bp_tgt} are updated
to contain other information about the breakpoint on output. The field
@code{placed_address} may be updated if the breakpoint was placed at a
related address; the field @code{shadow_contents} contains the real
contents of the bytes where the breakpoint has been inserted,
if reading memory would return the breakpoint instead of the
underlying memory; the field @code{shadow_len} is the length of
memory cached in @code{shadow_contents}, if any; and the field
@code{placed_size} is optionally set and used by the target, if
it could differ from @code{shadow_len}.
For example, the remote target @samp{Z0} packet does not require
shadowing memory, so @code{shadow_len} is left at zero. However,
the length reported by @code{BREAKPOINT_FROM_PC} is cached in
@code{placed_size}, so that a matching @samp{z0} packet can be
used to remove the breakpoint.
@cindex insert or remove hardware breakpoint
@findex target_remove_hw_breakpoint
@findex target_insert_hw_breakpoint
@item target_remove_hw_breakpoint (@var{bp_tgt})
@itemx target_insert_hw_breakpoint (@var{bp_tgt})
Insert or remove a hardware-assisted breakpoint at address
@code{@var{bp_tgt}->placed_address}. Returns zero for success,
non-zero for failure. See @code{target_insert_breakpoint} for
a description of the @code{struct bp_target_info} pointed to by
@var{bp_tgt}; the @code{shadow_contents} and
@code{shadow_len} members are not used for hardware breakpoints,
but @code{placed_size} may be.
@end table
@section Single Stepping
@section Signal Handling
@section Thread Handling
@section Inferior Function Calls
@section Longjmp Support
@cindex @code{longjmp} debugging
@value{GDBN} has support for figuring out that the target is doing a
@code{longjmp} and for stopping at the target of the jump, if we are
stepping. This is done with a few specialized internal breakpoints,
which are visible in the output of the @samp{maint info breakpoint}
command.
@findex GET_LONGJMP_TARGET
To make this work, you need to define a macro called
@code{GET_LONGJMP_TARGET}, which will examine the @code{jmp_buf}
structure and extract the longjmp target address. Since @code{jmp_buf}
is target specific, you will need to define it in the appropriate
@file{tm-@var{target}.h} file. Look in @file{tm-sun4os4.h} and
@file{sparc-tdep.c} for examples of how to do this.
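As an illustration only, a hypothetical port's @file{tm-@var{xyz}.h}
might forward the macro to a function in the corresponding
@file{@var{xyz}-tdep.c} that knows the target's @code{jmp_buf} layout:
@smallexample
/* In tm-xyz.h (names invented for illustration).  Store the longjmp
   target in *ADDR and return non-zero if it could be determined.  */
#define GET_LONGJMP_TARGET(addr)  xyz_get_longjmp_target (addr)

/* In xyz-tdep.c: read the saved PC slot out of the inferior's
   jmp_buf.  */
extern int xyz_get_longjmp_target (CORE_ADDR *addr);
@end smallexample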
@section Watchpoints
@cindex watchpoints
Watchpoints are a special kind of breakpoint (@pxref{Algorithms,
breakpoints}) which break when data is accessed rather than when some
instruction is executed. When you have data which changes without
your knowing which code changes it, watchpoints are the silver bullet
to hunt down and kill such bugs.
@cindex hardware watchpoints
@cindex software watchpoints
Watchpoints can be either hardware-assisted or not; the latter type is
known as ``software watchpoints.'' @value{GDBN} always uses
hardware-assisted watchpoints if they are available, and falls back on
software watchpoints otherwise. Typical situations where @value{GDBN}
will use software watchpoints are:
@itemize @bullet
@item
The watched memory region is too large for the underlying hardware
watchpoint support. For example, each x86 debug register can watch up
to 4 bytes of memory, so trying to watch data structures whose size is
more than 16 bytes will cause @value{GDBN} to use software
watchpoints.
@item
The value of the expression to be watched depends on data held in
registers (as opposed to memory).
@item
Too many different watchpoints requested. (On some architectures,
this situation is impossible to detect until the debugged program is
resumed.) Note that x86 debug registers are used both for hardware
breakpoints and for watchpoints, so setting too many hardware
breakpoints might cause watchpoint insertion to fail.
@item
No hardware-assisted watchpoints provided by the target
implementation.
@end itemize
Software watchpoints are very slow, since @value{GDBN} needs to
single-step the program being debugged and test the value of the
watched expression(s) after each instruction. The rest of this
section is mostly irrelevant for software watchpoints.
When the inferior stops, @value{GDBN} tries to establish, among other
possible reasons, whether it stopped due to a watchpoint being hit.
For a data-write watchpoint, it does so by evaluating, for each
watchpoint, the expression whose value is being watched, and testing
whether the watched value has changed. For data-read and data-access
watchpoints, @value{GDBN} needs the target to supply a primitive that
returns the address of the data that was accessed or read (see the
description of @code{target_stopped_data_address} below): if this
primitive returns a valid address, @value{GDBN} infers that a
watchpoint triggered if it watches an expression whose evaluation uses
that address.
@value{GDBN} uses several macros and primitives to support hardware
watchpoints:
@table @code
@findex TARGET_HAS_HARDWARE_WATCHPOINTS
@item TARGET_HAS_HARDWARE_WATCHPOINTS
If defined, the target supports hardware watchpoints.
@findex TARGET_CAN_USE_HARDWARE_WATCHPOINT
@item TARGET_CAN_USE_HARDWARE_WATCHPOINT (@var{type}, @var{count}, @var{other})
Return the number of hardware watchpoints of type @var{type} that are
possible to be set. The value is positive if @var{count} watchpoints
of this type can be set, zero if setting watchpoints of this type is
not supported, and negative if @var{count} is more than the maximum
number of watchpoints of type @var{type} that can be set. @var{other}
is non-zero if other types of watchpoints are currently enabled (there
are architectures which cannot set watchpoints of different types at
the same time).
@findex TARGET_REGION_OK_FOR_HW_WATCHPOINT
@item TARGET_REGION_OK_FOR_HW_WATCHPOINT (@var{addr}, @var{len})
Return non-zero if hardware watchpoints can be used to watch a region
whose address is @var{addr} and whose length in bytes is @var{len}.
@cindex insert or remove hardware watchpoint
@findex target_insert_watchpoint
@findex target_remove_watchpoint
@item target_insert_watchpoint (@var{addr}, @var{len}, @var{type})
@itemx target_remove_watchpoint (@var{addr}, @var{len}, @var{type})
Insert or remove a hardware watchpoint starting at @var{addr}, for
@var{len} bytes. @var{type} is the watchpoint type, one of the
possible values of the enumerated data type @code{target_hw_bp_type},
defined by @file{breakpoint.h} as follows:
@smallexample
enum target_hw_bp_type
@{
hw_write = 0, /* Common (write) HW watchpoint */
hw_read = 1, /* Read HW watchpoint */
hw_access = 2, /* Access (read or write) HW watchpoint */
hw_execute = 3 /* Execute HW breakpoint */
@};
@end smallexample
@noindent
These two macros should return 0 for success, non-zero for failure.
@findex target_stopped_data_address
@item target_stopped_data_address (@var{addr_p})
If the inferior has some watchpoint that triggered, place the address
associated with the watchpoint at the location pointed to by
@var{addr_p} and return non-zero. Otherwise, return zero. Note that
this primitive is used by @value{GDBN} only on targets that support
data-read or data-access type watchpoints, so targets that have
support only for data-write watchpoints need not implement this
primitive.
@findex HAVE_STEPPABLE_WATCHPOINT
@item HAVE_STEPPABLE_WATCHPOINT
If defined to a non-zero value, it is not necessary to disable a
watchpoint to step over it.
@findex HAVE_NONSTEPPABLE_WATCHPOINT
@item HAVE_NONSTEPPABLE_WATCHPOINT
If defined to a non-zero value, @value{GDBN} should disable a
watchpoint to step the inferior over it.
@findex HAVE_CONTINUABLE_WATCHPOINT
@item HAVE_CONTINUABLE_WATCHPOINT
If defined to a non-zero value, it is possible to continue the
inferior after a watchpoint has been hit.
@findex CANNOT_STEP_HW_WATCHPOINTS
@item CANNOT_STEP_HW_WATCHPOINTS
If this is defined to a non-zero value, @value{GDBN} will remove all
watchpoints before stepping the inferior.
@findex STOPPED_BY_WATCHPOINT
@item STOPPED_BY_WATCHPOINT (@var{wait_status})
Return non-zero if stopped by a watchpoint. @var{wait_status} is of
the type @code{struct target_waitstatus}, defined by @file{target.h}.
Normally, this macro is defined to invoke the function pointed to by
the @code{to_stopped_by_watchpoint} member of the structure (of the
type @code{target_ops}, defined on @file{target.h}) that describes the
target-specific operations; @code{to_stopped_by_watchpoint} ignores
the @var{wait_status} argument.
@value{GDBN} does not require the non-zero value returned by
@code{STOPPED_BY_WATCHPOINT} to be 100% correct, so if a target cannot
determine for sure whether the inferior stopped due to a watchpoint,
it could return non-zero ``just in case''.
@end table
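As a sketch of how a native configuration might hook into this
interface, its @file{nm-@var{xyz}.h} header could forward the macros
to functions in the corresponding @file{@var{xyz}-nat.c}; every
@code{xyz_*} name below is invented for illustration:
@smallexample
/* Illustrative only.  */
#define TARGET_HAS_HARDWARE_WATCHPOINTS

#define TARGET_CAN_USE_HARDWARE_WATCHPOINT(type, cnt, other) \
  xyz_can_use_hw_watchpoint (type, cnt, other)

#define TARGET_REGION_OK_FOR_HW_WATCHPOINT(addr, len) \
  xyz_region_ok_for_hw_watchpoint (addr, len)

#define target_insert_watchpoint(addr, len, type) \
  xyz_insert_watchpoint (addr, len, type)
#define target_remove_watchpoint(addr, len, type) \
  xyz_remove_watchpoint (addr, len, type)
@end smallexample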
@subsection x86 Watchpoints
@cindex x86 debug registers
@cindex watchpoints, on x86
The 32-bit Intel x86 (a.k.a.@: ia32) processors feature special debug
registers designed to facilitate debugging. @value{GDBN} provides a
generic library of functions that x86-based ports can use to implement
support for watchpoints and hardware-assisted breakpoints. This
subsection documents the x86 watchpoint facilities in @value{GDBN}.
To use the generic x86 watchpoint support, a port should do the
following:
@itemize @bullet
@findex I386_USE_GENERIC_WATCHPOINTS
@item
Define the macro @code{I386_USE_GENERIC_WATCHPOINTS} somewhere in the
target-dependent headers.
@item
Include the @file{config/i386/nm-i386.h} header file @emph{after}
defining @code{I386_USE_GENERIC_WATCHPOINTS}.
@item
Add @file{i386-nat.o} to the value of the Make variable
@code{NATDEPFILES} (@pxref{Native Debugging, NATDEPFILES}) or
@code{TDEPFILES} (@pxref{Target Architecture Definition, TDEPFILES}).
@item
Provide implementations for the @code{I386_DR_LOW_*} macros described
below. Typically, each macro should call a target-specific function
which does the real work.
@end itemize
The x86 watchpoint support works by maintaining mirror images of the
debug registers. Values are copied between the mirror images and the
real debug registers via a set of macros which each target needs to
provide:
@table @code
@findex I386_DR_LOW_SET_CONTROL
@item I386_DR_LOW_SET_CONTROL (@var{val})
Set the Debug Control (DR7) register to the value @var{val}.
@findex I386_DR_LOW_SET_ADDR
@item I386_DR_LOW_SET_ADDR (@var{idx}, @var{addr})
Put the address @var{addr} into the debug register number @var{idx}.
@findex I386_DR_LOW_RESET_ADDR
@item I386_DR_LOW_RESET_ADDR (@var{idx})
Reset (i.e.@: zero out) the address stored in the debug register
number @var{idx}.
@findex I386_DR_LOW_GET_STATUS
@item I386_DR_LOW_GET_STATUS
Return the value of the Debug Status (DR6) register. This value is
used immediately after it is returned by
@code{I386_DR_LOW_GET_STATUS}, so as to support per-thread status
register values.
@end table
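For instance, a port could forward these macros to functions that
perform the actual debug register accesses, via @code{ptrace} or an
equivalent; the @code{xyz_dr_low_*} names below are invented for
illustration:
@smallexample
/* Illustrative only; the real definitions live in a port's nm-*.h
   and *-nat.c files.  */
#define I386_DR_LOW_SET_CONTROL(val)     xyz_dr_low_set_control (val)
#define I386_DR_LOW_SET_ADDR(idx, addr)  xyz_dr_low_set_addr (idx, addr)
#define I386_DR_LOW_RESET_ADDR(idx)      xyz_dr_low_reset_addr (idx)
#define I386_DR_LOW_GET_STATUS           xyz_dr_low_get_status ()
@end smallexample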
For each one of the 4 debug registers (whose indices are from 0 to 3)
that store addresses, a reference count is maintained by @value{GDBN},
to allow sharing of debug registers by several watchpoints. This
allows users to define several watchpoints that watch the same
expression, but with different conditions and/or commands, without
wasting debug registers which are in short supply. @value{GDBN}
maintains the reference counts internally; targets don't have to do
anything to use this feature.
The x86 debug registers can each watch a region that is 1, 2, or 4
bytes long. The ia32 architecture requires that each watched region
be appropriately aligned: 2-byte region on 2-byte boundary, 4-byte
region on 4-byte boundary. However, the x86 watchpoint support in
@value{GDBN} can watch unaligned regions and regions larger than 4
bytes (up to 16 bytes) by allocating several debug registers to watch
a single region. This allocation of several registers per watched
region is also done automatically, without target code intervention.
The generic x86 watchpoint support provides the following API for the
@value{GDBN}'s application code:
@table @code
@findex i386_region_ok_for_watchpoint
@item i386_region_ok_for_watchpoint (@var{addr}, @var{len})
The macro @code{TARGET_REGION_OK_FOR_HW_WATCHPOINT} is set to call
this function. It counts the number of debug registers required to
watch a given region, and returns a non-zero value if that number is
less than 4, the number of debug registers available to x86
processors.
@findex i386_stopped_data_address
@item i386_stopped_data_address (@var{addr_p})
The target function
@code{target_stopped_data_address} is set to call this function.
This
function examines the breakpoint condition bits in the DR6 Debug
Status register, as returned by the @code{I386_DR_LOW_GET_STATUS}
macro, and returns the address associated with the first bit that is
set in DR6.
@findex i386_stopped_by_watchpoint
@item i386_stopped_by_watchpoint (void)
The macro @code{STOPPED_BY_WATCHPOINT}
is set to call this function. The
argument passed to @code{STOPPED_BY_WATCHPOINT} is ignored. This
function examines the breakpoint condition bits in the DR6 Debug
Status register, as returned by the @code{I386_DR_LOW_GET_STATUS}
macro, and returns true if any bit is set. Otherwise, false is
returned.
@findex i386_insert_watchpoint
@findex i386_remove_watchpoint
@item i386_insert_watchpoint (@var{addr}, @var{len}, @var{type})
@itemx i386_remove_watchpoint (@var{addr}, @var{len}, @var{type})
Insert or remove a watchpoint. The macros
@code{target_insert_watchpoint} and @code{target_remove_watchpoint}
are set to call these functions. @code{i386_insert_watchpoint} first
looks for a debug register which is already set to watch the same
region for the same access types; if found, it just increments the
reference count of that debug register, thus implementing debug
register sharing between watchpoints. If no such register is found,
the function looks for a vacant debug register, sets its mirrored
value to @var{addr}, sets the mirrored value of DR7 Debug Control
register as appropriate for the @var{len} and @var{type} parameters,
and then passes the new values of the debug register and DR7 to the
inferior by calling @code{I386_DR_LOW_SET_ADDR} and
@code{I386_DR_LOW_SET_CONTROL}. If more than one debug register is
required to cover the given region, the above process is repeated for
each debug register.
@code{i386_remove_watchpoint} does the opposite: it resets the address
in the mirrored value of the debug register and its read/write and
length bits in the mirrored value of DR7, then passes these new
values to the inferior via @code{I386_DR_LOW_RESET_ADDR} and
@code{I386_DR_LOW_SET_CONTROL}. If a register is shared by several
watchpoints, each time @code{i386_remove_watchpoint} is called, it
decrements the reference count, and only calls
@code{I386_DR_LOW_RESET_ADDR} and @code{I386_DR_LOW_SET_CONTROL} when
the count goes to zero.
@findex i386_insert_hw_breakpoint
@findex i386_remove_hw_breakpoint
@item i386_insert_hw_breakpoint (@var{bp_tgt})
@itemx i386_remove_hw_breakpoint (@var{bp_tgt})
These functions insert and remove hardware-assisted breakpoints. The
macros @code{target_insert_hw_breakpoint} and
@code{target_remove_hw_breakpoint} are set to call these functions.
The argument is a @code{struct bp_target_info *}, as described in
the documentation for @code{target_insert_breakpoint}.
These functions work like @code{i386_insert_watchpoint} and
@code{i386_remove_watchpoint}, respectively, except that they set up
the debug registers to watch instruction execution, and each
hardware-assisted breakpoint always requires exactly one debug
register.
@findex i386_stopped_by_hwbp
@item i386_stopped_by_hwbp (void)
This function returns non-zero if the inferior has some watchpoint or
hardware breakpoint that triggered. It works like
@code{i386_stopped_data_address}, except that it doesn't record the
address whose watchpoint triggered.
@findex i386_cleanup_dregs
@item i386_cleanup_dregs (void)
This function clears all the reference counts, addresses, and control
bits in the mirror images of the debug registers. It doesn't affect
the actual debug registers in the inferior process.
@end table
@noindent
@strong{Notes:}
@enumerate 1
@item
x86 processors support setting watchpoints on I/O reads or writes.
However, since no target supports this (as of March 2001), and since
@code{enum target_hw_bp_type} doesn't even have an enumeration for I/O
watchpoints, this feature is not yet available to @value{GDBN} running
on x86.
@item
x86 processors can enable watchpoints locally, for the current task
only, or globally, for all the tasks. For each debug register,
there's a bit in the DR7 Debug Control register that determines
whether the associated address is watched locally or globally. The
current implementation of x86 watchpoint support in @value{GDBN}
always sets watchpoints to be locally enabled, since global
watchpoints might interfere with the underlying OS and are probably
unavailable on many platforms.
@end enumerate
@section Checkpoints
@cindex checkpoints
@cindex restart
In the abstract, a checkpoint is a point in the execution history of
the program, which the user may wish to return to at some later time.
Internally, a checkpoint is a saved copy of the program state, including
whatever information is required in order to restore the program to that
state at a later time. This can be expected to include the state of
registers and memory, and may include external state such as the state
of open files and devices.
There are a number of ways in which checkpoints may be implemented
in gdb, e.g.@: as corefiles, as forked processes, and as some opaque
method implemented on the target side.
A corefile can be used to save an image of target memory and register
state, which can in principle be restored later --- but corefiles do
not typically include information about external entities such as
open files. Currently this method is not implemented in gdb.
A forked process can save the state of user memory and registers,
as well as some subset of external (kernel) state. This method
is used to implement checkpoints on Linux, and in principle might
be used on other systems.
Some targets, e.g.@: simulators, might have their own built-in
method for saving checkpoints, and gdb might be able to take
advantage of that capability without necessarily knowing any
details of how it is done.
@section Observing changes in @value{GDBN} internals
@cindex observer pattern interface
@cindex notifications about changes in internals
In order to function properly, several modules need to be notified when
some changes occur in the @value{GDBN} internals. Traditionally, these
modules have relied on several paradigms, the most common ones being
hooks and gdb-events. Unfortunately, none of these paradigms was
versatile enough to become the standard notification mechanism in
@value{GDBN}. The fact that they only supported one ``client'' was also
a strong limitation.
A new paradigm, based on the Observer pattern of the @cite{Design
Patterns} book, has therefore been implemented. The goal was to provide
a new interface overcoming the issues with the notification mechanisms
previously available. This new interface needed to be strongly typed,
easy to extend, and versatile enough to be used as the standard
interface when adding new notifications.
See @ref{GDB Observers} for a brief description of the observers
currently implemented in GDB. The rationale for the current
implementation is also briefly discussed.
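For example, a module that wants to react whenever the inferior stops
could attach an observer from its @code{_initialize_*} routine; the
sketch below assumes the @code{normal_stop} observer documented in
@ref{GDB Observers}, and the module and callback names are invented:
@smallexample
#include "observer.h"

/* Called each time GDB reports a normal stop.  */
static void
my_module_normal_stop (struct bpstats *bs)
@{
  /* React to the stop event here.  */
@}

void
_initialize_my_module (void)
@{
  observer_attach_normal_stop (my_module_normal_stop);
@}
@end smallexample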
@node User Interface
@chapter User Interface
@value{GDBN} has several user interfaces. Although the command-line interface
is the most common and most familiar, there are others.
@section Command Interpreter
@cindex command interpreter
@cindex CLI
The command interpreter in @value{GDBN} is fairly simple. It is designed to
allow for the set of commands to be augmented dynamically, and also
has a recursive subcommand capability, where the first argument to
a command may itself direct a lookup on a different command list.
For instance, the @samp{set} command just starts a lookup on the
@code{setlist} command list, while @samp{set thread} recurses
to the @code{set_thread_cmd_list}.
@findex add_cmd
@findex add_com
To add commands in general, use @code{add_cmd}. @code{add_com} adds
to the main command list, and should be used for commands that belong
on that list. The usual place to add commands is in the
@code{_initialize_@var{xyz}} routines at the ends of most source
files.
@findex add_setshow_cmd
@findex add_setshow_cmd_full
To add paired @samp{set} and @samp{show} commands, use
@code{add_setshow_cmd} or @code{add_setshow_cmd_full}. The former is
a slightly simpler interface which is useful when you don't need to
further modify the new command structures, while the latter returns
the new command structures for manipulation.
@cindex deprecating commands
@findex deprecate_cmd
Before removing commands from the command set it is a good idea to
deprecate them for some time. Use @code{deprecate_cmd} on commands or
aliases to set the deprecated flag. @code{deprecate_cmd} takes a
@code{struct cmd_list_element} as its first argument. You can use the
return value from @code{add_com} or @code{add_cmd} to deprecate the
command immediately after it is created.
The first time a command is used the user will be warned and offered a
replacement (if one exists). Note that the replacement string passed to
@code{deprecate_cmd} should be the full name of the command, i.e., the
entire string the user should type at the command line.
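Putting these pieces together, a typical @code{_initialize_@var{xyz}}
routine might look roughly like the following; the command name and
its handler are invented for illustration:
@smallexample
/* Illustrative only.  */
static void
frobnicate_command (char *args, int from_tty)
@{
  /* Do the work of the command.  */
@}

void
_initialize_frobnicate (void)
@{
  struct cmd_list_element *c;

  c = add_com ("frobnicate", class_obscure, frobnicate_command,
               "Frobnicate the inferior.");
  /* If this command is on its way out, mark it deprecated and name
     the full command the user should type instead.  */
  deprecate_cmd (c, "new-frobnicate");
@}
@end smallexample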
@section UI-Independent Output---the @code{ui_out} Functions
@c This section is based on the documentation written by Fernando
@c Nasser <fnasser@redhat.com>.
@cindex @code{ui_out} functions
The @code{ui_out} functions present an abstraction level for the
@value{GDBN} output code. They hide the specifics of different user
interfaces supported by @value{GDBN}, and thus free the programmer
from the need to write several versions of the same code, one each for
every UI, to produce output.
@subsection Overview and Terminology
In general, execution of each @value{GDBN} command produces some sort
of output, and can even generate an input request.
Output can be generated for the following purposes:
@itemize @bullet
@item
to display a @emph{result} of an operation;
@item
to convey @emph{info} or produce side-effects of a requested
operation;
@item
to provide a @emph{notification} of an asynchronous event (including
progress indication of a prolonged asynchronous operation);
@item
to display @emph{error messages} (including warnings);
@item
to show @emph{debug data};
@item
to @emph{query} or prompt a user for input (a special case).
@end itemize
@noindent
This section mainly concentrates on how to build result output,
although some of it also applies to other kinds of output.
Generation of output that displays the results of an operation
involves one or more of the following:
@itemize @bullet
@item
output of the actual data
@item
formatting the output as appropriate for console output, to make it
easily readable by humans
@item
machine-oriented formatting: a terser format that allows for easy
parsing by programs which read @value{GDBN}'s output
@item
annotation, whose purpose is to help legacy GUIs to identify interesting
parts in the output
@end itemize
The @code{ui_out} routines take care of the first three aspects.
Annotations are provided by separate annotation routines. Note that use
of annotations for an interface between a GUI and @value{GDBN} is
deprecated.
Output can be in the form of a single item, which we call a @dfn{field};
a @dfn{list} consisting of identical fields; a @dfn{tuple} consisting of
non-identical fields; or a @dfn{table}, which is a tuple consisting of a
header and a body. In a BNF-like form:
@table @code
@item <table> @expansion{}
@code{<header> <body>}
@item <header> @expansion{}
@code{@{ <column> @}}
@item <column> @expansion{}
@code{<width> <alignment> <title>}
@item <body> @expansion{}
@code{@{<row>@}}
@end table
@subsection General Conventions
Most @code{ui_out} routines are of type @code{void}, the exceptions are
@code{ui_out_stream_new} (which returns a pointer to the newly created
object) and the @code{make_cleanup} routines.
The first parameter is always the @code{ui_out} vector object, a pointer
to a @code{struct ui_out}.
The @var{format} parameter is like that in the @code{printf} family
of functions. When it is present, there must also be a variable list
of arguments sufficient to satisfy the @code{%} specifiers in the
supplied format.
When a character string argument is not used in a @code{ui_out} function
call, a @code{NULL} pointer has to be supplied instead.
@subsection Table, Tuple and List Functions
@cindex list output functions
@cindex table output functions
@cindex tuple output functions
This section introduces @code{ui_out} routines for building lists,
tuples and tables. The routines to output the actual data items
(fields) are presented in the next section.
To recap: A @dfn{tuple} is a sequence of @dfn{fields}, each field
containing information about an object; a @dfn{list} is a sequence of
fields where each field describes an identical object.
Use the @dfn{table} functions when your output consists of a list of
rows (tuples) and the console output should include a heading. Use this
even when you are listing just one object but you still want the header.
@cindex nesting level in @code{ui_out} functions
Tables can not be nested. Tuples and lists can be nested up to a
maximum of five levels.
The overall structure of the table output code is something like this:
@smallexample
ui_out_table_begin
ui_out_table_header
@dots{}
ui_out_table_body
ui_out_tuple_begin
ui_out_field_*
@dots{}
ui_out_tuple_end
@dots{}
ui_out_table_end
@end smallexample
Here is the description of table-, tuple- and list-related @code{ui_out}
functions:
@deftypefun void ui_out_table_begin (struct ui_out *@var{uiout}, int @var{nbrofcols}, int @var{nr_rows}, const char *@var{tblid})
The function @code{ui_out_table_begin} marks the beginning of the output
of a table. It should always be called before any other @code{ui_out}
function for a given table. @var{nbrofcols} is the number of columns in
the table. @var{nr_rows} is the number of rows in the table.
@var{tblid} is an optional string identifying the table. The string
pointed to by @var{tblid} is copied by the implementation of
@code{ui_out_table_begin}, so the application can free the string if it
was @code{malloc}ed.
The companion function @code{ui_out_table_end}, described below, marks
the end of the table's output.
@end deftypefun
@deftypefun void ui_out_table_header (struct ui_out *@var{uiout}, int @var{width}, enum ui_align @var{alignment}, const char *@var{colhdr})
@code{ui_out_table_header} provides the header information for a single
table column. You call this function several times, one each for every
column of the table, after @code{ui_out_table_begin}, but before
@code{ui_out_table_body}.
The value of @var{width} gives the column width in characters. The
value of @var{alignment} is one of @code{left}, @code{center}, and
@code{right}, and it specifies how to align the header: left-justify,
center, or right-justify it. @var{colhdr} points to a string that
specifies the column header; the implementation copies that string, so
column header strings in @code{malloc}ed storage can be freed after the
call.
@end deftypefun
@deftypefun void ui_out_table_body (struct ui_out *@var{uiout})
This function delimits the table header from the table body.
@end deftypefun
@deftypefun void ui_out_table_end (struct ui_out *@var{uiout})
This function signals the end of a table's output. It should be called
after the table body has been produced by the list and field output
functions.
There should be exactly one call to @code{ui_out_table_end} for each
call to @code{ui_out_table_begin}, otherwise the @code{ui_out} functions
will signal an internal error.
@end deftypefun
The output of the tuples that represent the table rows must follow the
call to @code{ui_out_table_body} and precede the call to
@code{ui_out_table_end}. You build a tuple by calling
@code{ui_out_tuple_begin} and @code{ui_out_tuple_end}, with suitable
calls to functions which actually output fields between them.
@deftypefun void ui_out_tuple_begin (struct ui_out *@var{uiout}, const char *@var{id})
This function marks the beginning of a tuple output. @var{id} points
to an optional string that identifies the tuple; it is copied by the
implementation, and so strings in @code{malloc}ed storage can be freed
after the call.
@end deftypefun
@deftypefun void ui_out_tuple_end (struct ui_out *@var{uiout})
This function signals an end of a tuple output. There should be exactly
one call to @code{ui_out_tuple_end} for each call to
@code{ui_out_tuple_begin}, otherwise an internal @value{GDBN} error will
be signaled.
@end deftypefun
@deftypefun struct cleanup *make_cleanup_ui_out_tuple_begin_end (struct ui_out *@var{uiout}, const char *@var{id})
This function first opens the tuple and then establishes a cleanup
(@pxref{Coding, Cleanups}) to close the tuple. It provides a convenient
and correct implementation of the non-portable@footnote{The function
cast is not portable ISO C.} code sequence:
@smallexample
struct cleanup *old_cleanup;
ui_out_tuple_begin (uiout, "...");
old_cleanup = make_cleanup ((void(*)(void *)) ui_out_tuple_end,
uiout);
@end smallexample
@end deftypefun
@deftypefun void ui_out_list_begin (struct ui_out *@var{uiout}, const char *@var{id})
This function marks the beginning of a list output. @var{id} points to
an optional string that identifies the list; it is copied by the
implementation, and so strings in @code{malloc}ed storage can be freed
after the call.
@end deftypefun
@deftypefun void ui_out_list_end (struct ui_out *@var{uiout})
This function signals an end of a list output. There should be exactly
one call to @code{ui_out_list_end} for each call to
@code{ui_out_list_begin}, otherwise an internal @value{GDBN} error will
be signaled.
@end deftypefun
@deftypefun struct cleanup *make_cleanup_ui_out_list_begin_end (struct ui_out *@var{uiout}, const char *@var{id})
Similar to @code{make_cleanup_ui_out_tuple_begin_end}, this function
opens a list and then establishes a cleanup (@pxref{Coding, Cleanups})
that will close the list.
@end deftypefun
@subsection Item Output Functions
@cindex item output functions
@cindex field output functions
@cindex data output
The functions described below produce output for the actual data
items, or fields, which contain information about the object.
Choose the appropriate function according to your particular needs.
@deftypefun void ui_out_field_fmt (struct ui_out *@var{uiout}, char *@var{fldname}, char *@var{format}, ...)
This is the most general output function. It produces the
representation of the data in the variable-length argument list
according to formatting specifications in @var{format}, a
@code{printf}-like format string. The optional argument @var{fldname}
supplies the name of the field. The data items themselves are
supplied as additional arguments after @var{format}.
This generic function should be used only when it is not possible to
use one of the specialized versions (see below).
@end deftypefun
@deftypefun void ui_out_field_int (struct ui_out *@var{uiout}, const char *@var{fldname}, int @var{value})
This function outputs a value of an @code{int} variable. It uses the
@code{"%d"} output conversion specification. @var{fldname} specifies
the name of the field.
@end deftypefun
@deftypefun void ui_out_field_fmt_int (struct ui_out *@var{uiout}, int @var{width}, enum ui_align @var{alignment}, const char *@var{fldname}, int @var{value})
This function outputs a value of an @code{int} variable. It differs from
@code{ui_out_field_int} in that the caller specifies the desired @var{width} and @var{alignment} of the output.
@var{fldname} specifies
the name of the field.
@end deftypefun
@deftypefun void ui_out_field_core_addr (struct ui_out *@var{uiout}, const char *@var{fldname}, CORE_ADDR @var{address})
This function outputs an address.
@end deftypefun
@deftypefun void ui_out_field_string (struct ui_out *@var{uiout}, const char *@var{fldname}, const char *@var{string})
This function outputs a string using the @code{"%s"} conversion
specification.
@end deftypefun
Sometimes, there's a need to compose your output piece by piece using
functions that operate on a stream, such as @code{value_print} or
@code{fprintf_symbol_filtered}. These functions accept an argument of
the type @code{struct ui_file *}, a pointer to a @code{ui_file} object
used to store the data stream used for the output. When you use one
of these functions, you need a way to pass their results stored in a
@code{ui_file} object to the @code{ui_out} functions. To this end,
you first create a @code{ui_stream} object by calling
@code{ui_out_stream_new}, pass the @code{stream} member of that
@code{ui_stream} object to @code{value_print} and similar functions,
and finally call @code{ui_out_field_stream} to output the field you
constructed. When the @code{ui_stream} object is no longer needed,
you should destroy it and free its memory by calling
@code{ui_out_stream_delete}.
@deftypefun struct ui_stream *ui_out_stream_new (struct ui_out *@var{uiout})
This function creates a new @code{ui_stream} object which uses the
same output methods as the @code{ui_out} object whose pointer is
passed in @var{uiout}. It returns a pointer to the newly created
@code{ui_stream} object.
@end deftypefun
@deftypefun void ui_out_stream_delete (struct ui_stream *@var{streambuf})
This function destroys a @code{ui_stream} object specified by
@var{streambuf}.
@end deftypefun
@deftypefun void ui_out_field_stream (struct ui_out *@var{uiout}, const char *@var{fieldname}, struct ui_stream *@var{streambuf})
This function consumes all the data accumulated in
@code{streambuf->stream} and outputs it like
@code{ui_out_field_string} does. After a call to
@code{ui_out_field_stream}, the accumulated data no longer exists, but
the stream is still valid and may be used for producing more fields.
@end deftypefun
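For instance, printing a value into a @samp{value} field could look
roughly like this; @code{val} is assumed to be a @code{struct value *}
already in hand, and error handling is omitted (see the note on
cleanups just below):
@smallexample
struct ui_stream *stb = ui_out_stream_new (uiout);

/* Let the value-printing code write into the stream's buffer...  */
value_print (val, stb->stream, 0, Val_pretty_default);
/* ...then emit the accumulated text as the "value" field.  */
ui_out_field_stream (uiout, "value", stb);
ui_out_stream_delete (stb);
@end smallexample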
@strong{Important:} If there is any chance that your code could bail
out before completing output generation and reaching the point where
@code{ui_out_stream_delete} is called, it is necessary to set up a
cleanup, to avoid leaking memory and other resources. Here's a
skeleton code to do that:
@smallexample
struct ui_stream *mybuf = ui_out_stream_new (uiout);
struct cleanup *old = make_cleanup (ui_out_stream_delete, mybuf);
...
do_cleanups (old);
@end smallexample
If the function already has the old cleanup chain set (for other kinds
of cleanups), you just have to add your cleanup to it:
@smallexample
mybuf = ui_out_stream_new (uiout);
make_cleanup (ui_out_stream_delete, mybuf);
@end smallexample
Note that with cleanups in place, you should not call
@code{ui_out_stream_delete} directly, or you would attempt to free the
same buffer twice.
@subsection Utility Output Functions
@deftypefun void ui_out_field_skip (struct ui_out *@var{uiout}, const char *@var{fldname})
This function skips a field in a table. Use it if you have to leave
an empty field without disrupting the table alignment. The argument
@var{fldname} specifies a name for the (missing) field.
@end deftypefun
@deftypefun void ui_out_text (struct ui_out *@var{uiout}, const char *@var{string})
This function outputs the text in @var{string} in a way that makes it
easy for humans to read. For example, the console implementation of
this method filters the text through a built-in pager, to prevent it
from scrolling off the visible portion of the screen.
Use this function for printing relatively long chunks of text around
the actual field data: the text it produces is not aligned according
to the table's format. Use @code{ui_out_field_string} to output a
string field, and use @code{ui_out_message}, described below, to
output short messages.
@end deftypefun
@deftypefun void ui_out_spaces (struct ui_out *@var{uiout}, int @var{nspaces})
This function outputs @var{nspaces} spaces. It is handy to align the
text produced by @code{ui_out_text} with the rest of the table or
list.
@end deftypefun
@deftypefun void ui_out_message (struct ui_out *@var{uiout}, int @var{verbosity}, const char *@var{format}, ...)
This function produces a formatted message, provided that the current
verbosity level is at least as large as given by @var{verbosity}. The
current verbosity level is specified by the user with the @samp{set
verbositylevel} command.@footnote{As of this writing (April 2001),
setting the verbosity level is not yet implemented; it is always
reported as zero. So calling @code{ui_out_message} with a @var{verbosity}
argument greater than zero will cause the message never to be printed.}
@end deftypefun
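For example, a message that should be printed at the default (zero)
verbosity level could be produced as follows; the text and variable
are illustrative:
@smallexample
ui_out_message (uiout, 0, "Deleted breakpoint %d\n", bnum);
@end smallexample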
@deftypefun void ui_out_wrap_hint (struct ui_out *@var{uiout}, char *@var{indent})
This function gives the console output filter (a paging filter) a hint
of where to break lines which are too long. Ignored for all other
output consumers. @var{indent}, if non-@code{NULL}, is the string to
be printed to indent the wrapped text on the next line; it must remain
accessible until the next call to @code{ui_out_wrap_hint}, or until an
explicit newline is produced by one of the other functions. If
@var{indent} is @code{NULL}, the wrapped text will not be indented.
@end deftypefun
@deftypefun void ui_out_flush (struct ui_out *@var{uiout})
This function flushes whatever output has been accumulated so far, if
the UI buffers output.
@end deftypefun
@subsection Examples of Use of @code{ui_out} functions
@cindex using @code{ui_out} functions
@cindex @code{ui_out} functions, usage examples
This section gives some practical examples of using the @code{ui_out}
functions to generalize the old console-oriented code in
@value{GDBN}. The examples all come from functions defined in the
@file{breakpoint.c} file.
This example, from the @code{breakpoint_1} function, shows how to
produce a table.
The original code was:
@smallexample
if (!found_a_breakpoint++)
@{
annotate_breakpoints_headers ();
annotate_field (0);
printf_filtered ("Num ");
annotate_field (1);
printf_filtered ("Type ");
annotate_field (2);
printf_filtered ("Disp ");
annotate_field (3);
printf_filtered ("Enb ");
if (addressprint)
@{
annotate_field (4);
printf_filtered ("Address ");
@}
annotate_field (5);
printf_filtered ("What\n");
annotate_breakpoints_table ();
@}
@end smallexample
Here's the new version:
@smallexample
nr_printable_breakpoints = @dots{};
if (addressprint)
ui_out_table_begin (ui, 6, nr_printable_breakpoints, "BreakpointTable");
else
ui_out_table_begin (ui, 5, nr_printable_breakpoints, "BreakpointTable");
if (nr_printable_breakpoints > 0)
annotate_breakpoints_headers ();
if (nr_printable_breakpoints > 0)
annotate_field (0);
ui_out_table_header (uiout, 3, ui_left, "number", "Num"); /* 1 */
if (nr_printable_breakpoints > 0)
annotate_field (1);
ui_out_table_header (uiout, 14, ui_left, "type", "Type"); /* 2 */
if (nr_printable_breakpoints > 0)
annotate_field (2);
ui_out_table_header (uiout, 4, ui_left, "disp", "Disp"); /* 3 */
if (nr_printable_breakpoints > 0)
annotate_field (3);
ui_out_table_header (uiout, 3, ui_left, "enabled", "Enb"); /* 4 */
if (addressprint)
@{
if (nr_printable_breakpoints > 0)
annotate_field (4);
if (TARGET_ADDR_BIT <= 32)
ui_out_table_header (uiout, 10, ui_left, "addr", "Address");/* 5 */
else
ui_out_table_header (uiout, 18, ui_left, "addr", "Address");/* 5 */
@}
if (nr_printable_breakpoints > 0)
annotate_field (5);
ui_out_table_header (uiout, 40, ui_noalign, "what", "What"); /* 6 */
ui_out_table_body (uiout);
if (nr_printable_breakpoints > 0)
annotate_breakpoints_table ();
@end smallexample
This example, from the @code{print_one_breakpoint} function, shows how
to produce the actual data for the table whose structure was defined
in the above example. The original code was:
@smallexample
annotate_record ();
annotate_field (0);
printf_filtered ("%-3d ", b->number);
annotate_field (1);
if ((int)b->type > (sizeof(bptypes)/sizeof(bptypes[0]))
|| ((int) b->type != bptypes[(int) b->type].type))
internal_error ("bptypes table does not describe type #%d.",
(int)b->type);
printf_filtered ("%-14s ", bptypes[(int)b->type].description);
annotate_field (2);
printf_filtered ("%-4s ", bpdisps[(int)b->disposition]);
annotate_field (3);
printf_filtered ("%-3c ", bpenables[(int)b->enable]);
@dots{}
@end smallexample
This is the new version:
@smallexample
annotate_record ();
ui_out_tuple_begin (uiout, "bkpt");
annotate_field (0);
ui_out_field_int (uiout, "number", b->number);
annotate_field (1);
if (((int) b->type > (sizeof (bptypes) / sizeof (bptypes[0])))
|| ((int) b->type != bptypes[(int) b->type].type))
internal_error ("bptypes table does not describe type #%d.",
(int) b->type);
ui_out_field_string (uiout, "type", bptypes[(int)b->type].description);
annotate_field (2);
ui_out_field_string (uiout, "disp", bpdisps[(int)b->disposition]);
annotate_field (3);
ui_out_field_fmt (uiout, "enabled", "%c", bpenables[(int)b->enable]);
@dots{}
@end smallexample
This example, also from @code{print_one_breakpoint}, shows how to
produce a complicated output field using the @code{print_expression}
function, which requires a stream to be passed. It also shows how to
automate stream destruction with cleanups. The original code was:
@smallexample
annotate_field (5);
print_expression (b->exp, gdb_stdout);
@end smallexample
The new version is:
@smallexample
struct ui_stream *stb = ui_out_stream_new (uiout);
struct cleanup *old_chain = make_cleanup_ui_out_stream_delete (stb);
...
annotate_field (5);
print_expression (b->exp, stb->stream);
ui_out_field_stream (uiout, "what", stb);
@end smallexample
This example, also from @code{print_one_breakpoint}, shows how to use
@code{ui_out_text} and @code{ui_out_field_string}. The original code
was:
@smallexample
annotate_field (5);
if (b->dll_pathname == NULL)
printf_filtered ("<any library> ");
else
printf_filtered ("library \"%s\" ", b->dll_pathname);
@end smallexample
It became:
@smallexample
annotate_field (5);
if (b->dll_pathname == NULL)
@{
ui_out_field_string (uiout, "what", "<any library>");
ui_out_spaces (uiout, 1);
@}
else
@{
ui_out_text (uiout, "library \"");
ui_out_field_string (uiout, "what", b->dll_pathname);
ui_out_text (uiout, "\" ");
@}
@end smallexample
The following example from @code{print_one_breakpoint} shows how to
use @code{ui_out_field_int} and @code{ui_out_spaces}. The original
code was:
@smallexample
annotate_field (5);
if (b->forked_inferior_pid != 0)
printf_filtered ("process %d ", b->forked_inferior_pid);
@end smallexample
It became:
@smallexample
annotate_field (5);
if (b->forked_inferior_pid != 0)
@{
ui_out_text (uiout, "process ");
ui_out_field_int (uiout, "what", b->forked_inferior_pid);
ui_out_spaces (uiout, 1);
@}
@end smallexample
Here's an example of using @code{ui_out_field_string}. The original
code was:
@smallexample
annotate_field (5);
if (b->exec_pathname != NULL)
printf_filtered ("program \"%s\" ", b->exec_pathname);
@end smallexample
It became:
@smallexample
annotate_field (5);
if (b->exec_pathname != NULL)
@{
ui_out_text (uiout, "program \"");
ui_out_field_string (uiout, "what", b->exec_pathname);
ui_out_text (uiout, "\" ");
@}
@end smallexample
Finally, here's an example of printing an address. The original code:
@smallexample
annotate_field (4);
printf_filtered ("%s ",
hex_string_custom ((unsigned long) b->address, 8));
@end smallexample
It became:
@smallexample
annotate_field (4);
ui_out_field_core_addr (uiout, "Address", b->address);
@end smallexample
@section Console Printing
@section TUI
@node libgdb
@chapter libgdb
@section libgdb 1.0
@cindex @code{libgdb}
@code{libgdb} 1.0 was an abortive project of some years ago. The theory was
to provide an API to @value{GDBN}'s functionality.
@section libgdb 2.0
@cindex @code{libgdb}
@code{libgdb} 2.0 is an ongoing effort to update @value{GDBN} so that it is
better able to support graphical and other environments.
Since @code{libgdb} development is on-going, its architecture is still
evolving. The following components have so far been identified:
@itemize @bullet
@item
Observer - @file{gdb-events.h}.
@item
Builder - @file{ui-out.h}
@item
Event Loop - @file{event-loop.h}
@item
Library - @file{gdb.h}
@end itemize
The model that ties these components together is described below.
@section The @code{libgdb} Model
A client of @code{libgdb} interacts with the library in two ways.
@itemize @bullet
@item
As an observer (using @file{gdb-events}) receiving notifications from
@code{libgdb} of any internal state changes (break point changes, run
state, etc).
@item
As a client querying @code{libgdb} (using the @file{ui-out} builder) to
obtain various status values from @value{GDBN}.
@end itemize
Since @code{libgdb} could have multiple clients (e.g., a GUI supporting
the existing @value{GDBN} CLI), those clients must co-operate when
controlling @code{libgdb}. In particular, a client must ensure that
@code{libgdb} is idle (i.e.@: no other client is using @code{libgdb})
before responding to a @file{gdb-event} by making a query.
@section CLI support
At present @value{GDBN}'s CLI is very much entangled with the core of
@code{libgdb}. Consequently, a client wishing to include the CLI in
their interface needs to carefully co-ordinate its own and the CLI's
requirements.
It is suggested that the client set @code{libgdb} up to be bi-modal
(alternate between CLI and client query modes). The notes below sketch
out the theory:
@itemize @bullet
@item
The client registers itself as an observer of @code{libgdb}.
@item
The client creates and installs a @code{cli-out} builder using its own
versions of the @code{ui-file} @code{gdb_stderr}, @code{gdb_stdtarg} and
@code{gdb_stdout} streams.
@item
The client creates a separate custom @code{ui-out} builder that is only
used while making direct queries to @code{libgdb}.
@end itemize
When the client receives input intended for the CLI, it simply passes it
along. Since the @code{cli-out} builder is installed by default, all
the CLI output in response to that command is routed through to the
client-controlled @code{gdb_stdout} et al.@: streams.
At the same time, the client is kept abreast of internal changes by
virtue of being a @code{libgdb} observer.
The only restriction on the client is that it must wait until
@code{libgdb} becomes idle before initiating any queries (using the
client's custom builder).
@section @code{libgdb} components
@subheading Observer - @file{gdb-events.h}
@file{gdb-events} provides the client with a very raw mechanism that can
be used to implement an observer. At present it only allows for one
observer and that observer must, internally, handle the need to delay
the processing of any event notifications until after @code{libgdb} has
finished the current command.
@subheading Builder - @file{ui-out.h}
@file{ui-out} provides the infrastructure necessary for a client to
create a builder. That builder is then passed down to @code{libgdb}
when doing any queries.
@subheading Event Loop - @file{event-loop.h}
@c There could be an entire section on the event-loop
@file{event-loop}, currently non-re-entrant, provides a simple event
loop. A client would need either to plug itself into this loop or to
implement a new event loop that @value{GDBN} would use.
The event loop will eventually be made re-entrant. This is so that
@value{GDBN} can better handle the problem of some commands blocking
instead of returning.
@subheading Library - @file{gdb.h}
@file{libgdb} is the most obvious component of this system. It provides
the query interface. Each function is parameterized by a @code{ui-out}
builder. The result of the query is constructed using that builder
before the query function returns.
@node Symbol Handling
@chapter Symbol Handling
Symbols are a key part of @value{GDBN}'s operation. Symbols include variables,
functions, and types.
@section Symbol Reading
@cindex symbol reading
@cindex reading of symbols
@cindex symbol files
@value{GDBN} reads symbols from @dfn{symbol files}. The usual symbol
file is the file containing the program which @value{GDBN} is
debugging. @value{GDBN} can be directed to use a different file for
symbols (with the @samp{symbol-file} command), and it can also read
more symbols via the @samp{add-file} and @samp{load} commands, or while
reading symbols from shared libraries.
@findex find_sym_fns
Symbol files are initially opened by code in @file{symfile.c} using
the BFD library (@pxref{Support Libraries}). BFD identifies the type
of the file by examining its header. @code{find_sym_fns} then uses
this identification to locate a set of symbol-reading functions.
@findex add_symtab_fns
@cindex @code{sym_fns} structure
@cindex adding a symbol-reading module
Symbol-reading modules identify themselves to @value{GDBN} by calling
@code{add_symtab_fns} during their module initialization. The argument
to @code{add_symtab_fns} is a @code{struct sym_fns} which contains the
name (or name prefix) of the symbol format, the length of the prefix,
and pointers to four functions. These functions are called at various
times to process symbol files whose identification matches the specified
prefix.
The functions supplied by each module are:
@table @code
@item @var{xyz}_symfile_init(struct sym_fns *sf)
@cindex secondary symbol file
Called from @code{symbol_file_add} when we are about to read a new
symbol file. This function should clean up any internal state (possibly
resulting from half-read previous files, for example) and prepare to
read a new symbol file. Note that the symbol file which we are reading
might be a new ``main'' symbol file, or might be a secondary symbol file
whose symbols are being added to the existing symbol table.
The argument to @code{@var{xyz}_symfile_init} is a newly allocated
@code{struct sym_fns} whose @code{bfd} field contains the BFD for the
new symbol file being read. Its @code{private} field has been zeroed,
and can be modified as desired. Typically, a struct of private
information will be @code{malloc}'d, and a pointer to it will be placed
in the @code{private} field.
There is no result from @code{@var{xyz}_symfile_init}, but it can call
@code{error} if it detects an unavoidable problem.
@item @var{xyz}_new_init()
Called from @code{symbol_file_add} when discarding existing symbols.
This function needs only handle the symbol-reading module's internal
state; the symbol table data structures visible to the rest of
@value{GDBN} will be discarded by @code{symbol_file_add}. It has no
arguments and no result. It may be called after
@code{@var{xyz}_symfile_init}, if a new symbol table is being read, or
may be called alone if all symbols are simply being discarded.
@item @var{xyz}_symfile_read(struct sym_fns *sf, CORE_ADDR addr, int mainline)
Called from @code{symbol_file_add} to actually read the symbols from a
symbol-file into a set of psymtabs or symtabs.
@code{sf} points to the @code{struct sym_fns} originally passed to
@code{@var{xyz}_symfile_init} for possible initialization. @code{addr} is
the offset between the file's specified start address and its true
address in memory. @code{mainline} is 1 if this is the main symbol
table being read, and 0 if a secondary symbol file (e.g., shared library
or dynamically loaded file) is being read.@refill
@end table
In addition, if a symbol-reading module creates psymtabs when
@var{xyz}_symfile_read is called, these psymtabs will contain a pointer
to a function @code{@var{xyz}_psymtab_to_symtab}, which can be called
from any point in the @value{GDBN} symbol-handling code.
@table @code
@item @var{xyz}_psymtab_to_symtab (struct partial_symtab *pst)
Called from @code{psymtab_to_symtab} (or the @code{PSYMTAB_TO_SYMTAB} macro) if
the psymtab has not already been read in and had its @code{pst->symtab}
pointer set. The argument is the psymtab to be fleshed-out into a
symtab. Upon return, @code{pst->readin} should have been set to 1, and
@code{pst->symtab} should contain a pointer to the new corresponding symtab, or
zero if there were no symbols in that part of the symbol file.
@end table
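Putting these pieces together, the skeleton of a new symbol-reading
module might look as follows. This is only a sketch: the @samp{xyz}
names are hypothetical, and the exact layout of @code{struct sym_fns}
should be taken from @file{symfile.h} rather than from this example.
@smallexample
static void
xyz_new_init (void)
@{
  /* Discard any private state left over from a previous symbol file.  */
@}

static void
xyz_symfile_init (struct sym_fns *sf)
@{
  /* Allocate format-private data and hang it off SF->private here.  */
@}

static void
xyz_symfile_read (struct sym_fns *sf, CORE_ADDR addr, int mainline)
@{
  /* Scan the file and build psymtabs (or symtabs) here.  */
@}

/* A struct sym_fns naming the format and pointing at the functions
   above; its field-by-field initialization is omitted here, since the
   exact layout is defined in symfile.h.  */
static struct sym_fns xyz_sym_fns;

void
_initialize_xyzread (void)
@{
  add_symtab_fns (&xyz_sym_fns);
@}
@end smallexample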
@section Partial Symbol Tables
@value{GDBN} has three types of symbol tables:
@itemize @bullet
@cindex full symbol table
@cindex symtabs
@item
Full symbol tables (@dfn{symtabs}). These contain the main
information about symbols and addresses.
@cindex psymtabs
@item
Partial symbol tables (@dfn{psymtabs}). These contain enough
information to know when to read the corresponding part of the full
symbol table.
@cindex minimal symbol table
@cindex minsymtabs
@item
Minimal symbol tables (@dfn{msymtabs}). These contain information
gleaned from non-debugging symbols.
@end itemize
@cindex partial symbol table
This section describes partial symbol tables.
A psymtab is constructed by doing a very quick pass over an executable
file's debugging information. Small amounts of information are
extracted---enough to identify which parts of the symbol table will
need to be re-read and fully digested later, when the user needs the
information. The speed of this pass causes @value{GDBN} to start up very
quickly. Later, as the detailed rereading occurs, it occurs in small
pieces, at various times, and the delay therefrom is mostly invisible to
the user.
@c (@xref{Symbol Reading}.)
The symbols that show up in a file's psymtab should be, roughly, those
visible to the debugger's user when the program is not running code from
that file. These include external symbols and types, static symbols and
types, and @code{enum} values declared at file scope.
The psymtab also contains the range of instruction addresses that the
full symbol table would represent.
@cindex finding a symbol
@cindex symbol lookup
The idea is that there are only two ways for the user (or much of the
code in the debugger) to reference a symbol:
@itemize @bullet
@findex find_pc_function
@findex find_pc_line
@item
By its address (e.g., execution stops at some address which is inside a
function in this file). The address will be noticed to be in the
range of this psymtab, and the full symtab will be read in.
@code{find_pc_function}, @code{find_pc_line}, and other
@code{find_pc_@dots{}} functions handle this.
@cindex lookup_symbol
@item
By its name
(e.g., the user asks to print a variable, or set a breakpoint on a
function). Global names and file-scope names will be found in the
psymtab, which will cause the symtab to be pulled in. Local names will
have to be qualified by a global name, or a file-scope name, in which
case we will have already read in the symtab as we evaluated the
qualifier. Or, a local symbol can be referenced when we are ``in'' a
local scope, in which case the first case applies. @code{lookup_symbol}
does most of the work here.
@end itemize
The only reason that psymtabs exist is to cause a symtab to be read in
at the right moment. Any symbol that can be elided from a psymtab,
while still causing that to happen, should not appear in it. Since
psymtabs don't have the idea of scope, you can't put local symbols in
them anyway. Psymtabs don't have the idea of the type of a symbol,
either, so types need not appear, unless they will be referenced by
name.
It is a bug for @value{GDBN} to behave one way when only a psymtab has
been read, and another way if the corresponding symtab has been read
in. Such bugs are typically caused by a psymtab that does not contain
all the visible symbols, or which has the wrong instruction address
ranges.
The psymtab for a particular section of a symbol file (objfile) could be
thrown away after the symtab has been read in. The symtab should always
be searched before the psymtab, so the psymtab will never be used (in a
bug-free environment). Currently, psymtabs are allocated on an obstack,
and all the psymbols themselves are allocated in a pair of large arrays
on an obstack, so there is little to be gained by trying to free them
unless you want to do a lot more work.
@section Types
@unnumberedsubsec Fundamental Types (e.g., @code{FT_VOID}, @code{FT_BOOLEAN}).
@cindex fundamental types
These are the fundamental types that @value{GDBN} uses internally. Fundamental
types from the various debugging formats (stabs, ELF, etc) are mapped
into one of these. They are basically a union of all fundamental types
that @value{GDBN} knows about for all the languages that @value{GDBN}
knows about.
@unnumberedsubsec Type Codes (e.g., @code{TYPE_CODE_PTR}, @code{TYPE_CODE_ARRAY}).
@cindex type codes
Each time @value{GDBN} builds an internal type, it marks it with one
of these types. The type may be a fundamental type, such as
@code{TYPE_CODE_INT}, or a derived type, such as @code{TYPE_CODE_PTR}
which is a pointer to another type. Typically, several @code{FT_*}
types map to one @code{TYPE_CODE_*} type, and are distinguished by
other members of the type struct, such as whether the type is signed
or unsigned, and how many bits it uses.
@unnumberedsubsec Builtin Types (e.g., @code{builtin_type_void}, @code{builtin_type_char}).
These are instances of type structs that roughly correspond to
fundamental types and are created as global types for @value{GDBN} to
use for various ugly historical reasons. We eventually want to
eliminate these. Note for example that @code{builtin_type_int}
initialized in @file{gdbtypes.c} is basically the same as a
@code{TYPE_CODE_INT} type that is initialized in @file{c-lang.c} for
an @code{FT_INTEGER} fundamental type. The difference is that the
@code{builtin_type} is not associated with any particular objfile, and
only one instance exists, while @file{c-lang.c} builds as many
@code{TYPE_CODE_INT} types as needed, with each one associated with
some particular objfile.
@section Object File Formats
@cindex object file formats
@subsection a.out
@cindex @code{a.out} format
The @code{a.out} format is the original file format for Unix. It
consists of three sections: @code{text}, @code{data}, and @code{bss},
which are for program code, initialized data, and uninitialized data,
respectively.
The @code{a.out} format is so simple that it doesn't have any reserved
place for debugging information. (Hey, the original Unix hackers used
@samp{adb}, which is a machine-language debugger!) The only debugging
format for @code{a.out} is stabs, which is encoded as a set of normal
symbols with distinctive attributes.
The basic @code{a.out} reader is in @file{dbxread.c}.
@subsection COFF
@cindex COFF format
The COFF format was introduced with System V Release 3 (SVR3) Unix.
COFF files may have multiple sections, each prefixed by a header. The
number of sections is limited.
The COFF specification includes support for debugging. Although this
was a step forward, the debugging information was woefully limited. For
instance, it was not possible to represent code that came from an
included file.
The COFF reader is in @file{coffread.c}.
@subsection ECOFF
@cindex ECOFF format
ECOFF is an extended COFF originally introduced for Mips and Alpha
workstations.
The basic ECOFF reader is in @file{mipsread.c}.
@subsection XCOFF
@cindex XCOFF format
The IBM RS/6000 running AIX uses an object file format called XCOFF.
The COFF sections, symbols, and line numbers are used, but debugging
symbols are @code{dbx}-style stabs whose strings are located in the
@code{.debug} section (rather than the string table). For more
information, see @ref{Top,,,stabs,The Stabs Debugging Format}.
The shared library scheme has a clean interface for figuring out what
shared libraries are in use, but the catch is that everything which
refers to addresses (symbol tables and breakpoints at least) needs to be
relocated for both shared libraries and the main executable. At least
using the standard mechanism this can only be done once the program has
been run (or the core file has been read).
@subsection PE
@cindex PE-COFF format
Windows 95 and NT use the PE (@dfn{Portable Executable}) format for their
executables. PE is basically COFF with additional headers.
While BFD includes special PE support, @value{GDBN} needs only the basic
COFF reader.
@subsection ELF
@cindex ELF format
The ELF format came with System V Release 4 (SVR4) Unix. ELF is similar
to COFF in being organized into a number of sections, but it removes
many of COFF's limitations.
The basic ELF reader is in @file{elfread.c}.
@subsection SOM
@cindex SOM format
SOM is HP's object file and debug format (not to be confused with IBM's
SOM, which is a cross-language ABI).
The SOM reader is in @file{somread.c}.
@section Debugging File Formats
This section describes characteristics of debugging information that
are independent of the object file format.
@subsection stabs
@cindex stabs debugging info
@code{stabs} started out as special symbols within the @code{a.out}
format. Since then, it has been encapsulated into other file
formats, such as COFF and ELF.
While @file{dbxread.c} does some of the basic stab processing,
including for encapsulated versions, @file{stabsread.c} does
the real work.
@subsection COFF
@cindex COFF debugging info
The basic COFF definition includes debugging information. The level
of support is minimal and non-extensible, and is not often used.
@subsection Mips debug (Third Eye)
@cindex ECOFF debugging info
ECOFF includes a definition of a special debug format.
The file @file{mdebugread.c} implements reading for this format.
@subsection DWARF 2
@cindex DWARF 2 debugging info
DWARF 2 is an improved but incompatible version of DWARF 1.
The DWARF 2 reader is in @file{dwarf2read.c}.
@subsection SOM
@cindex SOM debugging info
Like COFF, the SOM definition includes debugging information.
@section Adding a New Symbol Reader to @value{GDBN}
@cindex adding debugging info reader
If you are using an existing object file format (@code{a.out}, COFF, ELF, etc),
there is probably little to be done.
If you need to add a new object file format, you must first add it to
BFD. This is beyond the scope of this document.
You must then arrange for the BFD code to provide access to the
debugging symbols. Generally @value{GDBN} will have to call swapping routines
from BFD and a few other BFD internal routines to locate the debugging
information. As much as possible, @value{GDBN} should not depend on the BFD
internal data structures.
For some targets (e.g., COFF), there is a special transfer vector used
to call swapping routines, since the external data structures on various
platforms have different sizes and layouts. Specialized routines that
will only ever be implemented by one object file format may be called
directly. This interface should be described in a file
@file{bfd/lib@var{xyz}.h}, which is included by @value{GDBN}.
@section Memory Management for Symbol Files
Most memory associated with a loaded symbol file is stored on
its @code{objfile_obstack}. This includes symbols, types,
namespace data, and other information produced by the symbol readers.
Because this data lives on the objfile's obstack, it is automatically
released when the objfile is unloaded or reloaded. Therefore one
objfile must not reference symbol or type data from another objfile;
they could be unloaded at different times.
User convenience variables, et cetera, have associated types. Normally
these types live in the associated objfile. However, when the objfile
is unloaded, those types are deep copied to global memory, so that
the values of the user variables and history items are not lost.
@node Language Support
@chapter Language Support
@cindex language support
@value{GDBN}'s language support is mainly driven by the symbol reader,
although it is possible for the user to set the source language
manually.
@value{GDBN} chooses the source language by looking at the extension
of the file recorded in the debug info; @file{.c} means C, @file{.f}
means Fortran, etc. It may also use a special-purpose language
identifier if the debug format supports it, like with DWARF.
@section Adding a Source Language to @value{GDBN}
@cindex adding source language
To add other languages to @value{GDBN}'s expression parser, take the
following steps:
@table @emph
@item Create the expression parser.
@cindex expression parser
This should reside in a file @file{@var{lang}-exp.y}. Routines for
building parsed expressions into a @code{union exp_element} list are in
@file{parse.c}.
@cindex language parser
Since we can't depend upon everyone having Bison, and YACC produces
parsers that define a bunch of global names, the following lines
@strong{must} be included at the top of the YACC parser, to prevent the
various parsers from defining the same global names:
@smallexample
#define yyparse @var{lang}_parse
#define yylex @var{lang}_lex
#define yyerror @var{lang}_error
#define yylval @var{lang}_lval
#define yychar @var{lang}_char
#define yydebug @var{lang}_debug
#define yypact @var{lang}_pact
#define yyr1 @var{lang}_r1
#define yyr2 @var{lang}_r2
#define yydef @var{lang}_def
#define yychk @var{lang}_chk
#define yypgo @var{lang}_pgo
#define yyact @var{lang}_act
#define yyexca @var{lang}_exca
#define yyerrflag @var{lang}_errflag
#define yynerrs @var{lang}_nerrs
@end smallexample
At the bottom of your parser, define a @code{struct language_defn} and
initialize it with the right values for your language. Define an
@code{initialize_@var{lang}} routine and have it call
@samp{add_language(@var{lang}_language_defn)} to tell the rest of @value{GDBN}
that your language exists. You'll need some other supporting variables
and functions, which will be used via pointers from your
@code{@var{lang}_language_defn}. See the declaration of @code{struct
language_defn} in @file{language.h}, and the other @file{*-exp.y} files,
for more information.
@item Add any evaluation routines, if necessary
@cindex expression evaluation routines
@findex evaluate_subexp
@findex prefixify_subexp
@findex length_of_subexp
If you need new opcodes (that represent the operations of the language),
add them to the enumerated type in @file{expression.h}. Add support
code for these operations in the @code{evaluate_subexp} function
defined in the file @file{eval.c}. Add cases
for new opcodes in two functions from @file{parse.c}:
@code{prefixify_subexp} and @code{length_of_subexp}. These compute
the number of @code{exp_element}s that a given operation takes up.
@item Update some existing code
Add an enumerated identifier for your language to the enumerated type
@code{enum language} in @file{defs.h}.
Update the routines in @file{language.c} so your language is included.
These routines include type predicates and such, which (in some cases)
are language dependent. If your language does not appear in the switch
statement, an error is reported.
@vindex current_language
Also included in @file{language.c} is the code that updates the variable
@code{current_language}, and the routines that translate the
@code{language_@var{lang}} enumerated identifier into a printable
string.
@findex _initialize_language
Update the function @code{_initialize_language} to include your
language. This function picks the default language upon startup, so it
depends upon which languages @value{GDBN} is built for.
@findex allocate_symtab
Update @code{allocate_symtab} in @file{symfile.c} and/or symbol-reading
code so that the language of each symtab (source file) is set properly.
This is used to determine the language to use at each stack frame level.
Currently, the language is set based upon the extension of the source
file. If the language can be better inferred from the symbol
information, please set the language of the symtab in the symbol-reading
code.
@findex print_subexp
@findex op_print_tab
Add helper code to @code{print_subexp} (in @file{expprint.c}) to handle any new
expression opcodes you have added to @file{expression.h}. Also, add the
printed representations of your operators to @code{op_print_tab}.
@item Add a place of call
@findex parse_exp_1
Add a call to @code{@var{lang}_parse()} and @code{@var{lang}_error} in
@code{parse_exp_1} (defined in @file{parse.c}).
@item Use macros to trim code
@cindex trimming language-dependent code
The user has the option of building @value{GDBN} for some or all of the
languages. If the user decides to build @value{GDBN} for the language
@var{lang}, then every file dependent on @file{language.h} will have the
macro @code{_LANG_@var{lang}} defined in it. Use @code{#ifdef}s to
leave out large routines that the user won't need if he or she is not
using your language.
Note that you do not need to do this in your YACC parser, since if @value{GDBN}
is not built for @var{lang}, then @file{@var{lang}-exp.tab.o} (the
compiled form of your parser) is not linked into @value{GDBN} at all.
See the file @file{configure.in} for how @value{GDBN} is configured
for different languages.
@item Edit @file{Makefile.in}
Add dependencies in @file{Makefile.in}. Make sure you update the macro
variables such as @code{HFILES} and @code{OBJS}, otherwise your code may
not get linked in, or, worse yet, it may not get @code{tar}red into the
distribution!
@end table
@node Host Definition
@chapter Host Definition
With the advent of Autoconf, it's rarely necessary to have host
definition machinery anymore. The following information is provided,
mainly, as an historical reference.
@section Adding a New Host
@cindex adding a new host
@cindex host, adding
@value{GDBN}'s host configuration support normally happens via Autoconf.
New host-specific definitions should not be needed. Older hosts that
@value{GDBN} supports still use the host-specific definitions and files listed
below, but these mostly exist for historical reasons, and will
eventually disappear.
@table @file
@item gdb/config/@var{arch}/@var{xyz}.mh
This file once contained both host and native configuration information
(@pxref{Native Debugging}) for the machine @var{xyz}. The host
configuration information is now handled by Autoconf.
Host configuration information included a definition of
@code{XM_FILE=xm-@var{xyz}.h} and possibly definitions for @code{CC},
@code{SYSV_DEFINE}, @code{XM_CFLAGS}, @code{XM_ADD_FILES},
@code{XM_CLIBS}, @code{XM_CDEPS}, etc.; see @file{Makefile.in}.
New host-only configurations do not need this file.
@item gdb/config/@var{arch}/xm-@var{xyz}.h
This file once contained definitions and includes required when hosting
gdb on machine @var{xyz}. Those definitions and includes are now
handled by Autoconf.
New host and native configurations do not need this file.
@emph{Maintainer's note: Some hosts continue to use the @file{xm-xyz.h}
file to define the macros @var{HOST_FLOAT_FORMAT},
@var{HOST_DOUBLE_FORMAT} and @var{HOST_LONG_DOUBLE_FORMAT}. That code
also needs to be replaced with either an Autoconf or run-time test.}
@end table
@subheading Generic Host Support Files
@cindex generic host support
There are some ``generic'' versions of routines that can be used by
various systems. These can be customized in various ways by macros
defined in your @file{xm-@var{xyz}.h} file. If these routines work for
the @var{xyz} host, you can just include the generic file's name (with
@samp{.o}, not @samp{.c}) in @code{XDEPFILES}.
Otherwise, if your machine needs custom support routines, you will need
to write routines that perform the same functions as the generic file.
Put them into @code{@var{xyz}-xdep.c}, and put @code{@var{xyz}-xdep.o}
into @code{XDEPFILES}.
@table @file
@cindex remote debugging support
@cindex serial line support
@item ser-unix.c
This contains serial line support for Unix systems. This is always
included, via the makefile variable @code{SER_HARDWIRE}; override this
variable in the @file{.mh} file to avoid it.
@item ser-go32.c
This contains serial line support for 32-bit programs running under DOS,
using the DJGPP (a.k.a.@: GO32) execution environment.
@cindex TCP remote support
@item ser-tcp.c
This contains generic TCP support using sockets.
@end table
@section Host Conditionals
When @value{GDBN} is configured and compiled, various macros are
defined or left undefined, to control compilation based on the
attributes of the host system. These macros and their meanings (or if
the meaning is not documented here, then one of the source files where
they are used is indicated) are:
@ftable @code
@item @value{GDBN}INIT_FILENAME
The default name of @value{GDBN}'s initialization file (normally
@file{.gdbinit}).
@item NO_STD_REGS
This macro is deprecated.
@item SIGWINCH_HANDLER
If your host defines @code{SIGWINCH}, you can define this to be the name
of a function to be called if @code{SIGWINCH} is received.
@item SIGWINCH_HANDLER_BODY
Define this to expand into code that will define the function named by
the expansion of @code{SIGWINCH_HANDLER}.
@item ALIGN_STACK_ON_STARTUP
@cindex stack alignment
Define this if your system is of a sort that will crash in
@code{tgetent} if the stack happens not to be longword-aligned when
@code{main} is called. This is a rare situation, but is known to occur
on several different types of systems.
@item CRLF_SOURCE_FILES
@cindex DOS text files
Define this if host files use @code{\r\n} rather than @code{\n} as a
line terminator. This will cause source file listings to omit @code{\r}
characters when printing and it will allow @code{\r\n} line endings of files
which are ``sourced'' by gdb. It must be possible to open files in binary
mode using @code{O_BINARY} or, for fopen, @code{"rb"}.
@item DEFAULT_PROMPT
@cindex prompt
The default value of the prompt string (normally @code{"(gdb) "}).
@item DEV_TTY
@cindex terminal device
The name of the generic TTY device, defaults to @code{"/dev/tty"}.
@item FOPEN_RB
Define this if binary files are opened the same way as text files.
@item HAVE_MMAP
@findex mmap
In some cases, use the system call @code{mmap} for reading symbol
tables. For some machines this allows for sharing and quick updates.
@item HAVE_TERMIO
Define this if the host system has @code{termio.h}.
@item INT_MAX
@itemx INT_MIN
@itemx LONG_MAX
@itemx UINT_MAX
@itemx ULONG_MAX
Values for host-side constants.
@item ISATTY
Substitute for isatty, if not available.
@item LONGEST
This is the longest integer type available on the host. If not defined,
it will default to @code{long long} or @code{long}, depending on
@code{CC_HAS_LONG_LONG}.
@item CC_HAS_LONG_LONG
@cindex @code{long long} data type
Define this if the host C compiler supports @code{long long}. This is set
by the @code{configure} script.
@item PRINTF_HAS_LONG_LONG
Define this if the host can handle printing of long long integers via
the printf format conversion specifier @code{ll}. This is set by the
@code{configure} script.
@item HAVE_LONG_DOUBLE
Define this if the host C compiler supports @code{long double}. This is
set by the @code{configure} script.
@item PRINTF_HAS_LONG_DOUBLE
Define this if the host can handle printing of long double floating-point
numbers via the printf format conversion specifier @code{Lg}. This is
set by the @code{configure} script.
@item SCANF_HAS_LONG_DOUBLE
Define this if the host can handle the parsing of long double
floating-point numbers via the scanf format conversion specifier
@code{Lg}. This is set by the @code{configure} script.
@item LSEEK_NOT_LINEAR
Define this if @code{lseek (n)} does not necessarily move to byte number
@code{n} in the file. This is only used when reading source files. It
is normally faster to define @code{CRLF_SOURCE_FILES} when possible.
@item L_SET
This macro is used as the argument to @code{lseek} (or, most commonly,
@code{bfd_seek}). FIXME: this should be replaced by @code{SEEK_SET},
which is the POSIX equivalent.
@item NORETURN
If defined, this should be one or more tokens, such as @code{volatile},
that can be used in both the declaration and definition of functions to
indicate that they never return. The default is already set correctly
if compiling with GCC. This will almost never need to be defined.
@item ATTR_NORETURN
If defined, this should be one or more tokens, such as
@code{__attribute__ ((noreturn))}, that can be used in the declarations
of functions to indicate that they never return. The default is already
set correctly if compiling with GCC. This will almost never need to be
defined.
@item SEEK_CUR
@itemx SEEK_SET
Define these to appropriate value for the system @code{lseek}, if not already
defined.
@item STOP_SIGNAL
This is the signal for stopping @value{GDBN}. Defaults to
@code{SIGTSTP}. (Only redefined for the Convex.)
@item USG
Means that System V (prior to SVR4) include files are in use. (FIXME:
This symbol is abused in @file{infrun.c}, @file{regex.c}, and
@file{utils.c} for other things, at the moment.)
@item lint
Define this to help placate @code{lint} in some situations.
@item volatile
Define this to override the defaults of @code{__volatile__} or
@code{/**/}.
@end ftable
@node Target Architecture Definition
@chapter Target Architecture Definition
@cindex target architecture definition
@value{GDBN}'s target architecture defines what sort of
machine-language programs @value{GDBN} can work with, and how it works
with them.
The target architecture object is implemented as the C structure
@code{struct gdbarch *}. The structure, and its methods, are generated
using the Bourne shell script @file{gdbarch.sh}.
@section Operating System ABI Variant Handling
@cindex OS ABI variants
@value{GDBN} provides a mechanism for handling variations in OS
ABIs. An OS ABI variant may have influence over any number of
variables in the target architecture definition. There are two major
components in the OS ABI mechanism: sniffers and handlers.
A @dfn{sniffer} examines a file matching a BFD architecture/flavour pair
(the architecture may be wildcarded) in an attempt to determine the
OS ABI of that file. Sniffers with a wildcarded architecture are considered
to be @dfn{generic}, while sniffers for a specific architecture are
considered to be @dfn{specific}. A match from a specific sniffer
overrides a match from a generic sniffer. Multiple sniffers for an
architecture/flavour may exist, in order to differentiate between two
different operating systems which use the same basic file format. The
OS ABI framework provides a generic sniffer for ELF-format files which
examines the @code{EI_OSABI} field of the ELF header, as well as note
sections known to be used by several operating systems.
@cindex fine-tuning @code{gdbarch} structure
A @dfn{handler} is used to fine-tune the @code{gdbarch} structure for the
selected OS ABI. There may be only one handler for a given OS ABI
for each BFD architecture.
The following OS ABI variants are defined in @file{defs.h}:
@table @code
@findex GDB_OSABI_UNINITIALIZED
@item GDB_OSABI_UNINITIALIZED
Used for @code{struct gdbarch_info} if the ABI is still uninitialized.
@findex GDB_OSABI_UNKNOWN
@item GDB_OSABI_UNKNOWN
The ABI of the inferior is unknown. The default @code{gdbarch}
settings for the architecture will be used.
@findex GDB_OSABI_SVR4
@item GDB_OSABI_SVR4
UNIX System V Release 4.
@findex GDB_OSABI_HURD
@item GDB_OSABI_HURD
GNU using the Hurd kernel.
@findex GDB_OSABI_SOLARIS
@item GDB_OSABI_SOLARIS
Sun Solaris.
@findex GDB_OSABI_OSF1
@item GDB_OSABI_OSF1
OSF/1, including Digital UNIX and Compaq Tru64 UNIX.
@findex GDB_OSABI_LINUX
@item GDB_OSABI_LINUX
GNU using the Linux kernel.
@findex GDB_OSABI_FREEBSD_AOUT
@item GDB_OSABI_FREEBSD_AOUT
FreeBSD using the @code{a.out} executable format.
@findex GDB_OSABI_FREEBSD_ELF
@item GDB_OSABI_FREEBSD_ELF
FreeBSD using the ELF executable format.
@findex GDB_OSABI_NETBSD_AOUT
@item GDB_OSABI_NETBSD_AOUT
NetBSD using the @code{a.out} executable format.
@findex GDB_OSABI_NETBSD_ELF
@item GDB_OSABI_NETBSD_ELF
NetBSD using the ELF executable format.
@findex GDB_OSABI_OPENBSD_ELF
@item GDB_OSABI_OPENBSD_ELF
OpenBSD using the ELF executable format.
@findex GDB_OSABI_WINCE
@item GDB_OSABI_WINCE
Windows CE.
@findex GDB_OSABI_GO32
@item GDB_OSABI_GO32
DJGPP.
@findex GDB_OSABI_IRIX
@item GDB_OSABI_IRIX
Irix.
@findex GDB_OSABI_INTERIX
@item GDB_OSABI_INTERIX
Interix (Posix layer for MS-Windows systems).
@findex GDB_OSABI_HPUX_ELF
@item GDB_OSABI_HPUX_ELF
HP/UX using the ELF executable format.
@findex GDB_OSABI_HPUX_SOM
@item GDB_OSABI_HPUX_SOM
HP/UX using the SOM executable format.
@findex GDB_OSABI_QNXNTO
@item GDB_OSABI_QNXNTO
QNX Neutrino.
@findex GDB_OSABI_CYGWIN
@item GDB_OSABI_CYGWIN
Cygwin.
@findex GDB_OSABI_AIX
@item GDB_OSABI_AIX
AIX.
@end table
Here are the functions that make up the OS ABI framework:
@deftypefun const char *gdbarch_osabi_name (enum gdb_osabi @var{osabi})
Return the name of the OS ABI corresponding to @var{osabi}.
@end deftypefun
@deftypefun void gdbarch_register_osabi (enum bfd_architecture @var{arch}, unsigned long @var{machine}, enum gdb_osabi @var{osabi}, void (*@var{init_osabi})(struct gdbarch_info @var{info}, struct gdbarch *@var{gdbarch}))
Register the OS ABI handler specified by @var{init_osabi} for the
architecture, machine type and OS ABI specified by @var{arch},
@var{machine} and @var{osabi}. In most cases, a value of zero for the
machine type, which implies the architecture's default machine type,
will suffice.
@end deftypefun
@deftypefun void gdbarch_register_osabi_sniffer (enum bfd_architecture @var{arch}, enum bfd_flavour @var{flavour}, enum gdb_osabi (*@var{sniffer})(bfd *@var{abfd}))
Register the OS ABI file sniffer specified by @var{sniffer} for the
BFD architecture/flavour pair specified by @var{arch} and @var{flavour}.
If @var{arch} is @code{bfd_arch_unknown}, the sniffer is considered to
be generic, and is allowed to examine @var{flavour}-flavoured files for
any architecture.
@end deftypefun
@deftypefun enum gdb_osabi gdbarch_lookup_osabi (bfd *@var{abfd})
Examine the file described by @var{abfd} to determine its OS ABI.
The value @code{GDB_OSABI_UNKNOWN} is returned if the OS ABI cannot
be determined.
@end deftypefun
@deftypefun void gdbarch_init_osabi (struct gdbarch_info @var{info}, struct gdbarch *@var{gdbarch}, enum gdb_osabi @var{osabi})
Invoke the OS ABI handler corresponding to @var{osabi} to fine-tune the
@code{gdbarch} structure specified by @var{gdbarch}. If a handler
corresponding to @var{osabi} has not been registered for @var{gdbarch}'s
architecture, a warning will be issued and the debugging session will continue
with the defaults already established for @var{gdbarch}.
@end deftypefun
@deftypefun void generic_elf_osabi_sniff_abi_tag_sections (bfd *@var{abfd}, asection *@var{sect}, void *@var{obj})
Helper routine for ELF file sniffers. Examine the file described by
@var{abfd} and look at ABI tag note sections to determine the OS ABI
from the note. This function should be called via
@code{bfd_map_over_sections}.
@end deftypefun
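As an illustration, an architecture support file might register both a
sniffer and a handler from its initialization routine. The sketch below
is hypothetical; the @samp{xyz} names do not correspond to real
@value{GDBN} source files, and the architecture and OS ABI constants are
chosen purely as examples.
@smallexample
static enum gdb_osabi
xyz_osabi_sniffer (bfd *abfd)
@{
  /* Inspect ABFD (for example, its note sections) and return the
     matching OS ABI, or GDB_OSABI_UNKNOWN if it cannot be determined.  */
  return GDB_OSABI_UNKNOWN;
@}

static void
xyz_init_osabi (struct gdbarch_info info, struct gdbarch *gdbarch)
@{
  /* Fine-tune GDBARCH for this OS ABI here.  */
@}

void
_initialize_xyz_tdep (void)
@{
  /* A generic ELF sniffer (wildcarded architecture) ...  */
  gdbarch_register_osabi_sniffer (bfd_arch_unknown, bfd_target_elf_flavour,
                                  xyz_osabi_sniffer);
  /* ... and a handler for one architecture/OS ABI combination.  */
  gdbarch_register_osabi (bfd_arch_arm, 0, GDB_OSABI_LINUX,
                          xyz_init_osabi);
@}
@end smallexample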
@section Initializing a New Architecture
Each @code{gdbarch} is associated with a single @sc{bfd} architecture,
via a @code{bfd_arch_@var{arch}} constant. The @code{gdbarch} is
registered by a call to @code{register_gdbarch_init}, usually from
the file's @code{_initialize_@var{filename}} routine, which will
be automatically called during @value{GDBN} startup. The arguments
are a @sc{bfd} architecture constant and an initialization function.
The initialization function has this type:
@smallexample
static struct gdbarch *
@var{arch}_gdbarch_init (struct gdbarch_info @var{info},
struct gdbarch_list *@var{arches})
@end smallexample
The @var{info} argument contains parameters used to select the correct
architecture, and @var{arches} is a list of architectures which
have already been created with the same @code{bfd_arch_@var{arch}}
value.
The initialization function should first make sure that @var{info}
is acceptable, and return @code{NULL} if it is not. Then, it should
search through @var{arches} for an exact match to @var{info}, and
return one if found. Lastly, if no exact match was found, it should
create a new architecture based on @var{info} and return it.
Only information in @var{info} should be used to choose the new
architecture. Historically, @var{info} could be sparse, and
defaults would be collected from the first element on @var{arches}.
However, @value{GDBN} now fills in @var{info} more thoroughly,
so new @code{gdbarch} initialization functions should not take
defaults from @var{arches}.
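A minimal initialization function following this outline might look like
the sketch below. The @samp{xyz} names (including @code{bfd_arch_xyz})
are hypothetical; the helper functions @code{gdbarch_list_lookup_by_info}
and @code{gdbarch_alloc} are assumed to be used as declared in
@file{gdbarch.h}.
@smallexample
static struct gdbarch *
xyz_gdbarch_init (struct gdbarch_info info, struct gdbarch_list *arches)
@{
  struct gdbarch *gdbarch;

  /* Reuse an existing architecture that already matches INFO.  */
  arches = gdbarch_list_lookup_by_info (arches, &info);
  if (arches != NULL)
    return arches->gdbarch;

  /* Otherwise create a new architecture and fill in its methods.  */
  gdbarch = gdbarch_alloc (&info, NULL);
  /* Fill in the gdbarch methods here.  */
  return gdbarch;
@}

void
_initialize_xyz_tdep (void)
@{
  register_gdbarch_init (bfd_arch_xyz, xyz_gdbarch_init);
@}
@end smallexample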
@section Registers and Memory
@value{GDBN}'s model of the target machine is rather simple.
@value{GDBN} assumes the machine includes a bank of registers and a
block of memory. Each register may have a different size.
@value{GDBN} does not have a magical way to match up with the
compiler's idea of which registers are which; however, it is critical
that they do match up accurately. The only way to make this work is
to get accurate information about the order that the compiler uses,
and to reflect that in the @code{REGISTER_NAME} and related macros.
@value{GDBN} can handle big-endian, little-endian, and bi-endian architectures.
@section Pointers Are Not Always Addresses
@cindex pointer representation
@cindex address representation
@cindex word-addressed machines
@cindex separate data and code address spaces
@cindex spaces, separate data and code address
@cindex address spaces, separate data and code
@cindex code pointers, word-addressed
@cindex converting between pointers and addresses
@cindex D10V addresses
On almost all 32-bit architectures, the representation of a pointer is
indistinguishable from the representation of some fixed-length number
whose value is the byte address of the object pointed to. On such
machines, the words ``pointer'' and ``address'' can be used interchangeably.
However, architectures with smaller word sizes are often cramped for
address space, so they may choose a pointer representation that breaks this
identity, and allows a larger code address space.
For example, the Renesas D10V is a 16-bit VLIW processor whose
instructions are 32 bits long@footnote{Some D10V instructions are
actually pairs of 16-bit sub-instructions. However, since you can't
jump into the middle of such a pair, code addresses can only refer to
full 32 bit instructions, which is what matters in this explanation.}.
If the D10V used ordinary byte addresses to refer to code locations,
then the processor would only be able to address 64kb of instructions.
However, since instructions must be aligned on four-byte boundaries, the
low two bits of any valid instruction's byte address are always
zero---byte addresses waste two bits. So instead of byte addresses,
the D10V uses word addresses---byte addresses shifted right two bits---to
refer to code. Thus, the D10V can use 16-bit words to address 256kb of
code space.
However, this means that code pointers and data pointers have different
forms on the D10V. The 16-bit word @code{0xC020} refers to byte address
@code{0xC020} when used as a data address, but refers to byte address
@code{0x30080} when used as a code address.
(The D10V also uses separate code and data address spaces, which also
affects the correspondence between pointers and addresses, but we're
going to ignore that here; this example is already too long.)
To cope with architectures like this---the D10V is not the only
one!---@value{GDBN} tries to distinguish between @dfn{addresses}, which are
byte numbers, and @dfn{pointers}, which are the target's representation
of an address of a particular type of data. In the example above,
@code{0xC020} is the pointer, which refers to one of the addresses
@code{0xC020} or @code{0x30080}, depending on the type imposed upon it.
@value{GDBN} provides functions for turning a pointer into an address
and vice versa, in the appropriate way for the current architecture.
Unfortunately, since addresses and pointers are identical on almost all
processors, this distinction tends to bit-rot pretty quickly. Thus,
each time you port @value{GDBN} to an architecture which does
distinguish between pointers and addresses, you'll probably need to
clean up some architecture-independent code.
Here are functions which convert between pointers and addresses:
@deftypefun CORE_ADDR extract_typed_address (void *@var{buf}, struct type *@var{type})
Treat the bytes at @var{buf} as a pointer or reference of type
@var{type}, and return the address it represents, in a manner
appropriate for the current architecture. This yields an address
@value{GDBN} can use to read target memory, disassemble, etc. Note that
@var{buf} refers to a buffer in @value{GDBN}'s memory, not the
inferior's.
For example, if the current architecture is the Intel x86, this function
extracts a little-endian integer of the appropriate length from
@var{buf} and returns it. However, if the current architecture is the
D10V, this function will return a 16-bit integer extracted from
@var{buf}, multiplied by four if @var{type} is a pointer to a function.
If @var{type} is not a pointer or reference type, then this function
will signal an internal error.
@end deftypefun
@deftypefun CORE_ADDR store_typed_address (void *@var{buf}, struct type *@var{type}, CORE_ADDR @var{addr})
Store the address @var{addr} in @var{buf}, in the proper format for a
pointer of type @var{type} in the current architecture. Note that
@var{buf} refers to a buffer in @value{GDBN}'s memory, not the
inferior's.
For example, if the current architecture is the Intel x86, this function
stores @var{addr} unmodified as a little-endian integer of the
appropriate length in @var{buf}. However, if the current architecture
is the D10V, this function divides @var{addr} by four if @var{type} is
a pointer to a function, and then stores it in @var{buf}.
If @var{type} is not a pointer or reference type, then this function
will signal an internal error.
@end deftypefun
@deftypefun CORE_ADDR value_as_address (struct value *@var{val})
Assuming that @var{val} is a pointer, return the address it represents,
as appropriate for the current architecture.
This function actually works on integral values, as well as pointers.
For pointers, it performs architecture-specific conversions as
described above for @code{extract_typed_address}.
@end deftypefun
@deftypefun CORE_ADDR value_from_pointer (struct type *@var{type}, CORE_ADDR @var{addr})
Create and return a value representing a pointer of type @var{type} to
the address @var{addr}, as appropriate for the current architecture.
This function performs architecture-specific conversions as described
above for @code{store_typed_address}.
@end deftypefun
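For example, code that has fetched the bytes of a function pointer from
the target, and later needs to write a pointer back, might use these
functions as in the following sketch; @code{buf} and @code{type} are
assumed to describe that pointer value:
@smallexample
/* BUF holds the target-format pointer; TYPE is its type.  */
CORE_ADDR entry = extract_typed_address (buf, type);
/* ... use ENTRY to read memory, disassemble, set a breakpoint, etc. ...  */
store_typed_address (buf, type, entry);
@end smallexample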
Here are some macros which architectures can define to indicate the
relationship between pointers and addresses. These have default
definitions, appropriate for architectures on which all pointers are
simple unsigned byte addresses.
@deftypefn {Target Macro} CORE_ADDR POINTER_TO_ADDRESS (struct type *@var{type}, char *@var{buf})
Assume that @var{buf} holds a pointer of type @var{type}, in the
appropriate format for the current architecture. Return the byte
address the pointer refers to.
This function may safely assume that @var{type} is either a pointer or a
C@t{++} reference type.
@end deftypefn
@deftypefn {Target Macro} void ADDRESS_TO_POINTER (struct type *@var{type}, char *@var{buf}, CORE_ADDR @var{addr})
Store in @var{buf} a pointer of type @var{type} representing the address
@var{addr}, in the appropriate format for the current architecture.
This function may safely assume that @var{type} is either a pointer or a
C@t{++} reference type.
@end deftypefn
@section Address Classes
@cindex address classes
@cindex DW_AT_byte_size
@cindex DW_AT_address_class
Sometimes information about different kinds of addresses is available
via the debug information. For example, some programming environments
define addresses of several different sizes. If the debug information
distinguishes these kinds of address classes through either the size
info (e.g.@: @code{DW_AT_byte_size} in @w{DWARF 2}) or through an explicit
address class attribute (e.g.@: @code{DW_AT_address_class} in @w{DWARF 2}), the
following macros should be defined in order to disambiguate these
types within @value{GDBN} as well as provide the added information to
a @value{GDBN} user when printing type expressions.
@deftypefn {Target Macro} int ADDRESS_CLASS_TYPE_FLAGS (int @var{byte_size}, int @var{dwarf2_addr_class})
Returns the type flags needed to construct a pointer type whose size
is @var{byte_size} and whose address class is @var{dwarf2_addr_class}.
This function is normally called from within a symbol reader. See
@file{dwarf2read.c}.
@end deftypefn
@deftypefn {Target Macro} char *ADDRESS_CLASS_TYPE_FLAGS_TO_NAME (int @var{type_flags})
Given the type flags representing an address class qualifier, return
its name.
@end deftypefn
@deftypefn {Target Macro} int ADDRESS_CLASS_NAME_TO_TYPE_FLAGS (char *@var{name}, int *@var{type_flags_ptr})
Given an address qualifier name, set the @code{int} referenced by @var{type_flags_ptr} to the type flags
for that address class qualifier.
@end deftypefn
Since the need for address classes is rather rare, none of
the address class macros are defined by default.  Predicate
macros are provided to detect when they are defined.
Consider a hypothetical architecture in which addresses are normally
32-bits wide, but 16-bit addresses are also supported. Furthermore,
suppose that the @w{DWARF 2} information for this architecture simply
uses a @code{DW_AT_byte_size} value of 2 to indicate the use of one
of these "short" pointers. The following functions could be defined
to implement the address class macros:
@smallexample
int
somearch_address_class_type_flags (int byte_size,
                                   int dwarf2_addr_class)
@{
  if (byte_size == 2)
    return TYPE_FLAG_ADDRESS_CLASS_1;
  else
    return 0;
@}

static char *
somearch_address_class_type_flags_to_name (int type_flags)
@{
  if (type_flags & TYPE_FLAG_ADDRESS_CLASS_1)
    return "short";
  else
    return NULL;
@}

int
somearch_address_class_name_to_type_flags (char *name,
                                           int *type_flags_ptr)
@{
  if (strcmp (name, "short") == 0)
    @{
      *type_flags_ptr = TYPE_FLAG_ADDRESS_CLASS_1;
      return 1;
    @}
  else
    return 0;
@}
@end smallexample
The qualifier @code{@@short} is used in @value{GDBN}'s type expressions
to indicate the presence of one of these ``short'' pointers.  E.g.@: if
the debug information indicates that @code{short_ptr_var} is one of these
short pointers, @value{GDBN} might show the following behavior:
@smallexample
(gdb) ptype short_ptr_var
type = int * @@short
@end smallexample
@section Raw and Virtual Register Representations
@cindex raw register representation
@cindex virtual register representation
@cindex representations, raw and virtual registers
@emph{Maintainer note: This section is pretty much obsolete. The
functionality described here has largely been replaced by
pseudo-registers and the mechanisms described in @ref{Target
Architecture Definition, , Using Different Register and Memory Data
Representations}. See also @uref{http://www.gnu.org/software/gdb/bugs/,
Bug Tracking Database} and
@uref{http://sources.redhat.com/gdb/current/ari/, ARI Index} for more
up-to-date information.}
Some architectures use one representation for a value when it lives in a
register, but use a different representation when it lives in memory.
In @value{GDBN}'s terminology, the @dfn{raw} representation is the one used in
the target registers, and the @dfn{virtual} representation is the one
used in memory, and within @value{GDBN} @code{struct value} objects.
@emph{Maintainer note: Notice that the same mechanism is used both to
convert a register to a @code{struct value} and to convert between
alternative register forms.}
For almost all data types on almost all architectures, the virtual and
raw representations are identical, and no special handling is needed.
However, they do occasionally differ. For example:
@itemize @bullet
@item
The x86 architecture supports an 80-bit @code{long double} type. However, when
we store those values in memory, they occupy twelve bytes: the
floating-point number occupies the first ten, and the final two bytes
are unused. This keeps the values aligned on four-byte boundaries,
allowing more efficient access. Thus, the x86 80-bit floating-point
type is the raw representation, and the twelve-byte loosely-packed
arrangement is the virtual representation.
@item
Some 64-bit MIPS targets present 32-bit registers to @value{GDBN} as 64-bit
registers, with garbage in their upper bits. @value{GDBN} ignores the top 32
bits. Thus, the 64-bit form, with garbage in the upper 32 bits, is the
raw representation, and the trimmed 32-bit representation is the
virtual representation.
@end itemize
In general, the raw representation is determined by the architecture, or
@value{GDBN}'s interface to the architecture, while the virtual representation
can be chosen for @value{GDBN}'s convenience. @value{GDBN}'s register file,
@code{registers}, holds the register contents in raw format, and the
@value{GDBN} remote protocol transmits register values in raw format.
Your architecture may define the following macros to request
conversions between the raw and virtual format:
@deftypefn {Target Macro} int REGISTER_CONVERTIBLE (int @var{reg})
Return non-zero if register number @var{reg}'s value needs different raw
and virtual formats.
You should not use @code{REGISTER_CONVERT_TO_VIRTUAL} for a register
unless this macro returns a non-zero value for that register.
@end deftypefn
@deftypefn {Target Macro} int DEPRECATED_REGISTER_RAW_SIZE (int @var{reg})
The size of register number @var{reg}'s raw value. This is the number
of bytes the register will occupy in @code{registers}, or in a @value{GDBN}
remote protocol packet.
@end deftypefn
@deftypefn {Target Macro} int DEPRECATED_REGISTER_VIRTUAL_SIZE (int @var{reg})
The size of register number @var{reg}'s value, in its virtual format.
This is the size a @code{struct value}'s buffer will have, holding that
register's value.
@end deftypefn
@deftypefn {Target Macro} struct type *DEPRECATED_REGISTER_VIRTUAL_TYPE (int @var{reg})
This is the type of the virtual representation of register number
@var{reg}. Note that there is no need for a macro giving a type for the
register's raw form; once the register's value has been obtained, @value{GDBN}
always uses the virtual form.
@end deftypefn
@deftypefn {Target Macro} void REGISTER_CONVERT_TO_VIRTUAL (int @var{reg}, struct type *@var{type}, char *@var{from}, char *@var{to})
Convert the value of register number @var{reg} to @var{type}, which
should always be @code{DEPRECATED_REGISTER_VIRTUAL_TYPE (@var{reg})}. The buffer
at @var{from} holds the register's value in raw format; the macro should
convert the value to virtual format, and place it at @var{to}.
Note that @code{REGISTER_CONVERT_TO_VIRTUAL} and
@code{REGISTER_CONVERT_TO_RAW} take their @var{reg} and @var{type}
arguments in different orders.
You should only use @code{REGISTER_CONVERT_TO_VIRTUAL} with registers
for which the @code{REGISTER_CONVERTIBLE} macro returns a non-zero
value.
@end deftypefn
@deftypefn {Target Macro} void REGISTER_CONVERT_TO_RAW (struct type *@var{type}, int @var{reg}, char *@var{from}, char *@var{to})
Convert the value of register number @var{reg} to @var{type}, which
should always be @code{DEPRECATED_REGISTER_VIRTUAL_TYPE (@var{reg})}. The buffer
at @var{from} holds the register's value in raw format; the macro should
convert the value to virtual format, and place it at @var{to}.
Note that REGISTER_CONVERT_TO_VIRTUAL and REGISTER_CONVERT_TO_RAW take
their @var{reg} and @var{type} arguments in different orders.
@end deftypefn
@section Using Different Register and Memory Data Representations
@cindex register representation
@cindex memory representation
@cindex representations, register and memory
@cindex register data formats, converting
@cindex @code{struct value}, converting register contents to
@emph{Maintainer's note: The way GDB manipulates registers is undergoing
significant change. Many of the macros and functions referred to in this
section are likely to be subject to further revision. See
@uref{http://sources.redhat.com/gdb/current/ari/, A.R. Index} and
@uref{http://www.gnu.org/software/gdb/bugs, Bug Tracking Database} for
further information. cagney/2002-05-06.}
Some architectures can represent a data object in a register using a
form that differs from the object's normal memory representation.
For example:
@itemize @bullet
@item
The Alpha architecture can represent 32-bit integer values in
floating-point registers.
@item
The x86 architecture supports 80-bit floating-point registers. The
@code{long double} data type occupies 96 bits in memory but only 80 bits
when stored in a register.
@end itemize
In general, the register representation of a data type is determined by
the architecture, or @value{GDBN}'s interface to the architecture, while
the memory representation is determined by the Application Binary
Interface.
For almost all data types on almost all architectures, the two
representations are identical, and no special handling is needed.
However, they do occasionally differ. Your architecture may define the
following macros to request conversions between the register and memory
representations of a data type:
@deftypefn {Target Macro} int CONVERT_REGISTER_P (int @var{reg})
Return non-zero if the representation of a data value stored in this
register may be different to the representation of that same data value
when stored in memory.
When non-zero, the macros @code{REGISTER_TO_VALUE} and
@code{VALUE_TO_REGISTER} are used to perform any necessary conversion.
@end deftypefn
@deftypefn {Target Macro} void REGISTER_TO_VALUE (int @var{reg}, struct type *@var{type}, char *@var{from}, char *@var{to})
Convert the value of register number @var{reg} to a data object of type
@var{type}. The buffer at @var{from} holds the register's value in raw
format; the converted value should be placed in the buffer at @var{to}.
Note that @code{REGISTER_TO_VALUE} and @code{VALUE_TO_REGISTER} take
their @var{reg} and @var{type} arguments in different orders.
You should only use @code{REGISTER_TO_VALUE} with registers for which
the @code{CONVERT_REGISTER_P} macro returns a non-zero value.
@end deftypefn
@deftypefn {Target Macro} void VALUE_TO_REGISTER (struct type *@var{type}, int @var{reg}, char *@var{from}, char *@var{to})
Convert a data value of type @var{type} to register number @var{reg}'s
raw format.
Note that @code{REGISTER_TO_VALUE} and @code{VALUE_TO_REGISTER} take
their @var{reg} and @var{type} arguments in different orders.
You should only use @code{VALUE_TO_REGISTER} with registers for which
the @code{CONVERT_REGISTER_P} macro returns a non-zero value.
@end deftypefn
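As an illustration, an architecture whose floating-point registers hold
values in an extended format (as the x86 does) might implement these
conversions roughly as sketched below.  @code{SOMEARCH_FP0_REGNUM}, the
register range test, and the use of @code{builtin_type_i387_ext} are
illustrative:
@smallexample
static int
somearch_convert_register_p (int regnum)
@{
  /* Only the floating-point registers need conversion.  */
  return (regnum >= SOMEARCH_FP0_REGNUM
          && regnum < SOMEARCH_FP0_REGNUM + 8);
@}

static void
somearch_register_to_value (int regnum, struct type *type,
                            char *from, char *to)
@{
  /* Convert from the register's extended format to the memory
     format of TYPE.  */
  convert_typed_floating (from, builtin_type_i387_ext, to, type);
@}

static void
somearch_value_to_register (struct type *type, int regnum,
                            char *from, char *to)
@{
  /* The reverse direction: memory format to register format.  */
  convert_typed_floating (from, type, to, builtin_type_i387_ext);
@}
@end smallexample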
@deftypefn {Target Macro} void REGISTER_CONVERT_TO_TYPE (int @var{regnum}, struct type *@var{type}, char *@var{buf})
See @file{mips-tdep.c}. It does not do what you want.
@end deftypefn
@section Frame Interpretation
@section Inferior Call Setup
@section Compiler Characteristics
@section Target Conditionals
This section describes the macros that you can use to define the target
machine.
@table @code
@item ADDR_BITS_REMOVE (addr)
@findex ADDR_BITS_REMOVE
If a raw machine instruction address includes any bits that are not
really part of the address, then define this macro to expand into an
expression that zeroes those bits in @var{addr}. This is only used for
addresses of instructions, and even then not in all contexts.
For example, the two low-order bits of the PC on the Hewlett-Packard PA
2.0 architecture contain the privilege level of the corresponding
instruction. Since instructions must always be aligned on four-byte
boundaries, the processor masks out these bits to generate the actual
address of the instruction.  @code{ADDR_BITS_REMOVE} should filter out these
bits with an expression such as @code{((addr) & ~3)}.
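For the PA case just described, a minimal definition could be (this is a
sketch, not the actual definition used by @value{GDBN}):
@smallexample
/* Strip the low two (privilege level) bits from instruction
   addresses.  */
#define ADDR_BITS_REMOVE(addr) ((addr) & ~(CORE_ADDR) 3)
@end smallexample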
@item ADDRESS_CLASS_NAME_TO_TYPE_FLAGS (@var{name}, @var{type_flags_ptr})
@findex ADDRESS_CLASS_NAME_TO_TYPE_FLAGS
If @var{name} is a valid address class qualifier name, set the @code{int}
referenced by @var{type_flags_ptr} to the mask representing the qualifier
and return 1. If @var{name} is not a valid address class qualifier name,
return 0.
The value for @var{type_flags_ptr} should be one of
@code{TYPE_FLAG_ADDRESS_CLASS_1}, @code{TYPE_FLAG_ADDRESS_CLASS_2}, or
possibly some combination of these values or'd together.
@xref{Target Architecture Definition, , Address Classes}.
@item ADDRESS_CLASS_NAME_TO_TYPE_FLAGS_P ()
@findex ADDRESS_CLASS_NAME_TO_TYPE_FLAGS_P
Predicate which indicates whether @code{ADDRESS_CLASS_NAME_TO_TYPE_FLAGS}
has been defined.
@item ADDRESS_CLASS_TYPE_FLAGS (@var{byte_size}, @var{dwarf2_addr_class})
@findex ADDRESS_CLASS_TYPE_FLAGS (@var{byte_size}, @var{dwarf2_addr_class})
Given a pointer's byte size (as described by the debug information) and
the possible @code{DW_AT_address_class} value, return the type flags
used by @value{GDBN} to represent this address class. The value
returned should be one of @code{TYPE_FLAG_ADDRESS_CLASS_1},
@code{TYPE_FLAG_ADDRESS_CLASS_2}, or possibly some combination of these
values or'd together.
@xref{Target Architecture Definition, , Address Classes}.
@item ADDRESS_CLASS_TYPE_FLAGS_P ()
@findex ADDRESS_CLASS_TYPE_FLAGS_P
Predicate which indicates whether @code{ADDRESS_CLASS_TYPE_FLAGS} has
been defined.
@item ADDRESS_CLASS_TYPE_FLAGS_TO_NAME (@var{type_flags})
@findex ADDRESS_CLASS_TYPE_FLAGS_TO_NAME
Return the name of the address class qualifier associated with the type
flags given by @var{type_flags}.
@item ADDRESS_CLASS_TYPE_FLAGS_TO_NAME_P ()
@findex ADDRESS_CLASS_TYPE_FLAGS_TO_NAME_P
Predicate which indicates whether @code{ADDRESS_CLASS_TYPE_FLAGS_TO_NAME} has
been defined.
@xref{Target Architecture Definition, , Address Classes}.
@item ADDRESS_TO_POINTER (@var{type}, @var{buf}, @var{addr})
@findex ADDRESS_TO_POINTER
Store in @var{buf} a pointer of type @var{type} representing the address
@var{addr}, in the appropriate format for the current architecture.
This macro may safely assume that @var{type} is either a pointer or a
C@t{++} reference type.
@xref{Target Architecture Definition, , Pointers Are Not Always Addresses}.
@item BELIEVE_PCC_PROMOTION
@findex BELIEVE_PCC_PROMOTION
Define if the compiler promotes a @code{short} or @code{char}
parameter to an @code{int}, but still reports the parameter as its
original type, rather than the promoted type.
@item BITS_BIG_ENDIAN
@findex BITS_BIG_ENDIAN
Define this if the numbering of bits in the target does @strong{not} match the
endianness of the target byte order. A value of 1 means that the bits
are numbered in a big-endian bit order, 0 means little-endian.
@item BREAKPOINT
@findex BREAKPOINT
This is the character array initializer for the bit pattern to put into
memory where a breakpoint is set. Although it's common to use a trap
instruction for a breakpoint, it's not required; for instance, the bit
pattern could be an invalid instruction. The breakpoint must be no
longer than the shortest instruction of the architecture.
@code{BREAKPOINT} has been deprecated in favor of
@code{BREAKPOINT_FROM_PC}.
@item BIG_BREAKPOINT
@itemx LITTLE_BREAKPOINT
@findex LITTLE_BREAKPOINT
@findex BIG_BREAKPOINT
Similar to @code{BREAKPOINT}, but used for bi-endian targets.
@code{BIG_BREAKPOINT} and @code{LITTLE_BREAKPOINT} have been deprecated in
favor of @code{BREAKPOINT_FROM_PC}.
@item BREAKPOINT_FROM_PC (@var{pcptr}, @var{lenptr})
@findex BREAKPOINT_FROM_PC
@anchor{BREAKPOINT_FROM_PC} Use the program counter to determine the
contents and size of a breakpoint instruction. It returns a pointer to
a string of bytes that encode a breakpoint instruction, stores the
length of the string to @code{*@var{lenptr}}, and adjusts the program
counter (if necessary) to point to the actual memory location where the
breakpoint should be inserted.
Although it is common to use a trap instruction for a breakpoint, it's
not required; for instance, the bit pattern could be an invalid
instruction. The breakpoint must be no longer than the shortest
instruction of the architecture.
Replaces all the other @code{BREAKPOINT} macros.
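For example, an architecture with a single fixed-length trap instruction
might back this macro with a function shaped as follows; the byte
pattern and the @code{somearch} name are purely illustrative:
@smallexample
static const gdb_byte *
somearch_breakpoint_from_pc (CORE_ADDR *pcptr, int *lenptr)
@{
  /* A two-byte trap instruction, in target byte order.  */
  static const gdb_byte break_insn[] = @{ 0x00, 0x0b @};

  /* No address adjustment is needed, so *PCPTR is left unchanged.  */
  *lenptr = sizeof (break_insn);
  return break_insn;
@}
@end smallexample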
@item MEMORY_INSERT_BREAKPOINT (@var{bp_tgt})
@itemx MEMORY_REMOVE_BREAKPOINT (@var{bp_tgt})
@findex MEMORY_REMOVE_BREAKPOINT
@findex MEMORY_INSERT_BREAKPOINT
Insert or remove memory based breakpoints. Reasonable defaults
(@code{default_memory_insert_breakpoint} and
@code{default_memory_remove_breakpoint} respectively) have been
provided so that it is not necessary to define these for most
architectures. Architectures which may want to define
@code{MEMORY_INSERT_BREAKPOINT} and @code{MEMORY_REMOVE_BREAKPOINT} will
likely have instructions that are oddly sized or are not stored in a
conventional manner.
It may also be desirable (from an efficiency standpoint) to define
custom breakpoint insertion and removal routines if
@code{BREAKPOINT_FROM_PC} needs to read the target's memory for some
reason.
@item ADJUST_BREAKPOINT_ADDRESS (@var{address})
@findex ADJUST_BREAKPOINT_ADDRESS
@cindex breakpoint address adjusted
Given an address at which a breakpoint is desired, return a breakpoint
address adjusted to account for architectural constraints on
breakpoint placement. This method is not needed by most targets.
The FR-V target (see @file{frv-tdep.c}) requires this method.
The FR-V is a VLIW architecture in which a number of RISC-like
instructions are grouped (packed) together into an aggregate
instruction or instruction bundle. When the processor executes
one of these bundles, the component instructions are executed
in parallel.
In the course of optimization, the compiler may group instructions
from distinct source statements into the same bundle. The line number
information associated with one of the latter statements will likely
refer to some instruction other than the first one in the bundle. So,
if the user attempts to place a breakpoint on one of these latter
statements, @value{GDBN} must be careful to @emph{not} place the break
instruction on any instruction other than the first one in the bundle.
(Remember though that the instructions within a bundle execute
in parallel, so the @emph{first} instruction is the instruction
at the lowest address and has nothing to do with execution order.)
The FR-V's @code{ADJUST_BREAKPOINT_ADDRESS} method will adjust a
breakpoint's address by scanning backwards for the beginning of
the bundle, returning the address of the bundle.
Since the adjustment of a breakpoint may significantly alter a user's
expectation, @value{GDBN} prints a warning when an adjusted breakpoint
is initially set and each time that breakpoint is hit.
@item CALL_DUMMY_LOCATION
@findex CALL_DUMMY_LOCATION
See the file @file{inferior.h}.
This method has been replaced by @code{push_dummy_code}
(@pxref{push_dummy_code}).
@item CANNOT_FETCH_REGISTER (@var{regno})
@findex CANNOT_FETCH_REGISTER
A C expression that should be nonzero if @var{regno} cannot be fetched
from an inferior process. This is only relevant if
@code{FETCH_INFERIOR_REGISTERS} is not defined.
@item CANNOT_STORE_REGISTER (@var{regno})
@findex CANNOT_STORE_REGISTER
A C expression that should be nonzero if @var{regno} should not be
written to the target. This is often the case for program counters,
status words, and other special registers. If this is not defined,
@value{GDBN} will assume that all registers may be written.
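For example, a port with an (illustrative) read-only status register
might use:
@smallexample
/* SOMEARCH_STATUS_REGNUM is an illustrative register number.  */
#define CANNOT_STORE_REGISTER(regno) ((regno) == SOMEARCH_STATUS_REGNUM)
@end smallexample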
@item int CONVERT_REGISTER_P(@var{regnum})
@findex CONVERT_REGISTER_P
Return non-zero if register @var{regnum} can represent data values in a
non-standard form.
@xref{Target Architecture Definition, , Using Different Register and Memory Data Representations}.
@item DECR_PC_AFTER_BREAK
@findex DECR_PC_AFTER_BREAK
Define this to be the amount by which to decrement the PC after the
program encounters a breakpoint. This is often the number of bytes in
@code{BREAKPOINT}, though not always. For most targets this value will be 0.
@item DISABLE_UNSETTABLE_BREAK (@var{addr})
@findex DISABLE_UNSETTABLE_BREAK
If defined, this should evaluate to 1 if @var{addr} is in a shared
library in which breakpoints cannot be set and so should be disabled.
@item PRINT_FLOAT_INFO()
@findex PRINT_FLOAT_INFO
If defined, then the @samp{info float} command will print information about
the processor's floating point unit.
@item print_registers_info (@var{gdbarch}, @var{frame}, @var{regnum}, @var{all})
@findex print_registers_info
If defined, pretty print the value of the register @var{regnum} for the
specified @var{frame}. If the value of @var{regnum} is -1, pretty print
either all registers (@var{all} is non-zero) or a select subset of
registers (@var{all} is zero).
The default method prints one register per line, and if @var{all} is
zero omits floating-point registers.
@item PRINT_VECTOR_INFO()
@findex PRINT_VECTOR_INFO
If defined, then the @samp{info vector} command will call this function
to print information about the processor's vector unit.
By default, the @samp{info vector} command will print all vector
registers (i.e.@: registers whose type has the vector attribute).
@item DWARF_REG_TO_REGNUM
@findex DWARF_REG_TO_REGNUM
Convert DWARF register number into @value{GDBN} regnum. If not defined,
no conversion will be performed.
@item DWARF2_REG_TO_REGNUM
@findex DWARF2_REG_TO_REGNUM
Convert DWARF2 register number into @value{GDBN} regnum. If not
defined, no conversion will be performed.
@item ECOFF_REG_TO_REGNUM
@findex ECOFF_REG_TO_REGNUM
Convert ECOFF register number into @value{GDBN} regnum. If not defined,
no conversion will be performed.
@item END_OF_TEXT_DEFAULT
@findex END_OF_TEXT_DEFAULT
This is an expression that should designate the end of the text section.
@c (? FIXME ?)
@item EXTRACT_RETURN_VALUE(@var{type}, @var{regbuf}, @var{valbuf})
@findex EXTRACT_RETURN_VALUE
Define this to extract a function's return value of type @var{type} from
the raw register state @var{regbuf} and copy that, in virtual format,
into @var{valbuf}.
This method has been deprecated in favour of @code{gdbarch_return_value}
(@pxref{gdbarch_return_value}).
@item DEPRECATED_EXTRACT_STRUCT_VALUE_ADDRESS(@var{regbuf})
@findex DEPRECATED_EXTRACT_STRUCT_VALUE_ADDRESS
@anchor{DEPRECATED_EXTRACT_STRUCT_VALUE_ADDRESS}
When defined, extract from the array @var{regbuf} (containing the raw
register state) the @code{CORE_ADDR} at which a function should return
its structure value.
@xref{gdbarch_return_value}.
@item DEPRECATED_EXTRACT_STRUCT_VALUE_ADDRESS_P()
@findex DEPRECATED_EXTRACT_STRUCT_VALUE_ADDRESS_P
Predicate for @code{DEPRECATED_EXTRACT_STRUCT_VALUE_ADDRESS}.
@item DEPRECATED_FP_REGNUM
@findex DEPRECATED_FP_REGNUM
If the virtual frame pointer is kept in a register, then define this
macro to be the number (greater than or equal to zero) of that register.
This should only need to be defined if @code{DEPRECATED_TARGET_READ_FP}
is not defined.
@item DEPRECATED_FRAMELESS_FUNCTION_INVOCATION(@var{fi})
@findex DEPRECATED_FRAMELESS_FUNCTION_INVOCATION
Define this to an expression that returns 1 if the function invocation
represented by @var{fi} does not have a stack frame associated with it.
Otherwise return 0.
@item frame_align (@var{address})
@anchor{frame_align}
@findex frame_align
Define this to adjust @var{address} so that it meets the alignment
requirements for the start of a new stack frame. A stack frame's
alignment requirements are typically stronger than a target processor's
stack alignment requirements (@pxref{DEPRECATED_STACK_ALIGN}).
This function is used to ensure that, when creating a dummy frame, both
the initial stack pointer and (if needed) the address of the return
value are correctly aligned.
Unlike @code{DEPRECATED_STACK_ALIGN}, this function always adjusts the
address in the direction of stack growth.
By default, no frame based stack alignment is performed.
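For example, an architecture with a downward-growing stack and 8-byte
frame alignment might implement the method as follows (a sketch;
@code{somearch} is illustrative):
@smallexample
static CORE_ADDR
somearch_frame_align (struct gdbarch *gdbarch, CORE_ADDR address)
@{
  /* Round down, i.e. in the direction of stack growth.  */
  return address & ~(CORE_ADDR) 7;
@}
@end smallexample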
@item int frame_red_zone_size
The number of bytes, beyond the innermost-stack-address, reserved by the
@sc{abi}. A function is permitted to use this scratch area (instead of
allocating extra stack space).
When performing an inferior function call, to ensure that it does not
modify this area, @value{GDBN} adjusts the innermost-stack-address by
@var{frame_red_zone_size} bytes before pushing parameters onto the
stack.
By default, zero bytes are allocated. The value must be aligned
(@pxref{frame_align}).
The @sc{amd64} (nee x86-64) @sc{abi} documentation refers to the
@emph{red zone} when describing this scratch area.
@cindex red zone
@item DEPRECATED_FRAME_CHAIN(@var{frame})
@findex DEPRECATED_FRAME_CHAIN
Given @var{frame}, return a pointer to the calling frame.
@item DEPRECATED_FRAME_CHAIN_VALID(@var{chain}, @var{thisframe})
@findex DEPRECATED_FRAME_CHAIN_VALID
Define this to be an expression that returns zero if the given frame is an
outermost frame, with no caller, and nonzero otherwise. Most normal
situations can be handled without defining this macro, including @code{NULL}
chain pointers, dummy frames, and frames whose PC values are inside the
startup file (e.g.@: @file{crt0.o}), inside @code{main}, or inside
@code{_start}.
@item DEPRECATED_FRAME_INIT_SAVED_REGS(@var{frame})
@findex DEPRECATED_FRAME_INIT_SAVED_REGS
See @file{frame.h}. Determines the address of all registers in the
current stack frame storing each in @code{frame->saved_regs}. Space for
@code{frame->saved_regs} shall be allocated by
@code{DEPRECATED_FRAME_INIT_SAVED_REGS} using
@code{frame_saved_regs_zalloc}.
@code{FRAME_FIND_SAVED_REGS} is deprecated.
@item FRAME_NUM_ARGS (@var{fi})
@findex FRAME_NUM_ARGS
For the frame described by @var{fi} return the number of arguments that
are being passed. If the number of arguments is not known, return
@code{-1}.
@item DEPRECATED_FRAME_SAVED_PC(@var{frame})
@findex DEPRECATED_FRAME_SAVED_PC
@anchor{DEPRECATED_FRAME_SAVED_PC} Given @var{frame}, return the pc
saved there. This is the return address.
This method is deprecated. @xref{unwind_pc}.
@item CORE_ADDR unwind_pc (struct frame_info *@var{this_frame})
@findex unwind_pc
@anchor{unwind_pc} Return the instruction address, in @var{this_frame}'s
caller, at which execution will resume after @var{this_frame} returns.
This is commonly referred to as the return address.
The implementation, which must be frame agnostic (work with any frame),
is typically no more than:
@smallexample
ULONGEST pc;
frame_unwind_unsigned_register (this_frame, D10V_PC_REGNUM, &pc);
return d10v_make_iaddr (pc);
@end smallexample
@noindent
@xref{DEPRECATED_FRAME_SAVED_PC}, which this method replaces.
@item CORE_ADDR unwind_sp (struct frame_info *@var{this_frame})
@findex unwind_sp
@anchor{unwind_sp} Return the frame's innermost stack address.  This is
commonly referred to as the frame's @dfn{stack pointer}.
The implementation, which must be frame agnostic (work with any frame),
is typically no more than:
@smallexample
ULONGEST sp;
frame_unwind_unsigned_register (this_frame, D10V_SP_REGNUM, &sp);
return d10v_make_daddr (sp);
@end smallexample
@noindent
@xref{TARGET_READ_SP}, which this method replaces.
@item FUNCTION_EPILOGUE_SIZE
@findex FUNCTION_EPILOGUE_SIZE
For some COFF targets, the @code{x_sym.x_misc.x_fsize} field of the
function end symbol is 0. For such targets, you must define
@code{FUNCTION_EPILOGUE_SIZE} to expand into the standard size of a
function's epilogue.
@item DEPRECATED_FUNCTION_START_OFFSET
@findex DEPRECATED_FUNCTION_START_OFFSET
An integer, giving the offset in bytes from a function's address (as
used in the values of symbols, function pointers, etc.) to the
function's first genuine instruction.
This is zero on almost all machines: the function's address is usually
the address of its first instruction. However, on the VAX, for
example, each function starts with two bytes containing a bitmask
indicating which registers to save upon entry to the function. The
VAX @code{call} instructions check this value, and save the
appropriate registers automatically. Thus, since the offset from the
function's address to its first instruction is two bytes,
@code{DEPRECATED_FUNCTION_START_OFFSET} would be 2 on the VAX.
@item GCC_COMPILED_FLAG_SYMBOL
@itemx GCC2_COMPILED_FLAG_SYMBOL
@findex GCC2_COMPILED_FLAG_SYMBOL
@findex GCC_COMPILED_FLAG_SYMBOL
If defined, these are the names of the symbols that @value{GDBN} will
look for to detect that GCC compiled the file. The default symbols
are @code{gcc_compiled.} and @code{gcc2_compiled.},
respectively. (Currently only defined for the Delta 68.)
@item @value{GDBN}_MULTI_ARCH
@findex @value{GDBN}_MULTI_ARCH
If defined and non-zero, enables support for multiple architectures
within @value{GDBN}.
This support can be enabled at two levels. At level one, only
definitions for previously undefined macros are provided; at level two,
a multi-arch definition of all architecture dependent macros will be
defined.
@item @value{GDBN}_TARGET_IS_HPPA
@findex @value{GDBN}_TARGET_IS_HPPA
This determines whether horrible kludge code in @file{dbxread.c} and
@file{partial-stab.h} is used to mangle multiple-symbol-table files from
HPPAs.  This should all be ripped out, and a scheme like @file{elfread.c}
used instead.
@item GET_LONGJMP_TARGET
@findex GET_LONGJMP_TARGET
For most machines, this is a target-dependent parameter. On the
DECstation and the Iris, this is a native-dependent parameter, since
the header file @file{setjmp.h} is needed to define it.
This macro determines the target PC address that @code{longjmp} will jump to,
assuming that we have just stopped at a @code{longjmp} breakpoint. It takes a
@code{CORE_ADDR *} as argument, and stores the target PC value through this
pointer. It examines the current state of the machine as needed.
@item DEPRECATED_GET_SAVED_REGISTER
@findex DEPRECATED_GET_SAVED_REGISTER
Define this if you need to supply your own definition for the function
@code{DEPRECATED_GET_SAVED_REGISTER}.
@item DEPRECATED_IBM6000_TARGET
@findex DEPRECATED_IBM6000_TARGET
Shows that we are configured for an IBM RS/6000 system. This
conditional should be eliminated (FIXME) and replaced by
feature-specific macros.  It was introduced in haste and we are
repenting at leisure.
@item I386_USE_GENERIC_WATCHPOINTS
An x86-based target can define this to use the generic x86 watchpoint
support; see @ref{Algorithms, I386_USE_GENERIC_WATCHPOINTS}.
@item SYMBOLS_CAN_START_WITH_DOLLAR
@findex SYMBOLS_CAN_START_WITH_DOLLAR
Some systems have routines whose names start with @samp{$}. Giving this
macro a non-zero value tells @value{GDBN}'s expression parser to check for such
routines when parsing tokens that begin with @samp{$}.
On HP-UX, certain system routines (millicode) have names beginning with
@samp{$} or @samp{$$}. For example, @code{$$dyncall} is a millicode
routine that handles inter-space procedure calls on PA-RISC.
@item DEPRECATED_INIT_EXTRA_FRAME_INFO (@var{fromleaf}, @var{frame})
@findex DEPRECATED_INIT_EXTRA_FRAME_INFO
If additional information about the frame is required this should be
stored in @code{frame->extra_info}. Space for @code{frame->extra_info}
is allocated using @code{frame_extra_info_zalloc}.
@item DEPRECATED_INIT_FRAME_PC (@var{fromleaf}, @var{prev})
@findex DEPRECATED_INIT_FRAME_PC
This is a C statement that sets the pc of the frame pointed to by
@var{prev}. [By default...]
@item INNER_THAN (@var{lhs}, @var{rhs})
@findex INNER_THAN
Returns non-zero if stack address @var{lhs} is inner than (nearer to the
stack top) stack address @var{rhs}. Define this as @code{lhs < rhs} if
the target's stack grows downward in memory, or @code{lhs > rhs} if the
stack grows upward.
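For example, a target whose stack grows downward would simply use:
@smallexample
#define INNER_THAN(lhs, rhs) ((lhs) < (rhs))
@end smallexample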
@item gdbarch_in_function_epilogue_p (@var{gdbarch}, @var{pc})
@findex gdbarch_in_function_epilogue_p
Returns non-zero if the given @var{pc} is in the epilogue of a function.
The epilogue of a function is defined as the part of a function where
the stack frame of the function already has been destroyed up to the
final `return from function call' instruction.
@item DEPRECATED_SIGTRAMP_START (@var{pc})
@findex DEPRECATED_SIGTRAMP_START
@itemx DEPRECATED_SIGTRAMP_END (@var{pc})
@findex DEPRECATED_SIGTRAMP_END
Define these to be the start and end address of the @code{sigtramp} for the
given @var{pc}. On machines where the address is just a compile time
constant, the macro expansion will typically just ignore the supplied
@var{pc}.
@item IN_SOLIB_CALL_TRAMPOLINE (@var{pc}, @var{name})
@findex IN_SOLIB_CALL_TRAMPOLINE
Define this to evaluate to nonzero if the program is stopped in the
trampoline that connects to a shared library.
@item IN_SOLIB_RETURN_TRAMPOLINE (@var{pc}, @var{name})
@findex IN_SOLIB_RETURN_TRAMPOLINE
Define this to evaluate to nonzero if the program is stopped in the
trampoline that returns from a shared library.
@item IN_SOLIB_DYNSYM_RESOLVE_CODE (@var{pc})
@findex IN_SOLIB_DYNSYM_RESOLVE_CODE
Define this to evaluate to nonzero if the program is stopped in the
dynamic linker.
@item SKIP_SOLIB_RESOLVER (@var{pc})
@findex SKIP_SOLIB_RESOLVER
Define this to evaluate to the (nonzero) address at which execution
should continue to get past the dynamic linker's symbol resolution
function. A zero value indicates that it is not important or necessary
to set a breakpoint to get through the dynamic linker and that single
stepping will suffice.
@item INTEGER_TO_ADDRESS (@var{type}, @var{buf})
@findex INTEGER_TO_ADDRESS
@cindex converting integers to addresses
Define this when the architecture needs special handling when converting
a non-pointer value to an address.  The macro converts that value to an
address following the current architecture's conventions.
@emph{Pragmatics: When the user copies a well defined expression from
their source code and passes it, as a parameter, to @value{GDBN}'s
@code{print} command, they should get the same value as would have been
computed by the target program. Any deviation from this rule can cause
major confusion and annoyance, and needs to be justified carefully. In
other words, @value{GDBN} doesn't really have the freedom to do these
conversions in clever and useful ways. It has, however, been pointed
out that users aren't complaining about how @value{GDBN} casts integers
to pointers; they are complaining that they can't take an address from a
disassembly listing and give it to @code{x/i}. Adding an architecture
method like @code{INTEGER_TO_ADDRESS} certainly makes it possible for
@value{GDBN} to ``get it right'' in all circumstances.}
@xref{Target Architecture Definition, , Pointers Are Not Always
Addresses}.
@item NO_HIF_SUPPORT
@findex NO_HIF_SUPPORT
(Specific to the a29k.)
@item POINTER_TO_ADDRESS (@var{type}, @var{buf})
@findex POINTER_TO_ADDRESS
Assume that @var{buf} holds a pointer of type @var{type}, in the
appropriate format for the current architecture. Return the byte
address the pointer refers to.
@xref{Target Architecture Definition, , Pointers Are Not Always Addresses}.
@item REGISTER_CONVERTIBLE (@var{reg})
@findex REGISTER_CONVERTIBLE
Return non-zero if @var{reg} uses different raw and virtual formats.
@xref{Target Architecture Definition, , Raw and Virtual Register Representations}.
@item REGISTER_TO_VALUE(@var{regnum}, @var{type}, @var{from}, @var{to})
@findex REGISTER_TO_VALUE
Convert the raw contents of register @var{regnum} into a value of type
@var{type}.
@xref{Target Architecture Definition, , Using Different Register and Memory Data Representations}.
@item DEPRECATED_REGISTER_RAW_SIZE (@var{reg})
@findex DEPRECATED_REGISTER_RAW_SIZE
Return the raw size of @var{reg}; defaults to the size of the register's
virtual type.
@xref{Target Architecture Definition, , Raw and Virtual Register Representations}.
@item register_reggroup_p (@var{gdbarch}, @var{regnum}, @var{reggroup})
@findex register_reggroup_p
@cindex register groups
Return non-zero if register @var{regnum} is a member of the register
group @var{reggroup}.
By default, registers are grouped as follows:
@table @code
@item float_reggroup
Any register with a valid name and a floating-point type.
@item vector_reggroup
Any register with a valid name and a vector type.
@item general_reggroup
Any register with a valid name and a type other than vector or
floating-point.
@item save_reggroup
@itemx restore_reggroup
@itemx all_reggroup
Any register with a valid name.
@end table
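A port that only needs to adjust the default grouping can fall back on
@code{default_register_reggroup_p} for everything else.  The following
sketch (the register number is illustrative) hides a status register
from the general group:
@smallexample
static int
somearch_register_reggroup_p (struct gdbarch *gdbarch, int regnum,
                              struct reggroup *group)
@{
  /* SOMEARCH_STATUS_REGNUM is illustrative.  */
  if (regnum == SOMEARCH_STATUS_REGNUM && group == general_reggroup)
    return 0;
  return default_register_reggroup_p (gdbarch, regnum, group);
@}
@end smallexample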
@item DEPRECATED_REGISTER_VIRTUAL_SIZE (@var{reg})
@findex DEPRECATED_REGISTER_VIRTUAL_SIZE
Return the virtual size of @var{reg}; defaults to the size of the
register's virtual type.
@xref{Target Architecture Definition, , Raw and Virtual Register Representations}.
@item DEPRECATED_REGISTER_VIRTUAL_TYPE (@var{reg})
@findex DEPRECATED_REGISTER_VIRTUAL_TYPE
Return the virtual type of @var{reg}.
@xref{Target Architecture Definition, , Raw and Virtual Register Representations}.
@item struct type *register_type (@var{gdbarch}, @var{reg})
@findex register_type
If defined, return the type of register @var{reg}. This function
supersedes @code{DEPRECATED_REGISTER_VIRTUAL_TYPE}. @xref{Target Architecture
Definition, , Raw and Virtual Register Representations}.
@item REGISTER_CONVERT_TO_VIRTUAL(@var{reg}, @var{type}, @var{from}, @var{to})
@findex REGISTER_CONVERT_TO_VIRTUAL
Convert the value of register @var{reg} from its raw form to its virtual
form.
@xref{Target Architecture Definition, , Raw and Virtual Register Representations}.
@item REGISTER_CONVERT_TO_RAW(@var{type}, @var{reg}, @var{from}, @var{to})
@findex REGISTER_CONVERT_TO_RAW
Convert the value of register @var{reg} from its virtual form to its raw
form.
@xref{Target Architecture Definition, , Raw and Virtual Register Representations}.
@item const struct regset *regset_from_core_section (struct gdbarch * @var{gdbarch}, const char * @var{sect_name}, size_t @var{sect_size})
@findex regset_from_core_section
Return the appropriate register set for a core file section with name
@var{sect_name} and size @var{sect_size}.
@item SOFTWARE_SINGLE_STEP_P()
@findex SOFTWARE_SINGLE_STEP_P
Define this as 1 if the target does not have a hardware single-step
mechanism. The macro @code{SOFTWARE_SINGLE_STEP} must also be defined.
@item SOFTWARE_SINGLE_STEP(@var{signal}, @var{insert_breakpoints_p})
@findex SOFTWARE_SINGLE_STEP
A function that inserts or removes (depending on
@var{insert_breakpoints_p}) breakpoints at each possible destination of
the next instruction. See @file{sparc-tdep.c} and @file{rs6000-tdep.c}
for examples.
@item SOFUN_ADDRESS_MAYBE_MISSING
@findex SOFUN_ADDRESS_MAYBE_MISSING
Somebody clever observed that the more actual addresses you have in the
debug information, the more time the linker has to spend relocating
them. So whenever there's some other way the debugger could find the
address it needs, you should omit it from the debug info, to make
linking faster.
@code{SOFUN_ADDRESS_MAYBE_MISSING} indicates that a particular set of
hacks of this sort are in use, affecting @code{N_SO} and @code{N_FUN}
entries in stabs-format debugging information. @code{N_SO} stabs mark
the beginning and ending addresses of compilation units in the text
segment. @code{N_FUN} stabs mark the starts and ends of functions.
@code{SOFUN_ADDRESS_MAYBE_MISSING} means two things:
@itemize @bullet
@item
@code{N_FUN} stabs have an address of zero.  Instead, you should find
the address where the function starts by taking the function name from
the stab, and then looking that up in the minsyms (the
linker/assembler symbol table). In other words, the stab has the
name, and the linker/assembler symbol table is the only place that carries
the address.
@item
@code{N_SO} stabs have an address of zero, too. You just look at the
@code{N_FUN} stabs that appear before and after the @code{N_SO} stab,
and guess the starting and ending addresses of the compilation unit from
them.
@end itemize
@item PC_LOAD_SEGMENT
@findex PC_LOAD_SEGMENT
If defined, print information about the load segment for the program
counter. (Defined only for the RS/6000.)
@item PC_REGNUM
@findex PC_REGNUM
If the program counter is kept in a register, then define this macro to
be the number (greater than or equal to zero) of that register.
This should only need to be defined if @code{TARGET_READ_PC} and
@code{TARGET_WRITE_PC} are not defined.
@item PARM_BOUNDARY
@findex PARM_BOUNDARY
If non-zero, round arguments to a boundary of this many bits before
pushing them on the stack.
@item stabs_argument_has_addr (@var{gdbarch}, @var{type})
@findex stabs_argument_has_addr
@findex DEPRECATED_REG_STRUCT_HAS_ADDR
@anchor{stabs_argument_has_addr} Define this to return nonzero if a
function argument of type @var{type} is passed by reference instead of
value.
This method replaces @code{DEPRECATED_REG_STRUCT_HAS_ADDR}
(@pxref{DEPRECATED_REG_STRUCT_HAS_ADDR}).
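For example, an @sc{abi} that passes aggregates larger than eight bytes
by reference might use the following sketch (the size threshold is
illustrative):
@smallexample
static int
somearch_stabs_argument_has_addr (struct gdbarch *gdbarch,
                                  struct type *type)
@{
  return TYPE_LENGTH (type) > 8;
@}
@end smallexample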
@item PROCESS_LINENUMBER_HOOK
@findex PROCESS_LINENUMBER_HOOK
A hook defined for XCOFF reading.
@item PROLOGUE_FIRSTLINE_OVERLAP
@findex PROLOGUE_FIRSTLINE_OVERLAP
(Only used in unsupported Convex configuration.)
@item PS_REGNUM
@findex PS_REGNUM
If defined, this is the number of the processor status register. (This
definition is only used in generic code when parsing @samp{$ps}.)
@item DEPRECATED_POP_FRAME
@findex DEPRECATED_POP_FRAME
@findex frame_pop
If defined, used by @code{frame_pop} to remove a stack frame. This
method has been superseded by generic code.
@item push_dummy_call (@var{gdbarch}, @var{function}, @var{regcache}, @var{pc_addr}, @var{nargs}, @var{args}, @var{sp}, @var{struct_return}, @var{struct_addr})
@findex push_dummy_call
@findex DEPRECATED_PUSH_ARGUMENTS
@anchor{push_dummy_call} Define this to push the dummy frame's call to
the inferior function onto the stack. In addition to pushing
@var{nargs}, the code should push @var{struct_addr} (when
@var{struct_return}), and the return address (@var{bp_addr}).
@var{function} is a pointer to a @code{struct value}; on architectures that use
function descriptors, this contains the function descriptor value.
Returns the updated top-of-stack pointer.
This method replaces @code{DEPRECATED_PUSH_ARGUMENTS}.
@item CORE_ADDR push_dummy_code (@var{gdbarch}, @var{sp}, @var{funaddr}, @var{using_gcc}, @var{args}, @var{nargs}, @var{value_type}, @var{real_pc}, @var{bp_addr})
@findex push_dummy_code
@anchor{push_dummy_code} Given a stack based call dummy, push the
instruction sequence (including space for a breakpoint) to which the
called function should return.
Set @var{bp_addr} to the address at which the breakpoint instruction
should be inserted, @var{real_pc} to the resume address when starting
the call sequence, and return the updated inner-most stack address.
By default, the stack is grown sufficiently to hold a frame-aligned
(@pxref{frame_align}) breakpoint, @var{bp_addr} is set to the address
reserved for that breakpoint, and @var{real_pc} set to @var{funaddr}.
This method replaces @code{CALL_DUMMY_LOCATION},
@code{DEPRECATED_REGISTER_SIZE}.
@item REGISTER_NAME(@var{i})
@findex REGISTER_NAME
Return the name of register @var{i} as a string. May return @code{NULL}
or @code{NUL} to indicate that register @var{i} is not valid.
@item DEPRECATED_REG_STRUCT_HAS_ADDR (@var{gcc_p}, @var{type})
@findex DEPRECATED_REG_STRUCT_HAS_ADDR
@anchor{DEPRECATED_REG_STRUCT_HAS_ADDR}Define this to return 1 if the
given type will be passed by pointer rather than directly.
This method has been replaced by @code{stabs_argument_has_addr}
(@pxref{stabs_argument_has_addr}).
@item SAVE_DUMMY_FRAME_TOS (@var{sp})
@findex SAVE_DUMMY_FRAME_TOS
@anchor{SAVE_DUMMY_FRAME_TOS} Used in @samp{call_function_by_hand} to
notify the target dependent code of the top-of-stack value that will be
passed to the inferior code. This is the value of the @code{SP}
after both the dummy frame and space for parameters/results have been
allocated on the stack. @xref{unwind_dummy_id}.
@item SDB_REG_TO_REGNUM
@findex SDB_REG_TO_REGNUM
Define this to convert sdb register numbers into @value{GDBN} regnums. If not
defined, no conversion will be done.
@item enum return_value_convention gdbarch_return_value (struct gdbarch *@var{gdbarch}, struct type *@var{valtype}, struct regcache *@var{regcache}, void *@var{readbuf}, const void *@var{writebuf})
@findex gdbarch_return_value
@anchor{gdbarch_return_value} Given a function with a return-value of
type @var{valtype}, return which return-value convention that function
would use.
@value{GDBN} currently recognizes two function return-value conventions:
@code{RETURN_VALUE_REGISTER_CONVENTION} where the return value is found
in registers; and @code{RETURN_VALUE_STRUCT_CONVENTION} where the return
value is found in memory and the address of that memory location is
passed in as the function's first parameter.
If the register convention is being used, and @var{writebuf} is
non-@code{NULL}, also copy the return-value in @var{writebuf} into
@var{regcache}.
If the register convention is being used, and @var{readbuf} is
non-@code{NULL}, also copy the return value from @var{regcache} into
@var{readbuf} (@var{regcache} contains a copy of the registers from the
just returned function).
@xref{DEPRECATED_EXTRACT_STRUCT_VALUE_ADDRESS}, for a description of how
return-values that use the struct convention are handled.
@emph{Maintainer note: This method replaces separate predicate, extract,
store methods. By having only one method, the logic needed to determine
the return-value convention need only be implemented in one place. If
@value{GDBN} were written in an @sc{oo} language, this method would
instead return an object that knew how to perform the register
return-value extract and store.}
@emph{Maintainer note: This method does not take a @var{gcc_p}
parameter, and such a parameter should not be added. If an architecture
that requires per-compiler or per-function information be identified,
then the replacement of @var{valtype} with @code{struct value}
@var{function} should be pursued.}
@emph{Maintainer note: The @var{regcache} parameter limits this methods
to the inner most frame. While replacing @var{regcache} with a
@code{struct frame_info} @var{frame} parameter would remove that
limitation there has yet to be a demonstrated need for such a change.}
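A minimal implementation for an architecture that returns small scalars
in a single register and everything else in memory might look like the
sketch below; @code{R0_REGNUM} and the four-byte threshold are
illustrative:
@smallexample
static enum return_value_convention
somearch_return_value (struct gdbarch *gdbarch, struct type *valtype,
                       struct regcache *regcache, void *readbuf,
                       const void *writebuf)
@{
  if (TYPE_LENGTH (valtype) > 4)
    return RETURN_VALUE_STRUCT_CONVENTION;

  if (readbuf != NULL)
    regcache_cooked_read (regcache, R0_REGNUM, readbuf);
  if (writebuf != NULL)
    regcache_cooked_write (regcache, R0_REGNUM, writebuf);
  return RETURN_VALUE_REGISTER_CONVENTION;
@}
@end smallexample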
@item SKIP_PERMANENT_BREAKPOINT
@findex SKIP_PERMANENT_BREAKPOINT
Advance the inferior's PC past a permanent breakpoint. @value{GDBN} normally
steps over a breakpoint by removing it, stepping one instruction, and
re-inserting the breakpoint. However, permanent breakpoints are
hardwired into the inferior, and can't be removed, so this strategy
doesn't work. Calling @code{SKIP_PERMANENT_BREAKPOINT} adjusts the processor's
state so that execution will resume just after the breakpoint. This
macro does the right thing even when the breakpoint is in the delay slot
of a branch or jump.
@item SKIP_PROLOGUE (@var{pc})
@findex SKIP_PROLOGUE
A C expression that returns the address of the ``real'' code beyond the
function entry prologue found at @var{pc}.
@item SKIP_TRAMPOLINE_CODE (@var{pc})
@findex SKIP_TRAMPOLINE_CODE
If the target machine has trampoline code that sits between callers and
the functions being called, then define this macro to return a new PC
that is at the start of the real function.
@item SP_REGNUM
@findex SP_REGNUM
If the stack-pointer is kept in a register, then define this macro to be
the number (greater than or equal to zero) of that register, or -1 if
there is no such register.
@item STAB_REG_TO_REGNUM
@findex STAB_REG_TO_REGNUM
Define this to convert stab register numbers (as gotten from `r'
declarations) into @value{GDBN} regnums. If not defined, no conversion will be
done.
@item DEPRECATED_STACK_ALIGN (@var{addr})
@anchor{DEPRECATED_STACK_ALIGN}
@findex DEPRECATED_STACK_ALIGN
Define this to increase @var{addr} so that it meets the alignment
requirements for the processor's stack.
Unlike @ref{frame_align}, this function always adjusts @var{addr}
upwards.
By default, no stack alignment is performed.
@item STEP_SKIPS_DELAY (@var{addr})
@findex STEP_SKIPS_DELAY
Define this to return true if the address is of an instruction with a
delay slot. If a breakpoint has been placed in the instruction's delay
slot, @value{GDBN} will single-step over that instruction before resuming
normally.  Currently only defined for MIPS.
@item STORE_RETURN_VALUE (@var{type}, @var{regcache}, @var{valbuf})
@findex STORE_RETURN_VALUE
A C expression that writes the function return value, found in
@var{valbuf}, into the @var{regcache}. @var{type} is the type of the
value that is to be returned.
This method has been deprecated in favour of @code{gdbarch_return_value}
(@pxref{gdbarch_return_value}).
@item SYMBOL_RELOADING_DEFAULT
@findex SYMBOL_RELOADING_DEFAULT
The default value of the ``symbol-reloading'' variable. (Never defined in
current sources.)
@item TARGET_CHAR_BIT
@findex TARGET_CHAR_BIT
Number of bits in a char; defaults to 8.
@item TARGET_CHAR_SIGNED
@findex TARGET_CHAR_SIGNED
Non-zero if @code{char} is normally signed on this architecture; zero if
it should be unsigned.
The ISO C standard requires the compiler to treat @code{char} as
equivalent to either @code{signed char} or @code{unsigned char}; any
character in the standard execution set is supposed to be positive.
Most compilers treat @code{char} as signed, but @code{char} is unsigned
on the IBM S/390, RS6000, and PowerPC targets.
@item TARGET_COMPLEX_BIT
@findex TARGET_COMPLEX_BIT
Number of bits in a complex number; defaults to @code{2 * TARGET_FLOAT_BIT}.
At present this macro is not used.
@item TARGET_DOUBLE_BIT
@findex TARGET_DOUBLE_BIT
Number of bits in a double float; defaults to @code{8 * TARGET_CHAR_BIT}.
@item TARGET_DOUBLE_COMPLEX_BIT
@findex TARGET_DOUBLE_COMPLEX_BIT
Number of bits in a double complex; defaults to @code{2 * TARGET_DOUBLE_BIT}.
At present this macro is not used.
@item TARGET_FLOAT_BIT
@findex TARGET_FLOAT_BIT
Number of bits in a float; defaults to @code{4 * TARGET_CHAR_BIT}.
@item TARGET_INT_BIT
@findex TARGET_INT_BIT
Number of bits in an integer; defaults to @code{4 * TARGET_CHAR_BIT}.
@item TARGET_LONG_BIT
@findex TARGET_LONG_BIT
Number of bits in a long integer; defaults to @code{4 * TARGET_CHAR_BIT}.
@item TARGET_LONG_DOUBLE_BIT
@findex TARGET_LONG_DOUBLE_BIT
Number of bits in a long double float;
defaults to @code{2 * TARGET_DOUBLE_BIT}.
@item TARGET_LONG_LONG_BIT
@findex TARGET_LONG_LONG_BIT
Number of bits in a long long integer; defaults to @code{2 * TARGET_LONG_BIT}.
@item TARGET_PTR_BIT
@findex TARGET_PTR_BIT
Number of bits in a pointer; defaults to @code{TARGET_INT_BIT}.
@item TARGET_SHORT_BIT
@findex TARGET_SHORT_BIT
Number of bits in a short integer; defaults to @code{2 * TARGET_CHAR_BIT}.
@item TARGET_READ_PC
@findex TARGET_READ_PC
@itemx TARGET_WRITE_PC (@var{val}, @var{pid})
@findex TARGET_WRITE_PC
@anchor{TARGET_WRITE_PC}
@itemx TARGET_READ_SP
@findex TARGET_READ_SP
@itemx TARGET_READ_FP
@findex TARGET_READ_FP
@findex read_pc
@findex write_pc
@findex read_sp
@findex read_fp
@anchor{TARGET_READ_SP} These change the behavior of @code{read_pc},
@code{write_pc}, and @code{read_sp}. For most targets, these may be
left undefined. @value{GDBN} will call the read and write register
functions with the relevant @code{_REGNUM} argument.
These macros are useful when a target keeps one of these registers in a
hard to get at place; for example, part in a segment register and part
in an ordinary register.
@xref{unwind_sp}, which replaces @code{TARGET_READ_SP}.
@item TARGET_VIRTUAL_FRAME_POINTER(@var{pc}, @var{regp}, @var{offsetp})
@findex TARGET_VIRTUAL_FRAME_POINTER
Returns a @code{(register, offset)} pair representing the virtual frame
pointer in use at the code address @var{pc}. If virtual frame pointers
are not used, a default definition simply returns
@code{DEPRECATED_FP_REGNUM}, with an offset of zero.
@item TARGET_HAS_HARDWARE_WATCHPOINTS
If non-zero, the target has support for hardware-assisted
watchpoints. @xref{Algorithms, watchpoints}, for more details and
other related macros.
@item TARGET_PRINT_INSN (@var{addr}, @var{info})
@findex TARGET_PRINT_INSN
This is the function used by @value{GDBN} to print an assembly
instruction. It prints the instruction at address @var{addr} in
debugged memory and returns the length of the instruction, in bytes. If
a target doesn't define its own printing routine, it defaults to an
accessor function for the global pointer
@code{deprecated_tm_print_insn}. This usually points to a function in
the @code{opcodes} library (@pxref{Support Libraries, ,Opcodes}).
@var{info} is a structure (of type @code{disassemble_info}) defined in
@file{include/dis-asm.h} used to pass information to the instruction
decoding routine.
@item struct frame_id unwind_dummy_id (struct frame_info *@var{frame})
@findex unwind_dummy_id
@anchor{unwind_dummy_id} Given @var{frame} return a @code{struct
frame_id} that uniquely identifies an inferior function call's dummy
frame. The value returned must match the dummy frame stack value
previously saved using @code{SAVE_DUMMY_FRAME_TOS}.
@xref{SAVE_DUMMY_FRAME_TOS}.
@item DEPRECATED_USE_STRUCT_CONVENTION (@var{gcc_p}, @var{type})
@findex DEPRECATED_USE_STRUCT_CONVENTION
If defined, this must be an expression that is nonzero if a value of the
given @var{type} being returned from a function must have space
allocated for it on the stack. @var{gcc_p} is true if the function
being considered is known to have been compiled by GCC; this is helpful
for systems where GCC is known to use a different calling convention than
other compilers.
This method has been deprecated in favour of @code{gdbarch_return_value}
(@pxref{gdbarch_return_value}).
@item VALUE_TO_REGISTER(@var{type}, @var{regnum}, @var{from}, @var{to})
@findex VALUE_TO_REGISTER
Convert a value of type @var{type} into the raw contents of register
@var{regnum}'s.
@xref{Target Architecture Definition, , Using Different Register and Memory Data Representations}.
@item VARIABLES_INSIDE_BLOCK (@var{desc}, @var{gcc_p})
@findex VARIABLES_INSIDE_BLOCK
For dbx-style debugging information, if the compiler puts variable
declarations inside LBRAC/RBRAC blocks, this should be defined to be
nonzero. @var{desc} is the value of @code{n_desc} from the
@code{N_RBRAC} symbol, and @var{gcc_p} is true if @value{GDBN} has noticed the
presence of either the @code{GCC_COMPILED_SYMBOL} or the
@code{GCC2_COMPILED_SYMBOL}. By default, this is 0.
@item OS9K_VARIABLES_INSIDE_BLOCK (@var{desc}, @var{gcc_p})
@findex OS9K_VARIABLES_INSIDE_BLOCK
Similarly, for OS/9000. Defaults to 1.
@end table
Motorola M68K target conditionals.
@ftable @code
@item BPT_VECTOR
Define this to be the 4-bit location of the breakpoint trap vector. If
not defined, it will default to @code{0xf}.
@item REMOTE_BPT_VECTOR
Defaults to @code{1}.
@item NAME_OF_MALLOC
@findex NAME_OF_MALLOC
A string containing the name of the function to call in order to
allocate some memory in the inferior.  The default value is @code{"malloc"}.
@end ftable
@section Adding a New Target
@cindex adding a target
The following files add a target to @value{GDBN}:
@table @file
@vindex TDEPFILES
@item gdb/config/@var{arch}/@var{ttt}.mt
Contains a Makefile fragment specific to this target. Specifies what
object files are needed for target @var{ttt}, by defining
@samp{TDEPFILES=@dots{}} and @samp{TDEPLIBS=@dots{}}. Also specifies
the header file which describes @var{ttt}, by defining @samp{TM_FILE=
tm-@var{ttt}.h}.
You can also define @samp{TM_CFLAGS}, @samp{TM_CLIBS}, @samp{TM_CDEPS},
but these are now deprecated, replaced by autoconf, and may go away in
future versions of @value{GDBN}.
@item gdb/@var{ttt}-tdep.c
Contains any miscellaneous code required for this target machine. On
some machines it doesn't exist at all. Sometimes the macros in
@file{tm-@var{ttt}.h} become very complicated, so they are implemented
as functions here instead, and the macro is simply defined to call the
function. This is vastly preferable, since it is easier to understand
and debug.
@item gdb/@var{arch}-tdep.c
@itemx gdb/@var{arch}-tdep.h
This often exists to describe the basic layout of the target machine's
processor chip (registers, stack, etc.). If used, it is included by
@file{@var{ttt}-tdep.h}. It can be shared among many targets that use
the same processor.
@item gdb/config/@var{arch}/tm-@var{ttt}.h
(@file{tm.h} is a link to this file, created by @code{configure}). Contains
macro definitions about the target machine's registers, stack frame
format and instructions.
New targets do not need this file and should not create it.
@item gdb/config/@var{arch}/tm-@var{arch}.h
This often exists to describe the basic layout of the target machine's
processor chip (registers, stack, etc.). If used, it is included by
@file{tm-@var{ttt}.h}. It can be shared among many targets that use the
same processor.
New targets do not need this file and should not create it.
@end table
If you are adding a new operating system for an existing CPU chip, add a
@file{config/tm-@var{os}.h} file that describes the operating system
facilities that are unusual (extra symbol table info; the breakpoint
instruction needed; etc.). Then write a @file{@var{arch}/tm-@var{os}.h}
that just @code{#include}s @file{tm-@var{arch}.h} and
@file{config/tm-@var{os}.h}.
@section Converting an existing Target Architecture to Multi-arch
@cindex converting targets to multi-arch
This section describes the current accepted best practice for converting
an existing target architecture to the multi-arch framework.
The process consists of generating, testing, posting and committing a
sequence of patches. Each patch must contain a single change, for
instance:
@itemize @bullet
@item
Directly convert a group of functions into macros (the conversion does
not change the behavior of any of the functions).
@item
Replace a non-multi-arch mechanism with a multi-arch mechanism (e.g.,
@code{FRAME_INFO}).
@item
Enable multi-arch level one.
@item
Delete one or more files.
@end itemize
@noindent
There isn't a size limit on a patch; however, a developer is strongly
encouraged to keep the patch size down.
Since each patch is well defined, and since each change has been tested
and shows no regressions, the patches are considered @emph{fairly}
obvious. Such patches, when submitted by developers listed in the
@file{MAINTAINERS} file, do not need approval. Occasional steps in the
process may be more complicated and less clear. The developer is
expected to use their judgment and is encouraged to seek advice as
needed.
@subsection Preparation
The first step is to establish control. Build (with @option{-Werror}
enabled) and test the target so that there is a baseline against which
the debugger can be compared.
At no stage can the test results regress or @value{GDBN} stop compiling
with @option{-Werror}.
@subsection Add the multi-arch initialization code
The objective of this step is to establish the basic multi-arch
framework. It involves
@itemize @bullet
@item
The addition of a @code{@var{arch}_gdbarch_init} function@footnote{The
above is from the original example and uses K&R C. @value{GDBN}
has since converted to ISO C but let's ignore that.} that creates
the architecture:
@smallexample
static struct gdbarch *
d10v_gdbarch_init (info, arches)
     struct gdbarch_info info;
     struct gdbarch_list *arches;
@{
  struct gdbarch *gdbarch;
  /* there is only one d10v architecture */
  if (arches != NULL)
    return arches->gdbarch;
  gdbarch = gdbarch_alloc (&info, NULL);
  return gdbarch;
@}
@end smallexample
@item
A per-architecture dump function to print any architecture specific
information:
@smallexample
static void
mips_dump_tdep (struct gdbarch *current_gdbarch,
                struct ui_file *file)
@{
  @dots{} code to print architecture specific info @dots{}
@}
@end smallexample
@item
A change to @code{_initialize_@var{arch}_tdep} to register this new
architecture:
@smallexample
void
_initialize_mips_tdep (void)
@{
  gdbarch_register (bfd_arch_mips, mips_gdbarch_init,
                    mips_dump_tdep);
@}
@end smallexample
@item
Add the macro @code{GDB_MULTI_ARCH}, defined as 0 (zero), to the file@*
@file{config/@var{arch}/tm-@var{arch}.h}.
@end itemize
@subsection Update multi-arch incompatible mechanisms
Some mechanisms do not work with multi-arch. They include:
@table @code
@item FRAME_FIND_SAVED_REGS
Replaced with @code{DEPRECATED_FRAME_INIT_SAVED_REGS}
@end table
@noindent
At this stage you could also consider converting the macros into
functions.
@subsection Prepare for multi-arch level one
Temporarily set @code{GDB_MULTI_ARCH} to @code{GDB_MULTI_ARCH_PARTIAL}
and then build and start @value{GDBN} (the change should not be
committed). @value{GDBN} may not build, and once built, it may die with
an internal error listing the architecture methods that must be
provided.
Fix any build problems (patch(es)).
Convert all the architecture methods listed, which are only macros, into
functions (patch(es)).
Update @code{@var{arch}_gdbarch_init} to set all the missing
architecture methods and wrap the corresponding macros in @code{#if
!GDB_MULTI_ARCH} (patch(es)).
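As an illustration, a wrapped macro in @file{tm-@var{arch}.h} might look
like the following sketch; the @code{FRAME_CHAIN} method is used here
purely as an example and is not prescribed by this step.
@smallexample
/* Hypothetical fragment of tm-@var{arch}.h after this step.  */
#if !GDB_MULTI_ARCH
#define FRAME_CHAIN(frame) @var{arch}_frame_chain (frame)
#endif
@end smallexample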
@subsection Set multi-arch level one
Change the value of @code{GDB_MULTI_ARCH} to
@code{GDB_MULTI_ARCH_PARTIAL} (a single patch).
Any problems with throwing ``the switch'' should have been fixed
already.
@subsection Convert remaining macros
Suggest converting macros into functions (and setting the corresponding
architecture method) in small batches.
@subsection Set multi-arch level to two
This should go smoothly.
@subsection Delete the TM file
The @file{tm-@var{arch}.h} file can be deleted, and @file{@var{arch}.mt}
and @file{configure.in} updated accordingly.
@node Target Descriptions
@chapter Target Descriptions
@cindex target descriptions
The target architecture definition (@pxref{Target Architecture Definition})
contains @value{GDBN}'s hard-coded knowledge about an architecture. For
some platforms, it is handy to have more flexible knowledge about a specific
instance of the architecture---for instance, a processor or development board.
@dfn{Target descriptions} provide a mechanism for the user to tell @value{GDBN}
more about what their target supports, or for the target to tell @value{GDBN}
directly.
For details on writing, automatically supplying, and manually selecting
target descriptions, see @ref{Target Descriptions, , , gdb,
Debugging with @value{GDBN}}. This section will cover some related
topics about the @value{GDBN} internals.
@menu
* Target Descriptions Implementation::
* Adding Target Described Register Support::
@end menu
@node Target Descriptions Implementation
@section Target Descriptions Implementation
@cindex target descriptions, implementation
Before @value{GDBN} connects to a new target, or runs a new program on
an existing target, it discards any existing target description and
reverts to a default gdbarch. Then, after connecting, it looks for a
new target description by calling @code{target_find_description}.
A description may come from a user specified file (XML), the remote
@samp{qXfer:features:read} packet (also XML), or from any custom
@code{to_read_description} routine in the target vector. For instance,
the remote target supports guessing whether a MIPS target is 32-bit or
64-bit based on the size of the @samp{g} packet.
If any target description is found, @value{GDBN} creates a new gdbarch
incorporating the description by calling @code{gdbarch_update_p}. Any
@samp{<architecture>} element is handled first, to determine which
architecture's gdbarch initialization routine is called to create the
new architecture. Then the initialization routine is called, and has
a chance to adjust the constructed architecture based on the contents
of the target description. For instance, it can recognize any
properties set by a @code{to_read_description} routine. Also
see @ref{Adding Target Described Register Support}.
@node Adding Target Described Register Support
@section Adding Target Described Register Support
@cindex target descriptions, adding register support
Target descriptions can report additional registers specific to an
instance of the target. But it takes a little work in the architecture
specific routines to support this.
A target description must either have no registers or a complete
set---this avoids complexity in trying to merge standard registers
with the target defined registers. It is the architecture's
responsibility to validate that a description with registers has
everything it needs. To keep architecture code simple, the same
mechanism is used to assign fixed internal register numbers to
standard registers.
If @code{tdesc_has_registers} returns 1, the description contains
registers. The architecture's @code{gdbarch_init} routine should:
@itemize @bullet
@item
Call @code{tdesc_data_alloc} to allocate storage, early, before
searching for a matching gdbarch or allocating a new one.
@item
Use @code{tdesc_find_feature} to locate standard features by name.
@item
Use @code{tdesc_numbered_register} and @code{tdesc_numbered_register_choices}
to locate the expected registers in the standard features.
@item
Return @code{NULL} if a required feature is missing, or if any standard
feature is missing expected registers. This will produce a warning that
the description was incomplete.
@item
Free the allocated data before returning, unless @code{tdesc_use_registers}
is called.
@item
Call @code{set_gdbarch_num_regs} as usual, with a number higher than any
fixed number passed to @code{tdesc_numbered_register}.
@item
Call @code{tdesc_use_registers} after creating a new gdbarch, before
returning it.
@end itemize
After @code{tdesc_use_registers} has been called, the architecture's
@code{register_name}, @code{register_type}, and @code{register_reggroup_p}
routines will not be called; that information will be taken from
the target description. @code{num_regs} may be increased to account
for any additional registers in the description.
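As an illustration, the following sketch shows how these pieces might
fit together inside an architecture's initialization routine. It
assumes the target description is available as @code{info.target_desc};
the feature name, register counts, and register-name table are invented
for the example, and the function signatures are approximate.
@smallexample
struct tdesc_arch_data *tdesc_data = NULL;

if (tdesc_has_registers (info.target_desc))
  @{
    const struct tdesc_feature *feature
      = tdesc_find_feature (info.target_desc,
                            "org.gnu.gdb.@var{arch}.core");
    int valid_p = 1, i;

    if (feature == NULL)
      return NULL;            /* A required feature is missing.  */

    tdesc_data = tdesc_data_alloc ();
    /* EXAMPLE_NUM_CORE_REGS and example_core_reg_names are
       placeholders for the architecture's own definitions.  */
    for (i = 0; i < EXAMPLE_NUM_CORE_REGS; i++)
      valid_p &= tdesc_numbered_register (feature, tdesc_data, i,
                                          example_core_reg_names[i]);
    if (!valid_p)
      return NULL;            /* Expected registers are missing; real
                                 code should also free TDESC_DATA.  */
  @}

/* @dots{} search for a matching gdbarch, or allocate a new one @dots{} */

set_gdbarch_num_regs (gdbarch, EXAMPLE_NUM_REGS);
if (tdesc_data != NULL)
  tdesc_use_registers (gdbarch, info.target_desc, tdesc_data);
@end smallexample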
Pseudo-registers require some extra care:
@itemize @bullet
@item
Using @code{tdesc_numbered_register} allows the architecture to give
constant register numbers to standard architectural registers, e.g.@:
as an @code{enum} in @file{@var{arch}-tdep.h}. But because
pseudo-registers are always numbered above @code{num_regs},
which may be increased by the description, constant numbers
can not be used for pseudos. They must be numbered relative to
@code{num_regs} instead.
@item
The description will not describe pseudo-registers, so the
architecture must call @code{set_tdesc_pseudo_register_name},
@code{set_tdesc_pseudo_register_type}, and
@code{set_tdesc_pseudo_register_reggroup_p} to supply routines
describing pseudo registers. These routines will be passed
internal register numbers, so the same routines used for the
gdbarch equivalents are usually suitable.
@end itemize
@node Target Vector Definition
@chapter Target Vector Definition
@cindex target vector
The target vector defines the interface between @value{GDBN}'s
abstract handling of target systems, and the nitty-gritty code that
actually exercises control over a process or a serial port.
@value{GDBN} includes some 30-40 different target vectors; however,
each configuration of @value{GDBN} includes only a few of them.
@menu
* Managing Execution State::
* Existing Targets::
@end menu
@node Managing Execution State
@section Managing Execution State
@cindex execution state
A target vector can be completely inactive (not pushed on the target
stack), active but not running (pushed, but not connected to a fully
manifested inferior), or completely active (pushed, with an accessible
inferior). Most targets are only completely inactive or completely
active, but some support persistent connections to a target even
when the target has exited or not yet started.
For example, connecting to the simulator using @code{target sim} does
not create a running program. Neither registers nor memory are
accessible until @code{run}. Similarly, after @code{kill}, the
program can not continue executing. But in both cases @value{GDBN}
remains connected to the simulator, and target-specific commands
are directed to the simulator.
A target which only supports complete activation should push itself
onto the stack in its @code{to_open} routine (by calling
@code{push_target}), and unpush itself from the stack in its
@code{to_mourn_inferior} routine (by calling @code{unpush_target}).
A target which supports both partial and complete activation should
still call @code{push_target} in @code{to_open}, but not call
@code{unpush_target} in @code{to_mourn_inferior}. Instead, it should
call either @code{target_mark_running} or @code{target_mark_exited}
in its @code{to_open}, depending on whether the target is fully active
after connection. It should also call @code{target_mark_running} any
time the inferior becomes fully active (e.g.@: in
@code{to_create_inferior} and @code{to_attach}), and
@code{target_mark_exited} when the inferior becomes inactive (in
@code{to_mourn_inferior}). The target should also make sure to call
@code{target_mourn_inferior} from its @code{to_kill}, to return the
target to inactive state.
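For example, the @code{to_open} routine of a hypothetical target named
@code{example}, which supports both partial and complete activation,
might look roughly like the following sketch; the connection details
are elided and the names are invented.
@smallexample
static void
example_open (char *args, int from_tty)
@{
  int running;

  /* @dots{} establish the connection described by ARGS, and set
     RUNNING according to whether an inferior is already fully
     manifested @dots{} */

  push_target (&example_ops);
  if (running)
    target_mark_running (&example_ops);
  else
    target_mark_exited (&example_ops);
@}
@end smallexample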
@node Existing Targets
@section Existing Targets
@cindex targets
@subsection File Targets
Both executables and core files have target vectors.
@subsection Standard Protocol and Remote Stubs
@value{GDBN}'s file @file{remote.c} talks a serial protocol to code
that runs in the target system. @value{GDBN} provides several sample
@dfn{stubs} that can be integrated into target programs or operating
systems for this purpose; they are named @file{*-stub.c}.
The @value{GDBN} user's manual describes how to put such a stub into
your target code. What follows is a discussion of integrating the
SPARC stub into a complicated operating system (rather than a simple
program), by Stu Grossman, the author of this stub.
The trap handling code in the stub assumes the following upon entry to
@code{trap_low}:
@enumerate
@item
%l1 and %l2 contain pc and npc respectively at the time of the trap;
@item
traps are disabled;
@item
you are in the correct trap window.
@end enumerate
As long as your trap handler can guarantee those conditions, then there
is no reason why you shouldn't be able to ``share'' traps with the stub.
The stub has no requirement that it be jumped to directly from the
hardware trap vector. That is why it calls @code{exceptionHandler()},
which is provided by the external environment. For instance, this could
set up the hardware traps to actually execute code which calls the stub
first, and then transfers to its own trap handler.
For the most part, there probably won't be much of an issue with
``sharing'' traps, as the traps we use are usually not used by the kernel,
and often indicate unrecoverable error conditions. Anyway, this is all
controlled by a table, and is trivial to modify. The most important
trap for us is for @code{ta 1}. Without that, we can't single step or
do breakpoints. Everything else is unnecessary for the proper operation
of the debugger/stub.
From reading the stub, it's probably not obvious how breakpoints work.
They are simply done by deposit/examine operations from @value{GDBN}.
@subsection ROM Monitor Interface
@subsection Custom Protocols
@subsection Transport Layer
@subsection Builtin Simulator
@node Native Debugging
@chapter Native Debugging
@cindex native debugging
Several files control @value{GDBN}'s configuration for native support:
@table @file
@vindex NATDEPFILES
@item gdb/config/@var{arch}/@var{xyz}.mh
Specifies Makefile fragments needed by a @emph{native} configuration on
machine @var{xyz}. In particular, this lists the required
native-dependent object files, by defining @samp{NATDEPFILES=@dots{}}.
Also specifies the header file which describes native support on
@var{xyz}, by defining @samp{NAT_FILE= nm-@var{xyz}.h}. You can also
define @samp{NAT_CFLAGS}, @samp{NAT_ADD_FILES}, @samp{NAT_CLIBS},
@samp{NAT_CDEPS}, etc.; see @file{Makefile.in}.
@emph{Maintainer's note: The @file{.mh} suffix is because this file
originally contained @file{Makefile} fragments for hosting @value{GDBN}
on machine @var{xyz}. While the file is no longer used for this
purpose, the @file{.mh} suffix remains. Perhaps someone will
eventually rename these fragments so that they have a @file{.mn}
suffix.}
@item gdb/config/@var{arch}/nm-@var{xyz}.h
(@file{nm.h} is a link to this file, created by @code{configure}). Contains C
macro definitions describing the native system environment, such as
child process control and core file support.
@item gdb/@var{xyz}-nat.c
Contains any miscellaneous C code required for native support of
this machine. On some machines it doesn't exist at all.
@end table
There are some ``generic'' versions of routines that can be used by
various systems. These can be customized in various ways by macros
defined in your @file{nm-@var{xyz}.h} file. If these routines work for
the @var{xyz} host, you can just include the generic file's name (with
@samp{.o}, not @samp{.c}) in @code{NATDEPFILES}.
Otherwise, if your machine needs custom support routines, you will need
to write routines that perform the same functions as the generic file.
Put them into @file{@var{xyz}-nat.c}, and put @file{@var{xyz}-nat.o}
into @code{NATDEPFILES}.
@table @file
@item inftarg.c
This contains the @emph{target_ops vector} that supports Unix child
processes on systems which use ptrace and wait to control the child.
@item procfs.c
This contains the @emph{target_ops vector} that supports Unix child
processes on systems which use /proc to control the child.
@item fork-child.c
This does the low-level grunge that uses Unix system calls to do a ``fork
and exec'' to start up a child process.
@item infptrace.c
This is the low level interface to inferior processes for systems using
the Unix @code{ptrace} call in a vanilla way.
@end table
@section Native core file Support
@cindex native core files
@table @file
@findex fetch_core_registers
@item core-aout.c::fetch_core_registers()
Support for reading registers out of a core file. This routine calls
@code{register_addr()}, see below. Now that BFD is used to read core
files, virtually all machines should use @code{core-aout.c}, and should
just provide @code{fetch_core_registers} in @code{@var{xyz}-nat.c} (or
@code{REGISTER_U_ADDR} in @code{nm-@var{xyz}.h}).
@item core-aout.c::register_addr()
If your @code{nm-@var{xyz}.h} file defines the macro
@code{REGISTER_U_ADDR(addr, blockend, regno)}, it should be defined to
set @code{addr} to the offset within the @samp{user} struct of @value{GDBN}
register number @code{regno}. @code{blockend} is the offset within the
``upage'' of @code{u.u_ar0}. If @code{REGISTER_U_ADDR} is defined,
@file{core-aout.c} will define the @code{register_addr()} function and
use the macro in it. If you do not define @code{REGISTER_U_ADDR}, but
you are using the standard @code{fetch_core_registers()}, you will need
to define your own version of @code{register_addr()}, put it into your
@code{@var{xyz}-nat.c} file, and be sure @code{@var{xyz}-nat.o} is in
the @code{NATDEPFILES} list. If you have your own
@code{fetch_core_registers()}, you may not need a separate
@code{register_addr()}. Many custom @code{fetch_core_registers()}
implementations simply locate the registers themselves.@refill
@end table
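As an illustration of the @code{REGISTER_U_ADDR} case above, a
hypothetical @file{nm-@var{xyz}.h} might define the macro along the
following lines; the layout of the @samp{user} struct assumed here is
purely an example.
@smallexample
/* Assume the registers are stored as an array of ints starting at
   u.u_ar0 within the upage.  */
#define REGISTER_U_ADDR(addr, blockend, regno) \
  (addr) = (blockend) + (regno) * sizeof (int)
@end smallexample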
When making @value{GDBN} run native on a new operating system, to make it
possible to debug core files, you will need to either write specific
code for parsing your OS's core files, or customize
@file{bfd/trad-core.c}. First, use whatever @code{#include} files your
machine uses to define the struct of registers that is accessible
(possibly in the u-area) in a core file (rather than
@file{machine/reg.h}), and an include file that defines whatever header
exists on a core file (e.g., the u-area or a @code{struct core}). Then
modify @code{trad_unix_core_file_p} to use these values to set up the
section information for the data segment, stack segment, any other
segments in the core file (perhaps shared library contents or control
information), ``registers'' segment, and if there are two discontiguous
sets of registers (e.g., integer and float), the ``reg2'' segment. This
section information basically delimits areas in the core file in a
standard way, which the section-reading routines in BFD know how to seek
around in.
Then back in @value{GDBN}, you need a matching routine called
@code{fetch_core_registers}. If you can use the generic one, it's in
@file{core-aout.c}; if not, it's in your @file{@var{xyz}-nat.c} file.
It will be passed a char pointer to the entire ``registers'' segment,
its length, and a zero; or a char pointer to the entire ``regs2''
segment, its length, and a 2. The routine should suck out the supplied
register values and install them into @value{GDBN}'s ``registers'' array.
If your system uses @file{/proc} to control processes, and uses ELF
format core files, then you may be able to use the same routines for
reading the registers out of processes and out of core files.
@section ptrace
@section /proc
@section win32
@section shared libraries
@section Native Conditionals
@cindex native conditionals
When @value{GDBN} is configured and compiled, various macros are
defined or left undefined, to control compilation when the host and
target systems are the same. These macros should be defined (or left
undefined) in @file{nm-@var{system}.h}.
@table @code
@item CHILD_PREPARE_TO_STORE
@findex CHILD_PREPARE_TO_STORE
If the machine stores all registers at once in the child process, then
define this to ensure that all values are correct. This usually entails
a read from the child.
[Note that this is incorrectly defined in @file{xm-@var{system}.h} files
currently.]
@item FETCH_INFERIOR_REGISTERS
@findex FETCH_INFERIOR_REGISTERS
Define this if the native-dependent code will provide its own routines
@code{fetch_inferior_registers} and @code{store_inferior_registers} in
@file{@var{host}-nat.c}. If this symbol is @emph{not} defined, and
@file{infptrace.c} is included in this configuration, the default
routines in @file{infptrace.c} are used for these functions.
@item FP0_REGNUM
@findex FP0_REGNUM
This macro is normally defined to be the number of the first floating
point register, if the machine has such registers. As such, it would
appear only in target-specific code. However, @file{/proc} support uses this
to decide whether floats are in use on this target.
@item GET_LONGJMP_TARGET
@findex GET_LONGJMP_TARGET
For most machines, this is a target-dependent parameter. On the
DECstation and the Iris, this is a native-dependent parameter, since
@file{setjmp.h} is needed to define it.
This macro determines the target PC address that @code{longjmp} will jump to,
assuming that we have just stopped at a longjmp breakpoint. It takes a
@code{CORE_ADDR *} as argument, and stores the target PC value through this
pointer. It examines the current state of the machine as needed.
@item I386_USE_GENERIC_WATCHPOINTS
An x86-based machine can define this to use the generic x86 watchpoint
support; see @ref{Algorithms, I386_USE_GENERIC_WATCHPOINTS}.
@item KERNEL_U_ADDR
@findex KERNEL_U_ADDR
Define this to the address of the @code{u} structure (the ``user
struct'', also known as the ``u-page'') in kernel virtual memory. @value{GDBN}
needs to know this so that it can subtract this address from absolute
addresses in the upage, that are obtained via ptrace or from core files.
On systems that don't need this value, set it to zero.
@item KERNEL_U_ADDR_HPUX
@findex KERNEL_U_ADDR_HPUX
Define this to cause @value{GDBN} to determine the address of @code{u} at
runtime, by using HP-style @code{nlist} on the kernel's image in the
root directory.
@item ONE_PROCESS_WRITETEXT
@findex ONE_PROCESS_WRITETEXT
Define this to be able to, when a breakpoint insertion fails, warn the
user that another process may be running with the same executable.
@item PROC_NAME_FMT
@findex PROC_NAME_FMT
Defines the format for the name of a @file{/proc} device. Should be
defined in @file{nm.h} @emph{only} in order to override the default
definition in @file{procfs.c}.
@item REGISTER_U_ADDR
@findex REGISTER_U_ADDR
Defines the offset of the registers in the ``u area''.
@item SHELL_COMMAND_CONCAT
@findex SHELL_COMMAND_CONCAT
If defined, is a string to prefix on the shell command used to start the
inferior.
@item SHELL_FILE
@findex SHELL_FILE
If defined, this is the name of the shell to use to run the inferior.
Defaults to @code{"/bin/sh"}.
@item SOLIB_ADD (@var{filename}, @var{from_tty}, @var{targ}, @var{readsyms})
@findex SOLIB_ADD
Define this to expand into an expression that will cause the symbols in
@var{filename} to be added to @value{GDBN}'s symbol table. If
@var{readsyms} is zero symbols are not read but any necessary low level
processing for @var{filename} is still done.
@item SOLIB_CREATE_INFERIOR_HOOK
@findex SOLIB_CREATE_INFERIOR_HOOK
Define this to expand into any shared-library-relocation code that you
want to be run just after the child process has been forked.
@item START_INFERIOR_TRAPS_EXPECTED
@findex START_INFERIOR_TRAPS_EXPECTED
When starting an inferior, @value{GDBN} normally expects to trap
twice; once when
the shell execs, and once when the program itself execs. If the actual
number of traps is something other than 2, then define this macro to
expand into the number expected.
@item U_REGS_OFFSET
@findex U_REGS_OFFSET
This is the offset of the registers in the upage. It need only be
defined if the generic ptrace register access routines in
@file{infptrace.c} are being used (that is, @file{infptrace.c} is
configured in, and @code{FETCH_INFERIOR_REGISTERS} is not defined). If
the default value from @file{infptrace.c} is good enough, leave it
undefined.
The default value means that @code{u.u_ar0} @emph{points to} the location
of the registers. I'm guessing that @code{#define U_REGS_OFFSET 0} means
that @code{u.u_ar0} @emph{is} the location of the registers.
@item CLEAR_SOLIB
@findex CLEAR_SOLIB
See @file{objfiles.c}.
@item DEBUG_PTRACE
@findex DEBUG_PTRACE
Define this to debug @code{ptrace} calls.
@end table
@node Support Libraries
@chapter Support Libraries
@section BFD
@cindex BFD library
BFD provides support for @value{GDBN} in several ways:
@table @emph
@item identifying executable and core files
BFD will identify a variety of file types, including a.out, coff, and
several variants thereof, as well as several kinds of core files.
@item access to sections of files
BFD parses the file headers to determine the names, virtual addresses,
sizes, and file locations of all the various named sections in files
(such as the text section or the data section). @value{GDBN} simply
calls BFD to read or write section @var{x} at byte offset @var{y} for
length @var{z}.
@item specialized core file support
BFD provides routines to determine the failing command name stored in a
core file, the signal with which the program failed, and whether a core
file matches (i.e.@: could be a core dump of) a particular executable
file.
@item locating the symbol information
@value{GDBN} uses an internal interface of BFD to determine where to find the
symbol information in an executable file or symbol-file. @value{GDBN} itself
handles the reading of symbols, since BFD does not ``understand'' debug
symbols, but @value{GDBN} uses BFD's cached information to find the symbols,
string table, etc.
@end table
@section opcodes
@cindex opcodes library
The opcodes library provides @value{GDBN}'s disassembler. (It's a separate
library because it's also used in binutils, for @file{objdump}).
@section readline
@cindex readline library
The @code{readline} library provides a set of functions for use by applications
that allow users to edit command lines as they are typed in.
@section libiberty
@cindex @code{libiberty} library
The @code{libiberty} library provides a set of functions and features
that integrate and improve on functionality found in modern operating
systems. Broadly speaking, such features can be divided into three
groups: supplemental functions (functions that may be missing in some
environments and operating systems), replacement functions (providing
a uniform and easier to use interface for commonly used standard
functions), and extensions (which provide additional functionality
beyond standard functions).
@value{GDBN} uses various features provided by the @code{libiberty}
library, for instance the C@t{++} demangler, the @acronym{IEEE}
floating format support functions, the input options parser
@samp{getopt}, the @samp{obstack} extension, and other functions.
@subsection @code{obstacks} in @value{GDBN}
@cindex @code{obstacks}
The obstack mechanism provides a convenient way to allocate and free
chunks of memory. Each obstack is a pool of memory that is managed
like a stack. Objects (of any nature, size and alignment) are
allocated and freed in a @acronym{LIFO} fashion on an obstack (see
@code{libiberty}'s documentation for a more detailed explanation of
@code{obstacks}).
The most noticeable use of the @code{obstacks} in @value{GDBN} is in
object files. There is an obstack associated with each internal
representation of an object file. Lots of things get allocated on
these @code{obstacks}: dictionary entries, blocks, blockvectors,
symbols, minimal symbols, types, vectors of fundamental types, class
fields of types, object files section lists, object files section
offset lists, line tables, symbol tables, partial symbol tables,
string tables, symbol table private data, macros tables, debug
information sections and entries, import and export lists (som),
unwind information (hppa), dwarf2 location expressions data. Plus
various strings such as directory names strings, debug format strings,
names of types.
An essential and convenient property of all data on @code{obstacks} is
that memory for it gets allocated (with @code{obstack_alloc}) at
various times during a debugging session, but it is released all at
once using the @code{obstack_free} function. The @code{obstack_free}
function takes a pointer to where in the stack it must start the
deletion from (much like the cleanup chains have a pointer to where to
start the cleanups). Because of the stack-like structure of the
@code{obstacks}, this makes it possible to free only the top portion of
the obstack. There are a few instances in @value{GDBN} where such a
thing happens. Calls to @code{obstack_free} are done after some local data
is allocated to the obstack. Only the local data is deleted from the
obstack. Of course this assumes that nothing between the
@code{obstack_alloc} and the @code{obstack_free} allocates anything
else on the same obstack. For this reason it is best and safest to
use temporary @code{obstacks}.
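For instance, the temporary-obstack idiom described above might look
like this sketch; the @code{struct foo} type is just a placeholder.
@smallexample
struct obstack tmp;
struct foo *scratch;

obstack_init (&tmp);
scratch = obstack_alloc (&tmp, sizeof (struct foo));
/* @dots{} use SCRATCH, possibly allocating more objects on TMP @dots{} */
obstack_free (&tmp, scratch);  /* Frees SCRATCH and everything
                                  allocated on TMP after it.  */
@end smallexample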
Releasing the whole obstack is also not safe per se. It is safe only
under the condition that we know the @code{obstacks} memory is no
longer needed. In @value{GDBN} we get rid of the @code{obstacks} only
when we get rid of the whole objfile(s), for instance upon reading a
new symbol file.
@section gnu-regex
@cindex regular expressions library
Regex conditionals.
@table @code
@item C_ALLOCA
@item NFAILURES
@item RE_NREGS
@item SIGN_EXTEND_CHAR
@item SWITCH_ENUM_BUG
@item SYNTAX_TABLE
@item Sword
@item sparc
@end table
@section Array Containers
@cindex Array Containers
@cindex VEC
Often it is necessary to manipulate a dynamic array of a set of
objects. C forces some bookkeeping on this, which can get cumbersome
and repetitive. The @file{vec.h} file contains macros for defining
and using a typesafe vector type. The functions defined will be
inlined when compiling, and so the abstraction cost should be zero.
Domain checks are added to detect programming errors.
An example use would be an array of symbols or section information.
The array can be grown as symbols are read in (or preallocated), and
the accessor macros provided take care of all the necessary
bookkeeping. Because the arrays are type safe, there is no danger of
accidentally mixing up the contents. Think of these as C++ templates,
but implemented in C.
Because of the different behavior of structure objects, scalar objects
and of pointers, there are three flavors of vector, one for each of
these variants. Both the structure object and pointer variants pass
pointers to objects around --- in the former case the pointers are
stored into the vector and in the latter case the pointers are
dereferenced and the objects copied into the vector. The scalar
object variant is suitable for @code{int}-like objects, and the vector
elements are returned by value.
There are both @code{index} and @code{iterate} accessors. The iterator
returns a boolean iteration condition and updates the iteration
variable passed by reference. Because the iterator will be inlined,
the address-of can be optimized away.
The vectors are implemented using the trailing array idiom, thus they
are not resizeable without changing the address of the vector object
itself. This means you cannot have variables or fields of vector type
--- always use a pointer to a vector. The one exception is the final
field of a structure, which could be a vector type. You will have to
use the @code{embedded_size} & @code{embedded_init} calls to create
such objects, and they will probably not be resizeable (so don't use
the @dfn{safe} allocation variants). The trailing array idiom is used
(rather than a pointer to an array of data), because, if we allow
@code{NULL} to also represent an empty vector, empty vectors occupy
minimal space in the structure containing them.
Each operation that increases the number of active elements is
available in @dfn{quick} and @dfn{safe} variants. The former presumes
that there is sufficient allocated space for the operation to succeed
(it dies if there is not). The latter will reallocate the vector, if
needed. Reallocation causes an exponential increase in vector size.
If you know you will be adding N elements, it would be more efficient
to use the reserve operation before adding the elements with the
@dfn{quick} operation. This will ensure there is space for at least as
many elements as you ask for; the allocation will grow exponentially if
there are too few spare slots. If you want to reserve a specific number
of slots,
but do not want the exponential increase (for instance, you know this
is the last allocation), use a negative number for reservation. You
can also create a vector of a specific size from the get go.
You should prefer the push and pop operations, as they append and
remove from the end of the vector. If you need to remove several items
in one go, use the truncate operation. The insert and remove
operations allow you to change elements in the middle of the vector.
There are two remove operations, one which preserves the element
ordering, @code{ordered_remove}, and one which does not,
@code{unordered_remove}. The latter function copies the end element
into the removed slot, rather than invoking a memmove operation. The
@code{lower_bound} function will determine where an item should be
inserted in order to maintain sorted order.
If you need to directly manipulate a vector, then the @code{address}
accessor will return the address of the start of the vector. Also the
@code{space} predicate will tell you whether there is spare capacity in the
vector. You will not normally need to use these two functions.
Vector types are defined using a
@code{DEF_VEC_@{O,P,I@}(@var{typename})} macro. Variables of vector
type are declared using a @code{VEC(@var{typename})} macro. The
characters @code{O}, @code{P} and @code{I} indicate whether
@var{typename} is an object (@code{O}), pointer (@code{P}) or integral
(@code{I}) type. Be careful to pick the correct one, as you'll get an
awkward and inefficient API if you use the wrong one. There is a
check, which results in a compile-time warning, for the @code{P} and
@code{I} versions, but there is no check for the @code{O} versions, as
that is not possible in plain C.
An example of their use would be,
@smallexample
DEF_VEC_P(tree);   // non-managed tree vector.

struct my_struct @{
  VEC(tree) *v;    // A (pointer to) a vector of tree pointers.
@};

struct my_struct *s;

if (VEC_length(tree, s->v)) @{ we have some contents @}
VEC_safe_push(tree, s->v, decl); // append some decl onto the end
for (ix = 0; VEC_iterate(tree, s->v, ix, elt); ix++)
  @{ do something with elt @}
@end smallexample
The @file{vec.h} file provides details on how to invoke the various
accessors provided. They are enumerated here:
@table @code
@item VEC_length
Return the number of items in the array.
@item VEC_empty
Return true if the array has no elements.
@item VEC_last
@itemx VEC_index
Return the last or arbitrary item in the array.
@item VEC_iterate
Access an array element and indicate whether the array has been
traversed.
@item VEC_alloc
@itemx VEC_free
Create and destroy an array.
@item VEC_embedded_size
@itemx VEC_embedded_init
Helpers for embedding an array as the final element of another struct.
@item VEC_copy
Duplicate an array.
@item VEC_space
Return the amount of free space in an array.
@item VEC_reserve
Ensure a certain amount of free space.
@item VEC_quick_push
@itemx VEC_safe_push
Append to an array, either assuming the space is available, or making
sure that it is.
@item VEC_pop
Remove the last item from an array.
@item VEC_truncate
Remove several items from the end of an array.
@item VEC_safe_grow
Add several items to the end of an array.
@item VEC_replace
Overwrite an item in the array.
@item VEC_quick_insert
@itemx VEC_safe_insert
Insert an item into the middle of the array. Either the space must
already exist, or the space is created.
@item VEC_ordered_remove
@itemx VEC_unordered_remove
Remove an item from the array, preserving order or not.
@item VEC_block_remove
Remove a set of items from the array.
@item VEC_address
Provide the address of the first element.
@item VEC_lower_bound
Binary search the array.
@end table
@section include
@node Coding
@chapter Coding
This chapter covers topics that are lower-level than the major
algorithms of @value{GDBN}.
@section Cleanups
@cindex cleanups
Cleanups are a structured way to deal with things that need to be done
later.
When your code does something (e.g., @code{xmalloc} some memory, or
@code{open} a file) that needs to be undone later (e.g., @code{xfree}
the memory or @code{close} the file), it can make a cleanup. The
cleanup will be done at some future point: when the command is finished
and control returns to the top level; when an error occurs and the stack
is unwound; or when your code decides it's time to explicitly perform
cleanups. Alternatively you can elect to discard the cleanups you
created.
Syntax:
@table @code
@item struct cleanup *@var{old_chain};
Declare a variable which will hold a cleanup chain handle.
@findex make_cleanup
@item @var{old_chain} = make_cleanup (@var{function}, @var{arg});
Make a cleanup which will cause @var{function} to be called with
@var{arg} (a @code{char *}) later. The result, @var{old_chain}, is a
handle that can later be passed to @code{do_cleanups} or
@code{discard_cleanups}. Unless you are going to call
@code{do_cleanups} or @code{discard_cleanups}, you can ignore the result
from @code{make_cleanup}.
@findex do_cleanups
@item do_cleanups (@var{old_chain});
Do all cleanups added to the chain since the corresponding
@code{make_cleanup} call was made.
@findex discard_cleanups
@item discard_cleanups (@var{old_chain});
Same as @code{do_cleanups} except that it just removes the cleanups from
the chain and does not call the specified functions.
@end table
Cleanups are implemented as a chain. The handle returned by
@code{make_cleanup} includes the cleanup passed to the call and any
later cleanups appended to the chain (but not yet discarded or
performed). E.g.:
@smallexample
make_cleanup (a, 0);
@{
  struct cleanup *old = make_cleanup (b, 0);
  make_cleanup (c, 0);
  ...
  do_cleanups (old);
@}
@end smallexample
@noindent
will call @code{c()} and @code{b()} but will not call @code{a()}. The
cleanup that calls @code{a()} will remain in the cleanup chain, and will
be done later unless otherwise discarded.@refill
Your function should explicitly do or discard the cleanups it creates.
Failing to do this leads to non-deterministic behavior since the caller
will arbitrarily do or discard your function's cleanups. This requirement
leads to two common cleanup styles.
The first style is try/finally. Before it exits, your code-block calls
@code{do_cleanups} with the old cleanup chain and thus ensures that your
code-block's cleanups are always performed. For instance, the following
code-segment avoids a memory leak problem (even when @code{error} is
called and a forced stack unwind occurs) by ensuring that the
@code{xfree} will always be called:
@smallexample
struct cleanup *old = make_cleanup (null_cleanup, 0);
data = xmalloc (sizeof blah);
make_cleanup (xfree, data);
... blah blah ...
do_cleanups (old);
@end smallexample
The second style is try/except. Before it exits, your code-block calls
@code{discard_cleanups} with the old cleanup chain and thus ensures that
any created cleanups are not performed. For instance, the following
code segment ensures that the file will be closed, but only if there is
an error:
@smallexample
FILE *file = fopen ("afile", "r");
struct cleanup *old = make_cleanup (close_file, file);
... blah blah ...
discard_cleanups (old);
return file;
@end smallexample
Some functions, e.g., @code{fputs_filtered()} or @code{error()}, specify
that they ``should not be called when cleanups are not in place''. This
means that any actions you need to reverse in the case of an error or
interruption must be on the cleanup chain before you call these
functions, since they might never return to your code (they
@samp{longjmp} instead).
@section Per-architecture module data
@cindex per-architecture module data
@cindex multi-arch data
@cindex data-pointer, per-architecture/per-module
The multi-arch framework includes a mechanism for adding module
specific per-architecture data-pointers to the @code{struct gdbarch}
architecture object.
A module registers one or more per-architecture data-pointers using:
@deftypefun struct gdbarch_data *gdbarch_data_register_pre_init (gdbarch_data_pre_init_ftype *@var{pre_init})
@var{pre_init} is used to, on-demand, allocate an initial value for a
per-architecture data-pointer using the architecture's obstack (passed
in as a parameter). Since @var{pre_init} can be called during
architecture creation, it is not parameterized with the architecture
and must not call modules that use per-architecture data.
@end deftypefun
@deftypefun struct gdbarch_data *gdbarch_data_register_post_init (gdbarch_data_post_init_ftype *@var{post_init})
@var{post_init} is used to obtain an initial value for a
per-architecture data-pointer @emph{after} architecture creation. Since
@var{post_init} is always called after architecture creation, it both
receives the fully initialized architecture and is free to call modules
that use per-architecture data (care needs to be taken to ensure that
those other modules do not try to call back to this module, as that
would create cycles in the initialization call graph).
@end deftypefun
These functions return a @code{struct gdbarch_data} that is used to
identify the per-architecture data-pointer added for that module.
The per-architecture data-pointer is accessed using the function:
@deftypefun void *gdbarch_data (struct gdbarch *@var{gdbarch}, struct gdbarch_data *@var{data_handle})
Given the architecture @var{gdbarch} and module data handle
@var{data_handle} (returned by @code{gdbarch_data_register_pre_init}
or @code{gdbarch_data_register_post_init}), this function returns the
current value of the per-architecture data-pointer. If the data
pointer is @code{NULL}, it is first initialized by calling the
corresponding @var{pre_init} or @var{post_init} method.
@end deftypefun
The examples below assume the following definitions:
@smallexample
struct nozel @{ int total; @};
static struct gdbarch_data *nozel_handle;
@end smallexample
A module can extend the architecture vector, adding additional
per-architecture data, using the @var{pre_init} method. The module's
per-architecture data is then initialized during architecture
creation.
In the below, the module's per-architecture @emph{nozel} is added. An
architecture can specify its nozel by calling @code{set_gdbarch_nozel}
from @code{gdbarch_init}.
@smallexample
static void *
nozel_pre_init (struct obstack *obstack)
@{
  struct nozel *data = OBSTACK_ZALLOC (obstack, struct nozel);
  return data;
@}
@end smallexample
@smallexample
extern void
set_gdbarch_nozel (struct gdbarch *gdbarch, int total)
@{
  struct nozel *data = gdbarch_data (gdbarch, nozel_handle);
  data->total = total;
@}
@end smallexample
A module can on-demand create architecture dependant data structures
using @code{post_init}.
In the below, the nozel's total is computed on-demand by
@code{nozel_post_init} using information obtained from the
architecture.
@smallexample
static void *
nozel_post_init (struct gdbarch *gdbarch)
@{
  struct nozel *data = GDBARCH_OBSTACK_ZALLOC (gdbarch, struct nozel);
  data->total = gdbarch@dots{} (gdbarch);
  return data;
@}
@end smallexample
@smallexample
extern int
nozel_total (struct gdbarch *gdbarch)
@{
  struct nozel *data = gdbarch_data (gdbarch, nozel_handle);
  return data->total;
@}
@end smallexample
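The @code{nozel_handle} itself would be obtained at module
initialization time. A hypothetical registration, using the
@var{post_init} variant, might look like this sketch; the
@code{_initialize_nozel} name is invented for the example.
@smallexample
void
_initialize_nozel (void)
@{
  nozel_handle = gdbarch_data_register_post_init (nozel_post_init);
@}
@end smallexample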
@section Wrapping Output Lines
@cindex line wrap in output
@findex wrap_here
Output that goes through @code{printf_filtered} or @code{fputs_filtered}
or @code{fputs_demangled} needs only to have calls to @code{wrap_here}
added in places that would be good breaking points. The utility
routines will take care of actually wrapping if the line width is
exceeded.
The argument to @code{wrap_here} is an indentation string which is
printed @emph{only} if the line breaks there. This argument is saved
away and used later. It must remain valid until the next call to
@code{wrap_here} or until a newline has been printed through the
@code{*_filtered} functions. Don't pass in a local variable and then
return!
It is usually best to call @code{wrap_here} after printing a comma or
space. If you call it before printing a space, make sure that your
indentation properly accounts for the leading space that will print if
the line wraps there.
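For example, a hypothetical loop that prints a comma-separated list
might call @code{wrap_here} after each separator; the @code{names},
@code{count}, and @code{i} variables are assumptions for the sketch.
@smallexample
for (i = 0; i < count; i++)
  @{
    printf_filtered ("%s, ", names[i]);
    wrap_here ("    ");   /* Print four spaces of indentation if the
                             line happens to break here.  */
  @}
@end smallexample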
Any function or set of functions that produce filtered output must
finish by printing a newline, to flush the wrap buffer, before switching
to unfiltered (@code{printf}) output. Symbol reading routines that
print warnings are a good example.
@section @value{GDBN} Coding Standards
@cindex coding standards
@value{GDBN} follows the GNU coding standards, as described in
@file{etc/standards.texi}. This file is also available for anonymous
FTP from GNU archive sites. @value{GDBN} takes a strict interpretation
of the standard; in general, when the GNU standard recommends a practice
but does not require it, @value{GDBN} requires it.
@value{GDBN} follows an additional set of coding standards specific to
@value{GDBN}, as described in the following sections.
@subsection ISO C
@value{GDBN} assumes an ISO/IEC 9899:1990 (a.k.a.@: ISO C90) compliant
compiler.
@value{GDBN} does not assume an ISO C or POSIX compliant C library.
@subsection Memory Management
@value{GDBN} does not use the functions @code{malloc}, @code{realloc},
@code{calloc}, @code{free} and @code{asprintf}.
@value{GDBN} uses the functions @code{xmalloc}, @code{xrealloc} and
@code{xcalloc} when allocating memory. Unlike @code{malloc} et al.@:
these functions do not return when the memory pool is empty. Instead,
they unwind the stack using cleanups. These functions return
@code{NULL} when requested to allocate a chunk of memory of size zero.
@emph{Pragmatics: By using these functions, the need to check every
memory allocation is removed. These functions provide portable
behavior.}
@value{GDBN} does not use the function @code{free}.
@value{GDBN} uses the function @code{xfree} to return memory to the
memory pool. Consistent with ISO-C, this function ignores a request to
free a @code{NULL} pointer.
@emph{Pragmatics: On some systems @code{free} fails when passed a
@code{NULL} pointer.}
@value{GDBN} can use the non-portable function @code{alloca} for the
allocation of small temporary values (such as strings).
@emph{Pragmatics: This function is very non-portable. Some systems
restrict the memory being allocated to no more than a few kilobytes.}
@value{GDBN} uses the string function @code{xstrdup} and the print
function @code{xstrprintf}.
@emph{Pragmatics: @code{asprintf} and @code{strdup} can fail. Print
functions such as @code{sprintf} are very prone to buffer overflow
errors.}
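For instance, instead of formatting into a fixed-size buffer with
@code{sprintf}, code might build the string with @code{xstrprintf} and
arrange for it to be freed; the message and variables in this sketch
are invented.
@smallexample
char *msg = xstrprintf ("breakpoint %d at %s", bp_num, addr_string);
make_cleanup (xfree, msg);
@end smallexample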
@subsection Compiler Warnings
@cindex compiler warnings
With few exceptions, developers should avoid the configuration option
@samp{--disable-werror} when building @value{GDBN}. The exceptions
are listed in the file @file{gdb/MAINTAINERS}. The default, when
building with @sc{gcc}, is @samp{--enable-werror}.
This option causes @value{GDBN} (when built using GCC) to be compiled
with a carefully selected list of compiler warning flags. Any warnings
from those flags are treated as errors.
The current list of warning flags includes:
@table @samp
@item -Wall
Recommended @sc{gcc} warnings.
@item -Wdeclaration-after-statement
@sc{gcc} 3.x (and later) and @sc{c99} allow declarations mixed with
code, but @sc{gcc} 2.x and @sc{c89} do not.
@item -Wpointer-arith
@item -Wformat-nonliteral
Non-literal format strings, with a few exceptions, are bugs - they
might contain unintended user-supplied format specifiers.
Since @value{GDBN} uses the @code{format printf} attribute on all
@code{printf} like functions this checks not just @code{printf} calls
but also calls to functions such as @code{fprintf_unfiltered}.
@item -Wno-pointer-sign
In version 4.0, GCC began warning about pointer argument passing or
assignment even when the source and destination differed only in
signedness. However, most @value{GDBN} code doesn't distinguish
carefully between @code{char} and @code{unsigned char}. In early 2006
the @value{GDBN} developers decided correcting these warnings wasn't
worth the time it would take.
@item -Wno-unused-parameter
Due to the way that @value{GDBN} is implemented many functions have
unused parameters. Consequently this warning is avoided. The macro
@code{ATTRIBUTE_UNUSED} is not used as it leads to false negatives ---
it is not an error to have @code{ATTRIBUTE_UNUSED} on a parameter that
is being used.
@item -Wno-unused
@itemx -Wno-switch
@itemx -Wno-char-subscripts
These are warnings which might be useful for @value{GDBN}, but are
currently too noisy to enable with @samp{-Werror}.
@end table
@subsection Formatting
@cindex source code formatting
The standard GNU recommendations for formatting must be followed
strictly.
A function declaration should not have its name in column zero. A
function definition should have its name in column zero.
@smallexample
/* Declaration */
static void foo (void);
/* Definition */
void
foo (void)
@{
@}
@end smallexample
@emph{Pragmatics: This simplifies scripting. Function definitions can
be found using @samp{^function-name}.}
There must be a space between a function or macro name and the opening
parenthesis of its argument list (except for macro definitions, as
required by C). There must not be a space after an open paren/bracket
or before a close paren/bracket.
While additional whitespace is generally helpful for reading, do not use
more than one blank line to separate blocks, and avoid adding whitespace
after the end of a program line (as of 1/99, some 600 lines had
whitespace after the semicolon). Excess whitespace causes difficulties
for @code{diff} and @code{patch} utilities.
Pointers are declared using the traditional K&R C style:
@smallexample
void *foo;
@end smallexample
@noindent
and not:
@smallexample
void * foo;
void* foo;
@end smallexample
@subsection Comments
@cindex comment formatting
The standard GNU requirements on comments must be followed strictly.
Block comments must appear in the following form, with no @code{/*}- or
@code{*/}-only lines, and no leading @code{*}:
@smallexample
/* Wait for control to return from inferior to debugger. If inferior
gets a signal, we may decide to start it up again instead of
returning. That is why there is a loop in this function. When
this function actually returns it means the inferior should be left
stopped and @value{GDBN} should read more commands. */
@end smallexample
(Note that this format is encouraged by Emacs; tabbing for a multi-line
comment works correctly, and @kbd{M-q} fills the block consistently.)
Put a blank line between the block comments preceding function or
variable definitions, and the definition itself.
In general, put function-body comments on lines by themselves, rather
than trying to fit them into the 20 characters left at the end of a
line, since either the comment or the code will inevitably get longer
than will fit, and then somebody will have to move it anyhow.
@subsection C Usage
@cindex C data types
Code must not depend on the sizes of C data types, the format of the
host's floating point numbers, the alignment of anything, or the order
of evaluation of expressions.
@cindex function usage
Use functions freely. There are only a handful of compute-bound areas
in @value{GDBN} that might be affected by the overhead of a function
call, mainly in symbol reading. Most of @value{GDBN}'s performance is
limited by the target interface (whether serial line or system call).
However, use functions with moderation. A thousand one-line functions
are just as hard to understand as a single thousand-line function.
@emph{Macros are bad, M'kay.}
(But if you have to use a macro, make sure that the macro arguments are
protected with parentheses.)
@cindex types
Declarations like @samp{struct foo *} should be used in preference to
declarations like @samp{typedef struct foo @{ @dots{} @} *foo_ptr}.
@subsection Function Prototypes
@cindex function prototypes
Prototypes must be used when both @emph{declaring} and @emph{defining}
a function. Prototypes for @value{GDBN} functions must include both the
argument type and name, with the name matching that used in the actual
function definition.
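For example (the function name and arguments are hypothetical):
@smallexample
/* In the header file.  */
extern int example_count_frames (struct frame_info *frame, int max_depth);

/* In the source file.  */
int
example_count_frames (struct frame_info *frame, int max_depth)
@{
  @dots{}
@}
@end smallexample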
All external functions should have a declaration in a header file that
callers include, except for @code{_initialize_*} functions, which must
be external so that @file{init.c} construction works, but shouldn't be
visible to random source files.
Where a source file needs a forward declaration of a static function,
that declaration must appear in a block near the top of the source file.
@subsection Internal Error Recovery
During its execution, @value{GDBN} can encounter two types of errors.
User errors and internal errors. User errors include not only a user
entering an incorrect command but also problems arising from corrupt
object files and system errors when interacting with the target.
Internal errors include situations where @value{GDBN} has detected, at
run time, a corrupt or erroneous situation.
When reporting an internal error, @value{GDBN} uses
@code{internal_error} and @code{gdb_assert}.
@value{GDBN} must not call @code{abort} or @code{assert}.
@emph{Pragmatics: There is no @code{internal_warning} function. Either
the code detected a user error, recovered from it and issued a
@code{warning} or the code failed to correctly recover from the user
error and issued an @code{internal_error}.}
@subsection File Names
Any file used when building the core of @value{GDBN} must be in lower
case. Any file used when building the core of @value{GDBN} must be 8.3
unique. These requirements apply to both source and generated files.
@emph{Pragmatics: The core of @value{GDBN} must be buildable on many
platforms including DJGPP and MacOS/HFS. Every time an unfriendly file
is introduced to the build process both @file{Makefile.in} and
@file{configure.in} need to be modified accordingly. Compare the
convoluted conversion process needed to transform @file{COPYING} into
@file{copying.c} with the conversion needed to transform
@file{version.in} into @file{version.c}.}
Any non 8.3 compliant file (that is not used when building the core
of @value{GDBN}) must be added to @file{gdb/config/djgpp/fnchange.lst}.
@emph{Pragmatics: This is clearly a compromise.}
When @value{GDBN} has a local version of a system header file (e.g.@:
@file{string.h}), the file name is based on the POSIX header name,
prefixed with @file{gdb_} (@file{gdb_string.h}). These headers should be
relatively
independent: they should use only macros defined by @file{configure},
the compiler, or the host; they should include only system headers; they
should refer only to system types. They may be shared between multiple
programs, e.g.@: @value{GDBN} and @sc{gdbserver}.
For other files @samp{-} is used as the separator.
@subsection Include Files
A @file{.c} file should include @file{defs.h} first.
A @file{.c} file should directly include the @code{.h} file of every
declaration and/or definition it directly refers to. It cannot rely on
indirect inclusion.
A @file{.h} file should directly include the @code{.h} file of every
declaration and/or definition it directly refers to. It cannot rely on
indirect inclusion. Exception: The file @file{defs.h} does not need to
be directly included.
An external declaration should only appear in one include file.
An external declaration should never appear in a @code{.c} file.
Exception: a declaration for the @code{_initialize} function that
pacifies @option{-Wmissing-declarations}.
A @code{typedef} definition should only appear in one include file.
An opaque @code{struct} declaration can appear in multiple @file{.h}
files. Where possible, a @file{.h} file should use an opaque
@code{struct} declaration instead of an include.
All @file{.h} files should be wrapped in:
@smallexample
#ifndef INCLUDE_FILE_NAME_H
#define INCLUDE_FILE_NAME_H
header body
#endif
@end smallexample
@subsection Clean Design and Portable Implementation
@cindex design
In addition to getting the syntax right, there's the little question of
semantics. Some things are done in certain ways in @value{GDBN} because long
experience has shown that the more obvious ways caused various kinds of
trouble.
@cindex assumptions about targets
You can't assume the byte order of anything that comes from a target
(including @var{value}s, object files, and instructions). Such things
must be byte-swapped using @code{SWAP_TARGET_AND_HOST} in
@value{GDBN}, or one of the swap routines defined in @file{bfd.h},
such as @code{bfd_get_32}.
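For example, a 32-bit value read from target memory must be decoded
with a byte-order-aware routine rather than by casting the buffer.  A
minimal sketch, assuming the routines declared in @file{defs.h} (the
function name is hypothetical):

@smallexample
static ULONGEST
read_target_word (CORE_ADDR addr)
@{
  gdb_byte buf[4];

  read_memory (addr, buf, 4);
  /* Decode using the target's byte order; never cast BUF directly,
     since the host's byte order may differ.  */
  return extract_unsigned_integer (buf, 4);
@}
@end smallexample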
You can't assume that you know what interface is being used to talk to
the target system. All references to the target must go through the
current @code{target_ops} vector.
You can't assume that the host and target machines are the same machine
(except in the ``native'' support modules). In particular, you can't
assume that the target machine's header files will be available on the
host machine. Target code must bring along its own header files --
written from scratch or explicitly donated by their owner, to avoid
copyright problems.
@cindex portability
Insertion of new @code{#ifdef}'s will be frowned upon. It's much better
to write the code portably than to conditionalize it for various
systems.
@cindex system dependencies
New @code{#ifdef}'s which test for specific compilers or manufacturers
or operating systems are unacceptable. All @code{#ifdef}'s should test
for features. The information about which configurations contain which
features should be segregated into the configuration files. Experience
has proven far too often that a feature unique to one particular system
often creeps into other systems; and that a conditional based on some
predefined macro for your current system will become worthless over
time, as new versions of your system come out that behave differently
with regard to this feature.
Adding code that handles specific architectures, operating systems,
target interfaces, or hosts, is not acceptable in generic code.
@cindex portable file name handling
@cindex file names, portability
One particularly notorious area where system dependencies tend to
creep in is handling of file names. The mainline @value{GDBN} code
assumes Posix semantics of file names: absolute file names begin with
a forward slash @file{/}, slashes are used to separate leading
directories, and file names are case-sensitive. These assumptions are not
necessarily true on non-Posix systems such as MS-Windows. To avoid
system-dependent code where you need to take apart or construct a file
name, use the following portable macros:
@table @code
@findex HAVE_DOS_BASED_FILE_SYSTEM
@item HAVE_DOS_BASED_FILE_SYSTEM
This preprocessing symbol is defined to a non-zero value on hosts
whose filesystems belong to the MS-DOS/MS-Windows family. Use this
symbol to write conditional code which should only be compiled for
such hosts.
@findex IS_DIR_SEPARATOR
@item IS_DIR_SEPARATOR (@var{c})
Evaluates to a non-zero value if @var{c} is a directory separator
character. On Unix and GNU/Linux systems, only a slash @file{/} is
such a character, but on Windows, both @file{/} and @file{\} will
pass.
@findex IS_ABSOLUTE_PATH
@item IS_ABSOLUTE_PATH (@var{file})
Evaluates to a non-zero value if @var{file} is an absolute file name.
For Unix and GNU/Linux hosts, a name which begins with a slash
@file{/} is absolute. On DOS and Windows, @file{d:/foo} and
@file{x:\bar} are also absolute file names.
@findex FILENAME_CMP
@item FILENAME_CMP (@var{f1}, @var{f2})
Calls a function which compares file names @var{f1} and @var{f2} as
appropriate for the underlying host filesystem. For Posix systems,
this simply calls @code{strcmp}; on case-insensitive filesystems it
will call @code{strcasecmp} instead.
@findex DIRNAME_SEPARATOR
@item DIRNAME_SEPARATOR
Evaluates to a character which separates directories in
@code{PATH}-style lists, typically held in environment variables.
This character is @samp{:} on Unix, @samp{;} on DOS and Windows.
@findex SLASH_STRING
@item SLASH_STRING
This evaluates to a constant string you should use to produce an
absolute filename from leading directories and the file's basename.
@code{SLASH_STRING} is @code{"/"} on most systems, but might be
@code{"\\"} for some Windows-based ports.
@end table
In addition to using these macros, be sure to use portable library
functions whenever possible. For example, to extract a directory or a
basename part from a file name, use the @code{dirname} and
@code{basename} library functions (available in @code{libiberty} for
platforms which don't provide them), instead of searching for a slash
with @code{strrchr}.
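@noindent
As an illustration, here is a sketch (the helper name is hypothetical)
that builds a file name from a directory and a basename using the
macros above:

@smallexample
static char *
concat_path (const char *dir, const char *base)
@{
  char *result;

  if (IS_ABSOLUTE_PATH (base) || *dir == '\0')
    return xstrdup (base);

  result = xmalloc (strlen (dir) + strlen (SLASH_STRING)
                    + strlen (base) + 1);
  strcpy (result, dir);
  /* Avoid doubling the separator if DIR already ends with one.  */
  if (!IS_DIR_SEPARATOR (dir[strlen (dir) - 1]))
    strcat (result, SLASH_STRING);
  strcat (result, base);
  return result;
@}
@end smallexample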
Another way to generalize @value{GDBN} along a particular interface is with an
attribute struct. For example, @value{GDBN} has been generalized to handle
multiple kinds of remote interfaces---not by @code{#ifdef}s everywhere, but
by defining the @code{target_ops} structure and having a current target (as
well as a stack of targets below it, for memory references). Whenever
something needs to be done that depends on which remote interface we are
using, a flag in the current target_ops structure is tested (e.g.,
@code{target_has_stack}), or a function is called through a pointer in the
current target_ops structure. In this way, when a new remote interface
is added, only one module needs to be touched---the one that actually
implements the new remote interface. Other examples of
attribute-structs are BFD access to multiple kinds of object file
formats, or @value{GDBN}'s access to multiple source languages.
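For instance, generic code asks the current target vector instead of
testing a compile-time symbol.  A minimal sketch (the function is
hypothetical; @code{target_has_stack} and @code{printf_filtered} are
the existing interfaces):

@smallexample
static void
report_stack (void)
@{
  /* Ask the current target vector, not a compile-time #ifdef.  */
  if (target_has_stack)
    printf_filtered (_("The current target has a stack.\n"));
  else
    printf_filtered (_("No stack.\n"));
@}
@end smallexample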
Please avoid duplicating code. For example, in @value{GDBN} 3.x all
the code interfacing between @code{ptrace} and the rest of
@value{GDBN} was duplicated in @file{*-dep.c}, and so changing
something was very painful. In @value{GDBN} 4.x, these have all been
consolidated into @file{infptrace.c}. @file{infptrace.c} can deal
with variations between systems the same way any system-independent
file would (hooks, @code{#if defined}, etc.), and machines which are
radically different don't need to use @file{infptrace.c} at all.
All debugging code must be controllable using the @samp{set debug
@var{module}} command. Do not use @code{printf} to print trace
messages. Use @code{fprintf_unfiltered (gdb_stdlog, ...)}. Do not use
@code{#ifdef DEBUG}.
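For example, a module with a @samp{set debug frob} flag might emit its
trace output like the following sketch (the flag, its registration,
and the function are hypothetical):

@smallexample
/* Set by the "set debug frob" command (registered elsewhere).  */
static int frob_debug = 0;

static void
frob_step (CORE_ADDR pc)
@{
  if (frob_debug)
    fprintf_unfiltered (gdb_stdlog,
                        "frob_step: pc = %s\n", paddr_nz (pc));
@}
@end smallexample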
@node Porting GDB
@chapter Porting @value{GDBN}
@cindex porting to new machines
Most of the work in making @value{GDBN} compile on a new machine is in
specifying the configuration of the machine. This is done in a
dizzying variety of header files and configuration scripts, which we
hope to make more sensible soon. Let's say your new host is called an
@var{xyz} (e.g., @samp{sun4}), and its full three-part configuration
name is @code{@var{arch}-@var{xvend}-@var{xos}} (e.g.,
@samp{sparc-sun-sunos4}). In particular:
@itemize @bullet
@item
In the top level directory, edit @file{config.sub} and add @var{arch},
@var{xvend}, and @var{xos} to the lists of supported architectures,
vendors, and operating systems near the bottom of the file. Also, add
@var{xyz} as an alias that maps to
@code{@var{arch}-@var{xvend}-@var{xos}}. You can test your changes by
running
@smallexample
./config.sub @var{xyz}
@end smallexample
@noindent
and
@smallexample
./config.sub @code{@var{arch}-@var{xvend}-@var{xos}}
@end smallexample
@noindent
which should both respond with @code{@var{arch}-@var{xvend}-@var{xos}}
and no error messages.
@noindent
You need to port BFD, if that hasn't been done already. Porting BFD is
beyond the scope of this manual.
@item
To configure @value{GDBN} itself, edit @file{gdb/configure.host} to recognize
your system and set @code{gdb_host} to @var{xyz}, and (unless your
desired target is already available) also edit @file{gdb/configure.tgt},
setting @code{gdb_target} to something appropriate (for instance,
@var{xyz}).
@emph{Maintainer's note: Work in progress. The file
@file{gdb/configure.host} originally needed to be modified when either a
new native target or a new host machine was being added to @value{GDBN}.
Recent changes have removed this requirement. The file now only needs
to be modified when adding a new native configuration. This will likely
change again in the future.}
@item
Finally, you'll need to specify and define @value{GDBN}'s host-, native-, and
target-dependent @file{.h} and @file{.c} files used for your
configuration.
@end itemize
@node Versions and Branches
@chapter Versions and Branches
@section Versions
@value{GDBN}'s version is determined by the file
@file{gdb/version.in} and takes one of the following forms:
@table @asis
@item @var{major}.@var{minor}
@itemx @var{major}.@var{minor}.@var{patchlevel}
an official release (e.g., 6.2 or 6.2.1)
@item @var{major}.@var{minor}.@var{patchlevel}.@var{YYYY}@var{MM}@var{DD}
a snapshot taken at @var{YYYY}-@var{MM}-@var{DD}-gmt (e.g.,
6.1.50.20020302, 6.1.90.20020304, or 6.1.0.20020308)
@item @var{major}.@var{minor}.@var{patchlevel}.@var{YYYY}@var{MM}@var{DD}-cvs
a @sc{cvs} check out drawn on @var{YYYY}-@var{MM}-@var{DD} (e.g.,
6.1.50.20020302-cvs, 6.1.90.20020304-cvs, or 6.1.0.20020308-cvs)
@item @var{major}.@var{minor}.@var{patchlevel}.@var{YYYY}@var{MM}@var{DD} (@var{vendor})
a vendor-specific release of @value{GDBN} that, while based on@*
@var{major}.@var{minor}.@var{patchlevel}.@var{YYYY}@var{MM}@var{DD},
may include additional changes
@end table
@value{GDBN}'s mainline uses the @var{major} and @var{minor} version
numbers from the most recent release branch, with a @var{patchlevel}
of 50. At the time each new release branch is created, the mainline's
@var{major} and @var{minor} version numbers are updated.
@value{GDBN}'s release branch is similar. When the branch is cut, the
@var{patchlevel} is changed from 50 to 90. As draft releases are
drawn from the branch, the @var{patchlevel} is incremented. Once the
first release (@var{major}.@var{minor}) has been made, the
@var{patchlevel} is set to 0 and updates have an incremented
@var{patchlevel}.
For snapshots, and @sc{cvs} check outs, it is also possible to
identify the @sc{cvs} origin:
@table @asis
@item @var{major}.@var{minor}.50.@var{YYYY}@var{MM}@var{DD}
drawn from the @sc{head} of mainline @sc{cvs} (e.g., 6.1.50.20020302)
@item @var{major}.@var{minor}.90.@var{YYYY}@var{MM}@var{DD}
@itemx @var{major}.@var{minor}.91.@var{YYYY}@var{MM}@var{DD} @dots{}
drawn from a release branch prior to the release (e.g.,
6.1.90.20020304)
@item @var{major}.@var{minor}.0.@var{YYYY}@var{MM}@var{DD}
@itemx @var{major}.@var{minor}.1.@var{YYYY}@var{MM}@var{DD} @dots{}
drawn from a release branch after the release (e.g., 6.2.0.20020308)
@end table
If the previous @value{GDBN} version is 6.1 and the current version is
6.2, then, substituting 6 for @var{major} and 1 or 2 for @var{minor},
here's an illustration of a typical sequence:
@smallexample
                   <HEAD>
                      |
           6.1.50.20020302-cvs
                      |
                      +--------------------------.
                      |                    <gdb_6_2-branch>
                      |                           |
           6.2.50.20020303-cvs             6.1.90 (draft #1)
                      |                           |
           6.2.50.20020304-cvs         6.1.90.20020304-cvs
                      |                           |
           6.2.50.20020305-cvs             6.1.91 (draft #2)
                      |                           |
           6.2.50.20020306-cvs         6.1.91.20020306-cvs
                      |                           |
           6.2.50.20020307-cvs                6.2 (release)
                      |                           |
           6.2.50.20020308-cvs           6.2.0.20020308-cvs
                      |                           |
           6.2.50.20020309-cvs              6.2.1 (update)
                      |                           |
           6.2.50.20020310-cvs           <branch closed>
                      |
           6.2.50.20020311-cvs
                      |
                      +--------------------------.
                      |                     <gdb_6_3-branch>
                      |                           |
           6.3.50.20020312-cvs             6.2.90 (draft #1)
                      |                           |
@end smallexample
@section Release Branches
@cindex Release Branches
@value{GDBN} draws a release series (6.2, 6.2.1, @dots{}) from a
single release branch, and identifies that branch using the @sc{cvs}
branch tags:
@smallexample
gdb_@var{major}_@var{minor}-@var{YYYY}@var{MM}@var{DD}-branchpoint
gdb_@var{major}_@var{minor}-branch
gdb_@var{major}_@var{minor}-@var{YYYY}@var{MM}@var{DD}-release
@end smallexample
@emph{Pragmatics: To help identify the date at which a branch or
release is made, both the branchpoint and release tags include the
date that they are cut (@var{YYYY}@var{MM}@var{DD}) in the tag. The
branch tag, denoting the head of the branch, does not need this.}
@section Vendor Branches
@cindex vendor branches
To avoid version conflicts, vendors are expected to modify the file
@file{gdb/version.in} to include a vendor unique alphabetic identifier
(an official @value{GDBN} release never uses alphabetic characters in
its version identifier). E.g., @samp{6.2widgit2}, or @samp{6.2 (Widgit
Inc Patch 2)}.
@section Experimental Branches
@cindex experimental branches
@subsection Guidelines
@value{GDBN} permits the creation of branches, cut from the @sc{cvs}
repository, for experimental development. Branches make it possible
for developers to share preliminary work, and maintainers to examine
significant new developments.
The following are a set of guidelines for creating such branches:
@table @emph
@item a branch has an owner
The owner can set further policy for a branch, but may not change the
ground rules. In particular, they can set a policy for commits (be it
adding more reviewers or deciding who can commit).
@item all commits are posted
All changes committed to a branch shall also be posted to
@email{gdb-patches@@sources.redhat.com, the @value{GDBN} patches
mailing list}. While commentary on such changes is encouraged, people
should remember that the changes only apply to a branch.
@item all commits are covered by an assignment
This ensures that all changes belong to the Free Software Foundation,
and avoids the possibility that the branch may become contaminated.
@item a branch is focused
A focused branch has a single objective or goal, and does not contain
unnecessary or irrelevant changes. Cleanups, where identified, should
be pushed into the mainline as soon as possible.
@item a branch tracks mainline
This keeps the level of divergence under control. It also keeps the
pressure on developers to push cleanups and other stuff into the
mainline.
@item a branch shall contain the entire @value{GDBN} module
The @value{GDBN} module @code{gdb} should be specified when creating a
branch (branches of individual files should be avoided). @xref{Tags}.
@item a branch shall be branded using @file{version.in}
The file @file{gdb/version.in} shall be modified so that it identifies
the branch @var{owner} and branch @var{name}, e.g.,
@samp{6.2.50.20030303_owner_name} or @samp{6.2 (Owner Name)}.
@end table
@subsection Tags
@anchor{Tags}
To simplify the identification of @value{GDBN} branches, the following
branch tagging convention is strongly recommended:
@table @code
@item @var{owner}_@var{name}-@var{YYYYMMDD}-branchpoint
@itemx @var{owner}_@var{name}-@var{YYYYMMDD}-branch
The branch point and corresponding branch tag. @var{YYYYMMDD} is the
date that the branch was created. A branch is created using the
sequence: @anchor{experimental branch tags}
@smallexample
cvs rtag @var{owner}_@var{name}-@var{YYYYMMDD}-branchpoint gdb
cvs rtag -b -r @var{owner}_@var{name}-@var{YYYYMMDD}-branchpoint \
@var{owner}_@var{name}-@var{YYYYMMDD}-branch gdb
@end smallexample
@item @var{owner}_@var{name}-@var{yyyymmdd}-mergepoint
The tagged point, on the mainline, that was used when merging the branch
on @var{yyyymmdd}. To merge in all changes since the branch was cut,
use a command sequence like:
@smallexample
cvs rtag @var{owner}_@var{name}-@var{yyyymmdd}-mergepoint gdb
cvs update \
   -j@var{owner}_@var{name}-@var{YYYYMMDD}-branchpoint \
   -j@var{owner}_@var{name}-@var{yyyymmdd}-mergepoint
@end smallexample
@noindent
Similar sequences can be used to just merge in changes since the last
merge.
@end table
@noindent
For further information on @sc{cvs}, see
@uref{http://www.gnu.org/software/cvs/, Concurrent Versions System}.
@node Start of New Year Procedure
@chapter Start of New Year Procedure
@cindex new year procedure
At the start of each new year, the following actions should be performed:
@itemize @bullet
@item
Rotate the ChangeLog file
The current @file{ChangeLog} file should be renamed into
@file{ChangeLog-YYYY} where YYYY is the year that has just passed.
A new @file{ChangeLog} file should be created, and its contents should
contain a reference to the previous ChangeLog. The following should
also be preserved at the end of the new ChangeLog, in order to provide
the appropriate settings when editing this file with Emacs:
@smallexample
Local Variables:
mode: change-log
left-margin: 8
fill-column: 74
version-control: never
End:
@end smallexample
@item
Add an entry for the newly created ChangeLog file (@file{ChangeLog-YYYY})
in @file{gdb/config/djgpp/fnchange.lst}.
@item
Update the copyright year in the startup message
Update the copyright year in file @file{top.c}, function
@code{print_gdb_version}.
@end itemize
@node Releasing GDB
@chapter Releasing @value{GDBN}
@cindex making a new release of gdb
@section Branch Commit Policy
The branch commit policy is pretty slack. @value{GDBN} releases 5.0,
5.1, and 5.2 all used the following:
@itemize @bullet
@item
The @file{gdb/MAINTAINERS} file still holds.
@item
Don't fix something on the branch unless/until it is also fixed in the
trunk. If this isn't possible, mentioning it in the @file{gdb/PROBLEMS}
file is better than committing a hack.
@item
When considering a patch for the branch, suggested criteria include:
Does it fix a build? Does it fix the sequence @kbd{break main; run}
when debugging a static binary?
@item
The further a change is from the core of @value{GDBN}, the less likely
the change will worry anyone (e.g., target specific code).
@item
Only post a proposal to change the core of @value{GDBN} after you've
sent individual bribes to all the people listed in the
@file{MAINTAINERS} file @t{;-)}
@end itemize
@emph{Pragmatics: Provided updates are restricted to non-core
functionality there is little chance that a broken change will be fatal.
This means that changes such as adding a new architecture or (within
reason) support for a new host are considered acceptable.}
@section Obsoleting code
Before anything else, poke the other developers (and around the source
code) to see if there is anything that can be removed from @value{GDBN}
(an old target, an unused file).
Obsolete code is identified by adding an @code{OBSOLETE} prefix to every
line. Doing this means that it is easy to identify something that has
been obsoleted when grepping through the sources.
The process is done in stages --- this is mainly to ensure that the
wider @value{GDBN} community has a reasonable opportunity to respond.
Remember, everything on the Internet takes a week.
@enumerate
@item
Post the proposal on @email{gdb@@sources.redhat.com, the GDB mailing
list}. Creating a bug report to track the task's state is also highly
recommended.
@item
Wait a week or so.
@item
Post the proposal on @email{gdb-announce@@sources.redhat.com, the GDB
Announcement mailing list}.
@item
Wait a week or so.
@item
Go through and edit all relevant files and lines so that they are
prefixed with the word @code{OBSOLETE}.
@item
Wait until the next GDB version, containing this obsolete code, has been
released.
@item
Remove the obsolete code.
@end enumerate
@noindent
@emph{Maintainer note: While removing old code is regrettable it is
hopefully better for @value{GDBN}'s long term development. Firstly it
helps the developers by removing code that is either no longer relevant
or simply wrong. Secondly since it removes any history associated with
the file (effectively clearing the slate) the developer has a much freer
hand when it comes to fixing broken files.}
@section Before the Branch
The most important objective at this stage is to find and fix simple
changes that become a pain to track once the branch is created. For
instance, configuration problems that stop @value{GDBN} from even
building. If you can't get the problem fixed, document it in the
@file{gdb/PROBLEMS} file.
@subheading Prompt for @file{gdb/NEWS}
People always forget. Send a post reminding them but also if you know
something interesting happened add it yourself. The @code{schedule}
script will mention this in its e-mail.
@subheading Review @file{gdb/README}
Grab one of the nightly snapshots and then walk through the
@file{gdb/README} looking for anything that can be improved. The
@code{schedule} script will mention this in its e-mail.
@subheading Refresh any imported files.
A number of files are taken from external repositories. They include:
@itemize @bullet
@item
@file{texinfo/texinfo.tex}
@item
@file{config.guess} et al.@: (see the top-level @file{MAINTAINERS}
file)
@item
@file{etc/standards.texi}, @file{etc/make-stds.texi}
@end itemize
@subheading Check the ARI
@uref{http://sources.redhat.com/gdb/ari,,A.R.I.} is an @code{awk} script
(Awk Regression Index ;-) that checks for a number of errors and coding
conventions. The checks include things like using @code{malloc} instead
of @code{xmalloc} and file naming problems. There shouldn't be any
regressions.
@subsection Review the bug data base
Close anything obviously fixed.
@subsection Check all cross targets build
The targets are listed in @file{gdb/MAINTAINERS}.
@section Cut the Branch
@subheading Create the branch
@smallexample
$ u=5.1
$ v=5.2
$ V=`echo $v | sed 's/\./_/g'`
$ D=`date -u +%Y-%m-%d`
$ echo $u $V $D
5.1 5_2 2002-03-03
$ echo cvs -f -d :ext:sources.redhat.com:/cvs/src rtag \
-D $D-gmt gdb_$V-$D-branchpoint insight
cvs -f -d :ext:sources.redhat.com:/cvs/src rtag
-D 2002-03-03-gmt gdb_5_2-2002-03-03-branchpoint insight
$ ^echo ^^
...
$ echo cvs -f -d :ext:sources.redhat.com:/cvs/src rtag \
-b -r gdb_$V-$D-branchpoint gdb_$V-branch insight
cvs -f -d :ext:sources.redhat.com:/cvs/src rtag \
-b -r gdb_5_2-2002-03-03-branchpoint gdb_5_2-branch insight
$ ^echo ^^
...
$
@end smallexample
@itemize @bullet
@item
By using @kbd{-D YYYY-MM-DD-gmt}, the branch is forced to an exact
date/time.
@item
The trunk is first tagged so that the branch point can easily be found.
@item
Insight, which includes @value{GDBN}, is tagged at the same time.
@item
@file{version.in} gets bumped to avoid version number conflicts.
@item
The reading of @file{.cvsrc} is disabled using @file{-f}.
@end itemize
@subheading Update @file{version.in}
@smallexample
$ u=5.1
$ v=5.2
$ V=`echo $v | sed 's/\./_/g'`
$ echo $u $V
5.1 5_2
$ cd /tmp
$ echo cvs -f -d :ext:sources.redhat.com:/cvs/src co \
-r gdb_$V-branch src/gdb/version.in
cvs -f -d :ext:sources.redhat.com:/cvs/src co
-r gdb_5_2-branch src/gdb/version.in
$ ^echo ^^
U src/gdb/version.in
$ cd src/gdb
$ echo $u.90-0000-00-00-cvs > version.in
$ cat version.in
5.1.90-0000-00-00-cvs
$ cvs -f commit version.in
@end smallexample
@itemize @bullet
@item
@file{0000-00-00} is used as a date to prime the @file{version.in} update
mechanism.
@item
@file{.90} and the previous branch version are used as a fairly
arbitrary initial branch version number.
@end itemize
@subheading Update the web and news pages
Something?
@subheading Tweak cron to track the new branch
The file @file{gdbadmin/cron/crontab} contains gdbadmin's cron table.
This file needs to be updated so that:
@itemize @bullet
@item
A daily timestamp is added to the file @file{version.in}.
@item
The new branch is included in the snapshot process.
@end itemize
@noindent
See the file @file{gdbadmin/cron/README} for how to install the updated
cron table.
The file @file{gdbadmin/ss/README} should also be reviewed to reflect
any changes. That file is copied to both the branch/ and current/
snapshot directories.
@subheading Update the NEWS and README files
The @file{NEWS} file needs to be updated so that on the branch it refers
to @emph{changes in the current release} while on the trunk it also
refers to @emph{changes since the current release}.
The @file{README} file needs to be updated so that it refers to the
current release.
@subheading Post the branch info
Send an announcement to the mailing lists:
@itemize @bullet
@item
@email{gdb-announce@@sources.redhat.com, GDB Announcement mailing list}
@item
@email{gdb@@sources.redhat.com, GDB Discussion mailing list} and
@email{gdb-testers@@sources.redhat.com, GDB Testers mailing list}
@end itemize
@emph{Pragmatics: The branch creation is sent to the announce list to
ensure that people not subscribed to the higher volume discussion
list are alerted.}
The announcement should include:
@itemize @bullet
@item
The branch tag.
@item
How to check out the branch using CVS.
@item
The date/number of weeks until the release.
@item
The branch commit policy still holds.
@end itemize
@section Stabilize the branch
Something goes here.
@section Create a Release
The process of creating and then making available a release is broken
down into a number of stages. The first part addresses the technical
process of creating a releasable tar ball. The later stages address the
process of releasing that tar ball.
When making a release candidate just the first section is needed.
@subsection Create a release candidate
The objective at this stage is to create a set of tar balls that can be
made available as a formal release (or as a less formal release
candidate).
@subsubheading Freeze the branch
Send out an e-mail notifying everyone that the branch is frozen to
@email{gdb-patches@@sources.redhat.com}.
@subsubheading Establish a few defaults.
@smallexample
$ b=gdb_5_2-branch
$ v=5.2
$ t=/sourceware/snapshot-tmp/gdbadmin-tmp
$ echo $t/$b/$v
/sourceware/snapshot-tmp/gdbadmin-tmp/gdb_5_2-branch/5.2
$ mkdir -p $t/$b/$v
$ cd $t/$b/$v
$ pwd
/sourceware/snapshot-tmp/gdbadmin-tmp/gdb_5_2-branch/5.2
$ which autoconf
/home/gdbadmin/bin/autoconf
$
@end smallexample
@noindent
Notes:
@itemize @bullet
@item
Check the @code{autoconf} version carefully. You want to be using the
version taken from the @file{binutils} snapshot directory, which can be
found at @uref{ftp://sources.redhat.com/pub/binutils/}. It is very
unlikely that a system installed version of @code{autoconf} (e.g.,
@file{/usr/bin/autoconf}) is correct.
@end itemize
@subsubheading Check out the relevant modules:
@smallexample
$ for m in gdb insight
do
( mkdir -p $m && cd $m && cvs -q -f -d /cvs/src co -P -r $b $m )
done
$
@end smallexample
@noindent
Note:
@itemize @bullet
@item
The reading of @file{.cvsrc} is disabled (@file{-f}) so that there isn't
any confusion between what is written here and what your local
@code{cvs} really does.
@end itemize
@subsubheading Update relevant files.
@table @file
@item gdb/NEWS
Major releases get their comments added as part of the mainline. Minor
releases should probably mention any significant bugs that were fixed.
Don't forget to include the @file{ChangeLog} entry.
@smallexample
$ emacs gdb/src/gdb/NEWS
...
c-x 4 a
...
c-x c-s c-x c-c
$ cp gdb/src/gdb/NEWS insight/src/gdb/NEWS
$ cp gdb/src/gdb/ChangeLog insight/src/gdb/ChangeLog
@end smallexample
@item gdb/README
You'll need to update:
@itemize @bullet
@item
The version.
@item
The update date.
@item
Who did it.
@end itemize
@smallexample
$ emacs gdb/src/gdb/README
...
c-x 4 a
...
c-x c-s c-x c-c
$ cp gdb/src/gdb/README insight/src/gdb/README
$ cp gdb/src/gdb/ChangeLog insight/src/gdb/ChangeLog
@end smallexample
@emph{Maintainer note: Hopefully the @file{README} file was reviewed
before the initial branch was cut so just a simple substitute is needed
to get it updated.}
@emph{Maintainer note: Other projects generate @file{README} and
@file{INSTALL} from the core documentation. This might be worth
pursuing.}
@item gdb/version.in
@smallexample
$ echo $v > gdb/src/gdb/version.in
$ cat gdb/src/gdb/version.in
5.2
$ emacs gdb/src/gdb/version.in
...
c-x 4 a
... Bump to version ...
c-x c-s c-x c-c
$ cp gdb/src/gdb/version.in insight/src/gdb/version.in
$ cp gdb/src/gdb/ChangeLog insight/src/gdb/ChangeLog
@end smallexample
@end table
@subsubheading Do the dirty work
This is identical to the process used to create the daily snapshot.
@smallexample
$ for m in gdb insight
do
( cd $m/src && gmake -f src-release $m.tar )
done
@end smallexample
If the top level source directory does not have @file{src-release}
(@value{GDBN} version 5.3.1 or earlier), try these commands instead:
@smallexample
$ for m in gdb insight
do
( cd $m/src && gmake -f Makefile.in $m.tar )
done
@end smallexample
@subsubheading Check the source files
You're looking for files that have mysteriously disappeared.
@kbd{distclean} has the habit of deleting files it shouldn't. Watch out
for the @file{version.in} update @kbd{cronjob}.
@smallexample
$ ( cd gdb/src && cvs -f -q -n update )
M djunpack.bat
? gdb-5.1.91.tar
? proto-toplev
@dots{} lots of generated files @dots{}
M gdb/ChangeLog
M gdb/NEWS
M gdb/README
M gdb/version.in
@dots{} lots of generated files @dots{}
$
@end smallexample
@noindent
@emph{Don't worry about the @file{gdb.info-??} or
@file{gdb/p-exp.tab.c}. They were generated (and yes, @file{gdb.info-1}
was also generated; only something strange with CVS means that they
didn't get suppressed). Fixing it would be nice though.}
@subsubheading Create compressed versions of the release
@smallexample
$ cp */src/*.tar .
$ cp */src/*.bz2 .
$ ls -F
gdb/ gdb-5.2.tar insight/ insight-5.2.tar
$ for m in gdb insight
do
bzip2 -v -9 -c $m-$v.tar > $m-$v.tar.bz2
gzip -v -9 -c $m-$v.tar > $m-$v.tar.gz
done
$
@end smallexample
@noindent
Note:
@itemize @bullet
@item
A pipe such as @kbd{bunzip2 < xxx.bz2 | gzip -9 > xxx.gz} is not used
since, in that mode, @code{gzip} does not know the name of the file and,
hence, cannot include it in the compressed file. This is also why the release
process runs @code{tar} and @code{bzip2} as separate passes.
@end itemize
@subsection Sanity check the tar ball
Pick a popular machine (Solaris/PPC?) and try the build on that.
@smallexample
$ bunzip2 < gdb-5.2.tar.bz2 | tar xpf -
$ cd gdb-5.2
$ ./configure
$ make
@dots{}
$ ./gdb/gdb ./gdb/gdb
GNU gdb 5.2
@dots{}
(gdb) b main
Breakpoint 1 at 0x80732bc: file main.c, line 734.
(gdb) run
Starting program: /tmp/gdb-5.2/gdb/gdb
Breakpoint 1, main (argc=1, argv=0xbffff8b4) at main.c:734
734 catch_errors (captured_main, &args, "", RETURN_MASK_ALL);
(gdb) print args
$1 = @{argc = 136426532, argv = 0x821b7f0@}
(gdb)
@end smallexample
@subsection Make a release candidate available
If this is a release candidate then the only remaining steps are:
@enumerate
@item
Commit @file{version.in} and @file{ChangeLog}
@item
Tweak @file{version.in} (and @file{ChangeLog}) to read
@var{L}.@var{M}.@var{N}-0000-00-00-cvs so that the version update
process can restart.
@item
Make the release candidate available in
@uref{ftp://sources.redhat.com/pub/gdb/snapshots/branch}
@item
Notify the relevant mailing lists (@email{gdb@@sources.redhat.com} and
@email{gdb-testers@@sources.redhat.com}) that the candidate is available.
@end enumerate
@subsection Make a formal release available
(And you thought all that was required was to post an e-mail.)
@subsubheading Install on sware
Copy the new files to both the release and the old release directory:
@smallexample
$ cp *.bz2 *.gz ~ftp/pub/gdb/old-releases/
$ cp *.bz2 *.gz ~ftp/pub/gdb/releases
@end smallexample
@noindent
Clean up the releases directory so that only the most recent releases
are available (e.g. keep 5.2 and 5.2.1 but remove 5.1):
@smallexample
$ cd ~ftp/pub/gdb/releases
$ rm @dots{}
@end smallexample
@noindent
Update the file @file{README} and @file{.message} in the releases
directory:
@smallexample
$ vi README
@dots{}
$ rm -f .message
$ ln README .message
@end smallexample
@subsubheading Update the web pages.
@table @file
@item htdocs/download/ANNOUNCEMENT
This file, which is posted as the official announcement, includes:
@itemize @bullet
@item
General announcement.
@item
News. If making an @var{M}.@var{N}.1 release, retain the news from
earlier @var{M}.@var{N} release.
@item
Errata.
@end itemize
@item htdocs/index.html
@itemx htdocs/news/index.html
@itemx htdocs/download/index.html
These files include:
@itemize @bullet
@item
Announcement of the most recent release.
@item
News entry (remember to update both the top level and the news directory).
@end itemize
These pages also need to be regenerated using @code{index.sh}.
@item download/onlinedocs/
You need to find the magic command that is used to generate the online
docs from the @file{.tar.bz2}. The best way is to look in the output
from one of the nightly @code{cron} jobs and then just edit accordingly.
Something like:
@smallexample
$ ~/ss/update-web-docs \
~ftp/pub/gdb/releases/gdb-5.2.tar.bz2 \
$PWD/www \
/www/sourceware/htdocs/gdb/download/onlinedocs \
gdb
@end smallexample
@item download/ari/
Just like the online documentation. Something like:
@smallexample
$ /bin/sh ~/ss/update-web-ari \
~ftp/pub/gdb/releases/gdb-5.2.tar.bz2 \
$PWD/www \
/www/sourceware/htdocs/gdb/download/ari \
gdb
@end smallexample
@end table
@subsubheading Shadow the pages onto gnu
Something goes here.
@subsubheading Install the @value{GDBN} tar ball on GNU
At the time of writing, the GNU machine was @kbd{gnudist.gnu.org} in
@file{~ftp/gnu/gdb}.
@subsubheading Make the @file{ANNOUNCEMENT}
Post the @file{ANNOUNCEMENT} file you created above to:
@itemize @bullet
@item
@email{gdb-announce@@sources.redhat.com, GDB Announcement mailing list}
@item
@email{info-gnu@@gnu.org, General GNU Announcement list} (but delay it a
day or so to let things get out)
@item
@email{bug-gdb@@gnu.org, GDB Bug Report mailing list}
@end itemize
@subsection Cleanup
The release is out but you're still not finished.
@subsubheading Commit outstanding changes
In particular you'll need to commit any changes to:
@itemize @bullet
@item
@file{gdb/ChangeLog}
@item
@file{gdb/version.in}
@item
@file{gdb/NEWS}
@item
@file{gdb/README}
@end itemize
@subsubheading Tag the release
Something like:
@smallexample
$ d=`date -u +%Y-%m-%d`
$ echo $d
2002-01-24
$ ( cd insight/src/gdb && cvs -f -q update )
$ ( cd insight/src && cvs -f -q tag gdb_5_2-$d-release )
@end smallexample
Insight is used since it contains more of the release than
@value{GDBN}.
@subsubheading Mention the release on the trunk
Just put something in the @file{ChangeLog} so that the trunk also
indicates when the release was made.
@subsubheading Restart @file{gdb/version.in}
If @file{gdb/version.in} does not contain an ISO date such as
@kbd{2002-01-24} then the daily @code{cronjob} won't update it. Having
committed all the release changes it can be set to
@file{5.2.0_0000-00-00-cvs} which will restart things (yes the @kbd{_}
is important - it affects the snapshot process).
Don't forget the @file{ChangeLog}.
@subsubheading Merge into trunk
The files committed to the branch may also need changes merged into the
trunk.
@subsubheading Revise the release schedule
Post a revised release schedule to @email{gdb@@sources.redhat.com, GDB
Discussion List} with an updated announcement. The schedule can be
generated by running:
@smallexample
$ ~/ss/schedule `date +%s` schedule
@end smallexample
@noindent
The first parameter is approximate date/time in seconds (from the epoch)
of the most recent release.
Also update the schedule @code{cronjob}.
@section Post release
Remove any @code{OBSOLETE} code.
@node Testsuite
@chapter Testsuite
@cindex test suite
The testsuite is an important component of the @value{GDBN} package.
While it is always worthwhile to encourage user testing, in practice
this is rarely sufficient; users typically use only a small subset of
the available commands, and it has proven all too common for a change
to cause a significant regression that went unnoticed for some time.
The @value{GDBN} testsuite uses the DejaGNU testing framework. The
tests themselves are calls to various @code{Tcl} procs; the framework
runs all the procs and summarizes the passes and fails.
@section Using the Testsuite
@cindex running the test suite
To run the testsuite, simply go to the @value{GDBN} object directory (or to the
testsuite's objdir) and type @code{make check}. This just sets up some
environment variables and invokes DejaGNU's @code{runtest} script. While
the testsuite is running, you'll get mentions of which test file is in use,
and a mention of any unexpected passes or fails. When the testsuite is
finished, you'll get a summary that looks like this:
@smallexample
=== gdb Summary ===
# of expected passes 6016
# of unexpected failures 58
# of unexpected successes 5
# of expected failures 183
# of unresolved testcases 3
# of untested testcases 5
@end smallexample
To run a specific test script, type:
@example
make check RUNTESTFLAGS='@var{tests}'
@end example
where @var{tests} is a list of test script file names, separated by
spaces.
The ideal test run consists of expected passes only; however, reality
conspires to keep us from this ideal. Unexpected failures indicate
real problems, whether in @value{GDBN} or in the testsuite. Expected
failures are still failures, but ones which have been decided are too
hard to deal with at the time; for instance, a test case might work
everywhere except on AIX, and there is no prospect of the AIX case
being fixed in the near future. Expected failures should not be added
lightly, since you may be masking serious bugs in @value{GDBN}.
Unexpected successes are expected fails that are passing for some
reason, while unresolved and untested cases often indicate some minor
catastrophe, such as the compiler being unable to deal with a test
program.
When making any significant change to @value{GDBN}, you should run the
testsuite before and after the change, to confirm that there are no
regressions. Note that truly complete testing would require that you
run the testsuite with all supported configurations and a variety of
compilers; however this is more than really necessary. In many cases
testing with a single configuration is sufficient. Other useful
options are to test one big-endian (Sparc) and one little-endian (x86)
host, a cross config with a builtin simulator (powerpc-eabi,
mips-elf), or a 64-bit host (Alpha).
If you add new functionality to @value{GDBN}, please consider adding
tests for it as well; this way future @value{GDBN} hackers can detect
and fix their changes that break the functionality you added.
Similarly, if you fix a bug that was not previously reported as a test
failure, please add a test case for it. Some cases are extremely
difficult to test, such as code that handles host OS failures or bugs
in particular versions of compilers, and it's OK not to try to write
tests for all of those.
DejaGNU supports separate build, host, and target machines. However,
some @value{GDBN} test scripts do not work if the build machine and
the host machine are not the same. In such an environment, these scripts
will give a result of ``UNRESOLVED'', like this:
@smallexample
UNRESOLVED: gdb.base/example.exp: This test script does not work on a remote host.
@end smallexample
@section Testsuite Organization
@cindex test suite organization
The testsuite is entirely contained in @file{gdb/testsuite}. While the
testsuite includes some makefiles and configury, these are very minimal,
and used for little besides cleaning up, since the tests themselves
handle the compilation of the programs that @value{GDBN} will run. The file
@file{testsuite/lib/gdb.exp} contains common utility procs useful for
all @value{GDBN} tests, while the directory @file{testsuite/config} contains
configuration-specific files, typically used for special-purpose
definitions of procs like @code{gdb_load} and @code{gdb_start}.
The tests themselves are to be found in @file{testsuite/gdb.*} and
subdirectories of those. The names of the test files must always end
with @file{.exp}. DejaGNU collects the test files by wildcarding
in the test directories, so both subdirectories and individual files
get chosen and run in alphabetical order.
The following table lists the main types of subdirectories and what they
are for. Since DejaGNU finds test files no matter where they are
located, and since each test file sets up its own compilation and
execution environment, this organization is simply for convenience and
intelligibility.
@table @file
@item gdb.base
This is the base testsuite. The tests in it should apply to all
configurations of @value{GDBN} (but generic native-only tests may live here).
The test programs should be in the subset of C that is valid K&R,
ANSI/ISO, and C@t{++} (@code{#ifdef}s are allowed if necessary, for instance
for prototypes).
@item gdb.@var{lang}
Language-specific tests for any language @var{lang} besides C. Examples are
@file{gdb.cp} and @file{gdb.java}.
@item gdb.@var{platform}
Non-portable tests. The tests are specific to a specific configuration
(host or target), such as HP-UX or eCos. Example is @file{gdb.hp}, for
HP-UX.
@item gdb.@var{compiler}
Tests specific to a particular compiler. As of this writing (June
1999), there aren't currently any groups of tests in this category that
couldn't just as sensibly be made platform-specific, but one could
imagine a @file{gdb.gcc}, for tests of @value{GDBN}'s handling of GCC
extensions.
@item gdb.@var{subsystem}
Tests that exercise a specific @value{GDBN} subsystem in more depth. For
instance, @file{gdb.disasm} exercises various disassemblers, while
@file{gdb.stabs} tests pathways through the stabs symbol reader.
@end table
@section Writing Tests
@cindex writing tests
In many areas, the @value{GDBN} tests are already quite comprehensive; you
should be able to copy existing tests to handle new cases.
You should try to use @code{gdb_test} whenever possible, since it
includes cases to handle all the unexpected errors that might happen.
However, it doesn't cost anything to add new test procedures; for
instance, @file{gdb.base/exprs.exp} defines a @code{test_expr} that
calls @code{gdb_test} multiple times.
Only use @code{send_gdb} and @code{gdb_expect} when absolutely
necessary. Even if @value{GDBN} has several valid responses to
a command, you can use @code{gdb_test_multiple}. Like @code{gdb_test},
@code{gdb_test_multiple} recognizes internal errors and unexpected
prompts.
Do not write tests which expect a literal tab character from @value{GDBN}.
On some operating systems (e.g.@: OpenBSD) the TTY layer expands tabs to
spaces, so by the time @value{GDBN}'s output reaches expect the tab is gone.
The source language programs do @emph{not} need to be in a consistent
style. Since @value{GDBN} is used to debug programs written in many different
styles, it's worth having a mix of styles in the testsuite; for
instance, some @value{GDBN} bugs involving the display of source lines would
never manifest themselves if the programs used GNU coding style
uniformly.
@node Hints
@chapter Hints
Check the @file{README} file, it often has useful information that does not
appear anywhere else in the directory.
@menu
* Getting Started:: Getting started working on @value{GDBN}
* Debugging GDB:: Debugging @value{GDBN} with itself
@end menu
@node Getting Started,,, Hints
@section Getting Started
@value{GDBN} is a large and complicated program, and if you are first
starting to work on it, it can be hard to know where to start.
Fortunately, if you
know how to go about it, there are ways to figure out what is going on.
This manual, the @value{GDBN} Internals manual, has information which applies
generally to many parts of @value{GDBN}.
Information about particular functions or data structures is located in
comments with those functions or data structures. If you run across a
function or a global variable which does not have a comment correctly
explaining what it does, this can be thought of as a bug in @value{GDBN}; feel
free to submit a bug report, with a suggested comment if you can figure
out what the comment should say. If you find a comment which is
actually wrong, be especially sure to report that.
Comments explaining the function of macros defined in host, target, or
native dependent files can be in several places. Sometimes they are
repeated every place the macro is defined. Sometimes they are where the
macro is used. Sometimes there is a header file which supplies a
default definition of the macro, and the comment is there. This manual
also documents all the available macros.
@c (@pxref{Host Conditionals}, @pxref{Target
@c Conditionals}, @pxref{Native Conditionals}, and @pxref{Obsolete
@c Conditionals})
Start with the header files. Once you have some idea of how
@value{GDBN}'s internal symbol tables are stored (see @file{symtab.h},
@file{gdbtypes.h}), you will find it much easier to understand the
code which uses and creates those symbol tables.
You may wish to process the information you are getting somehow, to
enhance your understanding of it. Summarize it, translate it to another
language, add some (perhaps trivial or non-useful) feature to @value{GDBN}, use
the code to predict what a test case would do and write the test case
and verify your prediction, etc. If you are reading code and your eyes
are starting to glaze over, this is a sign you need to use a more active
approach.
Once you have a part of @value{GDBN} to start with, you can find more
specifically the part you are looking for by stepping through each
function with the @code{next} command. Do not use @code{step} or you
will quickly get distracted; when the function you are stepping through
calls another function try only to get a big-picture understanding
(perhaps using the comment at the beginning of the function being
called) of what it does. This way you can identify which of the
functions being called by the function you are stepping through is the
one which you are interested in. You may need to examine the data
structures generated at each stage, with reference to the comments in
the header files explaining what the data structures are supposed to
look like.
Of course, this same technique can be used if you are just reading the
code, rather than actually stepping through it. The same general
principle applies---when the code you are looking at calls something
else, just try to understand generally what the code being called does,
rather than worrying about all its details.
@cindex command implementation
A good place to start when tracking down some particular area is with
a command which invokes that feature. Suppose you want to know how
single-stepping works. As a @value{GDBN} user, you know that the
@code{step} command invokes single-stepping. The command is invoked
via command tables (see @file{command.h}); by convention the function
which actually performs the command is formed by taking the name of
the command and adding @samp{_command}, or in the case of an
@code{info} subcommand, @samp{_info}. For example, the @code{step}
command invokes the @code{step_command} function and the @code{info
display} command invokes @code{display_info}. When this convention is
not followed, you might have to use @code{grep} or @kbd{M-x
tags-search} in emacs, or run @value{GDBN} on itself and set a
breakpoint in @code{execute_command}.
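As a sketch, a new command would be wired up like this (the command
and its module are hypothetical, but @code{add_com} and the
@code{_initialize_@var{module}} convention are real):

@smallexample
static void
frob_command (char *args, int from_tty)
@{
  printf_filtered (_("Frobbing %s.\n"), args ? args : "nothing");
@}

void
_initialize_frob (void)
@{
  add_com ("frob", class_obscure, frob_command,
           _("Frobnicate the argument."));
@}
@end smallexample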
@cindex @code{bug-gdb} mailing list
If all of the above fail, it may be appropriate to ask for information
on @code{bug-gdb}. But @emph{never} post a generic question like ``I was
wondering if anyone could give me some tips about understanding
@value{GDBN}''---if we had some magic secret we would put it in this manual.
Suggestions for improving the manual are always welcome, of course.
@node Debugging GDB,,,Hints
@section Debugging @value{GDBN} with itself
@cindex debugging @value{GDBN}
If @value{GDBN} is limping on your machine, this is the preferred way to get it
fully functional. Be warned that in some ancient Unix systems, like
Ultrix 4.2, a program can't be running in one process while it is being
debugged in another. Rather than typing the command @kbd{@w{./gdb
./gdb}}, which works on Suns and such, you can copy @file{gdb} to
@file{gdb2} and then type @kbd{@w{./gdb ./gdb2}}.
When you run @value{GDBN} in the @value{GDBN} source directory, it will read a
@file{.gdbinit} file that sets up some simple things to make debugging
gdb easier. The @code{info} command, when executed without a subcommand
in a @value{GDBN} being debugged by gdb, will pop you back up to the top level
gdb. See @file{.gdbinit} for details.
If you use emacs, you will probably want to do a @code{make TAGS} after
you configure your distribution; this will put the machine dependent
routines for your local machine where they will be accessed first by
@kbd{M-.}
Also, make sure that you've either compiled @value{GDBN} with your local cc, or
have run @code{fixincludes} if you are compiling with gcc.
@section Submitting Patches
@cindex submitting patches
Thanks for thinking of offering your changes back to the community of
@value{GDBN} users. In general we like to get well designed enhancements.
Thanks also for checking in advance about the best way to transfer the
changes.
The @value{GDBN} maintainers will only install ``cleanly designed'' patches.
This manual summarizes what we believe to be clean design for @value{GDBN}.
If the maintainers don't have time to put the patch in when it arrives,
or if there is any question about a patch, it goes into a large queue
with everyone else's patches and bug reports.
@cindex legal papers for code contributions
The legal issue is that to incorporate substantial changes requires a
copyright assignment from you and/or your employer, granting ownership
of the changes to the Free Software Foundation. You can get the
standard documents for doing this by sending mail to @code{gnu@@gnu.org}
and asking for it. We recommend that people write in ``All programs
owned by the Free Software Foundation'' as ``NAME OF PROGRAM'', so that
changes in many programs (not just @value{GDBN}, but GAS, Emacs, GCC,
etc) can be
contributed with only one piece of legalese pushed through the
bureaucracy and filed with the FSF. We can't start merging changes until
this paperwork is received by the FSF (their rules, which we follow
since we maintain it for them).
Technically, the easiest way to receive changes is to receive each
feature as a small context diff or unidiff, suitable for @code{patch}.
Each message sent to me should include the changes to C code and
header files for a single feature, plus @file{ChangeLog} entries for
each directory where files were modified, and diffs for any changes
needed to the manuals (@file{gdb/doc/gdb.texinfo} or
@file{gdb/doc/gdbint.texinfo}). If there are a lot of changes for a
single feature, they can be split down into multiple messages.
In this way, if we read and like the feature, we can add it to the
sources with a single patch command, do some testing, and check it in.
If you leave out the @file{ChangeLog}, we have to write one. If you leave
out the doc, we have to puzzle out what needs documenting. Etc., etc.
The reason to send each change in a separate message is that we will not
install some of the changes. They'll be returned to you with questions
or comments. If we're doing our job correctly, the message back to you
will say what you have to fix in order to make the change acceptable.
The reason to have separate messages for separate features is so that
the acceptable changes can be installed while one or more changes are
being reworked. If multiple features are sent in a single message, we
tend to not put in the effort to sort out the acceptable changes from
the unacceptable, so none of the features get installed until all are
acceptable.
If this sounds painful or authoritarian, well, it is. But we get a lot
of bug reports and a lot of patches, and many of them don't get
installed because we don't have the time to finish the job that the bug
reporter or the contributor could have done. Patches that arrive
complete, working, and well designed, tend to get installed on the day
they arrive. The others go into a queue and get installed as time
permits, which, since the maintainers have many demands to meet, may not
be for quite some time.
Please send patches directly to
@email{gdb-patches@@sources.redhat.com, the @value{GDBN} maintainers}.
@section Obsolete Conditionals
@cindex obsolete code
Fragments of old code in @value{GDBN} sometimes reference or set the following
configuration macros. They should not be used by new code, and old uses
should be removed as those parts of the debugger are otherwise touched.
@table @code
@item STACK_END_ADDR
This macro used to define where the end of the stack appeared, for use
in interpreting core file formats that don't record this address in the
core file itself. This information is now configured in BFD, and @value{GDBN}
gets the info portably from there. The values in @value{GDBN}'s configuration
files should be moved into BFD configuration files (if needed there),
and deleted from all of @value{GDBN}'s config files.
Any @file{@var{foo}-xdep.c} file that references STACK_END_ADDR
is so old that it has never been converted to use BFD. Now that's old!
@end table
@include observer.texi
@raisesections
@include fdl.texi
@lowersections
@node Index
@unnumbered Index
@printindex cp
@bye