Return target_xfer_status in to_xfer_partial
This patch converts to_xfer_partial from

    LONGEST (*to_xfer_partial) (struct target_ops *ops,
                                enum target_object object, const char *annex,
                                gdb_byte *readbuf, const gdb_byte *writebuf,
                                ULONGEST offset, ULONGEST len);

to

    enum target_xfer_status (*to_xfer_partial) (struct target_ops *ops,
                                                enum target_object object,
                                                const char *annex,
                                                gdb_byte *readbuf,
                                                const gdb_byte *writebuf,
                                                ULONGEST offset, ULONGEST len,
                                                ULONGEST *xfered_len);

that is, to_xfer_partial now returns the transfer status and passes the
transferred length back through *XFERED_LEN.  Generally, the return status
has three states:

  - TARGET_XFER_OK,
  - TARGET_XFER_EOF,
  - TARGET_XFER_E_XXXX,

see the comments on them in 'enum target_xfer_status'.  Note that Pedro
suggested not naming the new status TARGET_XFER_DONE, as it would be
confusing next to "TARGET_XFER_OK"; we finally named it TARGET_XFER_EOF.

With this change, the GDB core can handle unavailable data in a convenient
way.  The rationale behind this change was mentioned here:

  https://sourceware.org/ml/gdb-patches/2013-10/msg00761.html

Consider an object/value like this:

  0          100      150        200            512
  DDDDDDDDDDDxxxxxxxxxDDDDDD...DDIIIIIIIIIIII..III

where D is valid data, xxx is unavailable data, and I is beyond the end of
the object (invalid).  Currently, if we start the xfer at 0, requesting,
say, 512 bytes, we first get back 100 bytes.  The xfer machinery then
retries fetching [100,512), and gets back TARGET_XFER_E_UNAVAILABLE.  That
is sufficient when you are interested either in having the whole of the
512 bytes available, or in erroring out.  But in this scenario we are
interested in the data at [150,512).  The problem is that the last
TARGET_XFER_E_UNAVAILABLE gives us no indication of where to start the
next read.  We would need something like:

  get me [0,512)   >>>
                   <<< here's [0,100), *xfered_len is 100, return TARGET_XFER_OK
  get me [100,512) >>>  (**1)
                   <<< [100,150) is unavailable, *xfered_len is 50,
                       return TARGET_XFER_E_UNAVAILABLE
  get me [150,512) >>>
                   <<< here's [150,200), *xfered_len is 50, return TARGET_XFER_OK
  get me [200,512) >>>
                   <<< no more data, return TARGET_XFER_EOF

This naturally implies pushing the decision of whether to return
TARGET_XFER_E_UNAVAILABLE or something else down to the target.  (Which
kind of leads back to tfile itself reading from read-only memory from the
file, though we could export a function in exec.c for that which tfile
delegates to, instead of re-adding the old code.)

Besides this change, we also add a macro TARGET_XFER_STATUS_ERROR_P to
check whether a status is an error, so that we stop using "status < 0".
This patch also eliminates comparisons between a status and 0.

No target implementation of to_xfer_partial takes advantage of the new
interface yet; the interface still behaves as before.

gdb:

2014-02-11  Yao Qi  <yao@codesourcery.com>

        * target.h (enum target_xfer_error): Rename to ...
        (enum target_xfer_status): ... it.  New.  All users updated.
        (enum target_xfer_status) <TARGET_XFER_OK>, <TARGET_XFER_EOF>:
        New.
        (TARGET_XFER_STATUS_ERROR_P): New macro.
        (target_xfer_error_to_string): Remove declaration.
        (target_xfer_status_to_string): Declare.
        (target_xfer_partial_ftype): Adjust it.
        (struct target_ops) <to_xfer_partial>: Return
        target_xfer_status.  Add argument xfered_len.  Update
        comments.
        * target.c (target_xfer_error_to_string): Rename to ...
        (target_xfer_status_to_string): ... it.  New.  All callers
        updated.
        (target_read_live_memory): Likewise.  Call target_xfer_partial
        instead of target_read.
        (memory_xfer_live_readonly_partial): Return
        target_xfer_status.  Add argument xfered_len.
        (raw_memory_xfer_partial): Likewise.
        (memory_xfer_partial_1): Likewise.
        (memory_xfer_partial): Likewise.
        (target_xfer_partial): Likewise.  Check *XFERED_LEN is set
        properly.  Update debug message.
        (default_xfer_partial, current_xfer_partial): Likewise.
        (target_write_partial): Likewise.
        (target_read_partial): Likewise.  All callers updated.
        (read_whatever_is_readable): Likewise.
        (target_write_with_progress): Likewise.
        (target_read_alloc_1): Likewise.
        * aix-thread.c (aix_thread_xfer_partial): Likewise.
        * auxv.c (procfs_xfer_auxv): Likewise.
        (ld_so_xfer_auxv, memory_xfer_auxv): Likewise.
        * bfd-target.c (target_bfd_xfer_partial): Likewise.
        * bsd-kvm.c (bsd_kvm_xfer_partial): Likewise.
        * bsd-uthread.c (bsd_uthread_xfer_partial): Likewise.
        * corefile.c (read_memory): Adjust.
        * corelow.c (core_xfer_partial): Likewise.
        * ctf.c (ctf_xfer_partial): Likewise.
        * darwin-nat.c (darwin_read_dyld_info): Likewise.  All callers
        updated.
        (darwin_xfer_partial): Likewise.
        * exec.c (section_table_xfer_memory_partial): Likewise.  All
        callers updated.
        (exec_xfer_partial): Likewise.
        * exec.h (section_table_xfer_memory_partial): Update
        declaration.
        * gnu-nat.c (gnu_xfer_memory): Likewise.  Assert 'res' is not
        negative.
        (gnu_xfer_partial): Likewise.
        * ia64-hpux-nat.c (ia64_hpux_xfer_memory_no_bs): Likewise.
        (ia64_hpux_xfer_memory, ia64_hpux_xfer_uregs): Likewise.
        (ia64_hpux_xfer_solib_got): Likewise.
        * inf-ptrace.c (inf_ptrace_xfer_partial): Likewise.  Change
        type of 'partial_len' to ULONGEST.
        * inf-ttrace.c (inf_ttrace_xfer_partial): Likewise.
        * linux-nat.c (linux_xfer_siginfo): Likewise.
        (linux_nat_xfer_partial): Likewise.
        (linux_proc_xfer_partial, linux_xfer_partial): Likewise.
        (linux_proc_xfer_spu, linux_nat_xfer_osdata): Likewise.
        * monitor.c (monitor_xfer_memory): Likewise.
        (monitor_xfer_partial): Likewise.
        * procfs.c (procfs_xfer_partial): Likewise.
        * record-btrace.c (record_btrace_xfer_partial): Likewise.
        * record-full.c (record_full_xfer_partial): Likewise.
        (record_full_core_xfer_partial): Likewise.
        * remote-sim.c (gdbsim_xfer_memory): Likewise.
        (gdbsim_xfer_partial): Likewise.
        * remote.c (remote_write_bytes_aux): Likewise.  All callers
        updated.
        (remote_write_bytes, remote_read_bytes): Likewise.  All
        callers updated.
        (remote_flash_erase): Likewise.  All callers updated.
        (remote_write_qxfer): Likewise.  All callers updated.
        (remote_read_qxfer): Likewise.  All callers updated.
        (remote_xfer_partial): Likewise.
        * rs6000-nat.c (rs6000_xfer_partial): Likewise.
        (rs6000_xfer_shared_libraries): Likewise.
        * sol-thread.c (sol_thread_xfer_partial): Likewise.
        * sparc-nat.c (sparc_xfer_wcookie): Likewise.
        (sparc_xfer_partial): Likewise.
        * spu-linux-nat.c (spu_proc_xfer_spu): Likewise.  All callers
        updated.
        (spu_xfer_partial): Likewise.
        * spu-multiarch.c (spu_xfer_partial): Likewise.
        * tracepoint.c (tfile_xfer_partial): Likewise.
        * windows-nat.c (windows_xfer_memory): Likewise.
        (windows_xfer_shared_libraries): Likewise.
        (windows_xfer_partial): Likewise.
        * valprint.c: Replace 'target_xfer_error' with
        'target_xfer_status' in comments.
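As an illustration of the new contract, here is a minimal sketch (not part
of this patch) of a caller that walks an object while skipping unavailable
ranges instead of giving up at the first hole.  It assumes the GDB-internal
types and the enumerators named above, plus the *XFERED_LEN semantics from
the scenario; the helper name read_skipping_unavailable is invented for the
example.

    /* Hypothetical sketch: read LEN bytes of OBJECT at OFFSET into BUF,
       stepping over unavailable chunks and using *XFERED_LEN to find
       where to resume.  */

    static void
    read_skipping_unavailable (struct target_ops *ops,
                               enum target_object object,
                               gdb_byte *buf, ULONGEST offset, ULONGEST len)
    {
      ULONGEST pos = 0;

      while (pos < len)
        {
          ULONGEST xfered_len = 0;
          enum target_xfer_status status
            = target_xfer_partial (ops, object, NULL,
                                   buf + pos, NULL,
                                   offset + pos, len - pos, &xfered_len);

          if (status == TARGET_XFER_OK
              || status == TARGET_XFER_E_UNAVAILABLE)
            /* Either data or a hole of known length; advance past it.  */
            pos += xfered_len;
          else if (status == TARGET_XFER_EOF)
            break;      /* End of the object.  */
          else
            {
              /* Any other status is a hard error.  */
              gdb_assert (TARGET_XFER_STATUS_ERROR_P (status));
              break;
            }
        }
    }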
commit 9b409511d0 (parent a8e6308380)
34 changed files with 893 additions and 454 deletions
gdb/ChangeLog: the diff adds the 2014-02-11 Yao Qi entry quoted in the
commit message above; the surrounding context shows the preceding
2014-02-11 entry by Simon Marchi (tiny patch), checked in by Joel
Brobecker.
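Most of the per-target hunks below apply the same mechanical conversion:
the raw byte count (or negative error) that the old to_xfer_partial
returned is mapped onto the new status enum, with the byte count passed
back through *xfered_len.  A hedged sketch of that mapping, with an
invented helper name purely for illustration:

    /* Illustrative only: map an old-style result (negative = error,
       zero = end of object, positive = bytes transferred) onto the new
       to_xfer_partial return convention.  */

    static enum target_xfer_status
    xfer_result_to_status (LONGEST n, ULONGEST *xfered_len)
    {
      if (n < 0)
        return TARGET_XFER_E_IO;
      else if (n == 0)
        return TARGET_XFER_EOF;
      else
        {
          *xfered_len = (ULONGEST) n;
          return TARGET_XFER_OK;
        }
    }

This is essentially the shape of the if/else ladders added in
procfs_xfer_auxv, bsd_kvm_xfer_partial, and most of the other converted
targets in the hunks that follow.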
gdb/aix-thread.c:

@@ -1682,19 +1682,19 @@ aix_thread_store_registers (struct target_ops *ops,
    inferior's OBJECT:ANNEX space and GDB's READBUF/WRITEBUF buffer.
    Return the number of bytes actually transferred.  */
 
-static LONGEST
+static enum target_xfer_status
 aix_thread_xfer_partial (struct target_ops *ops, enum target_object object,
                          const char *annex, gdb_byte *readbuf,
                          const gdb_byte *writebuf,
-                         ULONGEST offset, ULONGEST len)
+                         ULONGEST offset, ULONGEST len, ULONGEST *xfered_len)
 {
   struct cleanup *old_chain = save_inferior_ptid ();
-  LONGEST xfer;
+  enum target_xfer_status xfer;
   struct target_ops *beneath = find_target_beneath (ops);
 
   inferior_ptid = pid_to_ptid (ptid_get_pid (inferior_ptid));
-  xfer = beneath->to_xfer_partial (beneath, object, annex,
-                                   readbuf, writebuf, offset, len);
+  xfer = beneath->to_xfer_partial (beneath, object, annex, readbuf,
+                                   writebuf, offset, len, xfered_len);
 
   do_cleanups (old_chain);
   return xfer;
gdb/auxv.c (60 changed lines)
|
@ -38,15 +38,16 @@
|
|||
/* This function handles access via /proc/PID/auxv, which is a common
|
||||
method for native targets. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
procfs_xfer_auxv (gdb_byte *readbuf,
|
||||
const gdb_byte *writebuf,
|
||||
ULONGEST offset,
|
||||
ULONGEST len)
|
||||
ULONGEST len,
|
||||
ULONGEST *xfered_len)
|
||||
{
|
||||
char *pathname;
|
||||
int fd;
|
||||
LONGEST n;
|
||||
ssize_t l;
|
||||
|
||||
pathname = xstrprintf ("/proc/%d/auxv", ptid_get_pid (inferior_ptid));
|
||||
fd = gdb_open_cloexec (pathname, writebuf != NULL ? O_WRONLY : O_RDONLY, 0);
|
||||
|
@ -56,24 +57,32 @@ procfs_xfer_auxv (gdb_byte *readbuf,
|
|||
|
||||
if (offset != (ULONGEST) 0
|
||||
&& lseek (fd, (off_t) offset, SEEK_SET) != (off_t) offset)
|
||||
n = -1;
|
||||
l = -1;
|
||||
else if (readbuf != NULL)
|
||||
n = read (fd, readbuf, len);
|
||||
l = read (fd, readbuf, (size_t) len);
|
||||
else
|
||||
n = write (fd, writebuf, len);
|
||||
l = write (fd, writebuf, (size_t) len);
|
||||
|
||||
(void) close (fd);
|
||||
|
||||
return n;
|
||||
if (l < 0)
|
||||
return TARGET_XFER_E_IO;
|
||||
else if (l == 0)
|
||||
return TARGET_XFER_EOF;
|
||||
else
|
||||
{
|
||||
*xfered_len = (ULONGEST) l;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
}
|
||||
|
||||
/* This function handles access via ld.so's symbol `_dl_auxv'. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
ld_so_xfer_auxv (gdb_byte *readbuf,
|
||||
const gdb_byte *writebuf,
|
||||
ULONGEST offset,
|
||||
ULONGEST len)
|
||||
ULONGEST len, ULONGEST *xfered_len)
|
||||
{
|
||||
struct minimal_symbol *msym;
|
||||
CORE_ADDR data_address, pointer_address;
|
||||
|
@ -132,7 +141,10 @@ ld_so_xfer_auxv (gdb_byte *readbuf,
|
|||
if (writebuf != NULL)
|
||||
{
|
||||
if (target_write_memory (data_address, writebuf, len) == 0)
|
||||
return len;
|
||||
{
|
||||
*xfered_len = (ULONGEST) len;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
else
|
||||
return TARGET_XFER_E_IO;
|
||||
}
|
||||
|
@ -147,7 +159,7 @@ ld_so_xfer_auxv (gdb_byte *readbuf,
|
|||
return TARGET_XFER_E_IO;
|
||||
|
||||
if (extract_typed_address (ptr_buf, ptr_type) == AT_NULL)
|
||||
return 0;
|
||||
return TARGET_XFER_EOF;
|
||||
}
|
||||
|
||||
retval = 0;
|
||||
|
@ -166,12 +178,12 @@ ld_so_xfer_auxv (gdb_byte *readbuf,
|
|||
|
||||
block &= -auxv_pair_size;
|
||||
if (block == 0)
|
||||
return retval;
|
||||
break;
|
||||
|
||||
if (target_read_memory (data_address, readbuf, block) != 0)
|
||||
{
|
||||
if (block <= auxv_pair_size)
|
||||
return retval;
|
||||
break;
|
||||
|
||||
block = auxv_pair_size;
|
||||
continue;
|
||||
|
@ -189,27 +201,31 @@ ld_so_xfer_auxv (gdb_byte *readbuf,
|
|||
retval += auxv_pair_size;
|
||||
|
||||
if (extract_typed_address (readbuf, ptr_type) == AT_NULL)
|
||||
return retval;
|
||||
{
|
||||
*xfered_len = (ULONGEST) retval;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
|
||||
readbuf += auxv_pair_size;
|
||||
block -= auxv_pair_size;
|
||||
}
|
||||
}
|
||||
|
||||
return retval;
|
||||
*xfered_len = (ULONGEST) retval;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
|
||||
/* This function is called like a to_xfer_partial hook, but must be
|
||||
called with TARGET_OBJECT_AUXV. It handles access to AUXV. */
|
||||
|
||||
LONGEST
|
||||
enum target_xfer_status
|
||||
memory_xfer_auxv (struct target_ops *ops,
|
||||
enum target_object object,
|
||||
const char *annex,
|
||||
gdb_byte *readbuf,
|
||||
const gdb_byte *writebuf,
|
||||
ULONGEST offset,
|
||||
ULONGEST len)
|
||||
ULONGEST len, ULONGEST *xfered_len)
|
||||
{
|
||||
gdb_assert (object == TARGET_OBJECT_AUXV);
|
||||
gdb_assert (readbuf || writebuf);
|
||||
|
@ -223,14 +239,14 @@ memory_xfer_auxv (struct target_ops *ops,
|
|||
|
||||
if (current_inferior ()->attach_flag != 0)
|
||||
{
|
||||
LONGEST retval;
|
||||
enum target_xfer_status ret;
|
||||
|
||||
retval = ld_so_xfer_auxv (readbuf, writebuf, offset, len);
|
||||
if (retval != -1)
|
||||
return retval;
|
||||
ret = ld_so_xfer_auxv (readbuf, writebuf, offset, len, xfered_len);
|
||||
if (ret != TARGET_XFER_E_IO)
|
||||
return ret;
|
||||
}
|
||||
|
||||
return procfs_xfer_auxv (readbuf, writebuf, offset, len);
|
||||
return procfs_xfer_auxv (readbuf, writebuf, offset, len, xfered_len);
|
||||
}
|
||||
|
||||
/* Read one auxv entry from *READPTR, not reading locations >= ENDPTR.
|
||||
|
|
|
@ -36,12 +36,13 @@ struct target_bfd_data
|
|||
struct target_section_table table;
|
||||
};
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
target_bfd_xfer_partial (struct target_ops *ops,
|
||||
enum target_object object,
|
||||
const char *annex, gdb_byte *readbuf,
|
||||
const gdb_byte *writebuf,
|
||||
ULONGEST offset, ULONGEST len)
|
||||
ULONGEST offset, ULONGEST len,
|
||||
ULONGEST *xfered_len)
|
||||
{
|
||||
switch (object)
|
||||
{
|
||||
|
@ -49,7 +50,7 @@ target_bfd_xfer_partial (struct target_ops *ops,
|
|||
{
|
||||
struct target_bfd_data *data = ops->to_data;
|
||||
return section_table_xfer_memory_partial (readbuf, writebuf,
|
||||
offset, len,
|
||||
offset, len, xfered_len,
|
||||
data->table.sections,
|
||||
data->table.sections_end,
|
||||
NULL);
|
||||
|
|
|
@ -131,16 +131,28 @@ bsd_kvm_xfer_memory (CORE_ADDR addr, ULONGEST len,
|
|||
return nbytes;
|
||||
}
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
bsd_kvm_xfer_partial (struct target_ops *ops, enum target_object object,
|
||||
const char *annex, gdb_byte *readbuf,
|
||||
const gdb_byte *writebuf,
|
||||
ULONGEST offset, ULONGEST len)
|
||||
ULONGEST offset, ULONGEST len, ULONGEST *xfered_len)
|
||||
{
|
||||
switch (object)
|
||||
{
|
||||
case TARGET_OBJECT_MEMORY:
|
||||
return bsd_kvm_xfer_memory (offset, len, readbuf, writebuf);
|
||||
{
|
||||
LONGEST ret = bsd_kvm_xfer_memory (offset, len, readbuf, writebuf);
|
||||
|
||||
if (ret < 0)
|
||||
return TARGET_XFER_E_IO;
|
||||
else if (ret == 0)
|
||||
return TARGET_XFER_EOF;
|
||||
else
|
||||
{
|
||||
*xfered_len = (ULONGEST) ret;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
}
|
||||
|
||||
default:
|
||||
return TARGET_XFER_E_IO;
|
||||
|
|
|
@ -334,15 +334,15 @@ bsd_uthread_store_registers (struct target_ops *ops,
|
|||
/* FIXME: This function is only there because otherwise GDB tries to
|
||||
invoke deprecate_xfer_memory. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
bsd_uthread_xfer_partial (struct target_ops *ops, enum target_object object,
|
||||
const char *annex, gdb_byte *readbuf,
|
||||
const gdb_byte *writebuf,
|
||||
ULONGEST offset, ULONGEST len)
|
||||
ULONGEST offset, ULONGEST len, ULONGEST *xfered_len)
|
||||
{
|
||||
gdb_assert (ops->beneath->to_xfer_partial);
|
||||
return ops->beneath->to_xfer_partial (ops->beneath, object, annex, readbuf,
|
||||
writebuf, offset, len);
|
||||
writebuf, offset, len, xfered_len);
|
||||
}
|
||||
|
||||
static ptid_t
|
||||
|
|
|
@ -194,7 +194,7 @@ Use the \"file\" or \"exec-file\" command."));
|
|||
|
||||
|
||||
char *
|
||||
memory_error_message (enum target_xfer_error err,
|
||||
memory_error_message (enum target_xfer_status err,
|
||||
struct gdbarch *gdbarch, CORE_ADDR memaddr)
|
||||
{
|
||||
switch (err)
|
||||
|
@ -209,8 +209,8 @@ memory_error_message (enum target_xfer_error err,
|
|||
paddress (gdbarch, memaddr));
|
||||
default:
|
||||
internal_error (__FILE__, __LINE__,
|
||||
"unhandled target_xfer_error: %s (%s)",
|
||||
target_xfer_error_to_string (err),
|
||||
"unhandled target_xfer_status: %s (%s)",
|
||||
target_xfer_status_to_string (err),
|
||||
plongest (err));
|
||||
}
|
||||
}
|
||||
|
@ -218,7 +218,7 @@ memory_error_message (enum target_xfer_error err,
|
|||
/* Report a memory error by throwing a suitable exception. */
|
||||
|
||||
void
|
||||
memory_error (enum target_xfer_error err, CORE_ADDR memaddr)
|
||||
memory_error (enum target_xfer_status err, CORE_ADDR memaddr)
|
||||
{
|
||||
char *str;
|
||||
enum errors exception = GDB_NO_ERROR;
|
||||
|
@ -247,20 +247,27 @@ memory_error (enum target_xfer_error err, CORE_ADDR memaddr)
|
|||
void
|
||||
read_memory (CORE_ADDR memaddr, gdb_byte *myaddr, ssize_t len)
|
||||
{
|
||||
LONGEST xfered = 0;
|
||||
ULONGEST xfered = 0;
|
||||
|
||||
while (xfered < len)
|
||||
{
|
||||
LONGEST xfer = target_xfer_partial (current_target.beneath,
|
||||
TARGET_OBJECT_MEMORY, NULL,
|
||||
myaddr + xfered, NULL,
|
||||
memaddr + xfered, len - xfered);
|
||||
enum target_xfer_status status;
|
||||
ULONGEST xfered_len;
|
||||
|
||||
if (xfer == 0)
|
||||
status = target_xfer_partial (current_target.beneath,
|
||||
TARGET_OBJECT_MEMORY, NULL,
|
||||
myaddr + xfered, NULL,
|
||||
memaddr + xfered, len - xfered,
|
||||
&xfered_len);
|
||||
|
||||
if (status == TARGET_XFER_EOF)
|
||||
memory_error (TARGET_XFER_E_IO, memaddr + xfered);
|
||||
if (xfer < 0)
|
||||
memory_error (xfer, memaddr + xfered);
|
||||
xfered += xfer;
|
||||
|
||||
if (TARGET_XFER_STATUS_ERROR_P (status))
|
||||
memory_error (status, memaddr + xfered);
|
||||
|
||||
gdb_assert (status == TARGET_XFER_OK);
|
||||
xfered += xfered_len;
|
||||
QUIT;
|
||||
}
|
||||
}
|
||||
|
|
|
@ -672,17 +672,17 @@ get_core_siginfo (bfd *abfd, gdb_byte *readbuf, ULONGEST offset, ULONGEST len)
|
|||
return len;
|
||||
}
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
core_xfer_partial (struct target_ops *ops, enum target_object object,
|
||||
const char *annex, gdb_byte *readbuf,
|
||||
const gdb_byte *writebuf, ULONGEST offset,
|
||||
ULONGEST len)
|
||||
ULONGEST len, ULONGEST *xfered_len)
|
||||
{
|
||||
switch (object)
|
||||
{
|
||||
case TARGET_OBJECT_MEMORY:
|
||||
return section_table_xfer_memory_partial (readbuf, writebuf,
|
||||
offset, len,
|
||||
offset, len, xfered_len,
|
||||
core_data->sections,
|
||||
core_data->sections_end,
|
||||
NULL);
|
||||
|
@ -702,19 +702,22 @@ core_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
|
||||
size = bfd_section_size (core_bfd, section);
|
||||
if (offset >= size)
|
||||
return 0;
|
||||
return TARGET_XFER_EOF;
|
||||
size -= offset;
|
||||
if (size > len)
|
||||
size = len;
|
||||
if (size > 0
|
||||
&& !bfd_get_section_contents (core_bfd, section, readbuf,
|
||||
(file_ptr) offset, size))
|
||||
|
||||
if (size == 0)
|
||||
return TARGET_XFER_EOF;
|
||||
if (!bfd_get_section_contents (core_bfd, section, readbuf,
|
||||
(file_ptr) offset, size))
|
||||
{
|
||||
warning (_("Couldn't read NT_AUXV note in core file."));
|
||||
return TARGET_XFER_E_IO;
|
||||
}
|
||||
|
||||
return size;
|
||||
*xfered_len = (ULONGEST) size;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
return TARGET_XFER_E_IO;
|
||||
|
||||
|
@ -738,15 +741,19 @@ core_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
size -= offset;
|
||||
if (size > len)
|
||||
size = len;
|
||||
if (size > 0
|
||||
&& !bfd_get_section_contents (core_bfd, section, readbuf,
|
||||
(file_ptr) offset, size))
|
||||
|
||||
if (size == 0)
|
||||
return TARGET_XFER_EOF;
|
||||
if (!bfd_get_section_contents (core_bfd, section, readbuf,
|
||||
(file_ptr) offset, size))
|
||||
{
|
||||
warning (_("Couldn't read StackGhost cookie in core file."));
|
||||
return TARGET_XFER_E_IO;
|
||||
}
|
||||
|
||||
return size;
|
||||
*xfered_len = (ULONGEST) size;
|
||||
return TARGET_XFER_OK;
|
||||
|
||||
}
|
||||
return TARGET_XFER_E_IO;
|
||||
|
||||
|
@ -756,9 +763,17 @@ core_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
{
|
||||
if (writebuf)
|
||||
return TARGET_XFER_E_IO;
|
||||
return
|
||||
gdbarch_core_xfer_shared_libraries (core_gdbarch,
|
||||
readbuf, offset, len);
|
||||
else
|
||||
{
|
||||
*xfered_len = gdbarch_core_xfer_shared_libraries (core_gdbarch,
|
||||
readbuf,
|
||||
offset, len);
|
||||
|
||||
if (*xfered_len == 0)
|
||||
return TARGET_XFER_EOF;
|
||||
else
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
}
|
||||
/* FALL THROUGH */
|
||||
|
||||
|
@ -768,9 +783,18 @@ core_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
{
|
||||
if (writebuf)
|
||||
return TARGET_XFER_E_IO;
|
||||
return
|
||||
gdbarch_core_xfer_shared_libraries_aix (core_gdbarch,
|
||||
readbuf, offset, len);
|
||||
else
|
||||
{
|
||||
*xfered_len
|
||||
= gdbarch_core_xfer_shared_libraries_aix (core_gdbarch,
|
||||
readbuf, offset,
|
||||
len);
|
||||
|
||||
if (*xfered_len == 0)
|
||||
return TARGET_XFER_EOF;
|
||||
else
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
}
|
||||
/* FALL THROUGH */
|
||||
|
||||
|
@ -793,19 +817,22 @@ core_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
|
||||
size = bfd_section_size (core_bfd, section);
|
||||
if (offset >= size)
|
||||
return 0;
|
||||
return TARGET_XFER_EOF;
|
||||
size -= offset;
|
||||
if (size > len)
|
||||
size = len;
|
||||
if (size > 0
|
||||
&& !bfd_get_section_contents (core_bfd, section, readbuf,
|
||||
(file_ptr) offset, size))
|
||||
|
||||
if (size == 0)
|
||||
return TARGET_XFER_EOF;
|
||||
if (!bfd_get_section_contents (core_bfd, section, readbuf,
|
||||
(file_ptr) offset, size))
|
||||
{
|
||||
warning (_("Couldn't read SPU section in core file."));
|
||||
return TARGET_XFER_E_IO;
|
||||
}
|
||||
|
||||
return size;
|
||||
*xfered_len = (ULONGEST) size;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
else if (readbuf)
|
||||
{
|
||||
|
@ -818,20 +845,36 @@ core_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
list.pos = 0;
|
||||
list.written = 0;
|
||||
bfd_map_over_sections (core_bfd, add_to_spuid_list, &list);
|
||||
return list.written;
|
||||
|
||||
if (list.written == 0)
|
||||
return TARGET_XFER_EOF;
|
||||
else
|
||||
{
|
||||
*xfered_len = (ULONGEST) list.written;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
}
|
||||
return TARGET_XFER_E_IO;
|
||||
|
||||
case TARGET_OBJECT_SIGNAL_INFO:
|
||||
if (readbuf)
|
||||
return get_core_siginfo (core_bfd, readbuf, offset, len);
|
||||
{
|
||||
LONGEST l = get_core_siginfo (core_bfd, readbuf, offset, len);
|
||||
|
||||
if (l > 0)
|
||||
{
|
||||
*xfered_len = len;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
}
|
||||
return TARGET_XFER_E_IO;
|
||||
|
||||
default:
|
||||
if (ops->beneath != NULL)
|
||||
return ops->beneath->to_xfer_partial (ops->beneath, object,
|
||||
annex, readbuf,
|
||||
writebuf, offset, len);
|
||||
writebuf, offset, len,
|
||||
xfered_len);
|
||||
return TARGET_XFER_E_IO;
|
||||
}
|
||||
}
|
||||
|
|
gdb/ctf.c (21 changed lines)
|
@ -1359,11 +1359,11 @@ ctf_fetch_registers (struct target_ops *ops,
|
|||
OFFSET is within the range, read the contents from events to
|
||||
READBUF. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
ctf_xfer_partial (struct target_ops *ops, enum target_object object,
|
||||
const char *annex, gdb_byte *readbuf,
|
||||
const gdb_byte *writebuf, ULONGEST offset,
|
||||
ULONGEST len)
|
||||
ULONGEST len, ULONGEST *xfered_len)
|
||||
{
|
||||
/* We're only doing regular memory for now. */
|
||||
if (object != TARGET_OBJECT_MEMORY)
|
||||
|
@ -1449,7 +1449,13 @@ ctf_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
/* Restore the position. */
|
||||
bt_iter_set_pos (bt_ctf_get_iter (ctf_iter), pos);
|
||||
|
||||
return amt;
|
||||
if (amt == 0)
|
||||
return TARGET_XFER_EOF;
|
||||
else
|
||||
{
|
||||
*xfered_len = amt;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
}
|
||||
|
||||
if (bt_iter_next (bt_ctf_get_iter (ctf_iter)) < 0)
|
||||
|
@ -1487,7 +1493,14 @@ ctf_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
|
||||
amt = bfd_get_section_contents (exec_bfd, s,
|
||||
readbuf, offset - vma, amt);
|
||||
return amt;
|
||||
|
||||
if (amt == 0)
|
||||
return TARGET_XFER_EOF;
|
||||
else
|
||||
{
|
||||
*xfered_len = amt;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
|
@ -1892,9 +1892,9 @@ out:
|
|||
|
||||
#ifdef TASK_DYLD_INFO_COUNT
|
||||
/* This is not available in Darwin 9. */
|
||||
static int
|
||||
static enum target_xfer_status
|
||||
darwin_read_dyld_info (task_t task, CORE_ADDR addr, gdb_byte *rdaddr,
|
||||
ULONGEST length)
|
||||
ULONGEST length, ULONGEST *xfered_len)
|
||||
{
|
||||
struct task_dyld_info task_dyld_info;
|
||||
mach_msg_type_number_t count = TASK_DYLD_INFO_COUNT;
|
||||
|
@ -1902,7 +1902,7 @@ darwin_read_dyld_info (task_t task, CORE_ADDR addr, gdb_byte *rdaddr,
|
|||
kern_return_t kret;
|
||||
|
||||
if (addr >= sz)
|
||||
return 0;
|
||||
return TARGET_XFER_EOF;
|
||||
|
||||
kret = task_info (task, TASK_DYLD_INFO, (task_info_t) &task_dyld_info, &count);
|
||||
MACH_CHECK_ERROR (kret);
|
||||
|
@ -1912,7 +1912,8 @@ darwin_read_dyld_info (task_t task, CORE_ADDR addr, gdb_byte *rdaddr,
|
|||
if (addr + length > sz)
|
||||
length = sz - addr;
|
||||
memcpy (rdaddr, (char *)&task_dyld_info + addr, length);
|
||||
return length;
|
||||
*xfered_len = (ULONGEST) length;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
#endif
|
||||
|
||||
|
@ -1922,7 +1923,7 @@ static LONGEST
|
|||
darwin_xfer_partial (struct target_ops *ops,
|
||||
enum target_object object, const char *annex,
|
||||
gdb_byte *readbuf, const gdb_byte *writebuf,
|
||||
ULONGEST offset, ULONGEST len)
|
||||
ULONGEST offset, ULONGEST len, ULONGEST *xfered_len)
|
||||
{
|
||||
struct inferior *inf = current_inferior ();
|
||||
|
||||
|
@ -1935,8 +1936,19 @@ darwin_xfer_partial (struct target_ops *ops,
|
|||
switch (object)
|
||||
{
|
||||
case TARGET_OBJECT_MEMORY:
|
||||
return darwin_read_write_inferior (inf->private->task, offset,
|
||||
readbuf, writebuf, len);
|
||||
{
|
||||
int l = darwin_read_write_inferior (inf->private->task, offset,
|
||||
readbuf, writebuf, len);
|
||||
|
||||
if (l == 0)
|
||||
return TARGET_XFER_EOF;
|
||||
else
|
||||
{
|
||||
gdb_assert (l > 0);
|
||||
*xfered_len = (ULONGEST) l;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
}
|
||||
#ifdef TASK_DYLD_INFO_COUNT
|
||||
case TARGET_OBJECT_DARWIN_DYLD_INFO:
|
||||
if (writebuf != NULL || readbuf == NULL)
|
||||
|
@ -1944,7 +1956,8 @@ darwin_xfer_partial (struct target_ops *ops,
|
|||
/* Support only read. */
|
||||
return TARGET_XFER_E_IO;
|
||||
}
|
||||
return darwin_read_dyld_info (inf->private->task, offset, readbuf, len);
|
||||
return darwin_read_dyld_info (inf->private->task, offset, readbuf, len,
|
||||
xfered_len);
|
||||
#endif
|
||||
default:
|
||||
return TARGET_XFER_E_IO;
|
||||
|
|
gdb/exec.c (28 changed lines)
|
@ -566,9 +566,10 @@ section_table_available_memory (VEC(mem_range_s) *memory,
|
|||
return memory;
|
||||
}
|
||||
|
||||
int
|
||||
enum target_xfer_status
|
||||
section_table_xfer_memory_partial (gdb_byte *readbuf, const gdb_byte *writebuf,
|
||||
ULONGEST offset, ULONGEST len,
|
||||
ULONGEST *xfered_len,
|
||||
struct target_section *sections,
|
||||
struct target_section *sections_end,
|
||||
const char *section_name)
|
||||
|
@ -602,7 +603,14 @@ section_table_xfer_memory_partial (gdb_byte *readbuf, const gdb_byte *writebuf,
|
|||
res = bfd_get_section_contents (abfd, asect,
|
||||
readbuf, memaddr - p->addr,
|
||||
len);
|
||||
return (res != 0) ? len : 0;
|
||||
|
||||
if (res != 0)
|
||||
{
|
||||
*xfered_len = len;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
else
|
||||
return TARGET_XFER_EOF;
|
||||
}
|
||||
else if (memaddr >= p->endaddr)
|
||||
{
|
||||
|
@ -621,12 +629,18 @@ section_table_xfer_memory_partial (gdb_byte *readbuf, const gdb_byte *writebuf,
|
|||
res = bfd_get_section_contents (abfd, asect,
|
||||
readbuf, memaddr - p->addr,
|
||||
len);
|
||||
return (res != 0) ? len : 0;
|
||||
if (res != 0)
|
||||
{
|
||||
*xfered_len = len;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
else
|
||||
return TARGET_XFER_EOF;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return 0; /* We can't help. */
|
||||
return TARGET_XFER_EOF; /* We can't help. */
|
||||
}
|
||||
|
||||
static struct target_section_table *
|
||||
|
@ -635,17 +649,17 @@ exec_get_section_table (struct target_ops *ops)
|
|||
return current_target_sections;
|
||||
}
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
exec_xfer_partial (struct target_ops *ops, enum target_object object,
|
||||
const char *annex, gdb_byte *readbuf,
|
||||
const gdb_byte *writebuf,
|
||||
ULONGEST offset, ULONGEST len)
|
||||
ULONGEST offset, ULONGEST len, ULONGEST *xfered_len)
|
||||
{
|
||||
struct target_section_table *table = target_get_section_table (ops);
|
||||
|
||||
if (object == TARGET_OBJECT_MEMORY)
|
||||
return section_table_xfer_memory_partial (readbuf, writebuf,
|
||||
offset, len,
|
||||
offset, len, xfered_len,
|
||||
table->sections,
|
||||
table->sections_end,
|
||||
NULL);
|
||||
|
|
gdb/exec.h (12 changed lines)
|
@ -74,11 +74,13 @@ extern VEC(mem_range_s) *
|
|||
|
||||
One, and only one, of readbuf or writebuf must be non-NULL. */
|
||||
|
||||
extern int section_table_xfer_memory_partial (gdb_byte *, const gdb_byte *,
|
||||
ULONGEST, ULONGEST,
|
||||
struct target_section *,
|
||||
struct target_section *,
|
||||
const char *);
|
||||
extern enum target_xfer_status
|
||||
section_table_xfer_memory_partial (gdb_byte *,
|
||||
const gdb_byte *,
|
||||
ULONGEST, ULONGEST, ULONGEST *,
|
||||
struct target_section *,
|
||||
struct target_section *,
|
||||
const char *);
|
||||
|
||||
/* Set the loaded address of a section. */
|
||||
extern void exec_set_section_address (const char *, int, CORE_ADDR);
|
||||
|
|
|
@ -41,12 +41,12 @@ extern int have_core_file_p (void);
|
|||
|
||||
/* Report a memory error with error(). */
|
||||
|
||||
extern void memory_error (enum target_xfer_error status, CORE_ADDR memaddr);
|
||||
extern void memory_error (enum target_xfer_status status, CORE_ADDR memaddr);
|
||||
|
||||
/* The string 'memory_error' would use as exception message. Space
|
||||
for the result is malloc'd, caller must free. */
|
||||
|
||||
extern char *memory_error_message (enum target_xfer_error err,
|
||||
extern char *memory_error_message (enum target_xfer_status err,
|
||||
struct gdbarch *gdbarch, CORE_ADDR memaddr);
|
||||
|
||||
/* Like target_read_memory, but report an error if can't read. */
|
||||
|
|
|
@ -2475,9 +2475,9 @@ out:
|
|||
|
||||
/* Helper for gnu_xfer_partial that handles memory transfers. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
gnu_xfer_memory (gdb_byte *readbuf, const gdb_byte *writebuf,
|
||||
CORE_ADDR memaddr, ULONGEST len)
|
||||
CORE_ADDR memaddr, ULONGEST len, ULONGEST *xfered_len)
|
||||
{
|
||||
task_t task = (gnu_current_inf
|
||||
? (gnu_current_inf->task
|
||||
|
@ -2502,23 +2502,28 @@ gnu_xfer_memory (gdb_byte *readbuf, const gdb_byte *writebuf,
|
|||
host_address_to_string (readbuf));
|
||||
res = gnu_read_inferior (task, memaddr, readbuf, len);
|
||||
}
|
||||
gdb_assert (res >= 0);
|
||||
if (res == 0)
|
||||
return TARGET_XFER_E_IO;
|
||||
return res;
|
||||
else
|
||||
{
|
||||
*xfered_len = (ULONGEST) res;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
}
|
||||
|
||||
/* Target to_xfer_partial implementation. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
gnu_xfer_partial (struct target_ops *ops, enum target_object object,
|
||||
const char *annex, gdb_byte *readbuf,
|
||||
const gdb_byte *writebuf, ULONGEST offset, ULONGEST len)
|
||||
const gdb_byte *writebuf, ULONGEST offset, ULONGEST len,
|
||||
ULONGEST *xfered_len)
|
||||
{
|
||||
switch (object)
|
||||
{
|
||||
case TARGET_OBJECT_MEMORY:
|
||||
return gnu_xfer_memory (readbuf, writebuf, offset, len);
|
||||
|
||||
return gnu_xfer_memory (readbuf, writebuf, offset, len, xfered_len);
|
||||
default:
|
||||
return TARGET_XFER_E_IO;
|
||||
}
|
||||
|
|
|
@ -342,10 +342,11 @@ static target_xfer_partial_ftype *super_xfer_partial;
|
|||
/* The "xfer_partial" routine for a memory region that is completely
|
||||
outside of the backing-store region. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
ia64_hpux_xfer_memory_no_bs (struct target_ops *ops, const char *annex,
|
||||
gdb_byte *readbuf, const gdb_byte *writebuf,
|
||||
CORE_ADDR addr, LONGEST len)
|
||||
CORE_ADDR addr, LONGEST len,
|
||||
ULONGEST *xfered_len)
|
||||
{
|
||||
/* Memory writes need to be aligned on 16byte boundaries, at least
|
||||
when writing in the text section. On the other hand, the size
|
||||
|
@ -367,17 +368,17 @@ ia64_hpux_xfer_memory_no_bs (struct target_ops *ops, const char *annex,
|
|||
NULL /* write */,
|
||||
aligned_addr, addr - aligned_addr);
|
||||
if (status <= 0)
|
||||
return 0;
|
||||
return TARGET_XFER_EOF;
|
||||
memcpy (aligned_buf + (addr - aligned_addr), writebuf, len);
|
||||
|
||||
return super_xfer_partial (ops, TARGET_OBJECT_MEMORY, annex,
|
||||
NULL /* read */, aligned_buf /* write */,
|
||||
aligned_addr, aligned_len);
|
||||
aligned_addr, aligned_len, xfered_len);
|
||||
}
|
||||
else
|
||||
/* Memory read or properly aligned memory write. */
|
||||
return super_xfer_partial (ops, TARGET_OBJECT_MEMORY, annex, readbuf,
|
||||
writebuf, addr, len);
|
||||
writebuf, addr, len, xfered_len);
|
||||
}
|
||||
|
||||
/* Read LEN bytes at ADDR from memory, and store it in BUF. This memory
|
||||
|
@ -517,10 +518,10 @@ ia64_hpux_get_register_from_save_state_t (int regnum, int reg_size)
|
|||
/* The "xfer_partial" target_ops routine for ia64-hpux, in the case
|
||||
where the requested object is TARGET_OBJECT_MEMORY. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
ia64_hpux_xfer_memory (struct target_ops *ops, const char *annex,
|
||||
gdb_byte *readbuf, const gdb_byte *writebuf,
|
||||
CORE_ADDR addr, ULONGEST len)
|
||||
CORE_ADDR addr, ULONGEST len, ULONGEST *xfered_len)
|
||||
{
|
||||
CORE_ADDR bsp, bspstore;
|
||||
CORE_ADDR start_addr, short_len;
|
||||
|
@ -563,7 +564,7 @@ ia64_hpux_xfer_memory (struct target_ops *ops, const char *annex,
|
|||
status = ia64_hpux_xfer_memory_no_bs (ops, annex, readbuf, writebuf,
|
||||
addr, short_len);
|
||||
if (status <= 0)
|
||||
return 0;
|
||||
return TARGET_XFER_EOF;
|
||||
}
|
||||
|
||||
/* 2. Memory region after BSP. */
|
||||
|
@ -581,7 +582,7 @@ ia64_hpux_xfer_memory (struct target_ops *ops, const char *annex,
|
|||
writebuf ? writebuf + (start_addr - addr) : NULL,
|
||||
start_addr, short_len);
|
||||
if (status <= 0)
|
||||
return 0;
|
||||
return TARGET_XFER_EOF;
|
||||
}
|
||||
|
||||
/* 3. Memory region between BSPSTORE and BSP. */
|
||||
|
@ -606,10 +607,11 @@ ia64_hpux_xfer_memory (struct target_ops *ops, const char *annex,
|
|||
writebuf ? writebuf + (start_addr - addr) : NULL,
|
||||
start_addr, short_len);
|
||||
if (status < 0)
|
||||
return 0;
|
||||
return TARGET_XFER_EOF;
|
||||
}
|
||||
|
||||
return len;
|
||||
*xfered_len = len;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
|
||||
/* Handle the transfer of TARGET_OBJECT_HPUX_UREGS objects on ia64-hpux.
|
||||
|
@ -619,10 +621,10 @@ ia64_hpux_xfer_memory (struct target_ops *ops, const char *annex,
|
|||
we do not currently do not need these transfers), and will raise
|
||||
a failed assertion if WRITEBUF is not NULL. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
ia64_hpux_xfer_uregs (struct target_ops *ops, const char *annex,
|
||||
gdb_byte *readbuf, const gdb_byte *writebuf,
|
||||
ULONGEST offset, LONGEST len)
|
||||
ULONGEST offset, ULONGEST len, ULONGEST *xfered_len)
|
||||
{
|
||||
int status;
|
||||
|
||||
|
@ -631,7 +633,9 @@ ia64_hpux_xfer_uregs (struct target_ops *ops, const char *annex,
|
|||
status = ia64_hpux_read_register_from_save_state_t (offset, readbuf, len);
|
||||
if (status < 0)
|
||||
return TARGET_XFER_E_IO;
|
||||
return len;
|
||||
|
||||
*xfered_len = (ULONGEST) len;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
|
||||
/* Handle the transfer of TARGET_OBJECT_HPUX_SOLIB_GOT objects on ia64-hpux.
|
||||
|
@ -640,10 +644,10 @@ ia64_hpux_xfer_uregs (struct target_ops *ops, const char *annex,
|
|||
we do not currently do not need these transfers), and will raise
|
||||
a failed assertion if WRITEBUF is not NULL. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
ia64_hpux_xfer_solib_got (struct target_ops *ops, const char *annex,
|
||||
gdb_byte *readbuf, const gdb_byte *writebuf,
|
||||
ULONGEST offset, ULONGEST len)
|
||||
ULONGEST offset, ULONGEST len, ULONGEST *xfered_len)
|
||||
{
|
||||
CORE_ADDR fun_addr;
|
||||
/* The linkage pointer. We use a uint64_t to make sure that the size
|
||||
|
@ -656,7 +660,7 @@ ia64_hpux_xfer_solib_got (struct target_ops *ops, const char *annex,
|
|||
gdb_assert (writebuf == NULL);
|
||||
|
||||
if (offset > sizeof (got))
|
||||
return 0;
|
||||
return TARGET_XFER_EOF;
|
||||
|
||||
fun_addr = string_to_core_addr (annex);
|
||||
got = ia64_hpux_get_solib_linkage_addr (fun_addr);
|
||||
|
@ -665,28 +669,32 @@ ia64_hpux_xfer_solib_got (struct target_ops *ops, const char *annex,
|
|||
len = sizeof (got) - offset;
|
||||
memcpy (readbuf, &got + offset, len);
|
||||
|
||||
return len;
|
||||
*xfered_len = (ULONGEST) len;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
|
||||
/* The "to_xfer_partial" target_ops routine for ia64-hpux. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
ia64_hpux_xfer_partial (struct target_ops *ops, enum target_object object,
|
||||
const char *annex, gdb_byte *readbuf,
|
||||
const gdb_byte *writebuf, ULONGEST offset, ULONGEST len)
|
||||
const gdb_byte *writebuf, ULONGEST offset, ULONGEST len,
|
||||
ULONGEST *xfered_len)
|
||||
{
|
||||
LONGEST val;
|
||||
enum target_xfer_status val;
|
||||
|
||||
if (object == TARGET_OBJECT_MEMORY)
|
||||
val = ia64_hpux_xfer_memory (ops, annex, readbuf, writebuf, offset, len);
|
||||
val = ia64_hpux_xfer_memory (ops, annex, readbuf, writebuf, offset, len,
|
||||
xfered_len);
|
||||
else if (object == TARGET_OBJECT_HPUX_UREGS)
|
||||
val = ia64_hpux_xfer_uregs (ops, annex, readbuf, writebuf, offset, len);
|
||||
val = ia64_hpux_xfer_uregs (ops, annex, readbuf, writebuf, offset, len,
|
||||
xfered_len);
|
||||
else if (object == TARGET_OBJECT_HPUX_SOLIB_GOT)
|
||||
val = ia64_hpux_xfer_solib_got (ops, annex, readbuf, writebuf, offset,
|
||||
len);
|
||||
len, xfered_len);
|
||||
else
|
||||
val = super_xfer_partial (ops, object, annex, readbuf, writebuf, offset,
|
||||
len);
|
||||
len, xfered_len);
|
||||
|
||||
return val;
|
||||
}
|
||||
|
|
|
@ -459,11 +459,11 @@ inf_ptrace_wait (struct target_ops *ops,
|
|||
inferior's OBJECT:ANNEX space and GDB's READBUF/WRITEBUF buffer.
|
||||
Return the number of bytes actually transferred. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
inf_ptrace_xfer_partial (struct target_ops *ops, enum target_object object,
|
||||
const char *annex, gdb_byte *readbuf,
|
||||
const gdb_byte *writebuf,
|
||||
ULONGEST offset, ULONGEST len)
|
||||
ULONGEST offset, ULONGEST len, ULONGEST *xfered_len)
|
||||
{
|
||||
pid_t pid = ptid_get_pid (inferior_ptid);
|
||||
|
||||
|
@ -490,13 +490,16 @@ inf_ptrace_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
|
||||
errno = 0;
|
||||
if (ptrace (PT_IO, pid, (caddr_t)&piod, 0) == 0)
|
||||
/* Return the actual number of bytes read or written. */
|
||||
return piod.piod_len;
|
||||
{
|
||||
*xfered_len = piod.piod_len;
|
||||
/* Return the actual number of bytes read or written. */
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
/* If the PT_IO request is somehow not supported, fallback on
|
||||
using PT_WRITE_D/PT_READ_D. Otherwise we will return zero
|
||||
to indicate failure. */
|
||||
if (errno != EINVAL)
|
||||
return 0;
|
||||
return TARGET_XFER_EOF;
|
||||
}
|
||||
#endif
|
||||
{
|
||||
|
@ -506,7 +509,7 @@ inf_ptrace_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
gdb_byte byte[sizeof (PTRACE_TYPE_RET)];
|
||||
} buffer;
|
||||
ULONGEST rounded_offset;
|
||||
LONGEST partial_len;
|
||||
ULONGEST partial_len;
|
||||
|
||||
/* Round the start offset down to the next long word
|
||||
boundary. */
|
||||
|
@ -552,7 +555,7 @@ inf_ptrace_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
(PTRACE_TYPE_ARG3)(uintptr_t)rounded_offset,
|
||||
buffer.word);
|
||||
if (errno)
|
||||
return 0;
|
||||
return TARGET_XFER_EOF;
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -563,13 +566,14 @@ inf_ptrace_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
(PTRACE_TYPE_ARG3)(uintptr_t)rounded_offset,
|
||||
0);
|
||||
if (errno)
|
||||
return 0;
|
||||
return TARGET_XFER_EOF;
|
||||
/* Copy appropriate bytes out of the buffer. */
|
||||
memcpy (readbuf, buffer.byte + (offset - rounded_offset),
|
||||
partial_len);
|
||||
}
|
||||
|
||||
return partial_len;
|
||||
*xfered_len = partial_len;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
|
||||
case TARGET_OBJECT_UNWIND_TABLE:
|
||||
|
@ -592,8 +596,11 @@ inf_ptrace_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
|
||||
errno = 0;
|
||||
if (ptrace (PT_IO, pid, (caddr_t)&piod, 0) == 0)
|
||||
/* Return the actual number of bytes read or written. */
|
||||
return piod.piod_len;
|
||||
{
|
||||
*xfered_len = piod.piod_len;
|
||||
/* Return the actual number of bytes read or written. */
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
}
|
||||
#endif
|
||||
return TARGET_XFER_E_IO;
|
||||
|
|
|
@ -1222,16 +1222,26 @@ inf_ttrace_xfer_memory (CORE_ADDR addr, ULONGEST len,
|
|||
return len;
|
||||
}
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
inf_ttrace_xfer_partial (struct target_ops *ops, enum target_object object,
|
||||
const char *annex, gdb_byte *readbuf,
|
||||
const gdb_byte *writebuf,
|
||||
ULONGEST offset, ULONGEST len)
|
||||
ULONGEST offset, ULONGEST len, ULONGEST *xfered_len)
|
||||
{
|
||||
switch (object)
|
||||
{
|
||||
case TARGET_OBJECT_MEMORY:
|
||||
return inf_ttrace_xfer_memory (offset, len, readbuf, writebuf);
|
||||
{
|
||||
LONGEST val = inf_ttrace_xfer_memory (offset, len, readbuf, writebuf);
|
||||
|
||||
if (val == 0)
|
||||
return TARGET_XFER_EOF;
|
||||
else
|
||||
{
|
||||
*xfered_len = (ULONGEST) val;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
}
|
||||
|
||||
case TARGET_OBJECT_UNWIND_TABLE:
|
||||
return TARGET_XFER_E_IO;
|
||||
|
|
|
@ -3862,10 +3862,11 @@ siginfo_fixup (siginfo_t *siginfo, gdb_byte *inf_siginfo, int direction)
|
|||
}
|
||||
}
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
linux_xfer_siginfo (struct target_ops *ops, enum target_object object,
|
||||
const char *annex, gdb_byte *readbuf,
|
||||
const gdb_byte *writebuf, ULONGEST offset, ULONGEST len)
|
||||
const gdb_byte *writebuf, ULONGEST offset, ULONGEST len,
|
||||
ULONGEST *xfered_len)
|
||||
{
|
||||
int pid;
|
||||
siginfo_t siginfo;
|
||||
|
@ -3912,27 +3913,28 @@ linux_xfer_siginfo (struct target_ops *ops, enum target_object object,
|
|||
return TARGET_XFER_E_IO;
|
||||
}
|
||||
|
||||
return len;
|
||||
*xfered_len = len;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
linux_nat_xfer_partial (struct target_ops *ops, enum target_object object,
|
||||
const char *annex, gdb_byte *readbuf,
|
||||
const gdb_byte *writebuf,
|
||||
ULONGEST offset, ULONGEST len)
|
||||
ULONGEST offset, ULONGEST len, ULONGEST *xfered_len)
|
||||
{
|
||||
struct cleanup *old_chain;
|
||||
LONGEST xfer;
|
||||
enum target_xfer_status xfer;
|
||||
|
||||
if (object == TARGET_OBJECT_SIGNAL_INFO)
|
||||
return linux_xfer_siginfo (ops, object, annex, readbuf, writebuf,
|
||||
offset, len);
|
||||
offset, len, xfered_len);
|
||||
|
||||
/* The target is connected but no live inferior is selected. Pass
|
||||
this request down to a lower stratum (e.g., the executable
|
||||
file). */
|
||||
if (object == TARGET_OBJECT_MEMORY && ptid_equal (inferior_ptid, null_ptid))
|
||||
return 0;
|
||||
return TARGET_XFER_EOF;
|
||||
|
||||
old_chain = save_inferior_ptid ();
|
||||
|
||||
|
@ -3940,7 +3942,7 @@ linux_nat_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
inferior_ptid = pid_to_ptid (ptid_get_lwp (inferior_ptid));
|
||||
|
||||
xfer = linux_ops->to_xfer_partial (ops, object, annex, readbuf, writebuf,
|
||||
offset, len);
|
||||
offset, len, xfered_len);
|
||||
|
||||
do_cleanups (old_chain);
|
||||
return xfer;
|
||||
|
@ -4110,11 +4112,11 @@ linux_nat_make_corefile_notes (bfd *obfd, int *note_size)
|
|||
can be much more efficient than banging away at PTRACE_PEEKTEXT,
|
||||
but it doesn't support writes. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
linux_proc_xfer_partial (struct target_ops *ops, enum target_object object,
|
||||
const char *annex, gdb_byte *readbuf,
|
||||
const gdb_byte *writebuf,
|
||||
ULONGEST offset, LONGEST len)
|
||||
ULONGEST offset, LONGEST len, ULONGEST *xfered_len)
|
||||
{
|
||||
LONGEST ret;
|
||||
int fd;
|
||||
|
@ -4125,7 +4127,7 @@ linux_proc_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
|
||||
/* Don't bother for one word. */
|
||||
if (len < 3 * sizeof (long))
|
||||
return 0;
|
||||
return TARGET_XFER_EOF;
|
||||
|
||||
/* We could keep this file open and cache it - possibly one per
|
||||
thread. That requires some juggling, but is even faster. */
|
||||
|
@ -4133,7 +4135,7 @@ linux_proc_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
ptid_get_pid (inferior_ptid));
|
||||
fd = gdb_open_cloexec (filename, O_RDONLY | O_LARGEFILE, 0);
|
||||
if (fd == -1)
|
||||
return 0;
|
||||
return TARGET_XFER_EOF;
|
||||
|
||||
/* If pread64 is available, use it. It's faster if the kernel
|
||||
supports it (only one syscall), and it's 64-bit safe even on
|
||||
|
@ -4149,7 +4151,14 @@ linux_proc_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
ret = len;
|
||||
|
||||
close (fd);
|
||||
return ret;
|
||||
|
||||
if (ret == 0)
|
||||
return TARGET_XFER_EOF;
|
||||
else
|
||||
{
|
||||
*xfered_len = ret;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
|
@ -4205,11 +4214,12 @@ spu_enumerate_spu_ids (int pid, gdb_byte *buf, ULONGEST offset, ULONGEST len)
|
|||
|
||||
/* Implement the to_xfer_partial interface for the TARGET_OBJECT_SPU
|
||||
object type, using the /proc file system. */
|
||||
static LONGEST
|
||||
|
||||
static enum target_xfer_status
|
||||
linux_proc_xfer_spu (struct target_ops *ops, enum target_object object,
|
||||
const char *annex, gdb_byte *readbuf,
|
||||
const gdb_byte *writebuf,
|
||||
ULONGEST offset, ULONGEST len)
|
||||
ULONGEST offset, ULONGEST len, ULONGEST *xfered_len)
|
||||
{
|
||||
char buf[128];
|
||||
int fd = 0;
|
||||
|
@ -4221,7 +4231,19 @@ linux_proc_xfer_spu (struct target_ops *ops, enum target_object object,
|
|||
if (!readbuf)
|
||||
return TARGET_XFER_E_IO;
|
||||
else
|
||||
return spu_enumerate_spu_ids (pid, readbuf, offset, len);
|
||||
{
|
||||
LONGEST l = spu_enumerate_spu_ids (pid, readbuf, offset, len);
|
||||
|
||||
if (l < 0)
|
||||
return TARGET_XFER_E_IO;
|
||||
else if (l == 0)
|
||||
return TARGET_XFER_EOF;
|
||||
else
|
||||
{
|
||||
*xfered_len = (ULONGEST) l;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
xsnprintf (buf, sizeof buf, "/proc/%d/fd/%s", pid, annex);
|
||||
|
@ -4233,7 +4255,7 @@ linux_proc_xfer_spu (struct target_ops *ops, enum target_object object,
|
|||
&& lseek (fd, (off_t) offset, SEEK_SET) != (off_t) offset)
|
||||
{
|
||||
close (fd);
|
||||
return 0;
|
||||
return TARGET_XFER_EOF;
|
||||
}
|
||||
|
||||
if (writebuf)
|
||||
|
@ -4242,7 +4264,16 @@ linux_proc_xfer_spu (struct target_ops *ops, enum target_object object,
|
|||
ret = read (fd, readbuf, (size_t) len);
|
||||
|
||||
close (fd);
|
||||
return ret;
|
||||
|
||||
if (ret < 0)
|
||||
return TARGET_XFER_E_IO;
|
||||
else if (ret == 0)
|
||||
return TARGET_XFER_EOF;
|
||||
else
|
||||
{
|
||||
*xfered_len = (ULONGEST) ret;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
|
@ -4329,34 +4360,40 @@ linux_proc_pending_signals (int pid, sigset_t *pending,
|
|||
do_cleanups (cleanup);
|
||||
}
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
linux_nat_xfer_osdata (struct target_ops *ops, enum target_object object,
|
||||
const char *annex, gdb_byte *readbuf,
|
||||
const gdb_byte *writebuf, ULONGEST offset, ULONGEST len)
|
||||
const gdb_byte *writebuf, ULONGEST offset, ULONGEST len,
|
||||
ULONGEST *xfered_len)
|
||||
{
|
||||
gdb_assert (object == TARGET_OBJECT_OSDATA);
|
||||
|
||||
return linux_common_xfer_osdata (annex, readbuf, offset, len);
|
||||
*xfered_len = linux_common_xfer_osdata (annex, readbuf, offset, len);
|
||||
if (*xfered_len == 0)
|
||||
return TARGET_XFER_EOF;
|
||||
else
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
linux_xfer_partial (struct target_ops *ops, enum target_object object,
|
||||
const char *annex, gdb_byte *readbuf,
|
||||
const gdb_byte *writebuf, ULONGEST offset, ULONGEST len)
|
||||
const gdb_byte *writebuf, ULONGEST offset, ULONGEST len,
|
||||
ULONGEST *xfered_len)
|
||||
{
|
||||
LONGEST xfer;
|
||||
enum target_xfer_status xfer;
|
||||
|
||||
if (object == TARGET_OBJECT_AUXV)
|
||||
return memory_xfer_auxv (ops, object, annex, readbuf, writebuf,
|
||||
offset, len);
|
||||
offset, len, xfered_len);
|
||||
|
||||
if (object == TARGET_OBJECT_OSDATA)
|
||||
return linux_nat_xfer_osdata (ops, object, annex, readbuf, writebuf,
|
||||
offset, len);
|
||||
offset, len, xfered_len);
|
||||
|
||||
if (object == TARGET_OBJECT_SPU)
|
||||
return linux_proc_xfer_spu (ops, object, annex, readbuf, writebuf,
|
||||
offset, len);
|
||||
offset, len, xfered_len);
|
||||
|
||||
/* GDB calculates all the addresses in possibly larget width of the address.
|
||||
Address width needs to be masked before its final use - either by
|
||||
|
@ -4373,12 +4410,12 @@ linux_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
}
|
||||
|
||||
xfer = linux_proc_xfer_partial (ops, object, annex, readbuf, writebuf,
|
||||
offset, len);
|
||||
if (xfer != 0)
|
||||
offset, len, xfered_len);
|
||||
if (xfer != TARGET_XFER_EOF)
|
||||
return xfer;
|
||||
|
||||
return super_xfer_partial (ops, object, annex, readbuf, writebuf,
|
||||
offset, len);
|
||||
offset, len, xfered_len);
|
||||
}
|
||||
|
||||
static void
|
||||
|
|
|
@ -2018,9 +2018,9 @@ monitor_read_memory (CORE_ADDR memaddr, gdb_byte *myaddr, int len)
|
|||
/* Helper for monitor_xfer_partial that handles memory transfers.
|
||||
Arguments are like target_xfer_partial. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
monitor_xfer_memory (gdb_byte *readbuf, const gdb_byte *writebuf,
|
||||
ULONGEST memaddr, ULONGEST len)
|
||||
ULONGEST memaddr, ULONGEST len, ULONGEST *xfered_len)
|
||||
{
|
||||
int res;
|
||||
|
||||
|
@ -2036,22 +2036,27 @@ monitor_xfer_memory (gdb_byte *readbuf, const gdb_byte *writebuf,
|
|||
res = monitor_read_memory (memaddr, readbuf, len);
|
||||
}
|
||||
|
||||
if (res == 0)
|
||||
if (res <= 0)
|
||||
return TARGET_XFER_E_IO;
|
||||
return res;
|
||||
else
|
||||
{
|
||||
*xfered_len = (ULONGEST) res;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
}
|
||||
|
||||
/* Target to_xfer_partial implementation. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
monitor_xfer_partial (struct target_ops *ops, enum target_object object,
|
||||
const char *annex, gdb_byte *readbuf,
|
||||
const gdb_byte *writebuf, ULONGEST offset, ULONGEST len)
|
||||
const gdb_byte *writebuf, ULONGEST offset, ULONGEST len,
|
||||
ULONGEST *xfered_len)
|
||||
{
|
||||
switch (object)
|
||||
{
|
||||
case TARGET_OBJECT_MEMORY:
|
||||
return monitor_xfer_memory (readbuf, writebuf, offset, len);
|
||||
return monitor_xfer_memory (readbuf, writebuf, offset, len, xfered_len);
|
||||
|
||||
default:
|
||||
return TARGET_XFER_E_IO;
|
||||
|
|
gdb/procfs.c (10 changed lines)
|
@ -3973,10 +3973,11 @@ wait_again:
|
|||
/* Perform a partial transfer to/from the specified object. For
|
||||
memory transfers, fall back to the old memory xfer functions. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
procfs_xfer_partial (struct target_ops *ops, enum target_object object,
|
||||
const char *annex, gdb_byte *readbuf,
|
||||
const gdb_byte *writebuf, ULONGEST offset, ULONGEST len)
|
||||
const gdb_byte *writebuf, ULONGEST offset, ULONGEST len,
|
||||
ULONGEST *xfered_len)
|
||||
{
|
||||
switch (object)
|
||||
{
|
||||
|
@ -3992,13 +3993,14 @@ procfs_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
#ifdef NEW_PROC_API
|
||||
case TARGET_OBJECT_AUXV:
|
||||
return memory_xfer_auxv (ops, object, annex, readbuf, writebuf,
|
||||
offset, len);
|
||||
offset, len, xfered_len);
|
||||
#endif
|
||||
|
||||
default:
|
||||
if (ops->beneath != NULL)
|
||||
return ops->beneath->to_xfer_partial (ops->beneath, object, annex,
|
||||
readbuf, writebuf, offset, len);
|
||||
readbuf, writebuf, offset, len,
|
||||
xfered_len);
|
||||
return TARGET_XFER_E_IO;
|
||||
}
|
||||
}
|
||||
|
|
|
@@ -802,11 +802,11 @@ record_btrace_is_replaying (void)

 /* The to_xfer_partial method of target record-btrace. */

-static LONGEST
+static enum target_xfer_status
 record_btrace_xfer_partial (struct target_ops *ops, enum target_object object,
 const char *annex, gdb_byte *readbuf,
 const gdb_byte *writebuf, ULONGEST offset,
-ULONGEST len)
+ULONGEST len, ULONGEST *xfered_len)
 {
 struct target_ops *t;

@@ -821,7 +821,10 @@ record_btrace_xfer_partial (struct target_ops *ops, enum target_object object,

 /* We do not allow writing memory in general. */
 if (writebuf != NULL)
-return TARGET_XFER_E_UNAVAILABLE;
+{
+*xfered_len = len;
+return TARGET_XFER_E_UNAVAILABLE;
+}

 /* We allow reading readonly memory. */
 section = target_section_by_addr (ops, offset);
@@ -838,6 +841,7 @@ record_btrace_xfer_partial (struct target_ops *ops, enum target_object object,
 }
 }

+*xfered_len = len;
 return TARGET_XFER_E_UNAVAILABLE;
 }
 }
@@ -847,8 +851,9 @@ record_btrace_xfer_partial (struct target_ops *ops, enum target_object object,
 for (ops = ops->beneath; ops != NULL; ops = ops->beneath)
 if (ops->to_xfer_partial != NULL)
 return ops->to_xfer_partial (ops, object, annex, readbuf, writebuf,
-offset, len);
+offset, len, xfered_len);

+*xfered_len = len;
 return TARGET_XFER_E_UNAVAILABLE;
 }

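Note in the record-btrace hunks above that TARGET_XFER_E_UNAVAILABLE is no longer returned bare: the target also stores through *XFERED_LEN how many of the requested bytes are unavailable. As an illustrative sketch only (the surrounding variables are invented and this fragment is not part of the patch), a caller could use that length to continue past the unavailable range instead of failing the whole request:

  /* Illustrative sketch only, not code from this patch.  */
  ULONGEST xfered_len;
  enum target_xfer_status status
    = ops->to_xfer_partial (ops, TARGET_OBJECT_MEMORY, NULL,
                            readbuf, NULL, offset, len, &xfered_len);

  if (status == TARGET_XFER_E_UNAVAILABLE)
    {
      /* [offset, offset + xfered_len) is unavailable; resume the
         read after that range.  */
      offset += xfered_len;
      readbuf += xfered_len;
      len -= xfered_len;
    }
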
@@ -1649,11 +1649,11 @@ record_full_store_registers (struct target_ops *ops,
 In replay mode, we cannot write memory unles we are willing to
 invalidate the record/replay log from this point forward. */

-static LONGEST
+static enum target_xfer_status
 record_full_xfer_partial (struct target_ops *ops, enum target_object object,
 const char *annex, gdb_byte *readbuf,
 const gdb_byte *writebuf, ULONGEST offset,
-ULONGEST len)
+ULONGEST len, ULONGEST *xfered_len)
 {
 if (!record_full_gdb_operation_disable
 && (object == TARGET_OBJECT_MEMORY
@@ -1708,7 +1708,7 @@ record_full_xfer_partial (struct target_ops *ops, enum target_object object,

 return record_full_beneath_to_xfer_partial
 (record_full_beneath_to_xfer_partial_ops, object, annex,
-readbuf, writebuf, offset, len);
+readbuf, writebuf, offset, len, xfered_len);
 }

 /* This structure represents a breakpoint inserted while the record
@@ -2188,12 +2188,12 @@ record_full_core_store_registers (struct target_ops *ops,

 /* "to_xfer_partial" method for prec over corefile. */

-static LONGEST
+static enum target_xfer_status
 record_full_core_xfer_partial (struct target_ops *ops,
 enum target_object object,
 const char *annex, gdb_byte *readbuf,
 const gdb_byte *writebuf, ULONGEST offset,
-ULONGEST len)
+ULONGEST len, ULONGEST *xfered_len)
 {
 if (object == TARGET_OBJECT_MEMORY)
 {
@@ -2223,7 +2223,9 @@ record_full_core_xfer_partial (struct target_ops *ops,
 {
 if (readbuf)
 memset (readbuf, 0, len);
-return len;
+
+*xfered_len = len;
+return TARGET_XFER_OK;
 }
 /* Get record_full_core_buf_entry. */
 for (entry = record_full_core_buf_list; entry;
@@ -2245,7 +2247,7 @@ record_full_core_xfer_partial (struct target_ops *ops,
 &entry->buf))
 {
 xfree (entry);
-return 0;
+return TARGET_XFER_EOF;
 }
 entry->prev = record_full_core_buf_list;
 record_full_core_buf_list = entry;
@@ -2260,13 +2262,14 @@ record_full_core_xfer_partial (struct target_ops *ops,
 return record_full_beneath_to_xfer_partial
 (record_full_beneath_to_xfer_partial_ops,
 object, annex, readbuf, writebuf,
-offset, len);
+offset, len, xfered_len);

 memcpy (readbuf, entry->buf + sec_offset,
 (size_t) len);
 }

-return len;
+*xfered_len = len;
+return TARGET_XFER_OK;
 }
 }

@@ -2278,7 +2281,7 @@ record_full_core_xfer_partial (struct target_ops *ops,

 return record_full_beneath_to_xfer_partial
 (record_full_beneath_to_xfer_partial_ops, object, annex,
-readbuf, writebuf, offset, len);
+readbuf, writebuf, offset, len, xfered_len);
 }

 /* "to_insert_breakpoint" method for prec over corefile. */

@@ -1061,10 +1061,10 @@ gdbsim_prepare_to_store (struct target_ops *self, struct regcache *regcache)
 /* Helper for gdbsim_xfer_partial that handles memory transfers.
 Arguments are like target_xfer_partial. */

-static LONGEST
+static enum target_xfer_status
 gdbsim_xfer_memory (struct target_ops *target,
 gdb_byte *readbuf, const gdb_byte *writebuf,
-ULONGEST memaddr, ULONGEST len)
+ULONGEST memaddr, ULONGEST len, ULONGEST *xfered_len)
 {
 struct sim_inferior_data *sim_data
 = get_sim_inferior_data (current_inferior (), SIM_INSTANCE_NOT_NEEDED);
@@ -1074,7 +1074,7 @@ gdbsim_xfer_memory (struct target_ops *target,
 request to be passed to a lower target, hopefully an exec
 file. */
 if (!target->to_has_memory (target))
-return 0;
+return TARGET_XFER_EOF;

 if (!sim_data->program_loaded)
 error (_("No program loaded."));
@@ -1108,20 +1108,30 @@ gdbsim_xfer_memory (struct target_ops *target,
 if (remote_debug && len > 0)
 dump_mem (readbuf, len);
 }
-return l;
+if (l > 0)
+{
+*xfered_len = (ULONGEST) l;
+return TARGET_XFER_OK;
+}
+else if (l == 0)
+return TARGET_XFER_EOF;
+else
+return TARGET_XFER_E_IO;
 }

 /* Target to_xfer_partial implementation. */

-static LONGEST
+static enum target_xfer_status
 gdbsim_xfer_partial (struct target_ops *ops, enum target_object object,
 const char *annex, gdb_byte *readbuf,
-const gdb_byte *writebuf, ULONGEST offset, ULONGEST len)
+const gdb_byte *writebuf, ULONGEST offset, ULONGEST len,
+ULONGEST *xfered_len)
 {
 switch (object)
 {
 case TARGET_OBJECT_MEMORY:
-return gdbsim_xfer_memory (ops, readbuf, writebuf, offset, len);
+return gdbsim_xfer_memory (ops, readbuf, writebuf, offset, len,
+xfered_len);

 default:
 return TARGET_XFER_E_IO;

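The gdbsim hunks above and the remote.c changes that follow both feed the same caller-side pattern: accumulate on TARGET_XFER_OK, stop cleanly on TARGET_XFER_EOF, and give up on an error status. A minimal sketch of such a read loop, for illustration only (the function name is invented; target_read in target.c is the real equivalent):

static LONGEST
example_read_all (struct target_ops *ops, gdb_byte *buf,
                  ULONGEST offset, ULONGEST len)
{
  /* Illustrative sketch only, not code from this patch.  */
  ULONGEST total = 0;

  while (total < len)
    {
      ULONGEST xfered_len;
      enum target_xfer_status status
        = ops->to_xfer_partial (ops, TARGET_OBJECT_MEMORY, NULL,
                                buf + total, NULL, offset + total,
                                len - total, &xfered_len);

      if (status == TARGET_XFER_EOF)
        break;                       /* Object ended before LEN bytes.  */
      if (status != TARGET_XFER_OK)
        return -1;                   /* Error status from the target.  */

      total += xfered_len;           /* Partial transfer; keep going.  */
    }

  return total;
}
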
--- a/gdb/remote.c
+++ b/gdb/remote.c
|
@ -6838,14 +6838,15 @@ check_binary_download (CORE_ADDR addr)
|
|||
If USE_LENGTH is 0, then the <LENGTH> field and the preceding comma
|
||||
are omitted.
|
||||
|
||||
Returns the number of bytes transferred, or a negative value (an
|
||||
'enum target_xfer_error' value) for error. Only transfer a single
|
||||
packet. */
|
||||
Return the transferred status, error or OK (an
|
||||
'enum target_xfer_status' value). Save the number of bytes
|
||||
transferred in *XFERED_LEN. Only transfer a single packet. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
remote_write_bytes_aux (const char *header, CORE_ADDR memaddr,
|
||||
const gdb_byte *myaddr, ULONGEST len,
|
||||
char packet_format, int use_length)
|
||||
ULONGEST *xfered_len, char packet_format,
|
||||
int use_length)
|
||||
{
|
||||
struct remote_state *rs = get_remote_state ();
|
||||
char *p;
|
||||
|
@ -6862,7 +6863,7 @@ remote_write_bytes_aux (const char *header, CORE_ADDR memaddr,
|
|||
_("remote_write_bytes_aux: bad packet format"));
|
||||
|
||||
if (len == 0)
|
||||
return 0;
|
||||
return TARGET_XFER_EOF;
|
||||
|
||||
payload_size = get_memory_write_packet_size ();
|
||||
|
||||
|
@ -6985,7 +6986,8 @@ remote_write_bytes_aux (const char *header, CORE_ADDR memaddr,
|
|||
|
||||
/* Return NR_BYTES, not TODO, in case escape chars caused us to send
|
||||
fewer bytes than we'd planned. */
|
||||
return nr_bytes;
|
||||
*xfered_len = (ULONGEST) nr_bytes;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
|
||||
/* Write memory data directly to the remote machine.
|
||||
|
@ -6994,12 +6996,13 @@ remote_write_bytes_aux (const char *header, CORE_ADDR memaddr,
|
|||
MYADDR is the address of the buffer in our space.
|
||||
LEN is the number of bytes.
|
||||
|
||||
Returns number of bytes transferred, or a negative value (an 'enum
|
||||
target_xfer_error' value) for error. Only transfer a single
|
||||
packet. */
|
||||
Return the transferred status, error or OK (an
|
||||
'enum target_xfer_status' value). Save the number of bytes
|
||||
transferred in *XFERED_LEN. Only transfer a single packet. */
|
||||
|
||||
static LONGEST
|
||||
remote_write_bytes (CORE_ADDR memaddr, const gdb_byte *myaddr, ULONGEST len)
|
||||
static enum target_xfer_status
|
||||
remote_write_bytes (CORE_ADDR memaddr, const gdb_byte *myaddr, ULONGEST len,
|
||||
ULONGEST *xfered_len)
|
||||
{
|
||||
char *packet_format = 0;
|
||||
|
||||
|
@ -7022,7 +7025,8 @@ remote_write_bytes (CORE_ADDR memaddr, const gdb_byte *myaddr, ULONGEST len)
|
|||
}
|
||||
|
||||
return remote_write_bytes_aux (packet_format,
|
||||
memaddr, myaddr, len, packet_format[0], 1);
|
||||
memaddr, myaddr, len, xfered_len,
|
||||
packet_format[0], 1);
|
||||
}
|
||||
|
||||
/* Read memory data directly from the remote machine.
|
||||
|
@ -7031,11 +7035,13 @@ remote_write_bytes (CORE_ADDR memaddr, const gdb_byte *myaddr, ULONGEST len)
|
|||
MYADDR is the address of the buffer in our space.
|
||||
LEN is the number of bytes.
|
||||
|
||||
Returns number of bytes transferred, or a negative value (an 'enum
|
||||
target_xfer_error' value) for error. */
|
||||
Return the transferred status, error or OK (an
|
||||
'enum target_xfer_status' value). Save the number of bytes
|
||||
transferred in *XFERED_LEN. */
|
||||
|
||||
static LONGEST
|
||||
remote_read_bytes (CORE_ADDR memaddr, gdb_byte *myaddr, ULONGEST len)
|
||||
static enum target_xfer_status
|
||||
remote_read_bytes (CORE_ADDR memaddr, gdb_byte *myaddr, ULONGEST len,
|
||||
ULONGEST *xfered_len)
|
||||
{
|
||||
struct remote_state *rs = get_remote_state ();
|
||||
int max_buf_size; /* Max size of packet output buffer. */
|
||||
|
@ -7072,7 +7078,8 @@ remote_read_bytes (CORE_ADDR memaddr, gdb_byte *myaddr, ULONGEST len)
|
|||
p = rs->buf;
|
||||
i = hex2bin (p, myaddr, todo);
|
||||
/* Return what we have. Let higher layers handle partial reads. */
|
||||
return i;
|
||||
*xfered_len = (ULONGEST) i;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
|
||||
|
||||
|
@ -7144,18 +7151,19 @@ remote_flash_erase (struct target_ops *ops,
|
|||
do_cleanups (back_to);
|
||||
}
|
||||
|
||||
static LONGEST
|
||||
remote_flash_write (struct target_ops *ops,
|
||||
ULONGEST address, LONGEST length,
|
||||
const gdb_byte *data)
|
||||
static enum target_xfer_status
|
||||
remote_flash_write (struct target_ops *ops, ULONGEST address,
|
||||
ULONGEST length, ULONGEST *xfered_len,
|
||||
const gdb_byte *data)
|
||||
{
|
||||
int saved_remote_timeout = remote_timeout;
|
||||
LONGEST ret;
|
||||
enum target_xfer_status ret;
|
||||
struct cleanup *back_to = make_cleanup (restore_remote_timeout,
|
||||
&saved_remote_timeout);
|
||||
&saved_remote_timeout);
|
||||
|
||||
remote_timeout = remote_flash_timeout;
|
||||
ret = remote_write_bytes_aux ("vFlashWrite:", address, data, length, 'X', 0);
|
||||
ret = remote_write_bytes_aux ("vFlashWrite:", address, data, length,
|
||||
xfered_len,'X', 0);
|
||||
do_cleanups (back_to);
|
||||
|
||||
return ret;
|
||||
|
@ -8737,10 +8745,10 @@ the loaded file\n"));
|
|||
into remote target. The number of bytes written to the remote
|
||||
target is returned, or -1 for error. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
remote_write_qxfer (struct target_ops *ops, const char *object_name,
|
||||
const char *annex, const gdb_byte *writebuf,
|
||||
ULONGEST offset, LONGEST len,
|
||||
ULONGEST offset, LONGEST len, ULONGEST *xfered_len,
|
||||
struct packet_config *packet)
|
||||
{
|
||||
int i, buf_len;
|
||||
|
@ -8768,7 +8776,9 @@ remote_write_qxfer (struct target_ops *ops, const char *object_name,
|
|||
return TARGET_XFER_E_IO;
|
||||
|
||||
unpack_varlen_hex (rs->buf, &n);
|
||||
return n;
|
||||
|
||||
*xfered_len = n;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
|
||||
/* Read OBJECT_NAME/ANNEX from the remote target using a qXfer packet.
|
||||
|
@ -8778,10 +8788,11 @@ remote_write_qxfer (struct target_ops *ops, const char *object_name,
|
|||
EOF. PACKET is checked and updated to indicate whether the remote
|
||||
target supports this object. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
remote_read_qxfer (struct target_ops *ops, const char *object_name,
|
||||
const char *annex,
|
||||
gdb_byte *readbuf, ULONGEST offset, LONGEST len,
|
||||
ULONGEST *xfered_len,
|
||||
struct packet_config *packet)
|
||||
{
|
||||
struct remote_state *rs = get_remote_state ();
|
||||
|
@ -8797,7 +8808,8 @@ remote_read_qxfer (struct target_ops *ops, const char *object_name,
|
|||
if (strcmp (object_name, rs->finished_object) == 0
|
||||
&& strcmp (annex ? annex : "", rs->finished_annex) == 0
|
||||
&& offset == rs->finished_offset)
|
||||
return 0;
|
||||
return TARGET_XFER_EOF;
|
||||
|
||||
|
||||
/* Otherwise, we're now reading something different. Discard
|
||||
the cache. */
|
||||
|
@ -8848,13 +8860,20 @@ remote_read_qxfer (struct target_ops *ops, const char *object_name,
|
|||
rs->finished_offset = offset + i;
|
||||
}
|
||||
|
||||
return i;
|
||||
if (i == 0)
|
||||
return TARGET_XFER_EOF;
|
||||
else
|
||||
{
|
||||
*xfered_len = i;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
}
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
remote_xfer_partial (struct target_ops *ops, enum target_object object,
|
||||
const char *annex, gdb_byte *readbuf,
|
||||
const gdb_byte *writebuf, ULONGEST offset, ULONGEST len)
|
||||
const gdb_byte *writebuf, ULONGEST offset, ULONGEST len,
|
||||
ULONGEST *xfered_len)
|
||||
{
|
||||
struct remote_state *rs;
|
||||
int i;
|
||||
|
@ -8869,20 +8888,16 @@ remote_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
/* Handle memory using the standard memory routines. */
|
||||
if (object == TARGET_OBJECT_MEMORY)
|
||||
{
|
||||
LONGEST xfered;
|
||||
|
||||
/* If the remote target is connected but not running, we should
|
||||
pass this request down to a lower stratum (e.g. the executable
|
||||
file). */
|
||||
if (!target_has_execution)
|
||||
return 0;
|
||||
return TARGET_XFER_EOF;
|
||||
|
||||
if (writebuf != NULL)
|
||||
xfered = remote_write_bytes (offset, writebuf, len);
|
||||
return remote_write_bytes (offset, writebuf, len, xfered_len);
|
||||
else
|
||||
xfered = remote_read_bytes (offset, readbuf, len);
|
||||
|
||||
return xfered;
|
||||
return remote_read_bytes (offset, readbuf, len, xfered_len);
|
||||
}
|
||||
|
||||
/* Handle SPU memory using qxfer packets. */
|
||||
|
@ -8890,12 +8905,12 @@ remote_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
{
|
||||
if (readbuf)
|
||||
return remote_read_qxfer (ops, "spu", annex, readbuf, offset, len,
|
||||
&remote_protocol_packets
|
||||
[PACKET_qXfer_spu_read]);
|
||||
xfered_len, &remote_protocol_packets
|
||||
[PACKET_qXfer_spu_read]);
|
||||
else
|
||||
return remote_write_qxfer (ops, "spu", annex, writebuf, offset, len,
|
||||
&remote_protocol_packets
|
||||
[PACKET_qXfer_spu_write]);
|
||||
xfered_len, &remote_protocol_packets
|
||||
[PACKET_qXfer_spu_write]);
|
||||
}
|
||||
|
||||
/* Handle extra signal info using qxfer packets. */
|
||||
|
@ -8903,11 +8918,11 @@ remote_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
{
|
||||
if (readbuf)
|
||||
return remote_read_qxfer (ops, "siginfo", annex, readbuf, offset, len,
|
||||
&remote_protocol_packets
|
||||
xfered_len, &remote_protocol_packets
|
||||
[PACKET_qXfer_siginfo_read]);
|
||||
else
|
||||
return remote_write_qxfer (ops, "siginfo", annex,
|
||||
writebuf, offset, len,
|
||||
writebuf, offset, len, xfered_len,
|
||||
&remote_protocol_packets
|
||||
[PACKET_qXfer_siginfo_write]);
|
||||
}
|
||||
|
@ -8916,7 +8931,7 @@ remote_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
{
|
||||
if (readbuf)
|
||||
return remote_read_qxfer (ops, "statictrace", annex,
|
||||
readbuf, offset, len,
|
||||
readbuf, offset, len, xfered_len,
|
||||
&remote_protocol_packets
|
||||
[PACKET_qXfer_statictrace_read]);
|
||||
else
|
||||
|
@ -8931,7 +8946,8 @@ remote_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
switch (object)
|
||||
{
|
||||
case TARGET_OBJECT_FLASH:
|
||||
return remote_flash_write (ops, offset, len, writebuf);
|
||||
return remote_flash_write (ops, offset, len, xfered_len,
|
||||
writebuf);
|
||||
|
||||
default:
|
||||
return TARGET_XFER_E_IO;
|
||||
|
@ -8949,56 +8965,62 @@ remote_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
case TARGET_OBJECT_AUXV:
|
||||
gdb_assert (annex == NULL);
|
||||
return remote_read_qxfer (ops, "auxv", annex, readbuf, offset, len,
|
||||
xfered_len,
|
||||
&remote_protocol_packets[PACKET_qXfer_auxv]);
|
||||
|
||||
case TARGET_OBJECT_AVAILABLE_FEATURES:
|
||||
return remote_read_qxfer
|
||||
(ops, "features", annex, readbuf, offset, len,
|
||||
(ops, "features", annex, readbuf, offset, len, xfered_len,
|
||||
&remote_protocol_packets[PACKET_qXfer_features]);
|
||||
|
||||
case TARGET_OBJECT_LIBRARIES:
|
||||
return remote_read_qxfer
|
||||
(ops, "libraries", annex, readbuf, offset, len,
|
||||
(ops, "libraries", annex, readbuf, offset, len, xfered_len,
|
||||
&remote_protocol_packets[PACKET_qXfer_libraries]);
|
||||
|
||||
case TARGET_OBJECT_LIBRARIES_SVR4:
|
||||
return remote_read_qxfer
|
||||
(ops, "libraries-svr4", annex, readbuf, offset, len,
|
||||
(ops, "libraries-svr4", annex, readbuf, offset, len, xfered_len,
|
||||
&remote_protocol_packets[PACKET_qXfer_libraries_svr4]);
|
||||
|
||||
case TARGET_OBJECT_MEMORY_MAP:
|
||||
gdb_assert (annex == NULL);
|
||||
return remote_read_qxfer (ops, "memory-map", annex, readbuf, offset, len,
|
||||
xfered_len,
|
||||
&remote_protocol_packets[PACKET_qXfer_memory_map]);
|
||||
|
||||
case TARGET_OBJECT_OSDATA:
|
||||
/* Should only get here if we're connected. */
|
||||
gdb_assert (rs->remote_desc);
|
||||
return remote_read_qxfer
|
||||
(ops, "osdata", annex, readbuf, offset, len,
|
||||
(ops, "osdata", annex, readbuf, offset, len, xfered_len,
|
||||
&remote_protocol_packets[PACKET_qXfer_osdata]);
|
||||
|
||||
case TARGET_OBJECT_THREADS:
|
||||
gdb_assert (annex == NULL);
|
||||
return remote_read_qxfer (ops, "threads", annex, readbuf, offset, len,
|
||||
xfered_len,
|
||||
&remote_protocol_packets[PACKET_qXfer_threads]);
|
||||
|
||||
case TARGET_OBJECT_TRACEFRAME_INFO:
|
||||
gdb_assert (annex == NULL);
|
||||
return remote_read_qxfer
|
||||
(ops, "traceframe-info", annex, readbuf, offset, len,
|
||||
(ops, "traceframe-info", annex, readbuf, offset, len, xfered_len,
|
||||
&remote_protocol_packets[PACKET_qXfer_traceframe_info]);
|
||||
|
||||
case TARGET_OBJECT_FDPIC:
|
||||
return remote_read_qxfer (ops, "fdpic", annex, readbuf, offset, len,
|
||||
xfered_len,
|
||||
&remote_protocol_packets[PACKET_qXfer_fdpic]);
|
||||
|
||||
case TARGET_OBJECT_OPENVMS_UIB:
|
||||
return remote_read_qxfer (ops, "uib", annex, readbuf, offset, len,
|
||||
xfered_len,
|
||||
&remote_protocol_packets[PACKET_qXfer_uib]);
|
||||
|
||||
case TARGET_OBJECT_BTRACE:
|
||||
return remote_read_qxfer (ops, "btrace", annex, readbuf, offset, len,
|
||||
xfered_len,
|
||||
&remote_protocol_packets[PACKET_qXfer_btrace]);
|
||||
|
||||
default:
|
||||
|
@ -9049,7 +9071,8 @@ remote_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
getpkt (&rs->buf, &rs->buf_size, 0);
|
||||
strcpy ((char *) readbuf, rs->buf);
|
||||
|
||||
return strlen ((char *) readbuf);
|
||||
*xfered_len = strlen ((char *) readbuf);
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
|
||||
static int
|
||||
|
|
|
@@ -379,11 +379,11 @@ rs6000_store_inferior_registers (struct target_ops *ops,
 inferior's OBJECT:ANNEX space and GDB's READBUF/WRITEBUF buffer.
 Return the number of bytes actually transferred. */

-static LONGEST
+static enum target_xfer_status
 rs6000_xfer_partial (struct target_ops *ops, enum target_object object,
 const char *annex, gdb_byte *readbuf,
 const gdb_byte *writebuf,
-ULONGEST offset, ULONGEST len)
+ULONGEST offset, ULONGEST len, ULONGEST *xfered_len)
 {
 pid_t pid = ptid_get_pid (inferior_ptid);
 int arch64 = ARCH64 ();
@@ -393,7 +393,7 @@ rs6000_xfer_partial (struct target_ops *ops, enum target_object object,
 case TARGET_OBJECT_LIBRARIES_AIX:
 return rs6000_xfer_shared_libraries (ops, object, annex,
 readbuf, writebuf,
-offset, len);
+offset, len, xfered_len);
 case TARGET_OBJECT_MEMORY:
 {
 union
@@ -451,7 +451,7 @@ rs6000_xfer_partial (struct target_ops *ops, enum target_object object,
 (int *) (uintptr_t) rounded_offset,
 buffer.word, NULL);
 if (errno)
-return 0;
+return TARGET_XFER_EOF;
 }

 if (readbuf)
@@ -465,14 +465,15 @@ rs6000_xfer_partial (struct target_ops *ops, enum target_object object,
 (int *)(uintptr_t)rounded_offset,
 0, NULL);
 if (errno)
-return 0;
+return TARGET_XFER_EOF;

 /* Copy appropriate bytes out of the buffer. */
 memcpy (readbuf, buffer.byte + (offset - rounded_offset),
 partial_len);
 }

-return partial_len;
+*xfered_len = (ULONGEST) partial_len;
+return TARGET_XFER_OK;
 }

 default:
@@ -682,11 +683,11 @@ rs6000_ptrace_ldinfo (ptid_t ptid)
 /* Implement the to_xfer_partial target_ops method for
 TARGET_OBJECT_LIBRARIES_AIX objects. */

-static LONGEST
+static enum target_xfer_status
 rs6000_xfer_shared_libraries
 (struct target_ops *ops, enum target_object object,
 const char *annex, gdb_byte *readbuf, const gdb_byte *writebuf,
-ULONGEST offset, ULONGEST len)
+ULONGEST offset, ULONGEST len, ULONGEST *xfered_len)
 {
 gdb_byte *ldi_buf;
 ULONGEST result;
@@ -707,7 +708,14 @@ rs6000_xfer_shared_libraries
 xfree (ldi_buf);

 do_cleanups (cleanup);
-return result;
+
+if (result == 0)
+return TARGET_XFER_EOF;
+else
+{
+*xfered_len = result;
+return TARGET_XFER_OK;
+}
 }

 void _initialize_rs6000_nat (void);

@@ -542,13 +542,13 @@ sol_thread_store_registers (struct target_ops *ops,
 target_write_partial for details of each variant. One, and only
 one, of readbuf or writebuf must be non-NULL. */

-static LONGEST
+static enum target_xfer_status
 sol_thread_xfer_partial (struct target_ops *ops, enum target_object object,
 const char *annex, gdb_byte *readbuf,
 const gdb_byte *writebuf,
-ULONGEST offset, ULONGEST len)
+ULONGEST offset, ULONGEST len, ULONGEST *xfered_len)
 {
-int retval;
+enum target_xfer_status retval;
 struct cleanup *old_chain;
 struct target_ops *beneath = find_target_beneath (ops);

@@ -564,8 +564,8 @@ sol_thread_xfer_partial (struct target_ops *ops, enum target_object object,
 inferior_ptid = procfs_first_available ();
 }

-retval = beneath->to_xfer_partial (beneath, object, annex,
-readbuf, writebuf, offset, len);
+retval = beneath->to_xfer_partial (beneath, object, annex, readbuf,
+writebuf, offset, len, xfered_len);

 do_cleanups (old_chain);

@@ -258,10 +258,11 @@ sparc_store_inferior_registers (struct target_ops *ops,

 /* Fetch StackGhost Per-Process XOR cookie. */

-static LONGEST
+static enum target_xfer_status
 sparc_xfer_wcookie (struct target_ops *ops, enum target_object object,
 const char *annex, gdb_byte *readbuf,
-const gdb_byte *writebuf, ULONGEST offset, ULONGEST len)
+const gdb_byte *writebuf, ULONGEST offset, ULONGEST len,
+ULONGEST *xfered_len)
 {
 unsigned long wcookie = 0;
 char *buf = (char *)&wcookie;
@@ -270,7 +271,7 @@ sparc_xfer_wcookie (struct target_ops *ops, enum target_object object,
 gdb_assert (readbuf && writebuf == NULL);

 if (offset == sizeof (unsigned long))
-return 0; /* Signal EOF. */
+return TARGET_XFER_EOF; /* Signal EOF. */
 if (offset > sizeof (unsigned long))
 return TARGET_XFER_E_IO;

@@ -310,22 +311,24 @@ sparc_xfer_wcookie (struct target_ops *ops, enum target_object object,
 len = sizeof (unsigned long) - offset;

 memcpy (readbuf, buf + offset, len);
-return len;
+*xfered_len = (ULONGEST) len;
+return TARGET_XFER_OK;
 }

 target_xfer_partial_ftype *inf_ptrace_xfer_partial;

-static LONGEST
+static enum target_xfer_status
 sparc_xfer_partial (struct target_ops *ops, enum target_object object,
 const char *annex, gdb_byte *readbuf,
-const gdb_byte *writebuf, ULONGEST offset, ULONGEST len)
+const gdb_byte *writebuf, ULONGEST offset, ULONGEST len,
+ULONGEST *xfered_len)
 {
 if (object == TARGET_OBJECT_WCOOKIE)
 return sparc_xfer_wcookie (ops, object, annex, readbuf, writebuf,
-offset, len);
+offset, len, xfered_len);

 return inf_ptrace_xfer_partial (ops, object, annex, readbuf, writebuf,
-offset, len);
+offset, len, xfered_len);
 }

 /* Create a prototype generic SPARC target. The client can override

@@ -228,10 +228,10 @@ parse_spufs_run (int *fd, ULONGEST *addr)

 /* Copy LEN bytes at OFFSET in spufs file ANNEX into/from READBUF or WRITEBUF,
 using the /proc file system. */
-static LONGEST
+static enum target_xfer_status
 spu_proc_xfer_spu (const char *annex, gdb_byte *readbuf,
 const gdb_byte *writebuf,
-ULONGEST offset, ULONGEST len)
+ULONGEST offset, ULONGEST len, ULONGEST *xfered_len)
 {
 char buf[128];
 int fd = 0;
@@ -239,7 +239,7 @@ spu_proc_xfer_spu (const char *annex, gdb_byte *readbuf,
 int pid = ptid_get_pid (inferior_ptid);

 if (!annex)
-return 0;
+return TARGET_XFER_EOF;

 xsnprintf (buf, sizeof buf, "/proc/%d/fd/%s", pid, annex);
 fd = open (buf, writebuf? O_WRONLY : O_RDONLY);
@@ -250,7 +250,7 @@ spu_proc_xfer_spu (const char *annex, gdb_byte *readbuf,
 && lseek (fd, (off_t) offset, SEEK_SET) != (off_t) offset)
 {
 close (fd);
-return 0;
+return TARGET_XFER_EOF;
 }

 if (writebuf)
@@ -259,7 +259,15 @@ spu_proc_xfer_spu (const char *annex, gdb_byte *readbuf,
 ret = read (fd, readbuf, (size_t) len);

 close (fd);
-return ret;
+if (ret < 0)
+return TARGET_XFER_E_IO;
+else if (ret == 0)
+return TARGET_XFER_EOF;
+else
+{
+*xfered_len = (ULONGEST) ret;
+return TARGET_XFER_OK;
+}
 }

@@ -361,12 +369,13 @@ spu_symbol_file_add_from_memory (int inferior_fd)

 gdb_byte id[128];
 char annex[32];
-int len;
+ULONGEST len;
+enum target_xfer_status status;

 /* Read object ID. */
 xsnprintf (annex, sizeof annex, "%d/object-id", inferior_fd);
-len = spu_proc_xfer_spu (annex, id, NULL, 0, sizeof id);
-if (len <= 0 || len >= sizeof id)
+status = spu_proc_xfer_spu (annex, id, NULL, 0, sizeof id, &len);
+if (status != TARGET_XFER_OK || len >= sizeof id)
 return;
 id[len] = 0;
 addr = strtoulst ((const char *) id, NULL, 16);
@@ -515,9 +524,12 @@ spu_fetch_inferior_registers (struct target_ops *ops,
 gdb_byte buf[16 * SPU_NUM_GPRS];
 char annex[32];
 int i;
+ULONGEST len;

 xsnprintf (annex, sizeof annex, "%d/regs", fd);
-if (spu_proc_xfer_spu (annex, buf, NULL, 0, sizeof buf) == sizeof buf)
+if ((spu_proc_xfer_spu (annex, buf, NULL, 0, sizeof buf, &len)
+== TARGET_XFER_OK)
+&& len == sizeof buf)
 for (i = 0; i < SPU_NUM_GPRS; i++)
 regcache_raw_supply (regcache, i, buf + i*16);
 }
@@ -549,24 +561,26 @@ spu_store_inferior_registers (struct target_ops *ops,
 gdb_byte buf[16 * SPU_NUM_GPRS];
 char annex[32];
 int i;
+ULONGEST len;

 for (i = 0; i < SPU_NUM_GPRS; i++)
 regcache_raw_collect (regcache, i, buf + i*16);

 xsnprintf (annex, sizeof annex, "%d/regs", fd);
-spu_proc_xfer_spu (annex, NULL, buf, 0, sizeof buf);
+spu_proc_xfer_spu (annex, NULL, buf, 0, sizeof buf, &len);
 }
 }

 /* Override the to_xfer_partial routine. */
-static LONGEST
+static enum target_xfer_status
 spu_xfer_partial (struct target_ops *ops,
 enum target_object object, const char *annex,
 gdb_byte *readbuf, const gdb_byte *writebuf,
-ULONGEST offset, ULONGEST len)
+ULONGEST offset, ULONGEST len, ULONGEST *xfered_len)
 {
 if (object == TARGET_OBJECT_SPU)
-return spu_proc_xfer_spu (annex, readbuf, writebuf, offset, len);
+return spu_proc_xfer_spu (annex, readbuf, writebuf, offset, len,
+xfered_len);

 if (object == TARGET_OBJECT_MEMORY)
 {
@@ -575,16 +589,17 @@ spu_xfer_partial (struct target_ops *ops,
 char mem_annex[32], lslr_annex[32];
 gdb_byte buf[32];
 ULONGEST lslr;
-LONGEST ret;
+enum target_xfer_status ret;

 /* We must be stopped on a spu_run system call. */
 if (!parse_spufs_run (&fd, &addr))
-return 0;
+return TARGET_XFER_EOF;

 /* Use the "mem" spufs file to access SPU local store. */
 xsnprintf (mem_annex, sizeof mem_annex, "%d/mem", fd);
-ret = spu_proc_xfer_spu (mem_annex, readbuf, writebuf, offset, len);
-if (ret > 0)
+ret = spu_proc_xfer_spu (mem_annex, readbuf, writebuf, offset, len,
+xfered_len);
+if (ret == TARGET_XFER_OK)
 return ret;

 /* SPU local store access wraps the address around at the
@@ -593,12 +608,13 @@ spu_xfer_partial (struct target_ops *ops,
 trying the original address first, and getting end-of-file. */
 xsnprintf (lslr_annex, sizeof lslr_annex, "%d/lslr", fd);
 memset (buf, 0, sizeof buf);
-if (spu_proc_xfer_spu (lslr_annex, buf, NULL, 0, sizeof buf) <= 0)
+if (spu_proc_xfer_spu (lslr_annex, buf, NULL, 0, sizeof buf, xfered_len)
+!= TARGET_XFER_OK)
 return ret;

 lslr = strtoulst ((const char *) buf, NULL, 16);
 return spu_proc_xfer_spu (mem_annex, readbuf, writebuf,
-offset & lslr, len);
+offset & lslr, len, xfered_len);
 }

 return TARGET_XFER_E_IO;

@@ -245,10 +245,11 @@ spu_store_registers (struct target_ops *ops,
 }

 /* Override the to_xfer_partial routine. */
-static LONGEST
+static enum target_xfer_status
 spu_xfer_partial (struct target_ops *ops, enum target_object object,
 const char *annex, gdb_byte *readbuf,
-const gdb_byte *writebuf, ULONGEST offset, ULONGEST len)
+const gdb_byte *writebuf, ULONGEST offset, ULONGEST len,
+ULONGEST *xfered_len)
 {
 struct target_ops *ops_beneath = find_target_beneath (ops);
 while (ops_beneath && !ops_beneath->to_xfer_partial)
@@ -263,15 +264,15 @@ spu_xfer_partial (struct target_ops *ops, enum target_object object,
 char mem_annex[32], lslr_annex[32];
 gdb_byte buf[32];
 ULONGEST lslr;
-LONGEST ret;
+enum target_xfer_status ret;

 if (fd >= 0)
 {
 xsnprintf (mem_annex, sizeof mem_annex, "%d/mem", fd);
 ret = ops_beneath->to_xfer_partial (ops_beneath, TARGET_OBJECT_SPU,
 mem_annex, readbuf, writebuf,
-addr, len);
-if (ret > 0)
+addr, len, xfered_len);
+if (ret == TARGET_XFER_OK)
 return ret;

 /* SPU local store access wraps the address around at the
@@ -282,18 +283,19 @@ spu_xfer_partial (struct target_ops *ops, enum target_object object,
 memset (buf, 0, sizeof buf);
 if (ops_beneath->to_xfer_partial (ops_beneath, TARGET_OBJECT_SPU,
 lslr_annex, buf, NULL,
-0, sizeof buf) <= 0)
+0, sizeof buf, xfered_len)
+!= TARGET_XFER_OK)
 return ret;

 lslr = strtoulst ((char *) buf, NULL, 16);
 return ops_beneath->to_xfer_partial (ops_beneath, TARGET_OBJECT_SPU,
 mem_annex, readbuf, writebuf,
-addr & lslr, len);
+addr & lslr, len, xfered_len);
 }
 }

 return ops_beneath->to_xfer_partial (ops_beneath, object, annex,
-readbuf, writebuf, offset, len);
+readbuf, writebuf, offset, len, xfered_len);
 }

 /* Override the to_search_memory routine. */

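The target.c and target.h changes below also add TARGET_XFER_STATUS_ERROR_P so that callers stop comparing the status against 0. A small hedged example of the intended checking style (the surrounding context is invented for illustration; target_xfer_partial, target_xfer_status_to_string and the macro itself are the real names introduced by this patch):

  /* Illustrative sketch only, not code from this patch.  */
  ULONGEST xfered_len;
  enum target_xfer_status status
    = target_xfer_partial (ops, object, annex, readbuf, writebuf,
                           offset, len, &xfered_len);

  if (TARGET_XFER_STATUS_ERROR_P (status))
    error (_("Transfer failed: %s"), target_xfer_status_to_string (status));
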
--- a/gdb/target.c
+++ b/gdb/target.c
|
@ -1185,7 +1185,7 @@ target_translate_tls_address (struct objfile *objfile, CORE_ADDR offset)
|
|||
}
|
||||
|
||||
const char *
|
||||
target_xfer_error_to_string (enum target_xfer_error err)
|
||||
target_xfer_status_to_string (enum target_xfer_status err)
|
||||
{
|
||||
#define CASE(X) case X: return #X
|
||||
switch (err)
|
||||
|
@ -1312,11 +1312,12 @@ target_section_by_addr (struct target_ops *target, CORE_ADDR addr)
|
|||
/* Read memory from the live target, even if currently inspecting a
|
||||
traceframe. The return is the same as that of target_read. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
target_read_live_memory (enum target_object object,
|
||||
ULONGEST memaddr, gdb_byte *myaddr, ULONGEST len)
|
||||
ULONGEST memaddr, gdb_byte *myaddr, ULONGEST len,
|
||||
ULONGEST *xfered_len)
|
||||
{
|
||||
LONGEST ret;
|
||||
enum target_xfer_status ret;
|
||||
struct cleanup *cleanup;
|
||||
|
||||
/* Switch momentarily out of tfind mode so to access live memory.
|
||||
|
@ -1326,8 +1327,8 @@ target_read_live_memory (enum target_object object,
|
|||
cleanup = make_cleanup_restore_traceframe_number ();
|
||||
set_traceframe_number (-1);
|
||||
|
||||
ret = target_read (current_target.beneath, object, NULL,
|
||||
myaddr, memaddr, len);
|
||||
ret = target_xfer_partial (current_target.beneath, object, NULL,
|
||||
myaddr, NULL, memaddr, len, xfered_len);
|
||||
|
||||
do_cleanups (cleanup);
|
||||
return ret;
|
||||
|
@ -1340,11 +1341,11 @@ target_read_live_memory (enum target_object object,
|
|||
For interface/parameters/return description see target.h,
|
||||
to_xfer_partial. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
memory_xfer_live_readonly_partial (struct target_ops *ops,
|
||||
enum target_object object,
|
||||
gdb_byte *readbuf, ULONGEST memaddr,
|
||||
ULONGEST len)
|
||||
ULONGEST len, ULONGEST *xfered_len)
|
||||
{
|
||||
struct target_section *secp;
|
||||
struct target_section_table *table;
|
||||
|
@ -1368,7 +1369,7 @@ memory_xfer_live_readonly_partial (struct target_ops *ops,
|
|||
{
|
||||
/* Entire transfer is within this section. */
|
||||
return target_read_live_memory (object, memaddr,
|
||||
readbuf, len);
|
||||
readbuf, len, xfered_len);
|
||||
}
|
||||
else if (memaddr >= p->endaddr)
|
||||
{
|
||||
|
@ -1380,30 +1381,32 @@ memory_xfer_live_readonly_partial (struct target_ops *ops,
|
|||
/* This section overlaps the transfer. Just do half. */
|
||||
len = p->endaddr - memaddr;
|
||||
return target_read_live_memory (object, memaddr,
|
||||
readbuf, len);
|
||||
readbuf, len, xfered_len);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return 0;
|
||||
return TARGET_XFER_EOF;
|
||||
}
|
||||
|
||||
/* Read memory from more than one valid target. A core file, for
|
||||
instance, could have some of memory but delegate other bits to
|
||||
the target below it. So, we must manually try all targets. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
raw_memory_xfer_partial (struct target_ops *ops, gdb_byte *readbuf,
|
||||
const gdb_byte *writebuf, ULONGEST memaddr, LONGEST len)
|
||||
const gdb_byte *writebuf, ULONGEST memaddr, LONGEST len,
|
||||
ULONGEST *xfered_len)
|
||||
{
|
||||
LONGEST res;
|
||||
enum target_xfer_status res;
|
||||
|
||||
do
|
||||
{
|
||||
res = ops->to_xfer_partial (ops, TARGET_OBJECT_MEMORY, NULL,
|
||||
readbuf, writebuf, memaddr, len);
|
||||
if (res > 0)
|
||||
readbuf, writebuf, memaddr, len,
|
||||
xfered_len);
|
||||
if (res == TARGET_XFER_OK)
|
||||
break;
|
||||
|
||||
/* Stop if the target reports that the memory is not available. */
|
||||
|
@ -1425,12 +1428,12 @@ raw_memory_xfer_partial (struct target_ops *ops, gdb_byte *readbuf,
|
|||
/* Perform a partial memory transfer.
|
||||
For docs see target.h, to_xfer_partial. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
memory_xfer_partial_1 (struct target_ops *ops, enum target_object object,
|
||||
gdb_byte *readbuf, const gdb_byte *writebuf, ULONGEST memaddr,
|
||||
ULONGEST len)
|
||||
ULONGEST len, ULONGEST *xfered_len)
|
||||
{
|
||||
LONGEST res;
|
||||
enum target_xfer_status res;
|
||||
int reg_len;
|
||||
struct mem_region *region;
|
||||
struct inferior *inf;
|
||||
|
@ -1449,7 +1452,7 @@ memory_xfer_partial_1 (struct target_ops *ops, enum target_object object,
|
|||
|
||||
memaddr = overlay_mapped_address (memaddr, section);
|
||||
return section_table_xfer_memory_partial (readbuf, writebuf,
|
||||
memaddr, len,
|
||||
memaddr, len, xfered_len,
|
||||
table->sections,
|
||||
table->sections_end,
|
||||
section_name);
|
||||
|
@ -1470,7 +1473,7 @@ memory_xfer_partial_1 (struct target_ops *ops, enum target_object object,
|
|||
{
|
||||
table = target_get_section_table (ops);
|
||||
return section_table_xfer_memory_partial (readbuf, writebuf,
|
||||
memaddr, len,
|
||||
memaddr, len, xfered_len,
|
||||
table->sections,
|
||||
table->sections_end,
|
||||
NULL);
|
||||
|
@ -1511,13 +1514,17 @@ memory_xfer_partial_1 (struct target_ops *ops, enum target_object object,
|
|||
|
||||
/* This goes through the topmost target again. */
|
||||
res = memory_xfer_live_readonly_partial (ops, object,
|
||||
readbuf, memaddr, len);
|
||||
if (res > 0)
|
||||
return res;
|
||||
|
||||
/* No use trying further, we know some memory starting
|
||||
at MEMADDR isn't available. */
|
||||
return TARGET_XFER_E_UNAVAILABLE;
|
||||
readbuf, memaddr,
|
||||
len, xfered_len);
|
||||
if (res == TARGET_XFER_OK)
|
||||
return TARGET_XFER_OK;
|
||||
else
|
||||
{
|
||||
/* No use trying further, we know some memory starting
|
||||
at MEMADDR isn't available. */
|
||||
*xfered_len = len;
|
||||
return TARGET_XFER_E_UNAVAILABLE;
|
||||
}
|
||||
}
|
||||
|
||||
/* Don't try to read more than how much is available, in
|
||||
|
@ -1575,19 +1582,23 @@ memory_xfer_partial_1 (struct target_ops *ops, enum target_object object,
|
|||
|| (code_cache_enabled_p () && object == TARGET_OBJECT_CODE_MEMORY)))
|
||||
{
|
||||
DCACHE *dcache = target_dcache_get_or_init ();
|
||||
int l;
|
||||
|
||||
if (readbuf != NULL)
|
||||
res = dcache_xfer_memory (ops, dcache, memaddr, readbuf, reg_len, 0);
|
||||
l = dcache_xfer_memory (ops, dcache, memaddr, readbuf, reg_len, 0);
|
||||
else
|
||||
/* FIXME drow/2006-08-09: If we're going to preserve const
|
||||
correctness dcache_xfer_memory should take readbuf and
|
||||
writebuf. */
|
||||
res = dcache_xfer_memory (ops, dcache, memaddr, (void *) writebuf,
|
||||
l = dcache_xfer_memory (ops, dcache, memaddr, (void *) writebuf,
|
||||
reg_len, 1);
|
||||
if (res <= 0)
|
||||
return -1;
|
||||
if (l <= 0)
|
||||
return TARGET_XFER_E_IO;
|
||||
else
|
||||
return res;
|
||||
{
|
||||
*xfered_len = (ULONGEST) l;
|
||||
return TARGET_XFER_OK;
|
||||
}
|
||||
}
|
||||
|
||||
/* If none of those methods found the memory we wanted, fall back
|
||||
|
@ -1595,14 +1606,19 @@ memory_xfer_partial_1 (struct target_ops *ops, enum target_object object,
|
|||
to_xfer_partial is enough; if it doesn't recognize an object
|
||||
it will call the to_xfer_partial of the next target down.
|
||||
But for memory this won't do. Memory is the only target
|
||||
object which can be read from more than one valid target. */
|
||||
res = raw_memory_xfer_partial (ops, readbuf, writebuf, memaddr, reg_len);
|
||||
object which can be read from more than one valid target.
|
||||
A core file, for instance, could have some of memory but
|
||||
delegate other bits to the target below it. So, we must
|
||||
manually try all targets. */
|
||||
|
||||
res = raw_memory_xfer_partial (ops, readbuf, writebuf, memaddr, reg_len,
|
||||
xfered_len);
|
||||
|
||||
/* Make sure the cache gets updated no matter what - if we are writing
|
||||
to the stack. Even if this write is not tagged as such, we still need
|
||||
to update the cache. */
|
||||
|
||||
if (res > 0
|
||||
if (res == TARGET_XFER_OK
|
||||
&& inf != NULL
|
||||
&& writebuf != NULL
|
||||
&& target_dcache_init_p ()
|
||||
|
@ -1612,7 +1628,7 @@ memory_xfer_partial_1 (struct target_ops *ops, enum target_object object,
|
|||
{
|
||||
DCACHE *dcache = target_dcache_get ();
|
||||
|
||||
dcache_update (dcache, memaddr, (void *) writebuf, res);
|
||||
dcache_update (dcache, memaddr, (void *) writebuf, reg_len);
|
||||
}
|
||||
|
||||
/* If we still haven't got anything, return the last error. We
|
||||
|
@ -1623,25 +1639,26 @@ memory_xfer_partial_1 (struct target_ops *ops, enum target_object object,
|
|||
/* Perform a partial memory transfer. For docs see target.h,
|
||||
to_xfer_partial. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
memory_xfer_partial (struct target_ops *ops, enum target_object object,
|
||||
gdb_byte *readbuf, const gdb_byte *writebuf, ULONGEST memaddr,
|
||||
ULONGEST len)
|
||||
gdb_byte *readbuf, const gdb_byte *writebuf,
|
||||
ULONGEST memaddr, ULONGEST len, ULONGEST *xfered_len)
|
||||
{
|
||||
int res;
|
||||
enum target_xfer_status res;
|
||||
|
||||
/* Zero length requests are ok and require no work. */
|
||||
if (len == 0)
|
||||
return 0;
|
||||
return TARGET_XFER_EOF;
|
||||
|
||||
/* Fill in READBUF with breakpoint shadows, or WRITEBUF with
|
||||
breakpoint insns, thus hiding out from higher layers whether
|
||||
there are software breakpoints inserted in the code stream. */
|
||||
if (readbuf != NULL)
|
||||
{
|
||||
res = memory_xfer_partial_1 (ops, object, readbuf, NULL, memaddr, len);
|
||||
res = memory_xfer_partial_1 (ops, object, readbuf, NULL, memaddr, len,
|
||||
xfered_len);
|
||||
|
||||
if (res > 0 && !show_memory_breakpoints)
|
||||
if (res == TARGET_XFER_OK && !show_memory_breakpoints)
|
||||
breakpoint_xfer_memory (readbuf, NULL, NULL, memaddr, res);
|
||||
}
|
||||
else
|
||||
|
@ -1661,7 +1678,8 @@ memory_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
memcpy (buf, writebuf, len);
|
||||
|
||||
breakpoint_xfer_memory (NULL, buf, writebuf, memaddr, len);
|
||||
res = memory_xfer_partial_1 (ops, object, NULL, buf, memaddr, len);
|
||||
res = memory_xfer_partial_1 (ops, object, NULL, buf, memaddr, len,
|
||||
xfered_len);
|
||||
|
||||
do_cleanups (old_chain);
|
||||
}
|
||||
|
@ -1687,39 +1705,43 @@ make_show_memory_breakpoints_cleanup (int show)
|
|||
|
||||
/* For docs see target.h, to_xfer_partial. */
|
||||
|
||||
LONGEST
|
||||
enum target_xfer_status
|
||||
target_xfer_partial (struct target_ops *ops,
|
||||
enum target_object object, const char *annex,
|
||||
gdb_byte *readbuf, const gdb_byte *writebuf,
|
||||
ULONGEST offset, ULONGEST len)
|
||||
ULONGEST offset, ULONGEST len,
|
||||
ULONGEST *xfered_len)
|
||||
{
|
||||
LONGEST retval;
|
||||
enum target_xfer_status retval;
|
||||
|
||||
gdb_assert (ops->to_xfer_partial != NULL);
|
||||
|
||||
/* Transfer is done when LEN is zero. */
|
||||
if (len == 0)
|
||||
return 0;
|
||||
return TARGET_XFER_EOF;
|
||||
|
||||
if (writebuf && !may_write_memory)
|
||||
error (_("Writing to memory is not allowed (addr %s, len %s)"),
|
||||
core_addr_to_string_nz (offset), plongest (len));
|
||||
|
||||
*xfered_len = 0;
|
||||
|
||||
/* If this is a memory transfer, let the memory-specific code
|
||||
have a look at it instead. Memory transfers are more
|
||||
complicated. */
|
||||
if (object == TARGET_OBJECT_MEMORY || object == TARGET_OBJECT_STACK_MEMORY
|
||||
|| object == TARGET_OBJECT_CODE_MEMORY)
|
||||
retval = memory_xfer_partial (ops, object, readbuf,
|
||||
writebuf, offset, len);
|
||||
writebuf, offset, len, xfered_len);
|
||||
else if (object == TARGET_OBJECT_RAW_MEMORY)
|
||||
{
|
||||
/* Request the normal memory object from other layers. */
|
||||
retval = raw_memory_xfer_partial (ops, readbuf, writebuf, offset, len);
|
||||
retval = raw_memory_xfer_partial (ops, readbuf, writebuf, offset, len,
|
||||
xfered_len);
|
||||
}
|
||||
else
|
||||
retval = ops->to_xfer_partial (ops, object, annex, readbuf,
|
||||
writebuf, offset, len);
|
||||
writebuf, offset, len, xfered_len);
|
||||
|
||||
if (targetdebug)
|
||||
{
|
||||
|
@ -1727,25 +1749,26 @@ target_xfer_partial (struct target_ops *ops,
|
|||
|
||||
fprintf_unfiltered (gdb_stdlog,
|
||||
"%s:target_xfer_partial "
|
||||
"(%d, %s, %s, %s, %s, %s) = %s",
|
||||
"(%d, %s, %s, %s, %s, %s) = %d, %s",
|
||||
ops->to_shortname,
|
||||
(int) object,
|
||||
(annex ? annex : "(null)"),
|
||||
host_address_to_string (readbuf),
|
||||
host_address_to_string (writebuf),
|
||||
core_addr_to_string_nz (offset),
|
||||
pulongest (len), plongest (retval));
|
||||
pulongest (len), retval,
|
||||
pulongest (*xfered_len));
|
||||
|
||||
if (readbuf)
|
||||
myaddr = readbuf;
|
||||
if (writebuf)
|
||||
myaddr = writebuf;
|
||||
if (retval > 0 && myaddr != NULL)
|
||||
if (retval == TARGET_XFER_OK && myaddr != NULL)
|
||||
{
|
||||
int i;
|
||||
|
||||
fputs_unfiltered (", bytes =", gdb_stdlog);
|
||||
for (i = 0; i < retval; i++)
|
||||
for (i = 0; i < *xfered_len; i++)
|
||||
{
|
||||
if ((((intptr_t) &(myaddr[i])) & 0xf) == 0)
|
||||
{
|
||||
|
@ -1763,12 +1786,19 @@ target_xfer_partial (struct target_ops *ops,
|
|||
|
||||
fputc_unfiltered ('\n', gdb_stdlog);
|
||||
}
|
||||
|
||||
/* Check implementations of to_xfer_partial update *XFERED_LEN
|
||||
properly. Do assertion after printing debug messages, so that we
|
||||
can find more clues on assertion failure from debugging messages. */
|
||||
if (retval == TARGET_XFER_OK || retval == TARGET_XFER_E_UNAVAILABLE)
|
||||
gdb_assert (*xfered_len > 0);
|
||||
|
||||
return retval;
|
||||
}
|
||||
|
||||
/* Read LEN bytes of target memory at address MEMADDR, placing the
|
||||
results in GDB's memory at MYADDR. Returns either 0 for success or
|
||||
a target_xfer_error value if any error occurs.
|
||||
TARGET_XFER_E_IO if any error occurs.
|
||||
|
||||
If an error occurs, no guarantee is made about the contents of the data at
|
||||
MYADDR. In particular, the caller should not depend upon partial reads
|
||||
|
@ -1837,7 +1867,7 @@ target_read_code (CORE_ADDR memaddr, gdb_byte *myaddr, ssize_t len)
|
|||
}
|
||||
|
||||
/* Write LEN bytes from MYADDR to target memory at address MEMADDR.
|
||||
Returns either 0 for success or a target_xfer_error value if any
|
||||
Returns either 0 for success or TARGET_XFER_E_IO if any
|
||||
error occurs. If an error occurs, no guarantee is made about how
|
||||
much data got written. Callers that can deal with partial writes
|
||||
should call target_write. */
|
||||
|
@ -1855,7 +1885,7 @@ target_write_memory (CORE_ADDR memaddr, const gdb_byte *myaddr, ssize_t len)
|
|||
}
|
||||
|
||||
/* Write LEN bytes from MYADDR to target raw memory at address
|
||||
MEMADDR. Returns either 0 for success or a target_xfer_error value
|
||||
MEMADDR. Returns either 0 for success or TARGET_XFER_E_IO
|
||||
if any error occurs. If an error occurs, no guarantee is made
|
||||
about how much data got written. Callers that can deal with
|
||||
partial writes should call target_write. */
|
||||
|
@ -1966,10 +1996,11 @@ show_trust_readonly (struct ui_file *file, int from_tty,
|
|||
|
||||
/* More generic transfers. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
default_xfer_partial (struct target_ops *ops, enum target_object object,
|
||||
const char *annex, gdb_byte *readbuf,
|
||||
const gdb_byte *writebuf, ULONGEST offset, ULONGEST len)
|
||||
const gdb_byte *writebuf, ULONGEST offset, ULONGEST len,
|
||||
ULONGEST *xfered_len)
|
||||
{
|
||||
if (object == TARGET_OBJECT_MEMORY
|
||||
&& ops->deprecated_xfer_memory != NULL)
|
||||
|
@ -1993,55 +2024,64 @@ default_xfer_partial (struct target_ops *ops, enum target_object object,
|
|||
xfered = ops->deprecated_xfer_memory (offset, readbuf, len,
|
||||
0/*read*/, NULL, ops);
|
||||
if (xfered > 0)
|
||||
return xfered;
|
||||
{
|
||||
*xfered_len = (ULONGEST) xfered;
|
||||
return TARGET_XFER_E_IO;
|
||||
}
|
||||
else if (xfered == 0 && errno == 0)
|
||||
/* "deprecated_xfer_memory" uses 0, cross checked against
|
||||
ERRNO as one indication of an error. */
|
||||
return 0;
|
||||
return TARGET_XFER_EOF;
|
||||
else
|
||||
return -1;
|
||||
return TARGET_XFER_E_IO;
|
||||
}
|
||||
else if (ops->beneath != NULL)
|
||||
return ops->beneath->to_xfer_partial (ops->beneath, object, annex,
|
||||
readbuf, writebuf, offset, len);
|
||||
readbuf, writebuf, offset, len,
|
||||
xfered_len);
|
||||
else
|
||||
return -1;
|
||||
return TARGET_XFER_E_IO;
|
||||
}
|
||||
|
||||
/* The xfer_partial handler for the topmost target. Unlike the default,
|
||||
it does not need to handle memory specially; it just passes all
|
||||
requests down the stack. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
current_xfer_partial (struct target_ops *ops, enum target_object object,
|
||||
const char *annex, gdb_byte *readbuf,
|
||||
const gdb_byte *writebuf, ULONGEST offset, ULONGEST len)
|
||||
const gdb_byte *writebuf, ULONGEST offset, ULONGEST len,
|
||||
ULONGEST *xfered_len)
|
||||
{
|
||||
if (ops->beneath != NULL)
|
||||
return ops->beneath->to_xfer_partial (ops->beneath, object, annex,
|
||||
readbuf, writebuf, offset, len);
|
||||
readbuf, writebuf, offset, len,
|
||||
xfered_len);
|
||||
else
|
||||
return -1;
|
||||
return TARGET_XFER_E_IO;
|
||||
}
|
||||
|
||||
/* Target vector read/write partial wrapper functions. */
|
||||
|
||||
static LONGEST
|
||||
static enum target_xfer_status
|
||||
target_read_partial (struct target_ops *ops,
|
||||
enum target_object object,
|
||||
const char *annex, gdb_byte *buf,
|
||||
ULONGEST offset, LONGEST len)
|
||||
ULONGEST offset, ULONGEST len,
|
||||
ULONGEST *xfered_len)
|
||||
{
|
||||
return target_xfer_partial (ops, object, annex, buf, NULL, offset, len);
|
||||
return target_xfer_partial (ops, object, annex, buf, NULL, offset, len,
|
||||
xfered_len);
|
||||
}
|
||||
|
||||
static LONGEST
|
||||
target_write_partial (struct target_ops *ops,
|
||||
enum target_object object,
|
||||
const char *annex, const gdb_byte *buf,
|
||||
ULONGEST offset, LONGEST len)
|
||||
ULONGEST offset, LONGEST len, ULONGEST *xfered_len)
|
||||
{
|
||||
return target_xfer_partial (ops, object, annex, NULL, buf, offset, len);
|
||||
return target_xfer_partial (ops, object, annex, NULL, buf, offset, len,
|
||||
xfered_len);
|
||||
}
|
||||
|
||||
/* Wrappers to perform the full transfer. */
|
||||
|
@ -2058,17 +2098,25 @@ target_read (struct target_ops *ops,
|
|||
|
||||
while (xfered < len)
|
||||
{
|
||||
LONGEST xfer = target_read_partial (ops, object, annex,
|
||||
(gdb_byte *) buf + xfered,
|
||||
offset + xfered, len - xfered);
|
||||
ULONGEST xfered_len;
|
||||
enum target_xfer_status status;
|
||||
|
||||
status = target_read_partial (ops, object, annex,
|
||||
(gdb_byte *) buf + xfered,
|
||||
offset + xfered, len - xfered,
|
||||
&xfered_len);
|
||||
|
||||
/* Call an observer, notifying them of the xfer progress? */
|
||||
if (xfer == 0)
|
||||
if (status == TARGET_XFER_EOF)
|
||||
return xfered;
|
||||
if (xfer < 0)
|
||||
else if (status == TARGET_XFER_OK)
|
||||
{
|
||||
xfered += xfered_len;
|
||||
QUIT;
|
||||
}
|
||||
else
|
||||
return -1;
|
||||
xfered += xfer;
|
||||
QUIT;
|
||||
|
||||
}
|
||||
return len;
|
||||
}
|
||||
|
@ -2104,6 +2152,7 @@ read_whatever_is_readable (struct target_ops *ops,
|
|||
ULONGEST current_end = end;
|
||||
int forward;
|
||||
memory_read_result_s r;
|
||||
ULONGEST xfered_len;
|
||||
|
||||
/* If we previously failed to read 1 byte, nothing can be done here. */
|
||||
if (end - begin <= 1)
|
||||
|
@ -2116,13 +2165,14 @@ read_whatever_is_readable (struct target_ops *ops,
|
|||
if not. This heuristic is meant to permit reading accessible memory
|
||||
at the boundary of accessible region. */
|
||||
if (target_read_partial (ops, TARGET_OBJECT_MEMORY, NULL,
|
||||
buf, begin, 1) == 1)
|
||||
buf, begin, 1, &xfered_len) == TARGET_XFER_OK)
|
||||
{
|
||||
forward = 1;
|
||||
++current_begin;
|
||||
}
|
||||
else if (target_read_partial (ops, TARGET_OBJECT_MEMORY, NULL,
|
||||
buf + (end-begin) - 1, end - 1, 1) == 1)
|
||||
buf + (end-begin) - 1, end - 1, 1,
|
||||
&xfered_len) == TARGET_XFER_OK)
|
||||
{
|
||||
forward = 0;
|
||||
--current_end;
|
||||
|
@ -2297,19 +2347,24 @@ target_write_with_progress (struct target_ops *ops,
|
|||
|
||||
while (xfered < len)
|
||||
{
|
||||
LONGEST xfer = target_write_partial (ops, object, annex,
|
||||
(gdb_byte *) buf + xfered,
|
||||
offset + xfered, len - xfered);
|
||||
ULONGEST xfered_len;
|
||||
enum target_xfer_status status;
|
||||
|
||||
if (xfer == 0)
|
||||
status = target_write_partial (ops, object, annex,
|
||||
(gdb_byte *) buf + xfered,
|
||||
offset + xfered, len - xfered,
|
||||
&xfered_len);
|
||||
|
||||
if (status == TARGET_XFER_EOF)
|
||||
return xfered;
|
||||
if (xfer < 0)
|
||||
if (TARGET_XFER_STATUS_ERROR_P (status))
|
||||
return -1;
|
||||
|
||||
gdb_assert (status == TARGET_XFER_OK);
|
||||
if (progress)
|
||||
(*progress) (xfer, baton);
|
||||
(*progress) (xfered_len, baton);
|
||||
|
||||
xfered += xfer;
|
||||
xfered += xfered_len;
|
||||
QUIT;
|
||||
}
|
||||
return len;
|
||||
|
@@ -2339,7 +2394,6 @@ target_read_alloc_1 (struct target_ops *ops, enum target_object object,
 {
   size_t buf_alloc, buf_pos;
   gdb_byte *buf;
-  LONGEST n;

   /* This function does not have a length parameter; it reads the
      entire OBJECT).  Also, it doesn't support objects fetched partly
@@ -2355,15 +2409,14 @@ target_read_alloc_1 (struct target_ops *ops, enum target_object object,
   buf_pos = 0;
   while (1)
     {
-      n = target_read_partial (ops, object, annex, &buf[buf_pos],
-			       buf_pos, buf_alloc - buf_pos - padding);
-      if (n < 0)
-	{
-	  /* An error occurred.  */
-	  xfree (buf);
-	  return -1;
-	}
-      else if (n == 0)
+      ULONGEST xfered_len;
+      enum target_xfer_status status;
+
+      status = target_read_partial (ops, object, annex, &buf[buf_pos],
+				    buf_pos, buf_alloc - buf_pos - padding,
+				    &xfered_len);
+
+      if (status == TARGET_XFER_EOF)
 	{
 	  /* Read all there was.  */
 	  if (buf_pos == 0)
@@ -2372,8 +2425,14 @@ target_read_alloc_1 (struct target_ops *ops, enum target_object object,
 	  *buf_p = buf;
 	  return buf_pos;
 	}
+      else if (status != TARGET_XFER_OK)
+	{
+	  /* An error occurred.  */
+	  xfree (buf);
+	  return TARGET_XFER_E_IO;
+	}

-      buf_pos += n;
+      buf_pos += xfered_len;

       /* If the buffer is filling up, expand it.  */
       if (buf_alloc < buf_pos * 2)
gdb/target.h
@@ -203,10 +203,16 @@ enum target_object
   /* Possible future objects: TARGET_OBJECT_FILE, ...  */
 };

-/* Possible error codes returned by target_xfer_partial, etc.  */
+/* Possible values returned by target_xfer_partial, etc.  */

-enum target_xfer_error
+enum target_xfer_status
 {
+  /* Some bytes are transferred.  */
+  TARGET_XFER_OK = 1,
+
+  /* No further transfer is possible.  */
+  TARGET_XFER_EOF = 0,
+
   /* Generic I/O error.  Note that it's important that this is '-1',
      as we still have target_xfer-related code returning hardcoded
      '-1' on error.  */
@@ -219,9 +225,11 @@ enum target_xfer_error
   /* Keep list in sync with target_xfer_error_to_string.  */
 };

+#define TARGET_XFER_STATUS_ERROR_P(STATUS) ((STATUS) < TARGET_XFER_EOF)
+
 /* Return the string form of ERR.  */

-extern const char *target_xfer_error_to_string (enum target_xfer_error err);
+extern const char *target_xfer_status_to_string (enum target_xfer_status err);

 /* Enumeration of the kinds of traceframe searches that a target may
    be able to perform.  */
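(Illustration only; not part of the diff.)  With statuses no longer being plain negative LONGEST values, callers are meant to test TARGET_XFER_STATUS_ERROR_P instead of comparing against 0.  A minimal sketch of that idiom, using the renamed target_xfer_status_to_string declared just above; report_xfer_status is a hypothetical helper:

  static void
  report_xfer_status (enum target_xfer_status status)
  {
    if (TARGET_XFER_STATUS_ERROR_P (status))
      warning (_("transfer failed: %s"), target_xfer_status_to_string (status));
    else if (status == TARGET_XFER_EOF)
      warning (_("no data left to transfer"));
    /* TARGET_XFER_OK needs no report.  */
  }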
@@ -238,14 +246,15 @@ enum trace_find_type
 typedef struct static_tracepoint_marker *static_tracepoint_marker_p;
 DEF_VEC_P(static_tracepoint_marker_p);

-typedef LONGEST
+typedef enum target_xfer_status
   target_xfer_partial_ftype (struct target_ops *ops,
			      enum target_object object,
			      const char *annex,
			      gdb_byte *readbuf,
			      const gdb_byte *writebuf,
			      ULONGEST offset,
-			     ULONGEST len);
+			     ULONGEST len,
+			     ULONGEST *xfered_len);

 /* Request that OPS transfer up to LEN 8-bit bytes of the target's
    OBJECT.  The OFFSET, for a seekable object, specifies the
@@ -518,13 +527,14 @@ struct target_ops
        starting point.  The ANNEX can be used to provide additional
        data-specific information to the target.

-       Return the number of bytes actually transfered, zero when no
-       further transfer is possible, and a negative error code (really
-       an 'enum target_xfer_error' value) when the transfer is not
-       supported.  Return of a positive value smaller than LEN does
-       not indicate the end of the object, only the end of the
-       transfer; higher level code should continue transferring if
-       desired.  This is handled in target.c.
+       Return the transferred status, error or OK (an
+       'enum target_xfer_status' value).  Save the number of bytes
+       actually transferred in *XFERED_LEN if transfer is successful
+       (TARGET_XFER_OK) or the number unavailable bytes if the requested
+       data is unavailable (TARGET_XFER_E_UNAVAILABLE).  *XFERED_LEN
+       smaller than LEN does not indicate the end of the object, only
+       the end of the transfer; higher level code should continue
+       transferring if desired.  This is handled in target.c.

        The interface does not support a "retry" mechanism.  Instead it
        assumes that at least one byte will be transfered on each

@@ -541,10 +551,13 @@ struct target_ops
        See target_read and target_write for more information.  One,
        and only one, of readbuf or writebuf must be non-NULL.  */

-    LONGEST (*to_xfer_partial) (struct target_ops *ops,
-				enum target_object object, const char *annex,
-				gdb_byte *readbuf, const gdb_byte *writebuf,
-				ULONGEST offset, ULONGEST len);
+    enum target_xfer_status (*to_xfer_partial) (struct target_ops *ops,
+						enum target_object object,
+						const char *annex,
+						gdb_byte *readbuf,
+						const gdb_byte *writebuf,
+						ULONGEST offset, ULONGEST len,
+						ULONGEST *xfered_len);

     /* Returns the memory map for the target.  A return value of NULL
        means that no memory map is available.  If a memory address
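(Illustration only; not part of the diff.)  Under the new contract a target implementation separates "how far did we get" (*XFERED_LEN) from "how did it end" (the status).  A minimal sketch of a read-only memory target, assuming a made-up backend routine my_backend_read; nothing here corresponds to an actual GDB target:

  /* Made-up backend routine: reads up to LEN bytes starting at OFFSET
     into BUF and returns the count, 0 at end of object, -1 on error.  */
  static LONGEST my_backend_read (gdb_byte *buf, ULONGEST offset, ULONGEST len);

  static enum target_xfer_status
  example_xfer_partial (struct target_ops *ops, enum target_object object,
			const char *annex, gdb_byte *readbuf,
			const gdb_byte *writebuf, ULONGEST offset, ULONGEST len,
			ULONGEST *xfered_len)
  {
    LONGEST n;

    /* This example only handles reads of regular memory.  */
    if (object != TARGET_OBJECT_MEMORY || readbuf == NULL)
      return TARGET_XFER_E_IO;

    n = my_backend_read (readbuf, offset, len);
    if (n < 0)
      return TARGET_XFER_E_IO;	/* Hard error; *XFERED_LEN is not set.  */
    else if (n == 0)
      return TARGET_XFER_EOF;	/* Nothing at or beyond OFFSET.  */

    /* A short read is fine: report how far we got and let the caller
       iterate, as the target_read loop in target.c now does.  */
    *xfered_len = (ULONGEST) n;
    return TARGET_XFER_OK;
  }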
gdb/tracepoint.c

@@ -5113,10 +5113,11 @@ tfile_fetch_registers (struct target_ops *ops,
     }
 }

-static LONGEST
+static enum target_xfer_status
 tfile_xfer_partial (struct target_ops *ops, enum target_object object,
		    const char *annex, gdb_byte *readbuf,
-		    const gdb_byte *writebuf, ULONGEST offset, ULONGEST len)
+		    const gdb_byte *writebuf, ULONGEST offset, ULONGEST len,
+		    ULONGEST *xfered_len)
 {
   /* We're only doing regular memory for now.  */
   if (object != TARGET_OBJECT_MEMORY)
@@ -5157,7 +5158,8 @@ tfile_xfer_partial (struct target_ops *ops, enum target_object object,
	      if (maddr != offset)
	        lseek (trace_fd, offset - maddr, SEEK_CUR);
	      tfile_read (readbuf, amt);
-	      return amt;
+	      *xfered_len = amt;
+	      return TARGET_XFER_OK;
	    }

	  /* Skip over this block.  */
@@ -5191,9 +5193,9 @@ tfile_xfer_partial (struct target_ops *ops, enum target_object object,
	      if (amt > len)
		amt = len;

-	      amt = bfd_get_section_contents (exec_bfd, s,
-					      readbuf, offset - vma, amt);
-	      return amt;
+	      *xfered_len = bfd_get_section_contents (exec_bfd, s,
+						      readbuf, offset - vma, amt);
+	      return TARGET_XFER_OK;
	    }
	}
     }
gdb/valprint.c

@@ -1727,7 +1727,7 @@ val_print_array_elements (struct type *type,

 /* Read LEN bytes of target memory at address MEMADDR, placing the
    results in GDB's memory at MYADDR.  Returns a count of the bytes
-   actually read, and optionally a target_xfer_error value in the
+   actually read, and optionally a target_xfer_status value in the
    location pointed to by ERRPTR if ERRPTR is non-null.  */

 /* FIXME: cagney/1999-10-14: Only used by val_print_string.  Can this
@@ -1771,7 +1771,7 @@ partial_memory_read (CORE_ADDR memaddr, gdb_byte *myaddr,
    each.  Fetch at most FETCHLIMIT characters.  BUFFER will be set to a newly
    allocated buffer containing the string, which the caller is responsible to
    free, and BYTES_READ will be set to the number of bytes read.  Returns 0 on
-   success, or a target_xfer_error on failure.
+   success, or a target_xfer_status on failure.

    If LEN > 0, reads the lesser of LEN or FETCHLIMIT characters
    (including eventual NULs in the middle or end of the string).
gdb/windows-nat.c

@@ -2413,9 +2413,9 @@ windows_stop (ptid_t ptid)
 /* Helper for windows_xfer_partial that handles memory transfers.
    Arguments are like target_xfer_partial.  */

-static LONGEST
+static enum target_xfer_status
 windows_xfer_memory (gdb_byte *readbuf, const gdb_byte *writebuf,
-		     ULONGEST memaddr, ULONGEST len)
+		     ULONGEST memaddr, ULONGEST len, ULONGEST *xfered_len)
 {
   SIZE_T done = 0;
   BOOL success;
@@ -2443,10 +2443,11 @@ windows_xfer_memory (gdb_byte *readbuf, const gdb_byte *writebuf,
       if (!success)
	lasterror = GetLastError ();
     }
+  *xfered_len = (ULONGEST) done;
   if (!success && lasterror == ERROR_PARTIAL_COPY && done > 0)
-    return done;
+    return TARGET_XFER_OK;
   else
-    return success ? done : TARGET_XFER_E_IO;
+    return success ? TARGET_XFER_OK : TARGET_XFER_E_IO;
 }

 static void
@@ -2502,11 +2503,12 @@ windows_pid_to_str (struct target_ops *ops, ptid_t ptid)
   return normal_pid_to_str (ptid);
 }

-static LONGEST
+static enum target_xfer_status
 windows_xfer_shared_libraries (struct target_ops *ops,
-			       enum target_object object, const char *annex,
-			       gdb_byte *readbuf, const gdb_byte *writebuf,
-			       ULONGEST offset, ULONGEST len)
+			       enum target_object object, const char *annex,
+			       gdb_byte *readbuf, const gdb_byte *writebuf,
+			       ULONGEST offset, ULONGEST len,
+			       ULONGEST *xfered_len)
 {
   struct obstack obstack;
   const char *buf;
@@ -2536,27 +2538,30 @@ windows_xfer_shared_libraries (struct target_ops *ops,
   }

   obstack_free (&obstack, NULL);
-  return len;
+  *xfered_len = (ULONGEST) len;
+  return TARGET_XFER_OK;
 }

-static LONGEST
+static enum target_xfer_status
 windows_xfer_partial (struct target_ops *ops, enum target_object object,
-		      const char *annex, gdb_byte *readbuf,
-		      const gdb_byte *writebuf, ULONGEST offset, ULONGEST len)
+		      const char *annex, gdb_byte *readbuf,
+		      const gdb_byte *writebuf, ULONGEST offset, ULONGEST len,
+		      ULONGEST *xfered_len)
 {
   switch (object)
     {
     case TARGET_OBJECT_MEMORY:
-      return windows_xfer_memory (readbuf, writebuf, offset, len);
+      return windows_xfer_memory (readbuf, writebuf, offset, len, xfered_len);

     case TARGET_OBJECT_LIBRARIES:
       return windows_xfer_shared_libraries (ops, object, annex, readbuf,
-					    writebuf, offset, len);
+					    writebuf, offset, len, xfered_len);

     default:
       if (ops->beneath != NULL)
	return ops->beneath->to_xfer_partial (ops->beneath, object, annex,
-					      readbuf, writebuf, offset, len);
+					      readbuf, writebuf, offset, len,
+					      xfered_len);
       return TARGET_XFER_E_IO;
     }
 }