We told clients that further loading of variables can be done by
specifying a type cast using the address of a variable that we
returned.
This does not work for registerized variables (or, in general,
variables that have a complex location expression) because we don't
give them unique addresses and we throw away the compositeMemory object
we made to read them.
This commit changes proc so that:
1. variables with a location expression divided into pieces get a unique
memory address
2. the compositeMemory object is saved instead of being discarded
3. when an integer is cast back into a pointer type we look through our
saved compositeMemory objects to see if one covers the specified address
and, if so, use it.
The unique memory addresses we generate have the MSB set to 1; as
specified by the Intel x86-64 manual, addresses in this form are reserved
for kernel memory (which we cannot read anyway), so we are guaranteed
never to generate a fake memory address that overlaps a real memory
address of the application.
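A minimal sketch of the fake-address scheme, using illustrative names
(fakeAddressBase, compositeMemories) rather than the actual proc identifiers:
```
package fakemem

// fakeMemory stands in for a saved compositeMemory covering a registerized
// variable; the real object also knows how to read the underlying pieces.
type fakeMemory struct {
	addr uint64 // fake address assigned to the variable
	size uint64 // total size of the pieces
}

// fakeAddressBase has the MSB set: such addresses are reserved for kernel
// space on x86-64 and can never collide with real target addresses.
const fakeAddressBase = 0x8000000000000000

var (
	nextFakeAddr      uint64 = fakeAddressBase
	compositeMemories []*fakeMemory
)

// allocFakeAddress assigns a unique fake address to a pieced variable of the
// given size and remembers the composite memory that backs it.
func allocFakeAddress(size uint64) uint64 {
	mem := &fakeMemory{addr: nextFakeAddr, size: size}
	nextFakeAddr += size
	compositeMemories = append(compositeMemories, mem)
	return mem.addr
}

// findFakeMemory is consulted when an integer is cast back to a pointer type:
// if the address falls inside a saved composite memory, reads go through it
// instead of through target memory.
func findFakeMemory(addr uint64) *fakeMemory {
	for _, m := range compositeMemories {
		if addr >= m.addr && addr < m.addr+m.size {
			return m
		}
	}
	return nil
}
```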
The unfortunate side effect of these fake addresses is that they will break
clients that do not deserialize the address to a 64-bit integer. This practice
is contrary to how we defined our types and contrary to the JSON
specification at json.org; however it is also fairly common, because
JavaScript itself only has 53-bit integers.
We could come up with a new mechanism, but then even more old clients
would have to be changed.
Truncates the result of binary operations on integers to the size of
the resulting type.
Also rewrites convertInt to not require allocations.
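A rough sketch of the truncation this implies (illustrative, not the actual
convertInt code):
```
package evalsketch

// truncateToSize masks an integer result down to the byte size of the
// destination type, so e.g. a uint8 addition that overflows wraps exactly
// as it would in the target program.
func truncateToSize(x uint64, size int) uint64 {
	if size >= 8 {
		return x
	}
	return x & (uint64(1)<<(uint(size)*8) - 1)
}

// truncateToSize(200+100, 1) == 44, matching uint8(200)+uint8(100).
```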
Fixes #2454
Changes print so a format argument can be specified by using '%' as
prefix. For example:
print %x d
will print variable 'd' in hexadecimal. The interpretation of the
format argument is the same as in the fmt package.
Fixes #1038 Fixes #1800 Fixes #2159
* service/dap: support auto-loading of unloaded interfaces
* Make DeepSource happy
* Don't set reference if data failed to auto-load
* Use frame-less expressions
* Refine interface recursion capping test case
Co-authored-by: Polina Sokolova <polinasok@users.noreply.github.com>
- remove github workflow for testing macOS/amd64 that is now covered by
TeamCity
- fix DeepSource glob patterns to actually match what they are intended
to match (did the interpretation change?)
- disable some cgo tests on darwin/arm64
* TeamCity: add linux/arm64/tip configuration
So that it can be tested when we make the next-version-support-branch.
* tests: disable failing cgo tests on arm64
* proc/core: off-by-one error reading ELF core files
core.(*splicedMemory).ReadMemory checked the entry interval
erroneously when dealing with contiguous entries.
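An illustration of the intended half-open interval check, with a hypothetical
name (entryContains is not the actual method):
```
package splicedsketch

// Each spliced entry covers the half-open interval [off, off+length). With
// contiguous entries, an address equal to off+length belongs to the next
// entry, so the upper bound must use '<', not '<=' (the off-by-one).
func entryContains(off, length, addr uint64) bool {
	return addr >= off && addr < off+length
}
```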
* terminal,service,proc/*: adds dump command (gcore equivalent)
Adds the `dump` command that creates a core file from the target process.
Backends will need to implement a new, optional method `MemoryMap` that
returns a list of mapped memory regions.
Additionally the method `DumpProcessNotes` can be implemented to write out
to the core file notes describing the target process and its threads. If
DumpProcessNotes is not implemented, `proc.Dump` will write a description of
the process and its threads in an OS/arch-independent format (that only Delve
understands).
Currently only linux/amd64 implements `DumpProcessNotes`.
Core files are only written in ELF format; there are no minidump or Mach-O writers.
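A sketch of the two backend hooks; the method names come from this
description, while the signatures and the MemoryMapEntry fields are
assumptions made for illustration:
```
package coresketch

// MemoryMapEntry describes one mapped region of the target (fields assumed).
type MemoryMapEntry struct {
	Addr, Size        uint64
	Read, Write, Exec bool
}

// memoryMapper is the optional capability backends implement so proc.Dump
// knows which regions to copy into the core file.
type memoryMapper interface {
	MemoryMap() ([]MemoryMapEntry, error)
}

// processNoteDumper is the second optional capability: emit notes describing
// the process and its threads (e.g. ELF PRSTATUS notes on linux/amd64).
type processNoteDumper interface {
	DumpProcessNotes() ([]byte, error)
}
```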
Both structMember and findMethod implemented a depth-first search in
embedded fields but the Go specification requires a breadth-first
search. They also allowed promotion of fields in the concrete type of
embedded interfaces even though this is not allowed by Go.
Furthermore they both lacked protection from infinite recursion
when a type embeds itself and the user requests a non-existent field.
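A simplified sketch of the breadth-first walk with cycle protection (the
types here are placeholders for the DWARF structures the real code traverses):
```
package embedsearch

type typ struct {
	name   string
	fields []field
}

type field struct {
	name     string
	embedded bool
	typ      *typ // type of the field; embedded fields are descended into
}

// findPromoted searches for a (possibly promoted) field breadth-first, so a
// field at a shallower embedding depth always wins, and a visited set stops
// infinite recursion when a type (indirectly) embeds itself.
func findPromoted(root *typ, name string) *field {
	visited := map[*typ]bool{}
	queue := []*typ{root}
	for len(queue) > 0 {
		t := queue[0]
		queue = queue[1:]
		if t == nil || visited[t] {
			continue
		}
		visited[t] = true
		for i := range t.fields {
			if t.fields[i].name == name {
				return &t.fields[i]
			}
		}
		for i := range t.fields {
			if t.fields[i].embedded {
				queue = append(queue, t.fields[i].typ)
			}
		}
	}
	return nil
}
```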
Fixes #2316
evalFunctionCall needs to remove the breakpoint from the current thread
after starting the function call injection, otherwise Continue will
think that the thread is stopped at a breakpoint and return to the user
instead of continuing the call injection.
On linux we cannot read memory if the thread we use to do it is
occupied doing certain system calls. The exact conditions under which this
happens have never been clear.
This problem was worked around by using the Blocked method which
recognized the most common circumstances where this would happen.
However this is a hack: Blocked returning true doesn't mean that the
problem will manifest and Blocked returning false doesn't necessarily
mean the problem will not manifest. A side effect of this is issue
#2151 where sometimes we can't read the memory of a thread and find its
associated goroutine.
This commit fixes this problem by always reading memory using a thread
we know to be good for this, specifically the one returned by
ContinueOnce. In particular the changes are as follows:
1. Remove (ProcessInternal).CurrentThread and
(ProcessInternal).SetCurrentThread; the "current thread" becomes a
field of Target, CurrentThread becomes a (*Target) method and
(*Target).SwitchThread simply sets that field of Target.
2. The backends keep track of their own internal idea of what the
current thread is and use it to read memory; this is the thread they
return from ContinueOnce as trapthread.
3. The current thread in the backend and the current thread in Target
only ever get synchronized in two places: when the backend creates a
Target object, the currentThread field of Target is initialized with the
backend's current thread, and when (*Target).Restart gets called (when a
recording is rewound the currentThread used by Target might not exist
anymore).
4. We remove the MemoryReadWriter interface embedded in Thread and
instead add a Memory method to Process that returns a MemoryReadWriter
(see the sketch after this list). The backends will return something here
that reads memory using the current thread saved by the backend.
5. The Thread.Blocked method is removed
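A minimal sketch of point 4, assuming a simplified MemoryReadWriter (the real
interface lives in pkg/proc; the backend type here is illustrative):
```
package memsketch

type MemoryReadWriter interface {
	ReadMemory(buf []byte, addr uint64) (int, error)
	WriteMemory(addr uint64, data []byte) (int, error)
}

// nativeProcess stands in for a backend: it remembers the thread returned by
// ContinueOnce and uses it for all memory access.
type nativeProcess struct {
	memthread MemoryReadWriter // accessor bound to the known-good thread
}

// Memory returns a MemoryReadWriter that always goes through the thread saved
// by the backend, never through whatever thread the caller happens to hold.
func (p *nativeProcess) Memory() MemoryReadWriter {
	return p.memthread
}
```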
One possible problem with this change is processes that have threads
with different memory maps. As far as I can determine this could happen
on old versions of linux but this option was removed in linux 2.5.
Fixes #2151
Adds features to support default file descriptor redirects for the
target process:
1. New command line flags '--redirect' and '-r' are added to specify
file redirects for the target process
2. New syntax is added to the 'restart' command to specify file
redirects.
3. Interactive instances will check if stdin/stdout and stderr are
terminals and print a helpful error message if they aren't.
Changes implementations of proc.Registers interface and the
op.DwarfRegisters struct so that floating point registers can be loaded
only when they are needed.
Removes the floatingPoint parameter from proc.Thread.Registers.
This accomplishes three things:
1. it simplifies the proc.Thread.Registers interface
2. it makes it impossible to accidentally create a broken set of saved
registers or of op.DwarfRegisters by accidentally calling
Registers(false)
3. it improves the general performance of Delve by avoiding loading
floating point registers as much as possible
Floating point registers are loaded under two circumstances:
1. When the Slice method is called with floatingPoint == true
2. When the Copy method is called
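A sketch of the resulting interface shape (Registers, Slice and Copy are the
names used above; everything else is illustrative):
```
package regsketch

type Register struct {
	Name  string
	Value uint64
}

// Registers no longer takes a floatingPoint flag when obtained from a thread;
// floating point registers are only materialized by Slice(true) or Copy.
type Registers interface {
	PC() uint64
	SP() uint64
	// Slice returns all registers; floating point registers are loaded only
	// when floatingPoint is true.
	Slice(floatingPoint bool) ([]Register, error)
	// Copy returns a saved snapshot, which must include floating point
	// registers to be complete, so it always loads them.
	Copy() (Registers, error)
}
```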
Benchmark before:
BenchmarkConditionalBreakpoints-4 1 4327350142 ns/op
Benchmark after:
BenchmarkConditionalBreakpoints-4 1 3852642917 ns/op
Updates #1549
This flag allows users on UNIX systems to set the tty for the program
being debugged by Delve. This is useful for debugging command line
applications which need access to their own TTY, and also for
controlling the output of the debugged programs so that IDEs may open a
dedicated terminal to show the output for the process.
If we evaluate an expression that calls a fake method on a plain string,
`delve` will panic. Strengthen the checks of boundary conditions when
supporting function calls on non-struct types.
Update: #1871
* *: Fix go vet struct complaints
* *: Fix struct vet issue on linux
* *: Ignore proc/native in go vet check
We have to do some unsafe pointer manipulation that will never make go
vet happy within the proc/native package. Ignore it for runs of go vet.
Implement debugging support for 386 on linux with reference to AMD64.
There are a few remaining problems that need to be solved at a later time.
1. The stack traces of cgo code are not exactly as expected.
2. `core` is not implemented for now.
3. `call` is not implemented for now; couldn't find `runtime·debugCallV1` or a
similar function in $GOROOT/src/runtime/asm_386.s.
Update #20
Removes the restriction that the DWARF type for the receiver of a method
must be a TypeDef. This seems reasonable in practice, but it turns out
Go DWARF does not consider
```
type X int
```
to be a typedef. This patch also allows for calling a method where the
receiver is not used or passed in, such as:
```
func (_ X) Method() { println("why") }
```
* tests: misc test fixes for go1.14
- math.go is now ambiguous due to changes to the go runtime so specify
that we mean our own math.go in _fixtures
- go list -m requires vendor-mode to be disabled, so pass '-mod=' to it
in case the user has GOFLAGS=-mod=vendor
- update version of go/packages, required to work with go 1.14 (and
executed go mod vendor)
- Increased goroutine migration in one development version of Go 1.14
revealed a problem with TestCheckpoints in command_test.go and
rr_test.go. The tests were always wrong because Restart(checkpoint)
doesn't change the current thread but we can't assume that when the
checkpoint was taken the current goroutine was running on the same
thread.
* goversion: update maximum supported version
* Makefile: disable testing lldb-server backend on linux with Go 1.14
There seems to be some incompatibility between lldb-server version 6.0.0
on linux and Go 1.14.
* proc/gdbserial: better handling of signals
- if multiple signals are received simultaneously propagate all of them to the
target threads instead of only one.
- debugserver will drop an interrupt request if a target thread simultaneously
receives a signal, handle this situation.
* dwarf/line: normalize backslashes for windows executables
Starting with Go 1.14 the compiler sometimes emits backslashes as well
as forward slashes in debug_line, normalize everything to / for
conformity with the behavior of previous versions.
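The normalization amounts to something like this (a hedged sketch, not the
actual dwarf/line code):
```
package linesketch

import "strings"

// normalizePath rewrites backslashes emitted by the Go 1.14 compiler in
// debug_line to forward slashes, matching the output of earlier versions.
func normalizePath(p string) string {
	return strings.ReplaceAll(p, "\\", "/")
}
```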
* proc/native: partial support for Windows async preempt mechanism
See https://github.com/golang/go/issues/36494 for a description of why
full support for 1.14 under windows is problematic.
* proc/native: disable Go 1.14 async preemption on Windows
See https://github.com/golang/go/issues/36494
* pkg/proc: Introduce Target
* pkg/proc: Remove Common.fncallEnabled
Realistically we only block it on recorded backends.
* pkg/proc: Move fncallForG to Target
* pkg/proc: Remove CommonProcess
Remove final bit of functionality stored in CommonProcess and move it to
*Target.
* pkg/proc: Add SupportsFunctionCall to Target
Use the name specified by compile unit attribute DW_AT_go_package_name,
introduced in Go 1.13, to map package names to package paths, instead of
trying to deduce it from names of types.
Also use this mapping for resolving global variables and function
expressions.
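A sketch of how the mapping could be built with debug/dwarf; the attribute
constant 0x2905 for DW_AT_go_package_name is an assumption here, and the real
code handles name collisions that this simplification ignores:
```
package pkgnamesketch

import "debug/dwarf"

// Assumed value of the Go-specific DW_AT_go_package_name attribute.
const attrGoPackageName = dwarf.Attr(0x2905)

// packageNameToPath maps package names (DW_AT_go_package_name) to package
// paths (the compile unit's DW_AT_name).
func packageNameToPath(d *dwarf.Data) map[string]string {
	m := map[string]string{}
	rdr := d.Reader()
	for {
		e, err := rdr.Next()
		if err != nil || e == nil {
			break
		}
		if e.Tag != dwarf.TagCompileUnit {
			rdr.SkipChildren()
			continue
		}
		path, _ := e.Val(dwarf.AttrName).(string)
		name, _ := e.Val(attrGoPackageName).(string)
		if path != "" && name != "" {
			m[name] = path
		}
	}
	return m
}
```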
Modifies FindFileLocation, FindFunctionLocation and LineToPC as well as
service/debugger to support inlining and introduces the concept of
logical breakpoints.
For inlined functions FindFileLocation, FindFunctionLocation and
LineToPC will now return one PC address for each inlining and one PC
for the concrete implementation of the function (if present).
A proc.Breakpoint will continue to represent a physical breakpoint, at
a single memory location.
Breakpoints returned by service/debugger, however, will represent
logical breakpoints and may be associated with multiple memory
locations and, therefore, multiple proc.Breakpoints.
The necessary logic is introduced in service/debugger so that a change
to a logical breakpoint will be mirrored to all its physical
breakpoints and physical breakpoints are aggregated into a single
logical breakpoint when returned.
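A condensed sketch of the aggregation (field names are illustrative, not the
actual service/debugger types):
```
package bpsketch

// physicalBreakpoint corresponds to a proc.Breakpoint: one memory location.
type physicalBreakpoint struct {
	Addr uint64
	Cond string
}

// logicalBreakpoint is what service/debugger returns to clients: it can own
// one physical breakpoint per inlined instance plus one for the concrete
// implementation of the function, if present.
type logicalBreakpoint struct {
	ID       int
	Cond     string
	Physical []*physicalBreakpoint
}

// setCondition mirrors a change on the logical breakpoint to every physical
// breakpoint that implements it.
func (lbp *logicalBreakpoint) setCondition(cond string) {
	lbp.Cond = cond
	for _, pbp := range lbp.Physical {
		pbp.Cond = cond
	}
}
```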
When evaluating type casts always resolve array types.
Instead of resolving them by looking up the string in debug_info,
construct a fake array type so that a type cast to an array type always
works as long as the element type exists.
We already did this for byte arrays; this commit extends it to any
array type. The reason is that we return a fake array type (one that
doesn't exist in the target program) for the backing array of a channel type.
Fixes #1736
Adds a '-r' option to the 'restart' command (and to the Restart API)
that re-records the target when using rr.
Also moves the code to delete the trace directory inside the gdbserial
package.
Trust argument order to determine argument frame layout when calling
functions; this allows calling optimized functions and removes the
special cases for runtime.mallocgc.
Fixes #1589
* proc: allow simultaneous call injection to multiple goroutines
Changes the call injection code so that we can have multiple call
injections going on at the same time as long as they happen on distinct
goroutines.
* proc: fix EvalExpressionWithCalls for constant expressions
The lack of an address for constant expressions would confuse EvalExpressionWithCalls.
Fixes #1577
* tests: fix tests for Go 1.13
- Go 1.13 doesn't autogenerate init functions anymore, tests that
expected that now fail and should be skipped.
- Plugin tests now need -gcflags='all=-N -l'; we were probably
getting lucky with -gcflags='-N -l' before.
* proc: allow signed integers as shift counts
Go 1.13 allows signed integers to be used as the right-hand side of a
shift operator; change eval to match.
* goversion: update maximum supported version
* travis: force Go to use vendor directory
Travis scripts get confused by "go: downloading" lines, the exact
reason is not clear. Testing that the vendor directory is up to date is
a good idea anyway.
The location specifier '<fnname>:0' could be used to set a breakpoint
on the entry point of the function (as opposed to locspec '<fnname>',
which sets it after the prologue).
Setting a breakpoint on an entry point is almost never useful; the way
this feature was implemented could cause it to be used accidentally, and
there are other ways to accomplish the same task (by setting a
breakpoint on the PC address directly).
Allow changing the value of a string variable to a new literal string,
which requires calling runtime.mallocgc to allocate the string into the
target process.
This means that a command like:
call f("some string")
is now supported.
Additionally the command:
call s = "some string"
is also supported.
Fixes #826
* proc: support nested function calls
Changes the code in fncall.go to support nested function calls.
This change delays argument evaluation until after we have used
the call injection protocol to allocate an argument frame. When
evaluating the parse tree of an expression we'll initiate each
function call we find on the way down and then complete the function
call on the way up.
For example, in:
f(g(x))
we will:
1. initiate the call injection protocol for f(...)
2. progress it until the point where we have space for the arguments
of 'f' (i.e. when we receive the debugCallAXCompleteCall message
from the target runtime)
3. initiate the call injection protocol for g(...)
4. progress it until the point where we have space for the arguments
of 'g'
5. copy the value of x into the argument frame of 'g'
6. finish the call to g(...)
7. copy the return value of g(x) into the argument frame of 'f'
8. finish the call to f(...)
Updates #119
* proc: bugfix: closure addr was wrong for non-closure functions
The initial implementation of the 'call' command required the
function call to be the root expression, i.e. something like:
double(3) + 1
was not allowed, because the root expression was the binary operator
'+', not the function call.
With this change expressions like the one above and others are
allowed.
This is the first step necessary to implement nested function calls
(where the result of a function call is used as argument to another
function call).
This is implemented by replacing proc.CallFunction with
proc.EvalExpressionWithCalls. EvalExpressionWithCalls will run
proc.(*EvalScope).EvalExpression in a different goroutine. This
goroutine, the 'eval' goroutine, will communicate with the main
goroutine of the debugger by means of two channels: continueRequest
and continueCompleted.
The eval goroutine evaluates the expression recursively; when
a function call is encountered it takes care of setting up the
function call on the target program and writes a request to the
continueRequest channel, which causes the 'main' goroutine to restart
the target program by calling proc.Continue.
Whenever Continue encounters a breakpoint that belongs to the
function call injection protocol (runtime.debugCallV1 and associated
functions) it writes to continueCompleted which resumes the 'eval'
goroutine.
The 'eval' goroutine takes care of implementing the function call
injection protocol.
When the expression is fully evaluated the 'eval' goroutine will
write a special message to 'continueRequest' signaling that the
expression evaluation has terminated, which will cause Continue to
return to the user.
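A stripped-down sketch of the two-channel handshake (the real code is in
pkg/proc; payload types here are placeholders):
```
package callinjectsketch

type continueRequest struct{ done bool } // done: expression evaluation finished

// evalWithCalls shows the coordination only: eval runs in its own goroutine
// and calls resume() whenever the call injection protocol needs the target to
// run; the main goroutine stands in for proc.Continue.
func evalWithCalls(eval func(resume func())) {
	continueReq := make(chan continueRequest)
	continueCompleted := make(chan struct{})

	go func() { // the 'eval' goroutine
		eval(func() {
			continueReq <- continueRequest{}
			<-continueCompleted // wait until Continue stops at a protocol breakpoint
		})
		continueReq <- continueRequest{done: true}
	}()

	for req := range continueReq { // the 'main' goroutine
		if req.done {
			return // evaluation terminated: Continue returns to the user
		}
		// ... proc.Continue would run here; on hitting a breakpoint belonging
		// to runtime.debugCallV1 it hands control back to the eval goroutine.
		continueCompleted <- struct{}{}
	}
}
```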
Updates #119
This change splits the BinaryInfo object into a slice of Image objects
containing information about the base executable and each loaded shared
library (note: go plugins are shared libraries).
Delve backends are supposed to call BinaryInfo.AddImage whenever they
detect that a new shared library has been loaded.
Member fields of BinaryInfo that are used to speed up access to dwarf
(Functions, packageVars, consts, etc...) remain part of BinaryInfo and
are updated to reference the correct image object. This simplifies this
change.
This approach has a few shortcomings:
1. Multiple shared libraries can define functions or globals with the
same name and we have no way to disambiguate between them.
2. We don't have a way to handle library unloading.
Both of those affect C shared libraries much more than they affect go
plugins. Go plugins can't be unloaded at all and a lot of name
collisions are prevented by import paths.
There's only one problem that is concerning: if two plugins both import
the same package they will end up with multiple definitions of the same
function.
For example if two plugins use fmt.Printf the final in-memory image
(and therefore our BinaryInfo object) will end up with two copies of
fmt.Printf at different memory addresses. If a user types
break fmt.Printf
a breakpoint should be created at *both* locations.
Allowing this is a relatively complex change that should be done in a
different PR than this.
For this reason I consider this approach an acceptable and sustainable
stopgap.
Updates #865
Go 1.12 introduced a change to the internal map representation where
empty map cells can be marked with a tophash value of 1 instead of just
0.
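In terms of the runtime's tophash markers the change amounts to the
following (constant names follow the runtime source; used here only for
illustration):
```
package mapsketch

const (
	emptyRest = 0 // cell is empty, and so is the rest of the bucket
	emptyOne  = 1 // cell is empty (new marker in Go 1.12)
)

// cellIsEmpty must now accept both markers when walking buckets.
func cellIsEmpty(tophash byte) bool {
	return tophash == emptyRest || tophash == emptyOne
}
```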
Fixes #1531
The repository is being switched from the personal account
github.com/derekparker/delve to the organization account
github.com/go-delve/delve. This patch updates imports and docs, while
preserving things which should not be changed such as my name in the
CHANGELOG and in TODO comments.
Users can create sparse maps in two ways, either by:
a) adding lots of entries to a map and then deleting most of them, or
b) using the make(mapType, N) expression with a very large N
When this happens reading the resulting map will be very slow
because loadMap needs to scan many buckets for each entry it finds.
Technically this is not a bug, the user just created a map that's
very sparse and therefore very slow to read. However it's very
annoying to have the debugger hang for several seconds when trying
to read the local variables just because one of them (which you
might not even be interested in) happens to be a very sparse map.
There is an easy mitigation to this problem: not reading any
additional buckets once we know that we have already read all
entries of the map, or as many entries as we need to fulfill the
MaxArrayValues parameter.
Unfortunately this is mostly useless, a VLSM (Very Large Sparse Map)
with a single entry will still be slow to access, because the single
entry in the map could easily end up in the last bucket.
The obvious solution to this problem is to set a limit to the
number of buckets we read when loading a map. However there is no
good way to set this limit.
If we hardcode it there will be no way to print maps that are beyond
whatever limit we pick.
We could let users (or clients) specify it but the meaning of such
knob would be arcane and they would have no way of picking a good
value (because there is no objectively good value for it).
The solution used in this commit is to set an arbitrary limit on
the number of buckets we read, but only when loadMap is invoked
through the API calls ListLocalVars and ListFunctionArgs. In this way
`ListLocalVars` and `ListFunctionArgs` (which are often invoked
automatically by GUI clients) remain fast even in the presence of a
VLSM, but the contents of the VLSM can still be inspected using
`EvalVariable`.
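A simplified sketch of the resulting loadMap loop, with hypothetical names
(bucket, entry, maxBuckets) standing in for the real variable loader:
```
package maploadsketch

type entry struct{ key, val string }
type bucket struct{ entries []entry }

// loadMap stops as soon as every live entry (or maxArrayValues of them) has
// been read, and additionally stops after maxBuckets buckets; maxBuckets is
// only set when the call comes from ListLocalVars/ListFunctionArgs, so
// EvalVariable can still read the whole map.
func loadMap(buckets []bucket, numEntries, maxArrayValues, maxBuckets int) []entry {
	var out []entry
	for i, b := range buckets {
		if len(out) >= numEntries || len(out) >= maxArrayValues {
			break
		}
		if maxBuckets > 0 && i >= maxBuckets {
			break
		}
		out = append(out, b.entries...)
	}
	return out
}
```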
Some tests used a fake vendor directory placed inside _fixtures to
import some support packages.
In go.mod mode vendor directories are only supported at the root of the
project, which breaks some of our tests.
Since vendor directories outside the root of the project are so rare
anyway, it's possible that a future version of go will stop supporting
them even in GOPATH mode.
Also, it was weird and unnecessary in the first place anyway.