We told clients that further loading of variables can be done by
specifying a type cast using the address of a variable that we
returned.
This does not work for registerized variables (or, in general,
variables that have a complex location expression) because we don't
give them unique addresses and we throw away the compositeMemory object
we made to read them.
This commit changes proc so that:
1. variables with a location expression divided into pieces get a unique
memory address
2. the compositeMemory object used to read them is saved for later reuse
3. when an integer is cast back into a pointer type we look through the
saved compositeMemory objects for one that covers the specified address
and use it.
The unique memory addresses we generate have the MSB set to 1. The Intel
x86-64 manual specifies that addresses in this form are reserved for
kernel memory (which we cannot read anyway), so we are guaranteed never
to generate a fake memory address that overlaps a real memory address of
the application.
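A minimal sketch of the scheme, with hypothetical names (fakeAddressBase and fakeMemoryRegistry are illustrations, not Delve's actual identifiers):

```go
package sketch

// Hypothetical sketch of the fake-address scheme: pieced variables get
// addresses in the kernel-reserved half of the address space, and casting
// an integer back to a pointer type looks the address up again.
const fakeAddressBase uint64 = 0xbeef000000000000 // MSB set: kernel-reserved on x86-64

type compositeMemory struct {
	addr uint64 // fake address assigned to this pieced location
	data []byte // bytes assembled from the register/memory pieces
}

type fakeMemoryRegistry struct {
	next  uint64
	saved []*compositeMemory
}

// register assigns a unique fake address to cm and saves it for later lookups.
func (r *fakeMemoryRegistry) register(cm *compositeMemory) uint64 {
	cm.addr = fakeAddressBase + r.next
	r.next += uint64(len(cm.data))
	r.saved = append(r.saved, cm)
	return cm.addr
}

// findCompositeMemory returns the saved compositeMemory covering addr, if any;
// used when an integer is cast back into a pointer type.
func (r *fakeMemoryRegistry) findCompositeMemory(addr uint64) *compositeMemory {
	for _, cm := range r.saved {
		if addr >= cm.addr && addr < cm.addr+uint64(len(cm.data)) {
			return cm
		}
	}
	return nil
}
```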
The unfortunate side effect of this is that it will break clients that
do not deserialize the address into a 64-bit integer. This practice is
contrary to how we defined our types and to the JSON specification at
json.org; however, it is also fairly common, because JavaScript itself
only has 53-bit integers.
We could come up with a new mechanism, but then even more old clients
would have to be changed.
Adds filtering and grouping to the goroutines command.
The current implementation of the goroutines command is modeled after
the threads command of gdb. It works well for programs that have up to
a couple dozen goroutines but becomes unusable quickly after that.
This commit adds the ability to filter and group goroutines by several
different properties, allowing a better debugging experience on
programs that have hundreds or thousands of goroutines.
If the base address isn't set then indexing and slicing will not work.
Large floating point registers already had the base set but small
general purpose registers did not.
* service/dap: add type information to dap variables
* add comment explaining map type choice
* rename to setClientCapabilities
* respond to review
* update TypeString definition
Changes the expression evaluation code so that register names, when not
shadowed by local or global variables, will evaluate to the current
value of the corresponding CPU register.
This allows greater flexibility in displaying CPU registers than is
possible with the ListRegisters API call. It also allows debugger users
to view register values even if the frontend they are using does not
implement a register view.
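For example, after this change a command such as:
print rax
evaluates to the current value of the RAX register, as long as no local or global variable named rax is in scope.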
Delve represents registerized variables (fully or partially) using
compositeMemory; implementing proc.(*compositeMemory).WriteMemory is
necessary to make SetVariable and function calls work when Go switches
to the register calling convention in 1.17.
This commit also refactors the code that converts between register
numbers and register names, moving it out of pkg/proc into a different
package.
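Roughly, a WriteMemory for composite locations has to do something like the following sketch (hypothetical types and callbacks, not the actual pkg/proc code):

```go
package sketch

// Writing through a composite (pieced) location: the written bytes are
// scattered back to the registers and memory locations that make up the
// variable.
type piece struct {
	size    int
	inReg   bool
	regNum  uint64 // register holding this piece, when inReg is true
	memAddr uint64 // target memory address of this piece, when inReg is false
}

func writeComposite(pieces []piece, data []byte,
	setReg func(regNum uint64, val []byte) error,
	writeMem func(addr uint64, val []byte) error) error {
	for _, p := range pieces {
		chunk := data[:p.size]
		data = data[p.size:]
		var err error
		if p.inReg {
			err = setReg(p.regNum, chunk) // piece lives in a CPU register
		} else {
			err = writeMem(p.memAddr, chunk) // piece lives in target memory
		}
		if err != nil {
			return err
		}
	}
	return nil
}
```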
Both structMember and findMethod implemented a depth-first search over
embedded fields, but the Go specification requires a breadth-first
search. They also allowed promotion of fields from the concrete type of
embedded interfaces, even though Go does not allow this.
Furthermore, both lacked protection against infinite recursion when a
type embeds itself and the user requests a non-existent field.
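A sketch of the breadth-first lookup with recursion protection, using simplified stand-in types rather than Delve's DWARF-based ones:

```go
package sketch

type typeDesc struct {
	name   string
	fields []field
}

type field struct {
	name     string
	typ      *typeDesc
	embedded bool
}

// findField searches typ and its embedded fields breadth-first (fields at a
// shallower embedding depth win), guarding against types that directly or
// indirectly embed themselves.
func findField(typ *typeDesc, name string) *field {
	queue := []*typeDesc{typ}
	visited := map[*typeDesc]bool{}
	for len(queue) > 0 {
		var next []*typeDesc
		for _, t := range queue {
			if visited[t] {
				continue
			}
			visited[t] = true
			for i := range t.fields {
				f := &t.fields[i]
				if f.name == name {
					return f
				}
				if f.embedded {
					next = append(next, f.typ) // search one level deeper, later
				}
			}
		}
		queue = next
	}
	return nil
}
```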
Fixes #2316
On linux we cannot read memory if the thread we use to do it is
occupied doing certain system calls. The exact conditions under which
this happens have never been clear.
This problem was worked around by using the Blocked method, which
recognized the most common circumstances in which it would happen.
However, this is a hack: Blocked returning true doesn't mean that the
problem will manifest, and Blocked returning false doesn't necessarily
mean the problem will not manifest. A side effect of this is issue
#2151, where sometimes we can't read the memory of a thread and find its
associated goroutine.
This commit fixes this problem by always reading memory using a thread
we know to be good for this, specifically the one returned by
ContinueOnce. In particular the changes are as follows:
1. Remove (ProcessInternal).CurrentThread and
(ProcessInternal).SetCurrentThread: the "current thread" becomes a
field of Target, CurrentThread becomes a (*Target) method, and
(*Target).SwitchThread basically just sets a field of Target.
2. The backends keep track of their own internal idea of what the
current thread is and use it to read memory; this is the thread they
return from ContinueOnce as trapthread.
3. The current thread in the backend and the current thread in Target
only ever get synchronized in two places: when the backend creates a
Target object, the currentThread field of Target is initialized with the
backend's current thread, and when (*Target).Restart gets called (when a
recording is rewound the currentThread used by Target might not exist
anymore).
4. We remove the MemoryReadWriter interface embedded in Thread and
instead add a Memory method to Process that returns a MemoryReadWriter.
The backends will return something here that reads memory using the
current thread saved by the backend (see the sketch after this list).
5. The Thread.Blocked method is removed
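A simplified sketch of the resulting interface shape (approximate, not the exact pkg/proc definitions):

```go
package sketch

// Simplified versions of the interfaces described in point 4.
type MemoryReadWriter interface {
	ReadMemory(buf []byte, addr uint64) (int, error)
	WriteMemory(addr uint64, data []byte) (int, error)
}

type Process interface {
	// Memory returns a MemoryReadWriter that goes through the thread the
	// backend knows to be safe for memory access (the trapthread returned
	// by ContinueOnce).
	Memory() MemoryReadWriter
}
```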
One possible problem with this change is processes that have threads
with different memory maps. As far as I can determine this could happen
on old versions of linux but this option was removed in linux 2.5.
Fixes #2151
Since proc is supposed to work independently of the target
architecture it shouldn't use architecture-dependent types, like
uintptr. For example, when reading a 64-bit core file on a 32-bit
architecture, uintptr will be 32 bits but the addresses proc needs to
represent will be 64 bits.
An internal breakpoint condition shouldn't ever error:
* use a ThreadContext to evaluate conditions if a goroutine isn't
available
* evaluate runtime.curg to a fake g variable containing only
`goid == 0` when there is no current goroutine
Fixes #2113
Changes implementations of proc.Registers interface and the
op.DwarfRegisters struct so that floating point registers can be loaded
only when they are needed.
Removes the floatingPoint parameter from proc.Thread.Registers.
This accomplishes three things:
1. it simplifies the proc.Thread.Registers interface
2. it makes it impossible to accidentally create a broken set of saved
registers or of op.DwarfRegisters by calling Registers(false)
3. it improves the general performance of Delve by avoiding loading
floating point registers as much as possible
Floating point registers are loaded under two circumstances (see the
sketch after this list):
1. When the Slice method is called with floatingPoint == true
2. When the Copy method is called
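A minimal sketch of this lazy-loading pattern, with hypothetical types (the real proc.Registers implementations are per-OS and per-architecture):

```go
package sketch

// Register is a simplified name/value pair; Delve's real type differs.
type Register struct {
	Name  string
	Value uint64
}

// lazyRegs loads floating point registers only when they are requested.
type lazyRegs struct {
	gp       []Register
	fp       []Register
	fpLoaded bool
	loadFP   func() ([]Register, error) // reads FP state from the target thread
}

// Slice returns the register list; floating point registers are loaded
// (once) only when floatingPoint is true.
func (r *lazyRegs) Slice(floatingPoint bool) ([]Register, error) {
	out := append([]Register{}, r.gp...)
	if !floatingPoint {
		return out, nil
	}
	if !r.fpLoaded {
		fp, err := r.loadFP()
		if err != nil {
			return nil, err
		}
		r.fp, r.fpLoaded = fp, true
	}
	return append(out, r.fp...), nil
}
```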
Benchmark before:
BenchmarkConditionalBreakpoints-4 1 4327350142 ns/op
Benchmark after:
BenchmarkConditionalBreakpoints-4 1 3852642917 ns/op
Updates #1549
Under some circumstances (methods with non-pointer receivers or from
embedded fields called through an interface) the compiler will
autogenerate wrapper functions.
This commit changes next, step and stepout to skip all autogenerated
wrappers.
Fixes #1908
Instead of rescanning debug_info every time we want to read a function
(either to find inlined calls or its variables), cache the tree of
dwarf.Entry that we would generate and use that.
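A rough sketch of the cache (hypothetical structure; in Delve the cached tree lives alongside the debug_info reader):

```go
package sketch

import "debug/dwarf"

// entryNode is a cached subtree of debug_info rooted at a function DIE.
type entryNode struct {
	entry    *dwarf.Entry
	children []*entryNode
}

// treeCache memoizes the dwarf.Entry tree for each function, keyed by the
// offset of the function's DIE, so debug_info is not rescanned on every lookup.
type treeCache map[dwarf.Offset]*entryNode

func (c treeCache) load(r *dwarf.Reader, off dwarf.Offset,
	build func(*dwarf.Reader, dwarf.Offset) (*entryNode, error)) (*entryNode, error) {
	if tree, ok := c[off]; ok {
		return tree, nil
	}
	tree, err := build(r, off)
	if err != nil {
		return nil, err
	}
	c[off] = tree
	return tree, nil
}
```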
Benchmark before:
BenchmarkConditionalBreakpoints-4 1 5164689165 ns/op
Benchmark after:
BenchmarkConditionalBreakpoints-4 1 4817425836 ns/op
Updates #1549
Implements debugging support for 386 on linux, with reference to the
AMD64 implementation.
There are a few remaining problems that need to be solved at another time.
1. The cgo stacktrace is not exactly as expected.
2. `core` is not implemented for now.
3. `call` is not implemented for now. Can't find `runtime·debugCallV1` or a
similar function in $GOROOT/src/runtime/asm_386.s.
Updates #20
runtime.g is a large and growing struct; we only need a few of its fields.
Instead of using loadValue to load the full contents of g, cache its
memory and then only load the fields we care about.
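A minimal sketch of the idea, with hypothetical helpers (the real code decodes fields through Delve's DWARF machinery):

```go
package sketch

import "encoding/binary"

// cachedMem holds one contiguous read of the runtime.g struct's memory.
type cachedMem struct {
	base uint64 // address of the g struct in the target process
	data []byte // its raw bytes, read once
}

// uint64Field decodes a single 8-byte field at the given offset from the
// cached buffer, avoiding another read from the target process.
func (m *cachedMem) uint64Field(offset uint64) uint64 {
	return binary.LittleEndian.Uint64(m.data[offset : offset+8])
}
```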
Benchmark before:
BenchmarkConditionalBreakpoints-4 1 14586710018 ns/op
Benchmark after:
BenchmarkConditionalBreakpoints-4 1 12476166303 ns/op
Conditional breakpoint evaluation: 1.45ms -> 1.24ms
Updates #1549
* proc: separate amd64-arch code
separate the amd64-specific stacktrace code so that arm64 stacktrace code can be added.
* proc: implement stacktrace on arm64
* delve can now use the stack and frame commands when debugging on arm64.
Co-authored-by: tykcd996 <tang.yuke@zte.com.cn>
Co-authored-by: hengwu0 <wu.heng@zte.com.cn>
* test: remove the stacktrace skip code on arm64
* add the LR DWARF register and remove the skip code for fixed tests
* proc: fix the Continue command after the hardcoded breakpoint on arm64
Arm64 uses hardware breakpoints which, unlike amd64, do not set the PC to the next instruction. We should move the PC forward for both runtime.breakpoints and hardcoded breakpoints (probably cgo).
* proc: implement cgo stacktrace on arm64
* proc: combine amd64_stack.go and arm64_stack.go file
* proc: reorganize the stacktrace code
* move the SwitchStack function into architecture-specific code
* fix Continue command after manual stop on arm64
* add timeout flag to make.go to enable infinite timeouts
Co-authored-by: aarzilli <alessandro.arzilli@gmail.com>
Co-authored-by: hengwu0 <wu.heng@zte.com.cn>
Co-authored-by: tykcd996 <56993522+tykcd996@users.noreply.github.com>
Co-authored-by: Alessandro Arzilli <alessandro.arzilli@gmail.com>
When evaluating type casts, always resolve array types.
Instead of resolving them by looking up the string in debug_info,
construct a fake array type so that a type cast to an array type always
works as long as the element type exists.
We already did this for byte arrays; this commit extends it to any
array type. The reason is that we return a fake array type (one that
doesn't exist in the target program) for the array of a channel type.
Fixes #1736
Add options to start a stacktrace from the values saved in the
runtime.g struct as well as a way to disable the stackSwitch logic and
just get a normal stacktrace.
If a closure captures a variable but also defines a variable of the
same name in its root scope, the shadowed flag would sometimes not be
appropriately applied to the captured variable.
This change:
1. sorts the variable list by depth *and* declaration line, so that
closure-captured variables always appear before other root-scope
variables, regardless of the order used by the compiler (see the sketch
after this list)
2. marks variables with the same name as shadowed even if there is only
one scope at play.
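A sketch of the sort described in point 1, using simplified stand-in entries rather than Delve's internal variable representation:

```go
package sketch

import "sort"

// variableEntry is a simplified stand-in for a local variable entry.
type variableEntry struct {
	name     string
	depth    int // lexical block depth
	declLine int
}

// sortVariables orders variables by depth first and declaration line second,
// so that closure-captured variables (declared earlier) always come before
// root-scope variables with the same name, regardless of compiler order.
func sortVariables(vars []variableEntry) {
	sort.SliceStable(vars, func(i, j int) bool {
		if vars[i].depth != vars[j].depth {
			return vars[i].depth < vars[j].depth
		}
		return vars[i].declLine < vars[j].declLine
	})
}
```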
This fixes the problem but as a side effect:
1. programs compiled with Go prior to version 1.9 will have the
shadowed flag applied arbitrarily (previously the shadowed flag was not
applied at all)
2. programs compiled with Go prior to version 1.11 will still exhibit
the bug, as they do not have DeclLine information.
Fixes #1672
Moves EvalScope methods to the proper file and organizes everything
together. Also turns some EvalScope methods into plain functions.
Adds variable flags to mark variables that are allocated in a register
(and have no address) and variables that we read as the result of a
function call (and are allocated on a stack that no longer exists when
we show them to the user).
Increases the maximum string length from 64 to 1MB when loading strings
for a binary operator, and delays the loading until it's necessary.
This ensures that comparisons between strings will always succeed in
reasonable situations.
Fixes #1615
* proc: allow simultaneous call injection to multiple goroutines
Changes the call injection code so that we can have multiple call
injections going on at the same time as long as they happen on distinct
goroutines.
* proc: fix EvalExpressionWithCalls for constant expressions
The lack of an address for constant expressions would confuse
EvalExpressionWithCalls.
Fixes #1577
Allow changing the value of a string variable to a new literal string,
which requires calling runtime.mallocgc to allocate the string into the
target process.
This means that a command like:
call f("some string")
is now supported.
Additionally the command:
call s = "some string"
is also supported.
Fixes #826
* proc: support nested function calls
Changes the code in fncall.go to support nested function calls.
This change delays argument evaluation until after we have used
the call injection protocol to allocate an argument frame. When
evaluating the parse tree of an expression we'll initiate each
function call we find on the way down and then complete the function
call on the way up.
For example, in:
f(g(x))
we will:
1. initiate the call injection protocol for f(...)
2. progress it until the point where we have space for the arguments
of 'f' (i.e. when we receive the debugCallAXCompleteCall message
from the target runtime)
3. initiate the call injection protocol for g(...)
4. progress it until the point where we have space for the arguments
of 'g'
5. copy the value of x into the argument frame of 'g'
6. finish the call to g(...)
7. copy the return value of g(x) into the argument frame of 'f'
8. finish the call to f(...)
Updates #119
* proc: bugfix: closure addr was wrong for non-closure functions
The initial implementation of the 'call' command required the
function call to be the root expression, i.e. something like:
double(3) + 1
was not allowed, because the root expression was the binary operator
'+', not the function call.
With this change expressions like the one above and others are
allowed.
This is the first step necessary to implement nested function calls
(where the result of a function call is used as argument to another
function call).
This is implemented by replacing proc.CallFunction with
proc.EvalExpressionWithCalls. EvalExpressionWithCalls will run
proc.(*EvalScope).EvalExpression in a different goroutine. This
goroutine, the 'eval' goroutine, will communicate with the main
goroutine of the debugger by means of two channels: continueRequest
and continueCompleted.
The eval goroutine evaluates the expression recursively; when
a function call is encountered it takes care of setting up the
function call in the target program and writes a request to the
continueRequest channel, which causes the 'main' goroutine to resume
the target program by calling proc.Continue.
Whenever Continue encounters a breakpoint that belongs to the
function call injection protocol (runtime.debugCallV1 and associated
functions), it writes to continueCompleted, which resumes the 'eval'
goroutine.
The 'eval' goroutine takes care of implementing the function call
injection protocol.
When the expression is fully evaluated the 'eval' goroutine will
write a special message to 'continueRequest' signaling that the
expression evaluation is terminated which will cause Continue to
return to the user.
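A bare-bones sketch of this goroutine structure (hypothetical message types; the real protocol carries considerably more state):

```go
package sketch

// continueRequest asks the main goroutine to resume the target;
// done signals that expression evaluation has finished.
type continueRequest struct{ done bool }

type evaluator struct {
	continueRequest   chan continueRequest
	continueCompleted chan struct{}
}

// injectCall runs in the 'eval' goroutine: it sets up a function call in the
// target, asks the main goroutine to resume the target, then waits until a
// call-injection breakpoint is hit before carrying on.
func (e *evaluator) injectCall(setupCall func()) {
	setupCall()
	e.continueRequest <- continueRequest{}
	<-e.continueCompleted // resumed when Continue hits an injection breakpoint
}

// finish tells the main goroutine that evaluation is complete so that
// Continue can return to the user.
func (e *evaluator) finish() {
	e.continueRequest <- continueRequest{done: true}
}
```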
Updates #119
This change splits the BinaryInfo object into a slice of Image objects
containing information about the base executable and each loaded shared
library (note: go plugins are shared libraries).
Delve backends are supposed to call BinaryInfo.AddImage whenever they
detect that a new shared library has been loaded.
Member fields of BinaryInfo that are used to speed up access to dwarf
(Functions, packageVars, consts, etc...) remain part of BinaryInfo and
are updated to reference the correct image object. This simplifies this
change.
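In rough outline (field and method shapes are approximate, not the exact Delve definitions):

```go
package sketch

// Image describes one loaded object file: the base executable or a shared library.
type Image struct {
	Path       string
	StaticBase uint64 // load address applied to this image's DWARF addresses
}

// BinaryInfo holds one Image per loaded object instead of assuming a single
// executable; backends add an Image when they detect a newly loaded library.
type BinaryInfo struct {
	Images []*Image
}

func (bi *BinaryInfo) AddImage(path string, staticBase uint64) {
	bi.Images = append(bi.Images, &Image{Path: path, StaticBase: staticBase})
}
```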
This approach has a few shortcomings:
1. Multiple shared libraries can define functions or globals with the
same name and we have no way to disambiguate between them.
2. We don't have a way to handle library unloading.
Both of those affect C shared libraries much more than they affect go
plugins. Go plugins can't be unloaded at all and a lot of name
collisions are prevented by import paths.
There's only one problem that is concerning: if two plugins both import
the same package they will end up with multiple definitions of the same
function.
For example, if two plugins use fmt.Printf the final in-memory image
(and therefore our BinaryInfo object) will end up with two copies of
fmt.Printf at different memory addresses. If a user types
break fmt.Printf
a breakpoint should be created at *both* locations.
Allowing this is a relatively complex change that should be done in a
different PR from this one.
For this reason I consider this approach an acceptable and sustainable
stopgap.
Updates #865
Go 1.12 introduced a change to the internal map representation where
empty map cells can be marked with a tophash value of 1 instead of just
0.
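A sketch of the corresponding check when walking a map's buckets; the values 0 and 1 come from the description above, and the constant names mirror the runtime's (assumed here):

```go
package sketch

// Tophash values that mark an empty cell: before Go 1.12 only 0 was used,
// Go 1.12 added 1 as a second "empty" marker.
const (
	emptyRest = 0 // empty, and no more non-empty cells follow in this bucket chain
	emptyOne  = 1 // empty
)

// cellIsEmpty reports whether a map bucket cell should be skipped when
// iterating over the target program's map.
func cellIsEmpty(tophash uint8) bool {
	return tophash <= emptyOne
}
```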
Fixes #1531