// delve/pkg/proc/stack.go

package proc

import (
"debug/dwarf"
"errors"
"fmt"
"go/constant"
2017-02-08 00:23:47 +00:00
"github.com/go-delve/delve/pkg/dwarf/frame"
"github.com/go-delve/delve/pkg/dwarf/op"
"github.com/go-delve/delve/pkg/dwarf/reader"
)

// This code is partly adapted from runtime.gentraceback in
// $GOROOT/src/runtime/traceback.go

// Stackframe represents a frame in a system stack.
//
// Each stack frame has two locations Current and Call.
//
// For the topmost stackframe Current and Call are the same location.
//
// For stackframes after the first Current is the location corresponding to
// the return address and Call is the location of the CALL instruction that
// was last executed on the frame. Note however that Call.PC is always equal
// to Current.PC, because finding the correct value for Call.PC would
// require disassembling each function in the stacktrace.
//
// For synthetic stackframes generated for inlined function calls Current.Fn
// is the function containing the inlining and Call.Fn is the inlined
// function.
type Stackframe struct {
Current, Call Location
// Frame registers.
Regs op.DwarfRegisters
// High address of the stack.
stackHi uint64
// Return address for this stack frame (as read from the stack frame itself).
Ret uint64
// Address of the memory location containing the return address
addrret uint64
// Err is set if an error occurred during the stacktrace
Err error
// SystemStack is true if this frame belongs to a system stack.
SystemStack bool
// Inlined is true if this frame is actually an inlined call.
Inlined bool
// Bottom is true if this is the bottom of the stack
Bottom bool
// lastpc is a memory address guaranteed to belong to the last instruction
// executed in this stack frame.
// For the topmost stack frame this will be the same as Current.PC and
// Call.PC, for other stack frames it will usually be Current.PC-1, but
// could be different when inlined calls are involved in the stacktrace.
// Note that this address isn't guaranteed to belong to the start of an
// instruction and, for this reason, should not be propagated outside of
// pkg/proc.
// Use this value to determine active lexical scopes for the stackframe.
lastpc uint64
// TopmostDefer is the defer that would be at the top of the stack when a
// panic unwind would get to this call frame, in other words it's the first
// deferred function that will be called if the runtime unwinds past this
// call frame.
TopmostDefer *Defer
// Defers is the list of functions deferred by this stack frame (so far).
Defers []*Defer
}
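
// The distinction between Current and Call matters mostly when consuming a
// stacktrace. A minimal sketch of walking one (assuming a []Stackframe
// obtained from Stacktrace; the output format is illustrative, not Delve's
// actual output):
//
//	for _, frame := range frames {
//		loc := frame.Call // where the frame logically is in the source
//		fmt.Printf("%#x %s:%d", loc.PC, loc.File, loc.Line)
//		if frame.Inlined {
//			fmt.Printf(" (inlined into %s)", frame.Current.Fn.Name)
//		}
//		fmt.Println()
//	}
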
// FrameOffset returns the address of the stack frame, absolute for system
// stack frames or as an offset from stackHi for goroutine stacks (a
// negative value).
func (frame *Stackframe) FrameOffset() int64 {
if frame.SystemStack {
return frame.Regs.CFA
}
return frame.Regs.CFA - int64(frame.stackHi)
}

// FramePointerOffset returns the value of the frame pointer, absolute for
// system stack frames or as an offset from stackHi for goroutine stacks (a
// negative value).
func (frame *Stackframe) FramePointerOffset() int64 {
if frame.SystemStack {
return int64(frame.Regs.BP())
}
return int64(frame.Regs.BP()) - int64(frame.stackHi)
}
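
// For example, on a goroutine stack a frame with FrameOffset() == -32 has
// its CFA 32 bytes below stackHi; because the offsets are relative to
// stackHi they remain comparable between frames even after the runtime
// moves the goroutine stack.
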
// ThreadStacktrace returns the stack trace for the given thread.
// Note the locations in the array are return addresses, not call addresses.
func ThreadStacktrace(thread Thread, depth int) ([]Stackframe, error) {
g, _ := GetG(thread)
if g == nil {
regs, err := thread.Registers()
if err != nil {
return nil, err
}
so := thread.BinInfo().PCToImage(regs.PC())
it := newStackIterator(thread.BinInfo(), thread.ProcessMemory(), thread.BinInfo().Arch.RegistersToDwarfRegisters(so.StaticBase, regs), 0, nil, -1, nil, 0)
return it.stacktrace(depth)
}
return g.Stacktrace(depth, 0)
}
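
// A minimal usage sketch from outside pkg/proc (error handling shortened;
// "thread" is assumed to be a valid Thread of the target process):
//
//	frames, err := proc.ThreadStacktrace(thread, 50)
//	if err != nil {
//		return err
//	}
//	for _, frame := range frames {
//		fmt.Printf("%s:%d\n", frame.Call.File, frame.Call.Line)
//	}
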
func (g *G) stackIterator(opts StacktraceOptions) (*stackIterator, error) {
stkbar, err := g.stkbar()
if err != nil {
return nil, err
}
bi := g.variable.bi
if g.Thread != nil {
regs, err := g.Thread.Registers()
if err != nil {
return nil, err
}
so := bi.PCToImage(regs.PC())
return newStackIterator(
bi, g.variable.mem,
bi.Arch.RegistersToDwarfRegisters(so.StaticBase, regs),
g.stack.hi, stkbar, g.stkbarPos, g, opts), nil
}
so := g.variable.bi.PCToImage(g.PC)
return newStackIterator(
bi, g.variable.mem,
bi.Arch.addrAndStackRegsToDwarfRegisters(so.StaticBase, g.PC, g.SP, g.BP, g.LR),
g.stack.hi, stkbar, g.stkbarPos, g, opts), nil
}
type StacktraceOptions uint16
const (
// StacktraceReadDefers requests a stacktrace decorated with deferred calls
// for each frame.
StacktraceReadDefers StacktraceOptions = 1 << iota
// StacktraceSimple requests a stacktrace where no stack switches will be
// attempted.
StacktraceSimple
// StacktraceG requests a stacktrace starting with the register
// values saved in the runtime.g structure.
StacktraceG
)
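
// The options are bit flags and can be combined with bitwise OR, e.g. to
// request a stacktrace that starts from the values saved in the g struct
// and is also decorated with deferred calls:
//
//	opts := proc.StacktraceG | proc.StacktraceReadDefers
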
// Stacktrace returns the stack trace for a goroutine.
// Note the locations in the array are return addresses, not call addresses.
func (g *G) Stacktrace(depth int, opts StacktraceOptions) ([]Stackframe, error) {
it, err := g.stackIterator(opts)
if err != nil {
return nil, err
}
frames, err := it.stacktrace(depth)
if err != nil {
return nil, err
}
if opts&StacktraceReadDefers != 0 {
g.readDefers(frames)
}
return frames, nil
}
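
// A sketch of consuming the defer decorations requested through
// StacktraceReadDefers (illustrative only):
//
//	frames, _ := g.Stacktrace(50, proc.StacktraceReadDefers)
//	for _, frame := range frames {
//		for _, d := range frame.Defers {
//			fmt.Printf("defer added at %#x, will call %#x\n", d.DeferPC, d.DeferredPC)
//		}
//	}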

// NullAddrError is an error for a null address.
type NullAddrError struct{}
func (n NullAddrError) Error() string {
return "NULL address"
}

// stackIterator holds information required to iterate and walk the
// program stack.
type stackIterator struct {
pc uint64
top bool
atend bool
frame Stackframe
bi *BinaryInfo
mem MemoryReadWriter
err error
stackhi uint64
systemstack bool
stackBarrierPC uint64
stkbar []savedLR
// regs is the register set for the current frame
regs op.DwarfRegisters
g *G // the goroutine being stacktraced, nil if we are stacktracing a goroutine-less thread
g0_sched_sp uint64 // value of g0.sched.sp (see comments around its use)
g0_sched_sp_loaded bool // g0_sched_sp was loaded from g0
opts StacktraceOptions
}
type savedLR struct {
ptr uint64
val uint64
}
func newStackIterator(bi *BinaryInfo, mem MemoryReadWriter, regs op.DwarfRegisters, stackhi uint64, stkbar []savedLR, stkbarPos int, g *G, opts StacktraceOptions) *stackIterator {
stackBarrierFunc := bi.LookupFunc["runtime.stackBarrier"] // stack barriers were removed in Go 1.9
var stackBarrierPC uint64
if stackBarrierFunc != nil && stkbar != nil {
stackBarrierPC = stackBarrierFunc.Entry
fn := bi.PCToFunc(regs.PC())
if fn != nil && fn.Name == "runtime.stackBarrier" {
// We caught the goroutine as it's executing the stack barrier; we must
// determine whether g.stkbarPos has already been incremented.
if len(stkbar) > 0 && stkbar[stkbarPos].ptr < regs.SP() {
// runtime.stackBarrier has not incremented stkbarPos.
} else if stkbarPos > 0 && stkbar[stkbarPos-1].ptr < regs.SP() {
// runtime.stackBarrier has incremented stkbarPos.
stkbarPos--
} else {
return &stackIterator{err: fmt.Errorf("failed to unwind through stackBarrier at SP %x", regs.SP())}
}
}
stkbar = stkbar[stkbarPos:]
}
systemstack := true
if g != nil {
systemstack = g.SystemStack
}
return &stackIterator{pc: regs.PC(), regs: regs, top: true, bi: bi, mem: mem, err: nil, atend: false, stackhi: stackhi, stackBarrierPC: stackBarrierPC, stkbar: stkbar, systemstack: systemstack, g: g, opts: opts}
}

// Next points the iterator to the next stack frame.
func (it *stackIterator) Next() bool {
if it.err != nil || it.atend {
return false
}
callFrameRegs, ret, retaddr := it.advanceRegs()
it.frame = it.newStackframe(ret, retaddr)
if it.stkbar != nil && it.frame.Ret == it.stackBarrierPC && it.frame.addrret == it.stkbar[0].ptr {
// Skip stack barrier frames
it.frame.Ret = it.stkbar[0].val
it.stkbar = it.stkbar[1:]
}
if it.opts&StacktraceSimple == 0 {
if it.bi.Arch.switchStack(it, &callFrameRegs) {
return true
}
}
if it.frame.Ret <= 0 {
it.atend = true
return true
}
it.top = false
it.pc = it.frame.Ret
it.regs = callFrameRegs
return true
}
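
// switchToGoroutineStack resumes the unwind on the goroutine stack, loading
// PC, SP, BP (and LR on arm64) from the values saved in the runtime.g
// struct. It is used when the unwind needs to move from a system stack back
// to the goroutine stack (e.g. by the StacktraceG option and the
// architecture-specific switchStack logic).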
func (it *stackIterator) switchToGoroutineStack() {
it.systemstack = false
it.top = false
it.pc = it.g.PC
it.regs.Reg(it.regs.SPRegNum).Uint64Val = it.g.SP
it.regs.AddReg(it.regs.BPRegNum, op.DwarfRegisterFromUint64(it.g.BP))
if it.bi.Arch.Name == "arm64" {
it.regs.Reg(it.regs.LRRegNum).Uint64Val = it.g.LR
}
}

// Frame returns the frame the iterator is pointing at.
func (it *stackIterator) Frame() Stackframe {
it.frame.Bottom = it.atend
return it.frame
}

// Err returns the error encountered during stack iteration.
func (it *stackIterator) Err() error {
return it.err
}

// frameBase calculates the DWARF frame base pseudo-register for fn in the
// current frame.
func (it *stackIterator) frameBase(fn *Function) int64 {
dwarfTree, err := fn.cu.image.getDwarfTree(fn.offset)
if err != nil {
return 0
}
fb, _, _, _ := it.bi.Location(dwarfTree.Entry, dwarf.AttrFrameBase, it.pc, it.regs)
return fb
}
func (it *stackIterator) newStackframe(ret, retaddr uint64) Stackframe {
if retaddr == 0 {
it.err = NullAddrError{}
return Stackframe{}
}
f, l, fn := it.bi.PCToLine(it.pc)
if fn == nil {
f = "?"
l = -1
} else {
it.regs.FrameBase = it.frameBase(fn)
}
r := Stackframe{Current: Location{PC: it.pc, File: f, Line: l, Fn: fn}, Regs: it.regs, Ret: ret, addrret: retaddr, stackHi: it.stackhi, SystemStack: it.systemstack, lastpc: it.pc}
r.Call = r.Current
if !it.top && r.Current.Fn != nil && it.pc != r.Current.Fn.Entry {
// if the return address is the entry point of the function that
// contains it then this is some kind of fake return frame (for example
// runtime.sigreturn) that didn't actually call the current frame;
// attempting to get the location of the CALL instruction would just
// obfuscate what's going on, since there is no CALL instruction.
switch r.Current.Fn.Name {
case "runtime.mstart", "runtime.systemstack_switch":
// these frames are inserted by runtime.systemstack and there is no CALL
// instruction to look for at pc - 1
default:
r.lastpc = it.pc - 1
r.Call.File, r.Call.Line = r.Current.Fn.cu.lineInfo.PCToLine(r.Current.Fn.Entry, it.pc-1)
}
}
return r
}
func (it *stackIterator) stacktrace(depth int) ([]Stackframe, error) {
if depth < 0 {
return nil, errors.New("negative maximum stack depth")
}
if it.opts&StacktraceG != 0 && it.g != nil {
it.switchToGoroutineStack()
it.top = true
}
frames := make([]Stackframe, 0, depth+1)
for it.Next() {
frames = it.appendInlineCalls(frames, it.Frame())
if len(frames) >= depth+1 {
break
}
}
if err := it.Err(); err != nil {
if len(frames) == 0 {
return nil, err
}
frames = append(frames, Stackframe{Err: err})
}
return frames, nil
}

func (it *stackIterator) appendInlineCalls(frames []Stackframe, frame Stackframe) []Stackframe {
if frame.Call.Fn == nil {
return append(frames, frame)
}
if frame.Call.Fn.cu.lineInfo == nil {
return append(frames, frame)
}
callpc := frame.Call.PC
if len(frames) > 0 {
callpc--
}
dwarfTree, err := frame.Call.Fn.cu.image.getDwarfTree(frame.Call.Fn.offset)
if err != nil {
return append(frames, frame)
}
for _, entry := range reader.InlineStack(dwarfTree, callpc) {
fnname, okname := entry.Val(dwarf.AttrName).(string)
fileidx, okfileidx := entry.Val(dwarf.AttrCallFile).(int64)
line, okline := entry.Val(dwarf.AttrCallLine).(int64)
if !okname || !okfileidx || !okline {
break
}
if fileidx-1 < 0 || fileidx-1 >= int64(len(frame.Current.Fn.cu.lineInfo.FileNames)) {
break
}
inlfn := &Function{Name: fnname, Entry: frame.Call.Fn.Entry, End: frame.Call.Fn.End, offset: entry.Offset, cu: frame.Call.Fn.cu}
frames = append(frames, Stackframe{
Current: frame.Current,
Call: Location{
frame.Call.PC,
frame.Call.File,
frame.Call.Line,
inlfn,
},
Regs: frame.Regs,
stackHi: frame.stackHi,
Ret: frame.Ret,
addrret: frame.addrret,
Err: frame.Err,
SystemStack: frame.SystemStack,
Inlined: true,
lastpc: frame.lastpc,
})
frame.Call.File = frame.Current.Fn.cu.lineInfo.FileNames[fileidx-1].Path
frame.Call.Line = int(line)
}
return append(frames, frame)
}
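
// For example, if f calls g and the compiler inlined g at f's call site, a
// physical frame stopped inside g's inlined body expands to two entries
// (values are illustrative):
//
//	frames[0]: Current.Fn = f, Call.Fn = g (position inside g's body), Inlined = true
//	frames[1]: Current.Fn = f, Call.Fn = f (call site of g in f),      Inlined = false
//
// Both entries share the same registers, return address and lastpc; only
// the Call location differs.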

// advanceRegs calculates the register set of the caller (callFrameRegs)
// using it.regs and the frame descriptor entry for the current stack
// frame; it.regs.CFA is updated as a side effect.
func (it *stackIterator) advanceRegs() (callFrameRegs op.DwarfRegisters, ret uint64, retaddr uint64) {
fde, err := it.bi.frameEntries.FDEForPC(it.pc)
var framectx *frame.FrameContext
if _, nofde := err.(*frame.ErrNoFDEForPC); nofde {
framectx = it.bi.Arch.fixFrameUnwindContext(nil, it.pc, it.bi)
} else {
framectx = it.bi.Arch.fixFrameUnwindContext(fde.EstablishFrame(it.pc), it.pc, it.bi)
}
cfareg, err := it.executeFrameRegRule(0, framectx.CFA, 0)
if cfareg == nil {
it.err = fmt.Errorf("CFA becomes undefined at PC %#x", it.pc)
return op.DwarfRegisters{}, 0, 0
}
it.regs.CFA = int64(cfareg.Uint64Val)
callimage := it.bi.PCToImage(it.pc)
callFrameRegs = op.DwarfRegisters{StaticBase: callimage.StaticBase, ByteOrder: it.regs.ByteOrder, PCRegNum: it.regs.PCRegNum, SPRegNum: it.regs.SPRegNum, BPRegNum: it.regs.BPRegNum, LRRegNum: it.regs.LRRegNum}
// According to the standard the compiler should be responsible for emitting
// rules for the RSP register so that it can then be used to calculate CFA,
// however neither Go nor GCC do this.
// In the following line we copy GDB's behaviour by assuming this is
// implicit.
// See also the comment in dwarf2_frame_default_init in
// $GDB_SOURCE/dwarf2-frame.c
callFrameRegs.AddReg(callFrameRegs.SPRegNum, cfareg)
for i, regRule := range framectx.Regs {
reg, err := it.executeFrameRegRule(i, regRule, it.regs.CFA)
callFrameRegs.AddReg(i, reg)
if i == framectx.RetAddrReg {
if reg == nil {
if err == nil {
err = fmt.Errorf("undefined return address at %#x", it.pc)
}
it.err = err
} else {
ret = reg.Uint64Val
}
retaddr = uint64(it.regs.CFA + regRule.Offset)
}
}
if it.bi.Arch.Name == "arm64" {
if ret == 0 && it.regs.Reg(it.regs.LRRegNum) != nil {
ret = it.regs.Reg(it.regs.LRRegNum).Uint64Val
}
}
return callFrameRegs, ret, retaddr
}
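
// As a concrete example of the rules above: on amd64, at a function's entry
// point the FDE typically says CFA = RSP + 8 (the CALL pushed the return
// address), the return address rule is [CFA - 8], and after the prologue
// runs the rule typically becomes CFA = RSP + framesize. advanceRegs
// evaluates these rules to recover the caller's SP (== CFA) and PC.
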
func (it *stackIterator) executeFrameRegRule(regnum uint64, rule frame.DWRule, cfa int64) (*op.DwarfRegister, error) {
switch rule.Rule {
default:
fallthrough
case frame.RuleUndefined:
return nil, nil
case frame.RuleSameVal:
if it.regs.Reg(regnum) == nil {
return nil, nil
}
reg := *it.regs.Reg(regnum)
return &reg, nil
case frame.RuleOffset:
return it.readRegisterAt(regnum, uint64(cfa+rule.Offset))
case frame.RuleValOffset:
return op.DwarfRegisterFromUint64(uint64(cfa + rule.Offset)), nil
case frame.RuleRegister:
return it.regs.Reg(rule.Reg), nil
case frame.RuleExpression:
v, _, err := op.ExecuteStackProgram(it.regs, rule.Expression, it.bi.Arch.PtrSize())
if err != nil {
return nil, err
}
return it.readRegisterAt(regnum, uint64(v))
case frame.RuleValExpression:
v, _, err := op.ExecuteStackProgram(it.regs, rule.Expression, it.bi.Arch.PtrSize())
if err != nil {
return nil, err
}
return op.DwarfRegisterFromUint64(uint64(v)), nil
case frame.RuleArchitectural:
return nil, errors.New("architectural frame rules are unsupported")
case frame.RuleCFA:
if it.regs.Reg(rule.Reg) == nil {
return nil, nil
}
return op.DwarfRegisterFromUint64(uint64(int64(it.regs.Uint64Val(rule.Reg)) + rule.Offset)), nil
case frame.RuleFramePointer:
curReg := it.regs.Reg(rule.Reg)
if curReg == nil {
return nil, nil
}
if curReg.Uint64Val <= uint64(cfa) {
return it.readRegisterAt(regnum, curReg.Uint64Val)
}
newReg := *curReg
return &newReg, nil
}
}
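
// For reference, these rules correspond to the DWARF CFI instructions that
// produced them: e.g. DW_CFA_offset yields RuleOffset (the register was
// saved at CFA+offset), DW_CFA_register yields RuleRegister and
// DW_CFA_expression yields RuleExpression. RuleFramePointer is specific to
// Delve's frame package and is not part of the standard DWARF rule set.
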
func (it *stackIterator) readRegisterAt(regnum uint64, addr uint64) (*op.DwarfRegister, error) {
buf := make([]byte, it.bi.Arch.regSize(regnum))
_, err := it.mem.ReadMemory(buf, addr)
if err != nil {
return nil, err
}
return op.DwarfRegisterFromBytes(buf), nil
}
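
// loadG0SchedSP loads the value of g0.sched.sp (the stack pointer saved in
// the current M's g0) the first time it is needed and caches it; it is
// consulted by the architecture-specific stack switching logic (see the
// comments around g0_sched_sp).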
func (it *stackIterator) loadG0SchedSP() {
if it.g0_sched_sp_loaded {
return
}
it.g0_sched_sp_loaded = true
if it.g != nil {
mvar, _ := it.g.variable.structMember("m")
if mvar != nil {
g0var, _ := mvar.structMember("g0")
if g0var != nil {
g0, _ := g0var.parseG()
if g0 != nil {
it.g0_sched_sp = g0.SP
}
}
}
}
}

// Defer represents one deferred call.
type Defer struct {
DeferredPC uint64 // Value of field _defer.fn.fn, the deferred function
DeferPC uint64 // PC address of instruction that added this defer
SP uint64 // Value of SP register when this function was deferred (this field gets adjusted when the stack is moved to match the new stack space)
link *Defer // Next deferred function
argSz int64
variable *Variable
Unreadable error
}

// readDefers decorates the frames with the functions deferred at each stack frame.
func (g *G) readDefers(frames []Stackframe) {
curdefer := g.Defer()
i := 0
// scan frames and the curdefer linked list simultaneously, assigning
// defers to their associated frames.
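// For example, with these (illustrative) addresses, frames[0] being the
// innermost frame:
//
//	frames: f2 CFA=0xc150, f1 CFA=0xc200, main CFA=0xc250
//	defers: SP=0xc120 -> SP=0xc180 -> SP=0xc220
//
// the first defer (SP 0xc120 < f2's CFA) is assigned to f2, the second to
// f1 and the third to main.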
for {
if curdefer == nil || i >= len(frames) {
return
}
if curdefer.Unreadable != nil {
// Current defer is unreadable, stick it into the first available frame
// (so that it can be reported to the user) and exit
frames[i].Defers = append(frames[i].Defers, curdefer)
return
}
if frames[i].Err != nil {
return
}
if frames[i].TopmostDefer == nil {
frames[i].TopmostDefer = curdefer
}
if frames[i].SystemStack || curdefer.SP >= uint64(frames[i].Regs.CFA) {
// frames[i].Regs.CFA is the value that SP had before the function of
// frames[i] was called.
// This means that when curdefer.SP == frames[i].Regs.CFA then curdefer
// was added by the previous frame.
//
// curdefer.SP < frames[i].Regs.CFA means curdefer was added by a
// function further down the stack.
//
// SystemStack frames live on a different physical stack and can't be
// compared with deferred frames.
i++
} else {
frames[i].Defers = append(frames[i].Defers, curdefer)
curdefer = curdefer.Next()
}
}
}
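
// load reads the fields of the runtime._defer struct that d.variable
// points to, populating DeferredPC, DeferPC, SP and the argument size.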
func (d *Defer) load() {
d.variable.loadValue(LoadConfig{false, 1, 0, 0, -1, 0})
if d.variable.Unreadable != nil {
d.Unreadable = d.variable.Unreadable
return
}
fnvar := d.variable.fieldVariable("fn").maybeDereference()
if fnvar.Addr != 0 {
fnvar = fnvar.loadFieldNamed("fn")
if fnvar.Unreadable == nil {
d.DeferredPC, _ = constant.Uint64Val(fnvar.Value)
}
}
d.DeferPC, _ = constant.Uint64Val(d.variable.fieldVariable("pc").Value)
d.SP, _ = constant.Uint64Val(d.variable.fieldVariable("sp").Value)
d.argSz, _ = constant.Int64Val(d.variable.fieldVariable("siz").Value)
linkvar := d.variable.fieldVariable("link").maybeDereference()
if linkvar.Addr != 0 {
d.link = &Defer{variable: linkvar}
}
}

// errSPDecreased is used when (*Defer).Next detects a corrupted linked
// list, specifically when after following a link pointer the value of SP
// decreases rather than increasing or staying the same (the defer list is
// a LIFO list: nodes further down the list were added by function calls
// further down the call stack and therefore the SP should always
// increase).
var errSPDecreased = errors.New("corrupted defer list: SP decreased")
// Next returns the next defer in the linked list
func (d *Defer) Next() *Defer {
if d.link == nil {
return nil
}
d.link.load()
if d.link.SP < d.SP {
d.link.Unreadable = errSPDecreased
}
return d.link
}
// EvalScope returns an EvalScope relative to the argument frame of this deferred call.
// The argument frame of a deferred call is stored in memory immediately
// after the _defer struct header.
func (d *Defer) EvalScope(thread Thread) (*EvalScope, error) {
scope, err := GoroutineScope(thread)
if err != nil {
return nil, fmt.Errorf("could not get scope: %v", err)
}
bi := thread.BinInfo()
scope.PC = d.DeferredPC
scope.File, scope.Line, scope.Fn = bi.PCToLine(d.DeferredPC)
if scope.Fn == nil {
return nil, fmt.Errorf("could not find function at %#x", d.DeferredPC)
}
// The arguments are stored immediately after the defer header struct, i.e.
// addr+sizeof(_defer).
if !bi.Arch.usesLR {
// On architectures that don't have a link register CFA is always the address of the first
// argument, so that's what we use for the value of CFA.
// For SP we use CFA minus the size of one pointer because that would be
// the space occupied by pushing the return address on the stack during the
// CALL.
scope.Regs.CFA = (int64(d.variable.Addr) + d.variable.RealType.Common().ByteSize)
scope.Regs.Reg(scope.Regs.SPRegNum).Uint64Val = uint64(scope.Regs.CFA - int64(bi.Arch.PtrSize()))
} else {
// On architectures that have a link register CFA and SP have the same
// value but the address of the first argument is at CFA+ptrSize so we set
// CFA to the start of the argument frame minus one pointer size.
scope.Regs.CFA = int64(d.variable.Addr) + d.variable.RealType.Common().ByteSize - int64(bi.Arch.PtrSize())
scope.Regs.Reg(scope.Regs.SPRegNum).Uint64Val = uint64(scope.Regs.CFA)
}
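
// Concretely, on amd64 (no link register, 8 byte pointers) a _defer header
// of size 112 at address 0xc000066e00 yields (the size is illustrative,
// the real ByteSize comes from DWARF):
//
//	CFA = 0xc000066e00 + 112 = 0xc000066e70  (address of the first argument)
//	SP  = CFA - 8            = 0xc000066e68  (as if a return address had been pushed)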
rdr := scope.Fn.cu.image.dwarfReader
rdr.Seek(scope.Fn.offset)
e, err := rdr.Next()
if err != nil {
return nil, fmt.Errorf("could not read DWARF function entry: %v", err)
}
scope.Regs.FrameBase, _, _, _ = bi.Location(e, dwarf.AttrFrameBase, scope.PC, scope.Regs)
scope.Mem = cacheMemory(scope.Mem, uint64(scope.Regs.CFA), int(d.argSz))
return scope, nil
}