How does the Golang Scheduler work

Diagnostics

Introduction

The Go ecosystem offers a wide range of interfaces and tools for diagnosing logic and performance problems in Go programs. This article lists the available tools and helps Go users choose the right one for their particular problem.

Diagnostic tools can be divided into the following categories:

  • Profile measurement (profiling): Profiling tools analyze the complexity and costs of a Go program, such as memory usage or frequently called functions, in order to locate "expensive" sections of a program.
  • Tracing: Tracing is a way of instrumenting code to analyze latency throughout the lifecycle of a function call or a user request. Traces provide an overview of how much each component contributes to the total latency of a system. Traces can span several Go processes.
  • Debugging: Debugging allows you to pause Go programs and examine their execution. Program state and flow can be checked in this way.
  • Runtime states and runtime events: The collection and analysis of runtime states and events offers a high-level view of the health of Go programs. Spikes and dips in these metrics indicate changes in throughput, utilization, and performance.

Note: Individual diagnostic tools can interfere with one another. For example, detailed analysis of memory usage skews the analysis of processor usage, and the analysis of the blocking behavior of goroutines influences the scheduler trace. Use the tools separately to get more accurate results.

Profile measurement

Profiling is useful for finding expensive or frequently called code sections. The Go runtime provides profile data in the format expected by the pprof visualization tool. Profile data can be collected during testing via go test (for example with the -cpuprofile and -memprofile flags) or via the endpoints provided by the net/http/pprof package. Users need to collect the profile data and then filter and visualize the top code paths using the pprof tool.
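
As a minimal sketch (the port and handler below are placeholders, not part of the original article), importing net/http/pprof for its side effects registers the /debug/pprof/ endpoints on the default mux of a long-running server:

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // side-effect import: registers the /debug/pprof/ handlers on the default mux
)

func main() {
	// The profiling endpoints are served alongside the application's own handlers.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}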

The runtime/pprof package offers predefined profiles for this:

  • cpu: The CPU profile shows where a program spends its time while actively consuming processor cycles (i.e. not while sleeping or waiting for input or output operations).
  • heap: The heap profile reports memory allocation samples; it is used to monitor current and historical memory usage and to look for memory leaks.
  • threadcreate: The thread creation profile reports the program sections which create new operating system threads.
  • goroutine: The goroutine profile shows the current "stack trace" for all goroutines.
  • block: The blocking profile shows where goroutines wait for synchronization primitives (including timer channels). The blocking profile is not active by default; use runtime.SetBlockProfileRate to activate it.
  • mutex: The mutex profile reports lock contention. If you suspect that your CPU is not fully utilized because of mutex contention, use this profile. By default, the mutex profile is not active; use runtime.SetMutexProfileFraction to activate it. A short sketch showing how to enable these profiles follows this list.
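
As a sketch of how the non-default profiles can be switched on and how a CPU profile can be written with runtime/pprof (the file name and rates are arbitrary choices, not prescribed by the article):

package main

import (
	"os"
	"runtime"
	"runtime/pprof"
)

func main() {
	// Enable the block and mutex profiles, which are off by default.
	runtime.SetBlockProfileRate(1)     // report every blocking event
	runtime.SetMutexProfileFraction(1) // report every mutex contention event

	// Write a CPU profile for the duration of the program.
	f, err := os.Create("cpu.prof")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := pprof.StartCPUProfile(f); err != nil {
		panic(err)
	}
	defer pprof.StopCPUProfile()

	// ... application work to be profiled ...
}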

What other profile measurements are there for Go programs?

Under Linux you can use the perf tools to profile Go programs. Perf can profile and unwind cgo/SWIG code as well as the kernel, so it can provide insight into native and kernel performance bottlenecks. Under macOS, the Instruments suite can be used to profile Go programs.

Can I use profile measurement in production?

Yes, it can be done safely; however, some of the profiles (e.g. the CPU profile) add cost, so you should expect some performance loss. The performance penalty can be estimated by measuring the overhead of the profiler before it is used in production.

You may want to regularly profile your production services. If a system has many replicas of a single process, it is safe to periodically select one replica at random. So choose a production process, profile it for X seconds every Y seconds, save the results for later visualization and analysis, and then repeat regularly. The results can be examined manually or automatically for problems. Collecting different types of profile data can interfere with each other, so it is recommended to collect only one profile at a time.
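
A possible shape for such periodic collection, assuming the service exposes the net/http/pprof endpoints (host name, port, interval, and file naming are placeholders):

package main

import (
	"io"
	"log"
	"net/http"
	"os"
	"time"
)

// fetchCPUProfile pulls a 30-second CPU profile from one replica via the
// net/http/pprof endpoint and saves it for later analysis.
func fetchCPUProfile() error {
	resp, err := http.Get("http://app-replica:8080/debug/pprof/profile?seconds=30")
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create("cpu-" + time.Now().Format("20060102-150405") + ".prof")
	if err != nil {
		return err
	}
	defer out.Close()

	_, err = io.Copy(out, resp.Body)
	return err
}

func main() {
	// Repeat periodically, e.g. against one randomly chosen replica every hour.
	for {
		if err := fetchCPUProfile(); err != nil {
			log.Println("profile collection failed:", err)
		}
		time.Sleep(time.Hour)
	}
}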

What is the best way to visualize profile data?

The Go tools can display profile data as text, as a graph, or as a callgrind visualization via go tool pprof. For more information, read "Profiling Go programs".
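
For example, assuming a profile has been saved as cpu.prof (the file name is a placeholder), the following invocations print the top entries as text and open the interactive web UI with graph and flame-graph views:

$ go tool pprof -top cpu.prof
$ go tool pprof -http=:8080 cpu.prof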


List of the most expensive function calls in text form.


Representation of the most expensive function calls as a graph.

The weblist view shows the expensive source code sections line by line on an HTML page. In the following example, 530ms were spent in runtime.concatstrings; the cost of each line is displayed.


Presentation of the most expensive function calls as an HTML page.

Another way to display profile data is as a flame graph. Flame graphs let you move along a specific ancestry path and zoom in and out of code sections. The upstream pprof supports flame graphs.


The most expensive code paths can be tracked down with flame graphs.

Am I limited to the built-in profiles?

No. In addition to what the runtime provides, user-defined profiles can be created with pprof.Profile and then examined with the tools mentioned above.
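
A sketch of such a custom profile; the profile name, the conn type, and the output file are purely illustrative:

package main

import (
	"os"
	"runtime/pprof"
)

// openConns is a hypothetical custom profile that tracks currently open connections.
var openConns = pprof.NewProfile("example.com/open_connections")

type conn struct{ id int }

func open(id int) *conn {
	c := &conn{id: id}
	openConns.Add(c, 1) // records the stack trace of the caller that opened the connection
	return c
}

func (c *conn) close() {
	openConns.Remove(c)
}

func main() {
	a := open(1)
	b := open(2)
	a.close()
	_ = b // b stays open and will show up in the profile

	// Write the custom profile; it can then be examined with go tool pprof.
	f, err := os.Create("conns.prof")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	openConns.WriteTo(f, 0)
}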

Can I serve the profiling handlers (/debug/pprof/...) on a different path and port?

Yes. The net/http/pprof package registers its handlers with the default multiplexer (mux), but you can also register them yourself using the handler functions exported by the package.

For example, the following code serves the pprof.Profile handler on port :7777 under /custom_debug_path/profile:

package main

import (
	"log"
	"net/http"
	"net/http/pprof"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/custom_debug_path/profile", pprof.Profile)
	log.Fatal(http.ListenAndServe(":7777", mux))
}

Tracing

Tracing is a way of instrumenting code to analyze latency across a chain of function calls. Go offers the golang.org/x/net/trace package as a minimal tracing backend per Go node and provides a minimal library for instrumenting code as well as a simple dashboard. Go also has an execution tracer that captures runtime events within a time interval.
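
A minimal sketch of per-request tracing with golang.org/x/net/trace (the family and title strings as well as the port are illustrative); importing the package also serves a simple dashboard at /debug/requests on the default mux:

package main

import (
	"log"
	"net/http"
	"time"

	"golang.org/x/net/trace"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// Create a trace for this request; it becomes visible on /debug/requests.
	tr := trace.New("example.Handler", r.URL.Path)
	defer tr.Finish()

	tr.LazyPrintf("starting work")
	time.Sleep(10 * time.Millisecond) // stand-in for real work
	tr.LazyPrintf("done")

	w.Write([]byte("ok"))
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}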

Tracing enables us to

  • instrument and measure application latency in a Go process,
  • determine the cost of specific calls in a long call chain, and
  • figure out utilization and how performance can be improved. Without trace data, bottlenecks are difficult to spot.

In a monolithic system, it is relatively easy to collect diagnostic data from the program's building blocks. All components live within one process and report their log data, errors, and other diagnostic data to a common resource. However, once your system grows beyond a single process and becomes distributed, it becomes harder to follow a call that starts at the frontend web server, fans out to all the backends, and finally returns a response to the user. This is where distributed tracing comes into play.

Distributed tracing is a way of instrumenting code to analyze latency over the entire lifetime of a user request. When a system is distributed and the usual tools for profiling and debugging no longer suffice, you will probably need distributed tracing tools to analyze the latency of your user requests and RPCs (remote procedure calls).

Distributed tracing enables us to

  • instrument and measure latency in large systems,
  • keep track of all RPCs over the lifetime of a user request and identify integration issues that are only visible in a production environment, and
  • find out how performance in our system can be improved. Many bottlenecks are not obvious until distributed trace data is available.

The Go ecosystem offers various distributed tracing libraries per tracing system as well as backend-agnostic ones.

Is it possible to automatically intercept all function calls and generate traces?

Go does not provide this; you will have to manually instrument your code to add start and end marks and annotations to spans.
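
For in-process runtime traces, one way to add such marks by hand is the tasks and regions of the runtime/trace package; the task and region names below are illustrative:

package main

import (
	"context"
	"os"
	"runtime/trace"
)

func main() {
	// Write execution trace data to a file for later inspection with go tool trace.
	f, err := os.Create("trace.out")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := trace.Start(f); err != nil {
		panic(err)
	}
	defer trace.Stop()

	// A task groups related work over the lifetime of a logical operation.
	ctx, task := trace.NewTask(context.Background(), "handleRequest")
	defer task.End()

	// A region annotates a span of interest; it shows up in the trace viewer.
	trace.WithRegion(ctx, "parseInput", func() {
		// ... work to be measured ...
	})
}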

How should I propagate trace headers in Go libraries?

You can propagate trace identifiers and tags with the help of context.Context. So far there is neither a standard trace key nor a common representation of trace headers in the industry. Each tracing provider is responsible for providing propagation utilities in their Go libraries.
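
A hedged sketch of carrying a trace identifier in a context.Context so that downstream functions and outgoing RPCs can attach it to their own spans; the key type and identifier format are illustrative, not an industry standard:

package main

import (
	"context"
	"fmt"
)

type traceIDKey struct{}

// WithTraceID stores a trace identifier in the context.
func WithTraceID(ctx context.Context, id string) context.Context {
	return context.WithValue(ctx, traceIDKey{}, id)
}

// TraceID extracts the trace identifier, if any.
func TraceID(ctx context.Context) string {
	id, _ := ctx.Value(traceIDKey{}).(string)
	return id
}

func handle(ctx context.Context) {
	fmt.Println("trace id:", TraceID(ctx)) // downstream code reads the identifier
}

func main() {
	ctx := WithTraceID(context.Background(), "abc123")
	handle(ctx)
}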

Which low-level events from the standard library or the runtime can be traced?

The standard library and the runtime are being extended with interfaces for reporting low-level events. For example, net/http/httptrace provides an interface for tracking low-level events in the lifecycle of an outgoing request. Efforts are ongoing to expose further low-level runtime events for tracing purposes and to allow Go users to define and record their own events.
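
A small sketch of hooking into those events for an outgoing HTTP request with net/http/httptrace (the target URL is a placeholder):

package main

import (
	"fmt"
	"net/http"
	"net/http/httptrace"
)

func main() {
	req, err := http.NewRequest("GET", "https://example.com", nil)
	if err != nil {
		panic(err)
	}

	// ClientTrace hooks report low-level events during the request's lifecycle.
	clientTrace := &httptrace.ClientTrace{
		DNSDone: func(info httptrace.DNSDoneInfo) {
			fmt.Println("DNS lookup done:", info.Addrs)
		},
		GotConn: func(info httptrace.GotConnInfo) {
			fmt.Println("connection obtained, reused:", info.Reused)
		},
	}
	req = req.WithContext(httptrace.WithClientTrace(req.Context(), clientTrace))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}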

Debugging

Debugging is the process of identifying why a program misbehaves. Debuggers help us understand the program's flow and its current state. There are several styles of debugging; this section focuses on attaching a debugger to a program and on working with core dumps.

The following debuggers are mainly used for Go:

  • Delve: Delve is a debugger for the Go programming language. It supports the Go runtime architecture and Go's standard types. Delve strives to be a complete and reliable debugger for Go programs.
  • GDB: Go supports GDB both with the standard Go compiler and with Gccgo. However, various aspects of stack management, threading, and the runtime differ from what GDB expects, which can confuse the debugger, even if the program was compiled with gccgo. GDB can be used for Go programs, but it is not ideal and can cause confusion.

How well do debuggers handle Go programs?

The compiler performs optimizations such as function inlining and variable registerization. These optimizations make debugging with a debugger harder. There is an ongoing effort to improve the quality of the DWARF information generated for optimized binaries. Until then, we recommend building the code intended for debugging without optimizations. The following command builds a package without compiler optimizations:

$ go build -gcflags=all="-N -l"

As part of the improvements mentioned, Go 1.10 introduced a new compiler flag, -dwarflocationlists. This flag causes the compiler to add location lists to optimized binaries, which helps debuggers work with them. The following command builds a package with optimizations and with the DWARF location lists:

$ go build -gcflags="-dwarflocationlists=true"

What user interface is recommended for debuggers?

Both Delve and GDB offer command-line interfaces (CLIs), and most integrated development environments (IDEs) provide their own debugging interfaces.

Is it possible to examine Go programs post-mortem, after a crash?

A core dump file is a file that contains the memory dump of a running process together with its process status. It is used both to examine a program after it crashes and to understand its state while it is still running, which makes core dump debugging a useful aid in both situations. It is possible to obtain core dumps of Go programs and then examine them with Delve or GDB; the core dump debugging page provides step-by-step instructions.
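
One possible workflow on Linux, assuming a binary called ./myserver (the binary name and the core file location are placeholders and depend on system configuration):

$ ulimit -c unlimited            # allow the operating system to write core files
$ GOTRACEBACK=crash ./myserver   # a crash now also produces a core dump
$ dlv core ./myserver core       # open the dump with Delve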

Runtime states and runtime events

The runtime environment provides us with statistical data and reports internal events so that Go users can assess performance and usage problems at this level.

With this statistical data, one can better understand the overall health and performance of a Go program. Here are some commonly used statistics and state data (a short sketch showing how to read some of them follows this list):

  • runtime.ReadMemStats reports metrics related to memory allocation and garbage collection. This data shows how much memory a process occupies, whether the process uses memory sensibly, and where memory leaks can be found.
  • debug.ReadGCStats reads statistics about garbage collection. It helps to see how much of the resources are spent on garbage collection pauses. It also reports a timeline of pauses including pause time percentiles.
  • debug.Stack returns the current stack trace. This is useful to see how many goroutines are currently running, what they are doing, and whether they are blocked.
  • debug.WriteHeapDump suspends the execution of all goroutines and allows you to write a heap dump to a file. A heap dump is a snapshot of the memory used by a Go process at a specific point in time. It contains all allocated objects as well as the goroutines, finalizers, and so on.
  • runtime.NumGoroutine returns the number of currently active goroutines. This value can be monitored to see whether enough goroutines are being used or to find goroutine leaks.
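
A small sketch that reads a few of these values; which fields are worth watching depends on the application:

package main

import (
	"fmt"
	"runtime"
)

func main() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)

	fmt.Printf("heap in use: %d bytes\n", m.HeapInuse)
	fmt.Printf("total allocated: %d bytes\n", m.TotalAlloc)
	fmt.Printf("completed GC cycles: %d\n", m.NumGC)
	fmt.Printf("active goroutines: %d\n", runtime.NumGoroutine())
}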

Tracer

Go comes with a runtime execution tracer that captures a wide range of runtime events. Scheduling, system calls, garbage collection, heap size, and other events are collected by the runtime and made available for visualization via go tool trace. The execution tracer is used to detect latency and utilization problems. You can examine how well the CPU is being utilized and when network or system calls cause goroutines to be preempted.

The tracer helps

  • understand how your goroutines execute,
  • understand key runtime events, such as garbage collection runs, and
  • identify poorly parallelized execution.

However, this tool is not well suited for finding hot spots, such as code that places excessive load on memory or the processor. Use the profiling tools for that instead.

The following trace visualization shows that execution starts off well at first, but then becomes serialized. This suggests there may be contention for a lock on a shared resource, creating a bottleneck.

See go tool trace to learn how to collect and analyze runtime traces.
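
For example, a trace can be recorded from a test run and then opened in the viewer (the file name is arbitrary); a long-running server can instead call trace.Start from runtime/trace, as in the earlier sketch:

$ go test -trace=trace.out
$ go tool trace trace.out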

GODEBUG

The runtime also emits events and other information if the GODEBUG environment variable is set accordingly (an example invocation follows this list):

  • GODEBUG=gctrace=1 prints garbage collection events at each collection, including the amount of memory collected and the length of the pause.
  • GODEBUG=inittrace=1 prints a summary of execution time and memory allocation for completed package initializations.
  • GODEBUG=schedtrace=X prints scheduling events every X milliseconds.
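
For example (the binary name ./myserver is a placeholder):

$ GODEBUG=gctrace=1 ./myserver
$ GODEBUG=schedtrace=1000 ./myserver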

The environment variable GODEBUG can also be used to deactivate extensions of the instruction set in the standard library and the runtime environment.

  • GODEBUG=cpu.all=off deactivates all optional instruction set extensions.
  • GODEBUG=cpu.extension=off deactivates instructions from the specified instruction set extension.
    extension is the lower-case name of the instruction set extension, for example sse41 or avx.