Production-Ready Microservices, or Kubernetes for Dummies, in Go
Aug 22, 2023 - 18 min read
Building Production-Ready Services in Go with Kubernetes
In this guide, we'll explore how to build production-level services in Go while leveraging Kubernetes for container orchestration. We'll also use Kind for local development, Docker for building Go images, and Kustomize for managing the environment configuration within the cluster. By the end of this article, you'll have a scalable sales API service equipped with robust logging and centralized configuration.
Be sure to check out the repository
Prerequisites
Before we start, ensure you have the following prerequisites set up:
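- Go 1.21+ and a working `$GOPATH`
- Docker
- Kind, kubectl, and Kustomize (the makefile later in this guide can install these via Homebrew)
- GNU Make
- Optionally, Telepresence for local-to-cluster development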
Project Structure
Let's start by understanding the project's structure:
- App Layer: This layer might contain subfolders like `services`, `front-end`, or `tooling`, depending on your project's architecture.
- Business Layer: This is where you define the core business logic that solves the problem your application addresses. These components are designed for reusability.
- Foundation Layer: Here, you include standard libraries and any experimental packages specific to your project.
- Zarf Layer: This layer handles configuration, serving as a wrapper to manage configuration settings efficiently.
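For reference, a layout along these lines ties the layers together; the exact folder names below are illustrative and match the paths used later in this article:

```
sample-project/
├── app/
│   ├── services/
│   │   └── sales-api/        # service entry point (main.go)
│   └── tooling/
│       └── logfmt/           # log formatting tool
├── business/                 # core business logic (added as the project grows)
├── foundation/
│   └── logger/               # structured logging package
└── zarf/
    ├── docker/               # dockerfile.service
    └── k8s/                  # base and dev Kustomize manifests
```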
Getting Started
Let's kick things off by creating a new Go project and initializing a Go module:
```sh
mkdir -p $GOPATH/src/github.com/ruskeyz/sample-project
cd $GOPATH/src/github.com/ruskeyz/sample-project
go mod init
```
Or, you can specify the module path explicitly:
go mod init github.com/samplename/scalable-go-api-k8s
Setting Up a Sales API Service
Now, let's create a simple Go program that sets up the basic structure of a service. This program also demonstrates how to gracefully handle shutdowns when receiving interrupt signals (like SIGINT or SIGTERM) from the operating system.
Below is the code for `main.go`:
```go
//app/services/sales-api/main.go
package main

import (
	"context"
	"os"
	"os/signal"
	"runtime"
	"syscall"

	"github.com/ruskeyz/scalable-go-api-k8s/foundation/logger"
)

var build = "develop"

func main() {
	log := logger.New(os.Stdout, logger.LevelInfo, "SALES-API")

	ctx := context.Background()

	if err := run(ctx, log); err != nil {
		log.Error(ctx, "startup", "msg", err)
		os.Exit(1)
	}
}

func run(ctx context.Context, log *logger.Logger) error {

	// -------------------------------------------------------------------------
	// GOMAXPROCS

	log.Info(ctx, "startup", "GOMAXPROCS", runtime.GOMAXPROCS(0), "build", build)

	shutdown := make(chan os.Signal, 1)
	signal.Notify(shutdown, syscall.SIGINT, syscall.SIGTERM)

	// -------------------------------------------------------------------------
	// Shutdown

	sig := <-shutdown

	log.Info(ctx, "shutdown", "status", "shutdown started", "signal", sig)
	defer log.Info(ctx, "shutdown", "status", "shutdown complete", "signal", sig)

	return nil
}
```
In this code:
- We create a logger with the `logger.New` function, setting the log level to `LevelInfo` and providing a service name of "SALES-API".
- The `run` function represents the core logic of our program. It logs the number of CPUs available to the Go runtime using `runtime.GOMAXPROCS(0)`.
- We set up a channel named `shutdown` to receive interrupt signals (`SIGINT` and `SIGTERM`) from the operating system.
- The program waits for a signal to arrive on the `shutdown` channel, indicating that it should shut down gracefully. When a signal is received, the start of the shutdown is logged, and the `defer` statement ensures that the shutdown completion is also logged after the function returns.
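If you want to exercise the shutdown path without pressing Ctrl+C, one option (a quick local check, not part of the project tooling) is to run the binary in the background and signal it directly:

```sh
# Build and run the service in the background, then send SIGTERM.
# Ctrl+C on a foreground run sends SIGINT and exercises the same path.
go build -o sales-api ./app/services/sales-api
./sales-api &
kill -TERM %1
```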
Adding Logging
- Create a `logger` package in your project and place the following code in `logger.go`:
```go
//foundation/logger/logger.go
package logger

import (
	"context"
	"fmt"
	"io"
	"log/slog"
	"path/filepath"
	"runtime"
	"time"
)

// Level represents different logging levels.
type Level slog.Level

// A set of possible logging levels.
const (
	LevelDebug = Level(slog.LevelDebug)
	LevelInfo  = Level(slog.LevelInfo)
	LevelWarn  = Level(slog.LevelWarn)
	LevelError = Level(slog.LevelError)
)

// Logger represents a logger for logging information.
type Logger struct {
	handler slog.Handler
}

func New(w io.Writer, minLevel Level, serviceName string) *Logger {

	// Convert the file name to just the name.ext when this key/value will
	// be logged.
	f := func(groups []string, a slog.Attr) slog.Attr {
		if a.Key == slog.SourceKey {
			if source, ok := a.Value.Any().(*slog.Source); ok {
				v := fmt.Sprintf("%s:%d", filepath.Base(source.File), source.Line)
				return slog.Attr{Key: "file", Value: slog.StringValue(v)}
			}
		}
		return a
	}

	// Construct the slog JSON handler for use.
	handler := slog.Handler(slog.NewJSONHandler(w, &slog.HandlerOptions{AddSource: true, Level: slog.Level(minLevel), ReplaceAttr: f}))

	// Attributes to add to every log.
	attrs := []slog.Attr{
		{Key: "service", Value: slog.StringValue(serviceName)},
	}

	// Add those attributes and capture the final handler.
	handler = handler.WithAttrs(attrs)

	return &Logger{
		handler: handler,
	}
}

// Debug logs at LevelDebug with the given context.
func (log *Logger) Debug(ctx context.Context, msg string, args ...any) {
	log.write(ctx, LevelDebug, 3, msg, args...)
}

// Debugc logs the information at the specified call stack position.
func (log *Logger) Debugc(ctx context.Context, caller int, msg string, args ...any) {
	log.write(ctx, LevelDebug, caller, msg, args...)
}

// Info logs at LevelInfo with the given context.
func (log *Logger) Info(ctx context.Context, msg string, args ...any) {
	log.write(ctx, LevelInfo, 3, msg, args...)
}

// Infoc logs the information at the specified call stack position.
func (log *Logger) Infoc(ctx context.Context, caller int, msg string, args ...any) {
	log.write(ctx, LevelInfo, caller, msg, args...)
}

// Warn logs at LevelWarn with the given context.
func (log *Logger) Warn(ctx context.Context, msg string, args ...any) {
	log.write(ctx, LevelWarn, 3, msg, args...)
}

// Warnc logs the information at the specified call stack position.
func (log *Logger) Warnc(ctx context.Context, caller int, msg string, args ...any) {
	log.write(ctx, LevelWarn, caller, msg, args...)
}

// Error logs at LevelError with the given context.
func (log *Logger) Error(ctx context.Context, msg string, args ...any) {
	log.write(ctx, LevelError, 3, msg, args...)
}

// Errorc logs the information at the specified call stack position.
func (log *Logger) Errorc(ctx context.Context, caller int, msg string, args ...any) {
	log.write(ctx, LevelError, caller, msg, args...)
}

func (log *Logger) write(ctx context.Context, level Level, caller int, msg string, args ...any) {
	slogLevel := slog.Level(level)

	if !log.handler.Enabled(ctx, slogLevel) {
		return
	}

	var pcs [1]uintptr
	runtime.Callers(caller, pcs[:])

	r := slog.NewRecord(time.Now(), slogLevel, msg, pcs[0])
	r.Add(args...)

	log.handler.Handle(ctx, r)
}
```
- New Function: The `New` function is used to create a new instance of the logger. It takes an `io.Writer`, a minimum log level, and a service name as arguments. The function sets up a JSON-based logging handler using the provided options and attributes. The constructed handler is then associated with the logger instance.
- Logging Methods: The `Logger` struct has methods for different log levels (`Debug`, `Info`, `Warn`, `Error`) along with their contextual variants (`Debugc`, `Infoc`, `Warnc`, `Errorc`). These methods allow you to log messages at different severity levels with optional contextual information.
- `write` Method: The `write` method is a private helper function used by the logging methods. It takes the log level, a caller position, a message, and additional arguments. The method checks if the specified log level is enabled by the handler, and if it is, it constructs a log record with the provided information and passes it to the handler's `Handle` method.
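As a quick illustration (a minimal sketch, not part of the project code), the variadic args are interpreted by slog as alternating key/value pairs, so a call like the one below produces a single JSON record that also carries the service attribute added in `New`:

```go
package main

import (
	"context"
	"os"

	"github.com/ruskeyz/scalable-go-api-k8s/foundation/logger"
)

func main() {
	// Hypothetical usage sketch: the variadic args become key/value pairs on the record.
	log := logger.New(os.Stdout, logger.LevelInfo, "SALES-API")
	log.Info(context.Background(), "order created", "orderID", 42, "amount", 9.99)
	// Output (one line):
	// {"time":"...","level":"INFO","file":"main.go:...","msg":"order created","service":"SALES-API","orderID":42,"amount":9.99}
}
```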
Installing Dependencies Through the Makefile and Running Locally
Let's set up the service to run locally.
Add the following to the `makefile` and install the dependencies:
```makefile
//makefile
# Check to see if we can use ash, in Alpine images, or default to BASH.
SHELL_PATH = /bin/ash
SHELL = $(if $(wildcard $(SHELL_PATH)),/bin/ash,/bin/bash)

# Deploy First Mentality

# ==============================================================================
# Brew Installation
#
# Have brew installed, which simplifies the process of installing all the tooling.
#
# ==============================================================================
# Windows Users ONLY - Install Telepresence
#
# Unfortunately you can't use brew to install telepresence because you will
# receive a bad binary. Please follow these instructions.
#
# $ sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/latest/telepresence -o /usr/local/bin/telepresence
# $ sudo chmod a+x /usr/local/bin/telepresence
#
# Restart your wsl environment.

# ==============================================================================
# Linux Users ONLY - Install Telepresence
#
# https://www.telepresence.io/docs/latest/quick-start/?os=gnu-linux

# ==============================================================================
# M1 Mac Users ONLY - Uninstall Telepresence If Installed Intel Version
#
# $ sudo rm -rf /Library/Developer/CommandLineTools
# $ sudo xcode-select --install
# Then install it with brew (arm64)

# ==============================================================================
# Install Tooling and Dependencies
#
# If you are running a mac machine with brew, run these commands:
# $ make dev-brew or make dev-brew-arm64
# $ make dev-docker
# $ make dev-gotooling
#
# If you are running a linux machine with brew, run these commands:
# $ make dev-brew-common
# $ make dev-docker
# $ make dev-gotooling
# Follow instructions above for Telepresence.
#
# If you are a windows user with brew, run these commands:
# $ make dev-brew-common
# $ make dev-docker
# $ make dev-gotooling
# Follow instructions above for Telepresence.

# ==============================================================================
# Starting The Project
#
# If you want to use telepresence (recommended):
# $ make dev-up
# $ make dev-update-apply
#
# Note: If you attempted to run with telepresence and it didn't work, you may
# want to restart the cluster.
# $ make dev-down-local
#
# ==============================================================================

# ==============================================================================
# Define dependencies

GOLANG       := golang:1.21
ALPINE       := alpine:3.18
KIND         := kindest/node:v1.27.3
TELEPRESENCE := datawire/ambassador-telepresence-manager:2.14.2

KIND_CLUSTER    := api-starter-cluster
NAMESPACE       := sales-system
APP             := sales
BASE_IMAGE_NAME := api-starter/service
SERVICE_NAME    := sales-api
VERSION         := 0.0.1
SERVICE_IMAGE   := $(BASE_IMAGE_NAME)/$(SERVICE_NAME):$(VERSION)
METRICS_IMAGE   := $(BASE_IMAGE_NAME)/$(SERVICE_NAME)-metrics:$(VERSION)
# VERSION := "0.0.1-$(shell git rev-parse --short HEAD)" this can be used to tie versioning to git

# ==============================================================================
# Running from within k8s/kind

# Install dependencies
dev-gotooling:
	go install github.com/divan/expvarmon@latest
	go install github.com/rakyll/hey@latest
	go install honnef.co/go/tools/cmd/staticcheck@latest
	go install golang.org/x/vuln/cmd/govulncheck@latest
	go install golang.org/x/tools/cmd/goimports@latest

dev-brew-common:
	brew update
	brew tap hashicorp/tap
	brew list kind || brew install kind
	brew list kubectl || brew install kubectl
	brew list kustomize || brew install kustomize
	brew list pgcli || brew install pgcli
	brew list vault || brew install vault

dev-brew: dev-brew-common
	brew list datawire/blackbird/telepresence || brew install datawire/blackbird/telepresence

dev-brew-arm64: dev-brew-common
	brew list datawire/blackbird/telepresence-arm64 || brew install datawire/blackbird/telepresence-arm64

dev-docker:
	docker pull $(GOLANG)
	docker pull $(ALPINE)
	docker pull $(KIND)
	docker pull $(TELEPRESENCE)

# ==============================================================================
# Building containers

dev-up:
	kind create cluster \
		--image $(KIND) \
		--name $(KIND_CLUSTER) \
		--config zarf/k8s/dev/kind-config.yaml

	kubectl wait --timeout=120s --namespace=local-path-storage --for=condition=Available deployment/local-path-provisioner

dev-down:
	kind delete cluster --name $(KIND_CLUSTER)

run:
	go run app/services/sales-api/main.go

dev-logs:
	kubectl logs --namespace=$(NAMESPACE) -l app=$(APP) --all-containers=true -f --tail=100 --max-log-requests=6

dev-status:
	kubectl get nodes -o wide
	kubectl get svc -o wide
	kubectl get pods -o wide --watch --all-namespaces
```
When you call `make run`, it should give you something along these lines. Perfect, it works:
{"time":"2023-08-17T17:12:55.040568+01:00","level":"INFO","file":"main.go:29","msg":"startup","service":"SALES-API","GOMAXPROCS":8} ^C{"time":"2023-08-17T17:13:02.219147+01:00","level":"INFO","file":"main.go:39","msg":"shutdown","service":"SALES-API","status":"shutdown started","signal":2} {"time":"2023-08-17T17:13:02.219458+01:00","level":"INFO","file":"main.go:42","msg":"shutdown","service":"SALES-API","status":"shutdown complete","signal":2} make: *** [run] Error 1
Let's add a simple configuration for Kind:
```yaml
//zarf/k8s/dev/kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
```
Creating a Docker Container for the Sales API
To run your Go application within a Docker container, follow these steps:
- Create a Dockerfile named `dockerfile.service` in the `zarf/docker` directory:
```dockerfile
# Build the Go Binary.
FROM golang:1.21 as build_sales-api
ENV CGO_ENABLED 0
ARG BUILD_REF

# Copy the source code into the container.
COPY . /service

# Build the service binary.
WORKDIR /service/app/services/sales-api
RUN go build -ldflags "-X main.build=${BUILD_REF}"

# Run the Go Binary in Alpine.
FROM alpine:3.18
ARG BUILD_DATE
ARG BUILD_REF
RUN addgroup -g 1000 -S sales && \
    adduser -u 1000 -h /service -G sales -S sales
COPY --from=build_sales-api --chown=sales:sales /service/app/services/sales-api/sales-api /service/sales-api
WORKDIR /service
USER sales
CMD ["./sales-api"]

LABEL org.opencontainers.image.created="${BUILD_DATE}" \
      org.opencontainers.image.title="sales-api" \
      org.opencontainers.image.authors="Eli" \
      org.opencontainers.image.revision="${BUILD_REF}"
```
This Dockerfile does the following:
- Uses a multi-stage build to build the Go binary and then run it in a minimal Alpine Linux-based container.
- Copies the source code into the container.
- Builds the Go binary with a custom build reference, which you can set when building the Docker image. This build reference can be used to track the version of your application.
- Update the `makefile` to build the Docker image:
```makefile
//makefile
all: service

service:
	docker build \
		-f zarf/docker/dockerfile.service \
		-t $(SERVICE_IMAGE) \
		--build-arg BUILD_REF=$(VERSION) \
		--build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` \
		.
```
After running `make all`, your Go service is packaged into a Docker image, and `docker images` should show something like this:
api-starter/service/sales-api 0.0.1 ffb71d9f6a92 2 minutes ago 10.2MB
Deploying to Kubernetes with Kustomize
Now, let's deploy the Go service to Kubernetes using Kustomize for managing environment-specific configurations.
- Create a `kustomization.yaml` file in the `zarf/k8s/base/sales` directory:
```yaml
//zarf/k8s/base/sales/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ./base-sales.yaml
```
This `kustomization.yaml` file lists the Kubernetes resources that need to be deployed for the Sales API service.
- Create a `base-sales.yaml` deployment file with the following content:
```yaml
//zarf/k8s/base/sales/base-sales.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: sales-system

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sales
  namespace: sales-system

spec:
  selector:
    matchLabels:
      app: sales

  template:
    metadata:
      labels:
        app: sales

    spec:
      terminationGracePeriodSeconds: 60

      containers:
      - name: sales-api
        image: service-image

        env:
        - name: GOMAXPROCS
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
```
In this YAML file:
Namespaces are a way to logically isolate and partition Kubernetes resources within a cluster. In this case, a new Namespace named `sales-system` is being defined.

The Deployment manages pods with the label `app: sales` and runs a container named `sales-api` using a specified Docker image (`service-image`). The Deployment also includes termination grace period settings for graceful shutdown of pods. This YAML manifest can be applied to a Kubernetes cluster using the `kubectl apply -f` command to create the defined resources.

The `GOMAXPROCS` environment variable controls the maximum number of operating system threads that can execute Go code concurrently. By tying it to the CPU limits defined for the container, the Go runtime can optimize its thread management based on the CPU resources actually available to the pod.
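Once the deployment is running in the cluster (we'll load and apply it below), you can confirm the value the downward API injects; with a `limits.cpu` of `500m` and the default divisor of 1, Kubernetes should round up and the container sees `GOMAXPROCS=1`:

```sh
# Check the injected environment variable inside a running pod.
kubectl exec --namespace=sales-system deploy/sales -- env | grep GOMAXPROCS
# GOMAXPROCS=1
```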
The purpose of this Kustomization configuration is to apply customization to the Kubernetes resources defined in the `base-sales.yaml` file. Kustomize allows you to apply overlays, add labels, modify fields, and manage other aspects of the resources without directly modifying the original resource files. You can use the `kubectl apply -k` command to apply Kustomization configurations to your Kubernetes cluster.
- Create a `zarf/k8s/dev/sales/kustomization.yaml` file with the following content:
```yaml
//zarf/k8s/dev/sales/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ../../base/sales/

patches:
- path: ./dev-sales-patch-deploy.yaml

images:
- name: service-image
  newName: api-starter/service/sales-api
  newTag: 0.0.1
```
- Set up the Kustomize dev patch:
```yaml
//zarf/k8s/dev/sales/dev-sales-patch-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sales
  namespace: sales-system

spec:
  selector:
    matchLabels:
      app: sales

  replicas: 1

  strategy:
    type: Recreate

  template:
    metadata:
      labels:
        app: sales

    spec:
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true

      containers:
      - name: sales-api
        resources:
          requests:
            cpu: "500m" # I need access to 1/2 core on the node.
          limits:
            cpu: "500m" # Execute instructions 50ms/100ms on my 1 core.
```
Strategy Recreate
This sets the update strategy to "Recreate", which means that when updates are applied, the existing pods are terminated before new ones are created. This should only be used in a dev environment.
Finally, the pod spec sets the DNS policy to "ClusterFirstWithHostNet", which means DNS resolution still follows the cluster DNS settings, while `hostNetwork` is set to `true`, indicating that the pods share the network namespace of the host.
CPU limits and quotas are worth some additional research; take a look at the Kubernetes docs.
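With both the overlay and the patch in place, you can render the output without applying anything; the `images` transform should rewrite the placeholder `service-image` to the locally built tag (output trimmed to the relevant line):

```sh
kustomize build zarf/k8s/dev/sales | grep "image:"
#         image: api-starter/service/sales-api:0.0.1
```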
- Finally, let's add these targets to the `makefile`:
```makefile
dev-load:
	kind load docker-image $(SERVICE_IMAGE) --name $(KIND_CLUSTER)

dev-apply:
	kustomize build zarf/k8s/dev/sales | kubectl apply -f -
	kubectl wait pods --namespace=$(NAMESPACE) --selector app=$(APP) --timeout=120s --for=condition=Ready

dev-describe-sales:
	kubectl describe pod --namespace=$(NAMESPACE) -l app=$(APP)
```
`make dev-load` is responsible for loading the Docker image into the Kubernetes cluster created with `kind` (Kubernetes in Docker). It uses the `kind load docker-image` command to achieve this.
`make dev-apply` is responsible for deploying Kubernetes resources using `kubectl`, applying the configurations defined in the Kustomize directory. Here's what each part of this target does:
`kustomize build zarf/k8s/dev/sales | kubectl apply -f -`: This command uses Kustomize to generate Kubernetes manifests from the directory `zarf/k8s/dev/sales`. The generated manifests are then piped (`|`) to the `kubectl apply` command, which applies them to the Kubernetes cluster. This allows resource definitions to be customized and managed through Kustomize.
`kubectl wait pods --namespace=$(NAMESPACE) --selector app=$(APP) --timeout=120s --for=condition=Ready`: This command waits for the pods matching the specified selector in the specified namespace to become ready. It uses the `kubectl wait` command with options to define the namespace, selector, timeout, and condition (`Ready`). This is useful for ensuring that the deployed pods are up and running before proceeding.
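Putting it all together, a typical local deployment run looks like this; the order matters — create the cluster, build the image, load it into Kind, then apply the manifests:

```sh
make dev-up       # create the Kind cluster
make all          # build the sales-api Docker image
make dev-load     # load the image into the Kind cluster
make dev-apply    # apply the Kustomize output and wait for the pod to be Ready
make dev-status   # inspect nodes, services, and pods
make dev-logs     # tail the sales-api logs
```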
Configuring the Service
Effective configuration management is crucial for maintaining a production-ready service. We'll use the `conf` module from ardanlabs/conf to manage configuration efficiently.
- First, include the `conf` module in your project. You can add it to your Go module by running:
go get github.com/ardanlabs/conf/v3
- In your `main.go`, import the `conf` module and initialise the configuration:
```go
//app/services/sales-api/main.go
package main

import (
	"context"
	"errors"
	"fmt"
	"os"
	"os/signal"
	"runtime"
	"syscall"
	"time"

	"github.com/ardanlabs/conf/v3"
	"github.com/ruskeyz/scalable-go-api-k8s/foundation/logger"
)

var build = "develop"

func main() {
	log := logger.New(os.Stdout, logger.LevelInfo, "SALES-API")

	ctx := context.Background()

	if err := run(ctx, log); err != nil {
		log.Error(ctx, "startup", "msg", err)
		os.Exit(1)
	}
}

func run(ctx context.Context, log *logger.Logger) error {

	// -------------------------------------------------------------------------
	// GOMAXPROCS

	log.Info(ctx, "startup", "GOMAXPROCS", runtime.GOMAXPROCS(0), "build", build)

	shutdown := make(chan os.Signal, 1)
	signal.Notify(shutdown, syscall.SIGINT, syscall.SIGTERM)

	// -------------------------------------------------------------------------
	// Configuration

	cfg := struct {
		conf.Version
		Web struct {
			ReadTimeout     time.Duration `conf:"default:5s"`
			WriteTimeout    time.Duration `conf:"default:10s"`
			IdleTimeout     time.Duration `conf:"default:120s"`
			ShutdownTimeout time.Duration `conf:"default:20s"`
			APIHost         string        `conf:"default:0.0.0.0:3000"`
			DebugHost       string        `conf:"default:0.0.0.0:4000,mask"`
		}
	}{
		Version: conf.Version{
			Build: build,
			Desc:  "BILL KENNEDY",
		},
	}

	const prefix = "SALES"
	help, err := conf.Parse(prefix, &cfg)
	if err != nil {
		if errors.Is(err, conf.ErrHelpWanted) {
			fmt.Println(help)
			return nil
		}
		return fmt.Errorf("parsing config: %w", err)
	}

	// -------------------------------------------------------------------------
	// App Starting

	log.Info(ctx, "starting service", "version", build)
	defer log.Info(ctx, "shutdown complete")

	out, err := conf.String(&cfg)
	if err != nil {
		return fmt.Errorf("generating config for output: %w", err)
	}
	log.Info(ctx, "startup", "config", out)

	// -------------------------------------------------------------------------
	// Shutdown

	sig := <-shutdown

	log.Info(ctx, "shutdown", "status", "shutdown started", "signal", sig)
	defer log.Info(ctx, "shutdown", "status", "shutdown complete", "signal", sig)

	return nil
}
```
- Now you can use the `cfg` struct to access configuration values throughout your service.
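For example (a sketch based on how ardanlabs/conf conventionally maps fields — check the module's docs for the exact rules), each field can be overridden by an environment variable derived from the prefix and field path, or by a command-line flag:

```sh
# Override the API host via the environment (SALES prefix + WEB_API_HOST field path)...
SALES_WEB_API_HOST=0.0.0.0:8080 go run app/services/sales-api/main.go

# ...or via a flag, and print all known settings with --help.
go run app/services/sales-api/main.go --web-api-host=0.0.0.0:8080
go run app/services/sales-api/main.go --help
```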
Structured Logging
Structured logging provides clear and organized log entries that are easier to parse and analyze. We'll continue using the `logger` package to achieve this.
- Create `app/tooling/logfmt/main.go`, a small tool that takes the structured log output and makes it readable:
```go
//app/tooling/logfmt/main.go
// This program takes the structured log output and makes it readable.
package main

import (
	"bufio"
	"encoding/json"
	"flag"
	"fmt"
	"log"
	"os"
	"strings"
)

var service string

func init() {
	flag.StringVar(&service, "service", "", "filter which service to see")
}

func main() {
	flag.Parse()

	var b strings.Builder
	service := strings.ToLower(service)

	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		s := scanner.Text()

		m := make(map[string]any)
		err := json.Unmarshal([]byte(s), &m)
		if err != nil {
			if service == "" {
				fmt.Println(s)
			}
			continue
		}

		// If a service filter was provided, check.
		if service != "" && strings.ToLower(m["service"].(string)) != service {
			continue
		}

		// I like always having a traceid present in the logs.
		traceID := "00000000-0000-0000-0000-000000000000"
		if v, ok := m["trace_id"]; ok {
			traceID = fmt.Sprintf("%v", v)
		}

		// {"time":"2023-06-01T17:21:11.13704718Z","level":"INFO","msg":"startup","service":"SALES-API","GOMAXPROCS":1}

		// Build out the known portions of the log in the order
		// I want them in.
		b.Reset()
		b.WriteString(fmt.Sprintf("%s: %s: %s: %s: %s: %s: ",
			m["service"],
			m["time"],
			m["file"],
			m["level"],
			traceID,
			m["msg"],
		))

		// Add the rest of the keys ignoring the ones we already
		// added for the log.
		for k, v := range m {
			switch k {
			case "service", "time", "file", "level", "trace_id", "msg":
				continue
			}

			// It's nice to see the key[value] in this format
			// especially since map ordering is random.
			b.WriteString(fmt.Sprintf("%s[%v]: ", k, v))
		}

		// Write the new log format, removing the last :
		out := b.String()
		fmt.Println(out[:len(out)-2])
	}

	if err := scanner.Err(); err != nil {
		log.Println(err)
	}
}
```
- The `logfmt` tool in your project's tooling directory can be used to format structured logs for human readability. It takes JSON log entries on stdin and formats them for easy reading.
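For illustration, piping the startup log line from earlier through the tool should produce something like the single line below (the zeroed trace ID is the default the tool adds when no `trace_id` key is present):

```sh
echo '{"time":"2023-08-17T17:12:55.040568+01:00","level":"INFO","file":"main.go:29","msg":"startup","service":"SALES-API","GOMAXPROCS":8}' \
  | go run app/tooling/logfmt/main.go
# SALES-API: 2023-08-17T17:12:55.040568+01:00: main.go:29: INFO: 00000000-0000-0000-0000-000000000000: startup: GOMAXPROCS[8]
```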
- Update the `makefile` to accommodate our new logger and run the app:
```makefile
run:
	go run app/services/sales-api/main.go | go run app/tooling/logfmt/main.go -service=$(SERVICE_NAME)

run-help:
	go run app/services/sales-api/main.go --help

tidy:
	go mod tidy
	go mod vendor

dev-restart:
	kubectl rollout restart deployment $(APP) --namespace=$(NAMESPACE)

dev-update: all dev-load dev-restart

dev-update-apply: all dev-load dev-apply

dev-logs:
	kubectl logs --namespace=$(NAMESPACE) -l app=$(APP) --all-containers=true -f --tail=100 --max-log-requests=6 | go run app/tooling/logfmt/main.go -service=$(SERVICE_NAME)
```
Conclusion
In this guide, we've covered how to build a production-ready Go service, containerize it with Docker, and deploy it to Kubernetes using Kustomize. You can expand upon this foundation to add features like API endpoints, database connections, and more to create a robust, scalable, and maintainable microservice.
Also, we've covered how to configure your Go service effectively using the `conf` module, implement structured logging, and set up Telepresence for local development. These practices will help you build and maintain production-ready services with ease.
In the next part of the guide, we'll delve into more advanced topics such as debugging, defining services and handlers, and expanding your microservices architecture. Be sure to check out the repository for the full project source code and updates.
What is the number one lesson you have learned from this article?