Documentation
Overview ¶
Package common provides comprehensive Docker container and image management utilities. This package implements a high-level abstraction over the Docker API, offering functions for container lifecycle management, image operations, network management, and file transfer capabilities.
The package serves as a foundation for containerized application deployment, development workflows, and infrastructure automation. It provides both simple convenience functions and advanced operations for complex Docker workflows.
Core Functionality:
- Container lifecycle management (create, start, stop, remove)
- Image operations (pull, build, push, list)
- File transfer between host and containers
- Network management and container connectivity
- Volume operations and data persistence
- Registry authentication and private image access
Docker API Integration:
Uses the official Docker Go SDK to communicate with Docker Engine via the Docker socket. Supports both local Docker instances and remote Docker hosts through configurable socket connections.
Authentication Support:
Provides registry authentication for private Docker registries including Docker Hub, Azure Container Registry, AWS ECR, and custom registries.
Error Handling Philosophy:
The package uses a mixed approach to error handling - some functions use panic for unrecoverable errors while others return errors for graceful handling. This design supports both scripting scenarios (where failures should halt execution) and application scenarios (where errors should be handled gracefully).
Package common provides core data structures and types for the EVE flow process management system. This package defines the fundamental types used throughout the EVE evaluation service for managing workflow processes, state transitions, and inter-service communication.
The package serves as the foundation for a distributed workflow engine that tracks process execution across multiple services using RabbitMQ for messaging and CouchDB for persistent state storage. It implements a robust state machine pattern with comprehensive audit trails and error handling.
Core Components:
- Process state management with defined state transitions
- Message structures for inter-service communication
- Document models for persistent storage in CouchDB
- Configuration management for service coordination
- Consumer framework for RabbitMQ message processing
State Management Philosophy:
The system implements a finite state machine where processes progress through well-defined states with complete audit trails. Each state transition is recorded with timestamps, metadata, and error information for comprehensive process tracking and debugging.
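The pattern described above can be sketched as a transition table plus an audit trail. The state names and struct fields here are hypothetical stand-ins; the actual definitions live in the FlowProcessState and FlowStateChange types listed under Index:

```go
package main

import (
	"fmt"
	"time"
)

// State is a hypothetical state type; the real FlowProcessState values may differ.
type State string

const (
	StatePending  State = "pending"
	StateRunning  State = "running"
	StateComplete State = "complete"
	StateFailed   State = "failed"
)

// allowed defines the legal transitions of the finite state machine.
var allowed = map[State][]State{
	StatePending: {StateRunning},
	StateRunning: {StateComplete, StateFailed},
}

// Transition is an audit-trail entry, mirroring the idea behind
// FlowStateChange (field names here are illustrative).
type Transition struct {
	From, To  State
	Timestamp time.Time
	Err       string
}

type Process struct {
	Current State
	History []Transition
}

// Advance moves the process to next if the transition is legal,
// recording the change with a timestamp for the audit trail.
func (p *Process) Advance(next State, errMsg string) error {
	for _, s := range allowed[p.Current] {
		if s == next {
			p.History = append(p.History, Transition{p.Current, next, time.Now(), errMsg})
			p.Current = next
			return nil
		}
	}
	return fmt.Errorf("illegal transition %s -> %s", p.Current, next)
}

func main() {
	p := &Process{Current: StatePending}
	_ = p.Advance(StateRunning, "")
	if err := p.Advance(StatePending, ""); err != nil {
		fmt.Println(err) // illegal transition running -> pending
	}
	fmt.Println(p.Current, len(p.History))
}
```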
Distributed Architecture:
Designed for microservices architectures where multiple services coordinate through message passing and shared state storage. The types support both synchronous and asynchronous processing patterns with reliable delivery guarantees through RabbitMQ message acknowledgment.
Data Consistency:
Uses CouchDB's eventual consistency model with document versioning to handle concurrent updates and maintain data integrity across distributed services. All operations include proper revision management for conflict resolution.
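CouchDB-style revision management amounts to compare-and-swap on a revision token: an update must present the revision it read, or it is rejected as a conflict. This in-memory sketch illustrates the mechanism only; it is not the package's CouchDB client:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"sync"
)

// Doc mimics a CouchDB document: updates must present the current
// revision, otherwise they are rejected as conflicts.
type Doc struct {
	Rev  string
	Body string
}

type Store struct {
	mu   sync.Mutex
	docs map[string]Doc
}

// Put succeeds only when rev matches the stored revision (or the doc
// is new), returning the next revision string: "1-rev", "2-rev", ...
func (s *Store) Put(id, rev, body string) (string, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	cur, ok := s.docs[id]
	if ok && cur.Rev != rev {
		return "", fmt.Errorf("conflict: have %s, got %s", cur.Rev, rev)
	}
	n := 0
	if ok {
		n, _ = strconv.Atoi(strings.SplitN(cur.Rev, "-", 2)[0])
	}
	next := fmt.Sprintf("%d-rev", n+1)
	s.docs[id] = Doc{Rev: next, Body: body}
	return next, nil
}

func main() {
	s := &Store{docs: map[string]Doc{}}
	rev1, _ := s.Put("proc-1", "", "pending")
	// A writer holding a stale revision is rejected.
	if _, err := s.Put("proc-1", "", "running"); err != nil {
		fmt.Println(err)
	}
	rev2, _ := s.Put("proc-1", rev1, "running")
	fmt.Println(rev1, rev2) // 1-rev 2-rev
}
```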
Package common provides enhanced logging utilities for structured logging across EVE services. This file extends the base logging functionality with context-aware logging, structured field helpers, and service-specific logging patterns.
Package common provides centralized logging infrastructure for the EVE evaluation system. This package implements intelligent log output routing that automatically directs error messages to stderr while sending other log levels to stdout, enabling proper stream separation for containerized and scripted environments.
The logging system is built on logrus for structured logging capabilities with custom output handling that supports both development workflows and production deployment patterns. It provides a foundation for consistent logging across all services in the EVE system.
Key Features:
- Automatic output stream routing based on log level
- Structured logging with JSON and text format support
- Container-friendly output separation for log aggregation
- Global logger instance for consistent usage patterns
- Integration with monitoring and alerting systems
Output Routing Strategy:
The system implements intelligent output routing where error-level messages are directed to stderr (for immediate attention and error handling) while info, debug, and warning messages go to stdout (for general log processing).
Container Integration:
Designed for containerized environments where stdout and stderr streams can be handled differently by orchestration platforms, log aggregators, and monitoring systems for optimal observability and alerting.
Usage Patterns:
The package provides a global Logger instance that can be used throughout the application for consistent logging behavior. All services should use this logger to ensure uniform output handling and formatting.
Package common provides system utility functions for shell command execution and data processing. This package includes functions for running shell commands, handling privileged operations, and performing common string transformations for system automation and integration tasks.
The package focuses on simplifying system interactions while providing appropriate logging and error handling for operational visibility. It includes both basic shell execution and privileged operations for system administration tasks.
Security Considerations:
All shell execution functions in this package should be used with extreme caution in production environments. They execute arbitrary shell commands and can pose significant security risks if used with untrusted input. Always validate and sanitize input before passing to these functions.
Command Injection Prevention:
The functions use bash -c for command execution, which means they are vulnerable to command injection if used with unsanitized input. Applications using these functions must implement proper input validation and consider using more secure alternatives for production use.
Logging Integration:
All functions integrate with the common logging system to provide operational visibility into command execution, including both successful operations and failures with detailed error information.
Use Cases:
- Development and testing automation
- System administration scripts
- Build and deployment processes
- File system operations and maintenance
- URL processing and normalization
Package common provides common utilities for EVE services
Index ¶
- Variables
- func AddContainerToNetwork(ctx context.Context, cli *client.Client, containerID, networkName string) error
- func AddContainerToNetworkWithClient(ctx context.Context, cli DockerClient, containerID, networkName string) error
- func ContainerExists(ctx context.Context, cli *client.Client, name string) (bool, error)
- func ContainerExistsWithClient(ctx context.Context, cli DockerClient, name string) (bool, error)
- func ContainerRemove(ctx context.Context, cli *client.Client, containerID string, force bool, ...) error
- func ContainerRename(ctx context.Context, cli *client.Client, containerID string, newName string) error
- func ContainerRun(ctx context.Context, cli *client.Client, imageTag, cName string, ...) ([]byte, error)
- func ContainerRunFromEnv(ctx context.Context, cli *client.Client, environment, imageTag, cName string, ...) error
- func ContainerRunWithClient(ctx context.Context, cli DockerClient, imageTag, cName string, ...) ([]byte, error)
- func ContainerStop(ctx context.Context, cli *client.Client, containerID string, timeout int) error
- func ContainerStopAndRemove(ctx context.Context, cli *client.Client, containerID string, stopTimeout int, ...) error
- func Containers(ctx context.Context, cli *client.Client) []containertypes.Summary
- func ContainersListToJSON(ctx context.Context, cli *client.Client) string
- func ContainersListToJSONWithClient(ctx context.Context, cli DockerClient) string
- func Containers_stop_all(ctx context.Context, cli *client.Client)
- func CopyRenameToContainer(ctx context.Context, cli *client.Client, ...) error
- func CopyToContainer(ctx context.Context, cli *client.Client, ...) error
- func CopyToVolume(copyInfo CopyToVolumeOptions) error
- func CreateAndStartContainer(ctx context.Context, cli *client.Client, config containertypes.Config, ...) error
- func CreateAndStartContainerWithClient(ctx context.Context, cli DockerClient, config containertypes.Config, ...) error
- func CreateNetwork(ctx context.Context, cli *client.Client, name string) error
- func CreateNetworkWithClient(ctx context.Context, cli DockerClient, name string) error
- func CreateVolume(ctx context.Context, cli *client.Client, name string) error
- func CreateVolumeWithClient(ctx context.Context, cli DockerClient, name string) error
- func CtxCli(socket string) (context.Context, *client.Client, error)
- func DatabaseFields(operation, table string, rowsAffected int64, duration time.Duration) map[string]interface{}
- func ErrorFields(err error, context string) map[string]interface{}
- func GetEnv(key, defaultValue string) string
- func GetEnvBool(key string, defaultValue bool) bool
- func GetEnvInt(key string, defaultValue int) int
- func HTTPFields(method, path string, statusCode int, duration time.Duration) map[string]interface{}
- func ImageAuthPull(ctx context.Context, cli *client.Client, imageTag, user, pass string)
- func ImageBuild(ctx context.Context, cli *client.Client, workDir string, dockerFile string, ...) error
- func ImagePull(ctx context.Context, cli *client.Client, imageTag string, ...) error
- func ImagePullUpstream(ctx context.Context, cli *client.Client, imageTag string)
- func ImagePullWithClient(ctx context.Context, cli DockerClient, imageTag string, opts *ImagePullOptions) error
- func ImagePush(ctx context.Context, cli *client.Client, tag, user, pass string) error
- func ImagePushWithClient(ctx context.Context, cli DockerClient, tag, user, pass string) error
- func Images(ctx context.Context, cli *client.Client) []image.Summary
- func ImagesList(ctx context.Context, cli *client.Client)
- func LogDuration(logger *ContextLogger, operation string) func()
- func LogOperation(logger *ContextLogger, operation string, fn func() error) error
- func LogPanic(logger *ContextLogger)
- func MaskSecret(secret string) string
- func Must[T any](value T, err error) T
- func MustNoError(err error)
- func NewDockerClient(socket string, sshKeyPath string) (*client.Client, error)
- func NewLogger(config LoggerConfig) *logrus.Logger
- func Ptr[T any](v T) *T
- func PtrValue[T any](ptr *T) T
- func RegistryAuth(username string, password string) string
- func ShellExecute(cmdToRun string) (string, error)
- func ShellSudoExecute(password, cmdToRun string) (string, error)
- func URLToFilePath(url string) string
- type ContainerView
- type ContextLogger
- func (cl *ContextLogger) Debug(msg string)
- func (cl *ContextLogger) Debugf(format string, args ...interface{})
- func (cl *ContextLogger) Error(msg string)
- func (cl *ContextLogger) Errorf(format string, args ...interface{})
- func (cl *ContextLogger) Fatal(msg string)
- func (cl *ContextLogger) Fatalf(format string, args ...interface{})
- func (cl *ContextLogger) Info(msg string)
- func (cl *ContextLogger) Infof(format string, args ...interface{})
- func (cl *ContextLogger) Warn(msg string)
- func (cl *ContextLogger) Warnf(format string, args ...interface{})
- func (cl *ContextLogger) WithContext(ctx context.Context) *ContextLogger
- func (cl *ContextLogger) WithError(err error) *ContextLogger
- func (cl *ContextLogger) WithField(key string, value interface{}) *ContextLogger
- func (cl *ContextLogger) WithFields(fields map[string]interface{}) *ContextLogger
- type CopyToVolumeOptions
- type DockerClient
- type FlowConfig
- type FlowConsumer
- type FlowCouchDBError
- type FlowCouchDBResponse
- type FlowProcessDocument
- type FlowProcessMessage
- type FlowProcessState
- type FlowStateChange
- type ImagePullOptions
- type LogLevel
- type LoggerConfig
- type MockDockerClient
- func (m *MockDockerClient) Close() error
- func (m *MockDockerClient) ContainerCreate(ctx context.Context, config *containertypes.Config, ...) (containertypes.CreateResponse, error)
- func (m *MockDockerClient) ContainerInspect(ctx context.Context, containerID string) (containertypes.InspectResponse, error)
- func (m *MockDockerClient) ContainerList(ctx context.Context, options containertypes.ListOptions) ([]containertypes.Summary, error)
- func (m *MockDockerClient) ContainerLogs(ctx context.Context, containerID string, options containertypes.LogsOptions) (io.ReadCloser, error)
- func (m *MockDockerClient) ContainerRemove(ctx context.Context, containerID string, options containertypes.RemoveOptions) error
- func (m *MockDockerClient) ContainerStart(ctx context.Context, containerID string, options containertypes.StartOptions) error
- func (m *MockDockerClient) ContainerStop(ctx context.Context, containerID string, options containertypes.StopOptions) error
- func (m *MockDockerClient) ContainerWait(ctx context.Context, containerID string, ...) (<-chan containertypes.WaitResponse, <-chan error)
- func (m *MockDockerClient) CopyToContainer(ctx context.Context, containerID, dstPath string, content io.Reader, ...) error
- func (m *MockDockerClient) ImageBuild(ctx context.Context, buildContext io.Reader, options build.ImageBuildOptions) (build.ImageBuildResponse, error)
- func (m *MockDockerClient) ImageList(ctx context.Context, options image.ListOptions) ([]image.Summary, error)
- func (m *MockDockerClient) ImagePull(ctx context.Context, refStr string, options image.PullOptions) (io.ReadCloser, error)
- func (m *MockDockerClient) ImagePush(ctx context.Context, image string, options image.PushOptions) (io.ReadCloser, error)
- func (m *MockDockerClient) NetworkConnect(ctx context.Context, networkID, containerID string, ...) error
- func (m *MockDockerClient) NetworkCreate(ctx context.Context, name string, options networktypes.CreateOptions) (networktypes.CreateResponse, error)
- func (m *MockDockerClient) NetworkInspect(ctx context.Context, networkID string, options networktypes.InspectOptions) (networktypes.Inspect, error)
- func (m *MockDockerClient) NetworkList(ctx context.Context, options networktypes.ListOptions) ([]networktypes.Summary, error)
- func (m *MockDockerClient) NetworkRemove(ctx context.Context, networkID string) error
- func (m *MockDockerClient) VolumeCreate(ctx context.Context, options volume.CreateOptions) (volume.Volume, error)
- func (m *MockDockerClient) VolumeList(ctx context.Context, options volume.ListOptions) (volume.ListResponse, error)
- func (m *MockDockerClient) VolumeRemove(ctx context.Context, volumeID string, force bool) error
- type OutputSplitter
- type StructuredLog
- func (sl *StructuredLog) Level(level LogLevel) *StructuredLog
- func (sl *StructuredLog) Log(msg string)
- func (sl *StructuredLog) Logf(format string, args ...interface{})
- func (sl *StructuredLog) WithError(err error) *StructuredLog
- func (sl *StructuredLog) WithField(key string, value interface{}) *StructuredLog
- func (sl *StructuredLog) WithFields(fields map[string]interface{}) *StructuredLog
Constants ¶
This section is empty.
Variables ¶
var Logger = logrus.New()
Logger provides the global logger instance for the EVE evaluation system. This logger is pre-configured with the OutputSplitter for intelligent log routing and serves as the central logging facility for all services.
Global Logger Benefits:
- Consistent logging behavior across all application components
- Centralized configuration and formatting standards
- Simplified integration with monitoring and alerting systems
- Uniform log structure for parsing and analysis tools
Default Configuration:
The logger is initialized with logrus defaults but can be customized for specific deployment environments:
- Output: directed through OutputSplitter for stream separation
- Format: configurable (text for development, JSON for production)
- Level: configurable based on environment (debug/info/warn/error)
- Fields: support for structured logging with consistent field names
Structured Logging Support:
Leverages logrus's structured logging capabilities for consistent log entry formatting with key-value pairs, making logs machine-readable and suitable for automated processing and analysis.
Configuration Examples:
// Development environment (human-readable)
Logger.SetFormatter(&logrus.TextFormatter{
	FullTimestamp: true,
	ForceColors:   true,
})
Logger.SetLevel(logrus.DebugLevel)

// Production environment (machine-readable)
Logger.SetFormatter(&logrus.JSONFormatter{})
Logger.SetLevel(logrus.InfoLevel)
Usage Patterns:
// Simple logging
Logger.Info("Service started")
Logger.Error("Database connection failed")

// Structured logging with fields
Logger.WithFields(logrus.Fields{
	"user_id": "12345",
	"action":  "login",
}).Info("User authentication successful")

// Error logging with context
Logger.WithError(err).Error("Failed to process request")
Integration with Services:
All EVE services should use this global logger instance to ensure consistent log formatting, routing, and monitoring integration. Custom loggers should only be created for specific use cases that require different output destinations or formatting.
Monitoring Integration:
The logger's output can be easily integrated with monitoring systems:
- Prometheus metrics from log parsing
- Elasticsearch for log aggregation and search
- CloudWatch or similar cloud logging services
- Custom alerting based on error log patterns
Performance Considerations:
- Structured logging has minimal overhead with proper field usage
- JSON formatting is efficient for high-volume logging
- Stream separation allows for optimized log processing pipelines
- Configurable log levels prevent debug spam in production
Thread Safety:
The logger is safe for concurrent use across multiple goroutines. Logrus handles synchronization internally, making it suitable for multi-threaded applications and concurrent request processing.
Functions ¶
func AddContainerToNetwork ¶
func AddContainerToNetwork(ctx context.Context, cli *client.Client, containerID, networkName string) error
AddContainerToNetwork connects an existing container to a Docker network. This function provides programmatic network management, enabling containers to join custom networks for service discovery and communication.
Network Connection:
- Connects container to specified network
- Uses default endpoint settings
- Preserves existing network connections
- Enables network-based service discovery
Parameters:
- ctx: Context for operation cancellation and timeout
- cli: Docker API client instance
- containerID: Container ID or name to connect
- networkName: Target network name or ID
Returns:
- error: Network connection or configuration errors
Network Requirements:
- Network must exist before connection attempt
- Container must be created (can be running or stopped)
- Network driver must support container attachment
Endpoint Configuration:
Uses default endpoint settings, which provide:
- Automatic IP allocation within the network subnet
- Default network aliases and hostname resolution
- Standard network interface configuration
Use Cases:
- Microservice architecture connectivity
- Multi-container application networking
- Service mesh and load balancer integration
- Development environment networking
Error Conditions:
- Network not found
- Container not found
- Container already connected to network
- Network driver limitations
Example Usage:
err := AddContainerToNetwork(ctx, cli, "web-server", "app-network")
func AddContainerToNetworkWithClient ¶ added in v0.0.6
func AddContainerToNetworkWithClient(ctx context.Context, cli DockerClient, containerID, networkName string) error
AddContainerToNetworkWithClient is a testable version of AddContainerToNetwork
func ContainerExists ¶
func ContainerExists(ctx context.Context, cli *client.Client, name string) (bool, error)
ContainerExists checks whether a container with the specified name exists. This function queries the Docker API to determine container existence, useful for conditional container operations and duplicate prevention.
Search Process:
- Retrieves all containers (running and stopped)
- Searches through container names for exact match
- Handles Docker's name formatting (leading slash)
- Returns boolean result for existence check
Parameters:
- ctx: Context for operation cancellation and timeout
- cli: Docker API client instance
- name: Container name to search for
Returns:
- bool: true if the container exists, false otherwise
- error: Docker API communication errors
Name Matching:
Docker prefixes container names with "/" in the API response. This function handles the prefix automatically for accurate matching.
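The slash handling described above amounts to a comparison like the following (illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// nameMatches reports whether a name returned by the Docker API
// (which carries a leading "/") matches the user-supplied name.
func nameMatches(apiName, wanted string) bool {
	return strings.TrimPrefix(apiName, "/") == wanted
}

func main() {
	fmt.Println(nameMatches("/my-app", "my-app")) // true
	fmt.Println(nameMatches("/other", "my-app"))  // false
}
```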
Container States:
Searches both running and stopped containers by using the All: true option in ListOptions, providing comprehensive existence checking.
Error Handling:
Returns an error on API communication failure so callers are notified immediately of Docker connectivity issues.
Use Cases:
- Preventing duplicate container creation
- Conditional container operations
- Container lifecycle management
- Deployment script validation
Example Usage:
exists, err := ContainerExists(ctx, cli, "my-app")
if err != nil {
	return fmt.Errorf("failed to check container: %w", err)
}
if exists {
	Logger.Info("Container already exists")
} else {
	// Create new container
}
func ContainerExistsWithClient ¶ added in v0.0.6
func ContainerExistsWithClient(ctx context.Context, cli DockerClient, name string) (bool, error)
ContainerExistsWithClient is a testable version of ContainerExists
func ContainerRemove ¶ added in v0.0.13
func ContainerRemove(ctx context.Context, cli *client.Client, containerID string, force bool, removeVolumes bool) error
ContainerRemove removes a container with options.
Parameters:
- ctx: Context for the Docker operation
- cli: Docker client
- containerID: ID or name of the container to remove
- force: Force removal even if container is running
- removeVolumes: Remove anonymous volumes associated with container
Returns:
- error: Remove operation errors (nil if successful)
Example:
ctx, cli, err := CtxCli("unix:///var/run/docker.sock")
if err != nil {
return fmt.Errorf("failed to create Docker client: %w", err)
}
defer cli.Close()
err = ContainerRemove(ctx, cli, "my-container", true, false)
func ContainerRename ¶ added in v0.0.13
func ContainerRename(ctx context.Context, cli *client.Client, containerID string, newName string) error
ContainerRename renames a running or stopped container.
This function renames a Docker container without requiring a restart. The rename operation is atomic and can be performed on running containers.
Parameters:
- ctx: Context for the Docker operation
- cli: Docker client
- containerID: ID or current name of the container to rename
- newName: New name for the container (must be unique)
Returns:
- error: Rename operation errors (nil if successful)
Example:
ctx, cli, err := CtxCli("unix:///var/run/docker.sock")
if err != nil {
return fmt.Errorf("failed to create Docker client: %w", err)
}
defer cli.Close()
err = ContainerRename(ctx, cli, "old-container-name", "new-container-name")
if err != nil {
log.Fatalf("Failed to rename container: %v", err)
}
func ContainerRun ¶
func ContainerRun(ctx context.Context, cli *client.Client, imageTag, cName string, envVars []string, remove bool) ([]byte, error)
ContainerRun creates and runs a Docker container with specified configuration. This function provides a high-level interface for container execution with automatic lifecycle management and output capture.
Container Lifecycle:
- Creates container with specified image and environment
- Starts container execution
- Waits for container completion
- Captures and returns container output
- Optionally removes container after execution
Configuration Options:
- Image: Docker image to run
- Environment variables: Runtime configuration
- Auto-removal: Cleanup after execution
- Output capture: Stdout/stderr collection
Parameters:
- ctx: Context for operation cancellation and timeout
- cli: Docker API client instance
- imageTag: Docker image to run
- cName: Container name for identification
- envVars: Environment variables (KEY=value format)
- remove: Whether to auto-remove container after execution
Returns:
- []byte: Combined stdout/stderr output from container
- error: Container creation, execution, or output capture errors
Execution Model:
- Synchronous execution (waits for container completion)
- Output is captured and returned as byte array
- Container state is monitored until completion
Error Handling:
Returns errors for graceful handling in calling code:
- Container creation failures
- Execution errors or timeouts
- Output capture issues
Use Cases:
- Batch processing and automation tasks
- Testing and validation workflows
- Data processing and transformation
- Build and deployment pipeline steps
Example Usage:
envs := []string{"ENV=production", "DEBUG=false"}
output, err := ContainerRun(ctx, cli, "myapp:latest", "task-1", envs, true)
func ContainerRunFromEnv ¶
func ContainerRunFromEnv(ctx context.Context, cli *client.Client, environment, imageTag, cName string, command []string) error
ContainerRunFromEnv creates and starts a container with environment loaded from file. This function combines environment file parsing with container creation, providing a convenient way to run containers with externalized configuration.
Configuration Workflow:
- Loads environment variables from specified file
- Creates container with loaded environment
- Starts container execution
- Returns immediately (asynchronous execution)
Environment File Integration:
- Uses parseEnvFile() to load configuration
- Supports standard Docker environment file format
- Handles comments and empty lines gracefully
- Provides error feedback for file issues
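parseEnvFile is unexported, but a parser with the behavior described above (blank lines and comments skipped, KEY=value lines collected) might look like this sketch:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseEnv reads KEY=value lines, skipping blank lines and "#" comments,
// mirroring the standard Docker env-file format.
func parseEnv(content string) []string {
	var vars []string
	sc := bufio.NewScanner(strings.NewReader(content))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		vars = append(vars, line)
	}
	return vars
}

func main() {
	content := "# production settings\nENV=production\n\nDEBUG=false\n"
	fmt.Println(parseEnv(content)) // [ENV=production DEBUG=false]
}
```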
Parameters:
- ctx: Context for operation cancellation and timeout
- cli: Docker API client instance
- environment: Base name for environment file (appends .env)
- imageTag: Docker image to run
- cName: Container name for identification
- command: Command to execute in container (can be empty)
Returns:
- error: Environment loading, container creation, or start errors
File Naming Convention:
Environment file is constructed by appending ".env" to the environment parameter. For example, "production" becomes "production.env".
Execution Model:
- Asynchronous execution (does not wait for completion)
- Container runs independently after start
- No output capture or monitoring
Error Handling:
Returns errors for:
- Environment file reading/parsing issues
- Container creation failures
- Container start failures
Use Cases:
- Environment-specific container deployment
- Configuration management with external files
- Development/staging/production environment separation
- Automated deployment with externalized configuration
Example Usage:
cmd := []string{"/app/start.sh", "--config", "/etc/app.conf"}
err := ContainerRunFromEnv(ctx, cli, "production", "myapp:latest", "web-server", cmd)
func ContainerRunWithClient ¶ added in v0.0.6
func ContainerRunWithClient(ctx context.Context, cli DockerClient, imageTag, cName string, envVars []string, remove bool) ([]byte, error)
ContainerRunWithClient is a testable version of ContainerRun
func ContainerStop ¶ added in v0.0.13
func ContainerStop(ctx context.Context, cli *client.Client, containerID string, timeout int) error
ContainerStop stops a running container with a timeout.
Parameters:
- ctx: Context for the Docker operation
- cli: Docker client
- containerID: ID or name of the container to stop
- timeout: Timeout in seconds before forcing kill (0 for immediate kill)
Returns:
- error: Stop operation errors (nil if successful or already stopped)
Example:
ctx, cli, err := CtxCli("unix:///var/run/docker.sock")
if err != nil {
return fmt.Errorf("failed to create Docker client: %w", err)
}
defer cli.Close()
err = ContainerStop(ctx, cli, "my-container", 10)
func ContainerStopAndRemove ¶ added in v0.0.13
func ContainerStopAndRemove(ctx context.Context, cli *client.Client, containerID string, stopTimeout int, removeVolumes bool) error
ContainerStopAndRemove stops and removes a container in one operation. This is a convenience function that combines stop and remove.
Parameters:
- ctx: Context for the Docker operation
- cli: Docker client
- containerID: ID or name of the container to stop and remove
- stopTimeout: Timeout in seconds before forcing kill (0 for immediate)
- removeVolumes: Remove anonymous volumes associated with container
Returns:
- error: First error encountered (nil if both operations successful)
Example:
ctx, cli, err := CtxCli("unix:///var/run/docker.sock")
if err != nil {
return fmt.Errorf("failed to create Docker client: %w", err)
}
defer cli.Close()
err = ContainerStopAndRemove(ctx, cli, "my-container", 10, false)
func Containers ¶
func Containers(ctx context.Context, cli *client.Client) []containertypes.Summary
Containers retrieves a list of all running containers from Docker Engine. This function provides access to the raw Docker API container listing with all available container metadata and status information.
Container Information:
Returns complete container summaries including:
- Container IDs and names
- Current status and state
- Image information and tags
- Network settings and port mappings
- Resource usage and limits
- Labels and metadata
Parameters:
- ctx: Context for operation cancellation and timeout
- cli: Docker API client instance
Returns:
- []containertypes.Summary: Array of container summary objects
Error Handling:
Panics on API communication failure, ensuring immediate notification of Docker connectivity issues or permission problems.
Use Cases:
- Container monitoring and status checking
- Resource utilization analysis
- Container discovery for orchestration
- Health checking and alerting systems
Performance Notes:
- Efficient API call that retrieves all containers in single request
- Consider filtering for large container environments
- Results include only running containers by default
func ContainersListToJSON ¶
func ContainersListToJSON(ctx context.Context, cli *client.Client) string
ContainersListToJSON exports container information to a JSON file. This function combines container listing and JSON serialization, providing a convenient way to export container inventory for external processing, backup, or integration with other systems.
Output Format:
Creates a JSON file containing an array of ContainerView objects with standardized formatting suitable for consumption by external tools, monitoring systems, or configuration management platforms.
File Operations:
- Generates "containers.json" in the current working directory
- Uses standard JSON marshaling for consistent formatting
- Sets file permissions to 0644 (owner read/write; group and others read)
- Overwrites existing files without confirmation
Parameters:
- ctx: Context for operation cancellation and timeout
- cli: Docker API client instance
Returns:
- string: Filename of the created JSON file ("containers.json")
Error Handling:
- JSON marshaling errors are logged via Logger.Error
- File writing errors are logged via Logger.Error
- Function continues execution despite errors (logs but doesn't panic)
Use Cases:
- Container inventory export for compliance and auditing
- Integration with external monitoring and alerting systems
- Backup of container configuration for disaster recovery
- Data feed for business intelligence and reporting tools
Integration Patterns:
- CI/CD pipeline integration for deployment tracking
- Monitoring system data ingestion
- Configuration management system synchronization
- Compliance reporting and audit trail generation
func ContainersListToJSONWithClient ¶ added in v0.0.6
func ContainersListToJSONWithClient(ctx context.Context, cli DockerClient) string
ContainersListToJSONWithClient is a testable version of ContainersListToJSON
func Containers_stop_all ¶
func Containers_stop_all(ctx context.Context, cli *client.Client)
Containers_stop_all forcefully stops all running containers on the Docker host. This function performs a bulk stop operation with immediate termination, useful for cleanup operations, testing scenarios, and emergency shutdowns.
Stopping Behavior:
- Uses zero timeout for immediate termination (no graceful shutdown)
- Processes all containers returned by Containers() function
- Stops containers sequentially with progress logging
- Does not remove containers after stopping
Use Cases:
- Development environment cleanup
- Testing infrastructure reset
- Emergency shutdown procedures
- Bulk container management operations
Parameters:
- ctx: Context for operation cancellation and timeout
- cli: Docker API client instance
Error Handling:
Panics on container stop failure, ensuring immediate notification of operational issues that prevent container shutdown.
Safety Considerations:
- Forceful stop may cause data loss in containers without proper shutdown
- Consider graceful shutdown periods for production environments
- Backup critical data before bulk stop operations
- Verify container dependencies before stopping
Logging:
Provides progress feedback during stop operations, logging container IDs and success status for monitoring and debugging purposes.
Alternative Approaches:
For production environments, consider implementing graceful shutdown with configurable timeout periods and dependency-aware stop ordering.
func CopyRenameToContainer ¶
func CopyRenameToContainer(ctx context.Context, cli *client.Client, containerID, hostFilePath, containerFilePath, containerDestPath string) error
CopyRenameToContainer transfers files from host to container with name change. This function extends CopyToContainer by allowing the copied file to have a different name in the container than on the host filesystem.
Rename Functionality:
- Copies file from hostFilePath on host
- Places the file under containerDestPath with the name given by containerFilePath
- Allows complete path and name transformation
- Maintains file content and permissions
Parameters:
- ctx: Context for operation cancellation and timeout
- cli: Docker API client instance
- containerID: Target container ID or name
- hostFilePath: Source file path on host
- containerFilePath: Desired file path/name in container
- containerDestPath: Destination directory in container
Archive Process:
Creates tar archive with custom internal naming, allowing the file to appear with a different name within the container.
Error Handling:
- Panics on archive creation failure (immediate feedback)
- Fatal logging on copy operation failure
- No graceful error return (designed for scripting scenarios)
Use Cases:
- Configuration template deployment with environment-specific names
- Binary deployment with version-specific naming
- Data file processing with standardized naming conventions
- Secret injection with specific file names
Example Usage:
CopyRenameToContainer(ctx, cli, "app-container",
"/host/config.prod.json", "config.json", "/app/config/")
Naming Strategy:
The containerFilePath parameter specifies the name (or relative path) the file should have under containerDestPath, regardless of its original name on the host.
func CopyToContainer ¶
func CopyToContainer(ctx context.Context, cli *client.Client, containerID, hostFilePath, containerDestPath string) error
CopyToContainer transfers files from host to a running Docker container. This function provides a high-level interface for copying files into containers using Docker's tar-based copy API.
Copy Process:
- Creates tar archive from host file/directory
- Transfers archive to container via Docker API
- Extracts archive contents at specified container path
- Preserves file permissions and ownership
Parameters:
- ctx: Context for operation cancellation and timeout
- cli: Docker API client instance
- containerID: Target container ID or name
- hostFilePath: Source path on host filesystem
- containerDestPath: Destination directory in container
Returns:
- error: Archive creation, transfer, or extraction errors
Path Handling:
- hostFilePath can be file or directory
- containerDestPath should be destination directory
- Preserves relative directory structure for directory copies
- Files maintain original names within destination
Copy Options:
- AllowOverwriteDirWithFile: false (prevents file/dir conflicts)
- CopyUIDGID: true (preserves ownership information)
Container Requirements:
- Container must be created (running or stopped)
- Destination path must be valid container path
- Container filesystem must be writable
Error Conditions:
- Source path not found or inaccessible
- Container not found or inaccessible
- Insufficient permissions in container
- Archive creation or transfer failures
Use Cases:
- Configuration file deployment
- Application code updates
- Data import and processing
- Build artifact transfer
Example Usage:
err := CopyToContainer(ctx, cli, "web-server", "/local/config.json", "/app/config/")
func CopyToVolume ¶
func CopyToVolume(copyInfo CopyToVolumeOptions) error
CopyToVolume copies files from host to a Docker volume using a temporary container. This function provides a robust method for populating Docker volumes with host data using rsync for reliable file synchronization.
Copy Process:
- Pulls specified base image (with rsync capability)
- Creates target volume if it doesn't exist
- Creates temporary container with host and volume mounts
- Executes rsync to synchronize files from host to volume
- Waits for operation completion and cleanup
Parameters:
- copyInfo: CopyToVolumeOptions struct containing all operation parameters
Returns:
- error: Image pull, volume creation, container operations, or sync errors
Volume Operations:
- Automatically creates volume if it doesn't exist
- Preserves existing volume data (rsync merge behavior)
- Maintains file permissions and ownership
- Supports incremental updates
Container Strategy:
Uses a temporary container with a unique UUID name for isolation:
- Mounts host path as read-only source (/src)
- Mounts target volume as destination (/tgt)
- Executes package updates and rsync installation
- Performs file synchronization with progress output
- Auto-removes container after completion
Rsync Features:
- Archive mode (-a) preserves permissions, ownership, and timestamps
- Provides progress feedback during transfer (-P, -v) and compresses data in transit (-z)
- Handles large file sets efficiently
- Supports incremental synchronization
Error Conditions:
- Base image pull failures
- Volume creation issues
- Container creation or execution problems
- Rsync execution failures
- Mount path accessibility issues
Use Cases:
- Database initialization with seed data
- Static asset deployment to volumes
- Configuration file distribution
- Backup restoration to volumes
- Development environment data setup
Example Usage:
opts := CopyToVolumeOptions{
Ctx: ctx,
Client: cli,
Image: "ubuntu:latest",
Volume: "app-data",
LocalPath: "/host/data",
VolumePath: "/data",
}
err := CopyToVolume(opts)
func CreateAndStartContainer ¶
func CreateAndStartContainer(ctx context.Context, cli *client.Client, config containertypes.Config, hostConfig containertypes.HostConfig, name, networkName string) error
CreateAndStartContainer creates a new container with network configuration and starts it. This function combines container creation with network attachment and startup, providing a convenient interface for deploying networked containers.
Deployment Process:
- Creates container with specified configuration
- Connects container to specified network during creation
- Starts container execution immediately
- Returns after successful startup
Parameters:
- ctx: Context for operation cancellation and timeout
- cli: Docker API client instance
- config: Container configuration (image, environment, etc.)
- hostConfig: Host-specific settings (mounts, ports, etc.)
- name: Container name for identification
- networkName: Network to connect container to
Returns:
- error: Container creation, network connection, or startup errors
Network Integration:
The container is connected to the specified network during creation, enabling immediate network communication and service discovery.
Configuration Support:
- Accepts full containertypes.Config for maximum flexibility
- Supports all HostConfig options (volumes, ports, resources)
- Handles network configuration automatically
Error Handling:
Returns errors for graceful handling:
- Container creation failures
- Network connection issues
- Container startup problems
Use Cases:
- Microservice deployment with service discovery
- Multi-container application orchestration
- Development environment setup
- CI/CD pipeline container deployment
Example Usage:
config := containertypes.Config{
Image: "nginx:latest",
ExposedPorts: nat.PortSet{"80/tcp": struct{}{}},
}
hostConfig := containertypes.HostConfig{
PortBindings: nat.PortMap{"80/tcp": []nat.PortBinding{{HostPort: "8080"}}},
}
err := CreateAndStartContainer(ctx, cli, config, hostConfig, "web-server", "app-network")
func CreateAndStartContainerWithClient ¶ added in v0.0.6
func CreateAndStartContainerWithClient(ctx context.Context, cli DockerClient, config containertypes.Config, hostConfig containertypes.HostConfig, name, networkName string) error
CreateAndStartContainerWithClient is a testable version of CreateAndStartContainer
func CreateNetwork ¶ added in v0.0.5
CreateNetwork creates a custom Docker bridge network for service communication. This function establishes isolated network environments for multi-container applications, enabling secure service-to-service communication and network segmentation.
Network Architecture:
Bridge networks provide several networking features:
- Automatic service discovery through container names
- Network isolation from other applications
- Custom IP address management and allocation
- Port exposure control and security
- DNS resolution for container communication
Parameters:
- ctx: Context for operation cancellation and timeout control
- cli: Docker client for API communication
- name: Network name for identification and container attachment
Returns:
- error: Network creation failures, name conflicts, or Docker API errors
Bridge Network Benefits:
- Container-to-container communication using container names
- Network isolation from host and other Docker networks
- Automatic IP address assignment and management
- Built-in DNS resolution for service discovery
- Port mapping control for external access
Service Communication:
Containers on the same bridge network can communicate using:
- Container names as hostnames for DNS resolution
- Internal ports without explicit mapping
- Automatic load balancing for replicated services
- Secure communication without host network exposure
Security Features:
- Network-level isolation between different applications
- Controlled ingress and egress traffic flow
- No external access unless explicitly configured
- Integration with Docker's security policies
- Support for encrypted overlay networks in swarm mode
Error Conditions:
- Network name conflicts with existing networks
- Docker daemon connectivity or configuration issues
- Network driver failures or misconfigurations
- IP address range conflicts with existing networks
- Insufficient system resources for network creation
Example Usage:
ctx := context.Background()
cli, err := client.NewClientWithOpts(client.FromEnv)
if err != nil {
log.Fatal("Failed to create Docker client:", err)
}
defer cli.Close()
err = CreateNetwork(ctx, cli, "app_network")
if err != nil {
log.Printf("Network creation failed: %v", err)
return
}
log.Println("Network created successfully")
Network Naming:
- Use descriptive names that indicate application or purpose
- Include environment indicators (dev, staging, prod)
- Follow consistent naming conventions across deployments
- Consider service or application prefixes
Multi-Service Applications:
Bridge networks enable complex application architectures:
- Web applications communicating with databases
- Microservice architectures with service discovery
- API gateways routing to backend services
- Message queues connecting distributed components
Monitoring and Troubleshooting:
- Use docker network inspect for configuration details
- Monitor network connectivity between containers
- Implement health checks for service availability
- Log network-related errors and connectivity issues
- Consider network performance monitoring for optimization
func CreateNetworkWithClient ¶ added in v0.0.6
func CreateNetworkWithClient(ctx context.Context, cli DockerClient, name string) error
CreateNetworkWithClient is a testable version of CreateNetwork
func CreateVolume ¶ added in v0.0.5
CreateVolume creates a named Docker volume for persistent data storage. This function establishes persistent storage that survives container lifecycle events, enabling data persistence, sharing between containers, and backup capabilities.
Volume Management:
Docker volumes provide persistent storage with several advantages:
- Data persistence across container restarts and deletions
- Safe data sharing between multiple containers
- Volumes managed independently from container lifecycle
- Better performance than bind mounts on some systems
- Easy backup and migration strategies
Parameters:
- ctx: Context for operation cancellation and timeout control
- cli: Docker client for API communication
- name: Volume name for identification and mounting
Returns:
- error: Volume creation failures, name conflicts, or Docker API errors
Storage Drivers:
Docker uses various storage drivers based on the host system:
- overlay2: Default on most Linux systems
- aufs: Legacy driver for older systems
- btrfs: For systems with Btrfs filesystem
- zfs: For systems with ZFS filesystem
- devicemapper: For systems requiring direct block storage
Volume Lifecycle:
Volumes exist independently of containers:
1. Created explicitly (this function) or implicitly with containers
2. Mounted to one or more containers for data access
3. Data persists when containers are stopped or removed
4. Explicitly removed when no longer needed
Error Conditions:
- Volume name conflicts with existing volumes
- Docker daemon connectivity or configuration issues
- Insufficient disk space for volume creation
- Storage driver failures or misconfigurations
- Permission issues with volume creation
Example Usage:
ctx := context.Background()
cli, err := client.NewClientWithOpts(client.FromEnv)
if err != nil {
log.Fatal("Failed to create Docker client:", err)
}
defer cli.Close()
err = CreateVolume(ctx, cli, "postgres_data")
if err != nil {
log.Printf("Volume creation failed: %v", err)
return
}
log.Println("Volume created successfully")
Volume Naming:
- Use descriptive names that indicate data purpose
- Include application or service identifiers
- Consider environment prefixes (dev, staging, prod)
- Follow consistent naming conventions
Data Persistence:
Volumes provide reliable data persistence for:
- Database storage and transaction logs
- Application configuration and state
- User-generated content and uploads
- Build artifacts and caches
- Log files and audit trails
Volume Sharing:
Multiple containers can mount the same volume:
- Read-write access for data collaboration
- Read-only mounts for shared configuration
- Concurrent access with proper locking mechanisms
- Data sharing between application tiers
Backup Strategies:
- Mount volumes to backup containers for data export
- Use docker cp to extract data from volumes
- Implement scheduled backup jobs with volume mounting
- Consider volume snapshots for quick recovery
- Test restoration procedures regularly
Performance Considerations:
- Volumes generally faster than bind mounts
- Performance varies by storage driver and backend
- Monitor disk I/O and implement optimization
- Consider storage backend selection for workloads
- Plan capacity and implement monitoring
Security:
- Volumes inherit host filesystem permissions
- Implement access controls through container users
- Encrypt sensitive data at rest
- Regular security scanning of volume contents
- Implement audit logging for volume access
Best Practices:
- Create volumes before starting dependent containers
- Use meaningful names that reflect the data purpose
- Document volume purposes and data retention policies
- Implement backup strategies for critical data volumes
- Monitor volume usage and implement cleanup procedures
func CreateVolumeWithClient ¶ added in v0.0.6
func CreateVolumeWithClient(ctx context.Context, cli DockerClient, name string) error
CreateVolumeWithClient is a testable version of CreateVolume
func CtxCli ¶
CtxCli creates a Docker client context and API client for Docker operations. This function establishes the connection to Docker Engine and configures the client with appropriate headers and API version settings.
Docker Socket Configuration:
Supports various Docker socket configurations including:
- Local Unix socket: "unix:///var/run/docker.sock"
- Remote TCP socket: "tcp://remote-host:2376"
- Named pipes on Windows: "npipe:////./pipe/docker_engine"
API Version:
Fixed to API version 1.49 for consistent behavior across Docker versions. This version provides compatibility with recent Docker Engine releases while maintaining stability for containerized applications.
Headers Configuration:
Sets default Content-Type header for tar-based operations, which are commonly used for file transfers and context uploads.
Parameters:
- socket: Docker socket URI (local or remote)
Returns:
- context.Context: Background context for Docker operations
- *client.Client: Configured Docker API client
Error Handling:
Panics on client creation failure, ensuring immediate feedback for connection issues or invalid socket configurations.
Example Usage:
ctx, cli := CtxCli("unix:///var/run/docker.sock")
defer cli.Close()
Connection Patterns:
- Local development: Use default Unix socket
- Remote Docker: Use TCP with TLS configuration
- Docker Desktop: Use platform-specific defaults
- CI/CD environments: Use environment-provided socket
func DatabaseFields ¶ added in v0.0.28
func DatabaseFields(operation, table string, rowsAffected int64, duration time.Duration) map[string]interface{}
DatabaseFields returns standard fields for database operation logging
func ErrorFields ¶ added in v0.0.28
ErrorFields returns standard fields for error logging
func GetEnv ¶ added in v0.0.29
GetEnv retrieves an environment variable with a fallback default value
func GetEnvBool ¶ added in v0.0.29
GetEnvBool retrieves a boolean environment variable with a fallback default. Accepts "true", "1", "yes", "on" for true and "false", "0", "no", "off" for false.
func GetEnvInt ¶ added in v0.0.29
GetEnvInt retrieves an integer environment variable with a fallback default
func HTTPFields ¶ added in v0.0.28
HTTPFields returns standard fields for HTTP logging
func ImageAuthPull ¶
ImageAuthPull downloads a Docker image from a private registry with authentication. Deprecated: Use ImagePull(ctx, cli, imageTag, &ImagePullOptions{Username: user, Password: pass}) instead. This function is maintained for backward compatibility.
func ImageBuild ¶
func ImageBuild(ctx context.Context, cli *client.Client, workDir string, dockerFile string, tag string) error
ImageBuild constructs a Docker image from a Dockerfile and build context. This function provides programmatic access to Docker's image building capabilities with support for custom build contexts and configurations.
Build Process:
- Creates tar archive from specified working directory
- Uploads build context to Docker Engine
- Executes Dockerfile instructions in isolated environment
- Streams build output for progress monitoring
- Tags resulting image with specified name
Parameters:
- ctx: Context for operation cancellation and timeout
- cli: Docker API client instance
- workDir: Directory containing Dockerfile and build context
- dockerFile: Dockerfile name (relative to workDir)
- tag: Tag name for the resulting image
Build Context:
- All files in workDir are included in build context
- Large directories may slow build process
- Use .dockerignore to exclude unnecessary files
- Context is compressed and uploaded to Docker Engine
Dockerfile Processing:
- Supports all standard Dockerfile instructions
- Build cache utilization for layer reuse
- Multi-stage builds for optimized images
- Build arguments and environment variables
Error Handling:
- Fatal logging on build failure or I/O issues
- Build errors are streamed to stdout for debugging
Output Streaming:
Build progress, including layer creation and instruction execution, is streamed to stdout for real-time monitoring and debugging.
Optimization Notes:
- Use .dockerignore to minimize build context size
- Order Dockerfile instructions for optimal cache utilization
- Consider multi-stage builds for production images
Example Usage:
ImageBuild(ctx, cli, "/path/to/project", "Dockerfile", "myapp:latest")
func ImagePull ¶
func ImagePull(ctx context.Context, cli *client.Client, imageTag string, opts *ImagePullOptions) error
ImagePull downloads a Docker image from a registry with flexible configuration options. This consolidated function handles all image pulling scenarios: public, private, silent, and with custom options.
Authentication:
- For public registries: leave Username and Password empty
- For private registries: provide Username and Password
- Supports Docker Hub, ACR, ECR, GCR, and self-hosted registries
Parameters:
- ctx: Context for operation cancellation and timeout
- cli: Docker API client instance
- imageTag: Image reference (repository:tag format)
- opts: ImagePullOptions struct (can be nil for defaults)
Image Reference Formats:
- Short names: "nginx:latest" (resolves to docker.io/library/nginx:latest)
- Fully qualified: "docker.io/library/nginx:latest"
- Private registry: "myregistry.com/myimage:v1.0"
Error Handling:
- Returns error for graceful handling (does not panic)
- Caller is responsible for error handling
Example Usage:
// Public image
err := ImagePull(ctx, cli, "nginx:latest", nil)
// Private image with auth
err := ImagePull(ctx, cli, "myregistry.com/app:v1", &ImagePullOptions{
Username: "user",
Password: "pass",
})
// Silent pull (no output)
err := ImagePull(ctx, cli, "nginx:latest", &ImagePullOptions{Silent: true})
// Custom platform
err := ImagePull(ctx, cli, "nginx:latest", &ImagePullOptions{Platform: "linux/arm64"})
func ImagePullUpstream ¶
ImagePullUpstream downloads a Docker image from public registries without authentication. Deprecated: Use ImagePull(ctx, cli, imageTag, nil) instead. This function is maintained for backward compatibility.
func ImagePullWithClient ¶ added in v0.0.6
func ImagePullWithClient(ctx context.Context, cli DockerClient, imageTag string, opts *ImagePullOptions) error
ImagePullWithClient is a testable version of ImagePull
func ImagePush ¶
ImagePush uploads a Docker image to a registry with authentication. This function handles the complete image push process including authentication, upload progress monitoring, and completion confirmation.
Push Process:
- Authenticates with target registry using provided credentials
- Uploads image layers and manifest to registry
- Streams upload progress for monitoring
- Confirms successful completion
Parameters:
- ctx: Context for operation cancellation and timeout
- cli: Docker API client instance
- tag: Image tag to push (must include registry if not Docker Hub)
- user: Registry username or service account
- pass: Registry password, token, or access key
Registry Requirements:
- Image must be tagged with registry prefix for non-Docker Hub registries
- Example: "myregistry.com/namespace/image:tag"
- Registry must support Docker Registry API v2
Authentication Support:
- Basic authentication (username/password)
- Token-based authentication for cloud registries
- Service account authentication for enterprise environments
Upload Process:
- Only uploads layers not already present in registry (deduplication)
- Progress feedback shows upload status for each layer
- Manifest upload finalizes the image push
Error Handling:
- Fatal logging on push initiation or authentication failure
- I/O errors during upload are reported via fatal logging
Progress Monitoring:
Upload progress is streamed to stdout, showing layer upload status and completion percentages for monitoring purposes.
Example Usage:
ImagePush(ctx, cli, "myregistry.com/myapp:v1.0", "username", "password")
func ImagePushWithClient ¶ added in v0.0.6
func ImagePushWithClient(ctx context.Context, cli DockerClient, tag, user, pass string) error
ImagePushWithClient is a testable version of ImagePush
func Images ¶
Images retrieves a list of all Docker images available on the local Docker host. This function provides access to the complete image inventory including both tagged and untagged images with comprehensive metadata.
Image Information:
Returns detailed image summaries containing:
- Image IDs and repository tags
- Image size and virtual size information
- Creation timestamps and metadata
- Parent image relationships
- Labels and configuration data
Parameters:
- ctx: Context for operation cancellation and timeout
- cli: Docker API client instance
Returns:
- []image.Summary: Array of image summary objects from Docker API
Error Handling:
Panics on API communication failure, providing immediate feedback for Docker connectivity issues or permission problems.
Use Cases:
- Image inventory and disk usage analysis
- Security scanning and vulnerability assessment
- Image lifecycle management and cleanup
- Build system integration and optimization
Performance Considerations:
- Single API call retrieves all local images efficiently
- Large image inventories may require pagination in future versions
- Consider filtering options for specific use cases
func ImagesList ¶
ImagesList displays information about all Docker images on the local host. This function provides a simple way to inspect the local image inventory with logging output suitable for debugging and administrative tasks.
Display Format:
Logs each image with:
- Image ID for unique identification
- Repository tags for version tracking
- Human-readable output via Logger.Info
Parameters:
- ctx: Context for operation cancellation and timeout
- cli: Docker API client instance
Output Example:
sha256:abc123... [nginx:latest nginx:1.21]
sha256:def456... [alpine:3.14]
sha256:ghi789... [<none>:<none>]
Use Cases:
- Quick image inventory inspection
- Debugging image availability issues
- Administrative image management tasks
- Development environment verification
func LogDuration ¶ added in v0.0.28
func LogDuration(logger *ContextLogger, operation string) func()
LogDuration logs the duration of an operation
func LogOperation ¶ added in v0.0.28
func LogOperation(logger *ContextLogger, operation string, fn func() error) error
LogOperation logs the start and end of an operation with timing
func LogPanic ¶ added in v0.0.28
func LogPanic(logger *ContextLogger)
LogPanic recovers from panics and logs them
func MaskSecret ¶ added in v0.0.29
MaskSecret masks sensitive strings for safe logging. Shows the first 4 and last 4 characters for strings longer than 8 characters. Returns "***" for short strings and "<not set>" for empty strings.
Example:
MaskSecret("") // "<not set>"
MaskSecret("short") // "***"
MaskSecret("myverylongsecretkey123") // "myve...y123"
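The documented rules can be reproduced in a few lines. The lowercase `maskSecret` below is a sketch of the behavior, not the package's exact source (in particular, treating exactly-8-character strings as "short" is an assumption consistent with "longer than 8 chars"):

```go
package main

import "fmt"

// maskSecret applies the documented masking rules.
func maskSecret(s string) string {
	switch {
	case s == "":
		return "<not set>"
	case len(s) <= 8:
		return "***"
	default:
		return s[:4] + "..." + s[len(s)-4:]
	}
}

func main() {
	fmt.Println(maskSecret(""))                       // <not set>
	fmt.Println(maskSecret("short"))                  // ***
	fmt.Println(maskSecret("myverylongsecretkey123")) // myve...y123
}
```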
func Must ¶ added in v0.0.29
Must panics if err is not nil; otherwise it returns value. Useful for initialization code that should fail fast.
Example:
config := common.Must(loadConfig())
func MustNoError ¶ added in v0.0.29
func MustNoError(err error)
MustNoError panics if err is not nil. Useful for initialization code that should fail fast.
Example:
common.MustNoError(db.Init())
func NewDockerClient ¶ added in v0.0.13
NewDockerClient creates a Docker client with support for SSH tunnels. This function handles both local (unix://, tcp://) and remote (ssh://) Docker connections.
Socket Format:
- unix:///var/run/docker.sock - Local Unix socket
- tcp://host:2375 - TCP connection
- ssh://user@host[:port] - SSH tunnel (requires sshKeyPath)
For SSH connections, the client uses an SSH tunnel to forward Docker API requests to the remote host's Docker socket.
Parameters:
- socket: Docker socket URL (unix://, tcp://, or ssh://)
- sshKeyPath: Path to SSH private key (required for ssh:// URLs, ignored otherwise)
Returns:
- *client.Client: Configured Docker client
- error: If client creation fails
Example:
// Local connection
cli, err := NewDockerClient("unix:///var/run/docker.sock", "")
if err != nil {
return fmt.Errorf("failed to create Docker client: %w", err)
}
defer cli.Close()
// SSH connection
cli, err := NewDockerClient("ssh://[email protected]", "/home/user/.ssh/id_rsa")
if err != nil {
return fmt.Errorf("failed to create Docker client: %w", err)
}
defer cli.Close()
func NewLogger ¶ added in v0.0.28
func NewLogger(config LoggerConfig) *logrus.Logger
NewLogger creates a new configured logger instance
func Ptr ¶ added in v0.0.29
func Ptr[T any](v T) *T
Ptr returns a pointer to the given value. Useful for initializing pointer fields in structs.
Example:
config := Config{
Enabled: common.Ptr(true),
Count: common.Ptr(42),
}
func PtrValue ¶ added in v0.0.29
func PtrValue[T any](ptr *T) T
PtrValue returns the value of a pointer, or the zero value if nil
Example:
value := common.PtrValue(config.Timeout) // Returns 0 if Timeout is nil
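The pair is a standard generics idiom. The lowercase `ptr` and `ptrValue` below sketch the documented behavior (the `config` struct is purely illustrative):

```go
package main

import "fmt"

// ptr returns a pointer to a copy of v.
func ptr[T any](v T) *T { return &v }

// ptrValue dereferences p, returning T's zero value when p is nil.
func ptrValue[T any](p *T) T {
	if p == nil {
		var zero T
		return zero
	}
	return *p
}

// config is a hypothetical struct with optional (pointer) fields.
type config struct {
	Enabled *bool
	Timeout *int
}

func main() {
	c := config{Enabled: ptr(true)} // Timeout left nil
	fmt.Println(ptrValue(c.Enabled), ptrValue(c.Timeout)) // true 0
}
```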
func RegistryAuth ¶
RegistryAuth creates a base64-encoded authentication string for Docker registry operations. This function formats authentication credentials in the format expected by the Docker API for registry authentication during image pull and push operations.
Authentication Flow:
The Docker API requires authentication credentials to be base64-encoded JSON containing username and password. This function handles the encoding process and returns the properly formatted authentication string.
Supported Registries:
- Docker Hub (docker.io)
- Private Docker registries
- Cloud provider registries (Azure ACR, AWS ECR, Google GCR)
- Self-hosted registries (Harbor, Nexus, etc.)
Parameters:
- username: Registry username or service account
- password: Registry password, token, or service account key
Returns:
- string: Base64-encoded authentication string for Docker API
Security Considerations:
- Credentials are temporarily held in memory during encoding
- The returned string contains sensitive authentication data
- Use secure credential storage (environment variables, secrets management)
- Avoid logging or persisting the authentication string
Example Usage:
auth := RegistryAuth("myuser", "mypassword")
// Use auth string in ImagePull or ImagePush operations
Error Handling:
Panics on JSON marshaling failure (should never occur with simple credentials)
func ShellExecute ¶
ShellExecute runs a shell command and returns output or error. This function provides a simple interface for executing shell commands with automatic output capture and error handling.
Execution Process:
- Creates a bash subprocess with the provided command
- Captures both stdout and stderr output
- Executes the command and waits for completion
- Returns output and error for caller to handle
Output Handling:
- Successful execution: Returns stdout output
- Failed execution: Returns error with stderr details
- Both stdout and stderr are captured for complete command output
Parameters:
- cmdToRun: Shell command string to execute (passed to bash -c)
Returns:
- string: stdout output from the command
- error: error with stderr details if command fails, nil on success
Error Handling:
The function returns errors instead of terminating, allowing callers to handle failures appropriately for their use case.
Security Warnings:
CRITICAL: This function is vulnerable to command injection attacks.
- Never pass unsanitized user input to this function
- Validate all input parameters before execution
- Consider using exec.Command with separate arguments for safer execution
- Avoid using this function in web applications or with external input
Shell Environment:
Commands are executed in a bash shell environment with:
- Full shell command syntax support (pipes, redirects, variables)
- Access to environment variables and PATH
- Working directory of the calling process
- Standard shell expansion and globbing
Example Usage:
// Safe usage with controlled input
ShellExecute("ls -la /tmp")
ShellExecute("docker ps --format table")
// DANGEROUS - vulnerable to injection
userInput := getUserInput() // Could contain "; rm -rf /"
ShellExecute(userInput) // DO NOT DO THIS
Best Practices:
- Use only with trusted, validated input
- Consider exec.Command for safer execution
- Implement timeout mechanisms for long-running commands
- Use in development and automation contexts, not production APIs
Performance Considerations:
- Each call creates a new bash process (overhead for frequent use)
- Output is buffered in memory (consider streaming for large outputs)
- Synchronous execution blocks until command completion
Alternative Approaches:
For production use, consider:
- exec.Command with separate arguments
- Restricted command execution with allowlists
- Sandboxed execution environments
- Process isolation and resource limits
func ShellSudoExecute ¶
ShellSudoExecute runs a shell command with sudo privileges using password authentication. This function enables execution of privileged commands by automatically providing the sudo password through stdin, useful for automation tasks requiring elevated privileges.
Sudo Authentication Process:
- Constructs a command that pipes the password to sudo -S
- Uses sudo -S flag to read password from stdin
- Executes the privileged command with elevated permissions
- Relies on ShellExecute for actual command execution and logging
Parameters:
- password: Sudo password for the current user
- cmdToRun: Shell command to execute with sudo privileges
Security Risks:
EXTREME CAUTION REQUIRED: This function has severe security implications:
- Passwords are passed as command line arguments (visible in process lists)
- Command injection vulnerabilities apply to both password and command
- Password may be logged or stored in shell history
- Process monitoring tools can capture password in plain text
Security Recommendations:
- Never use this function in production environments
- Use only in isolated development or testing scenarios
- Consider passwordless sudo configuration for automation
- Use dedicated service accounts with specific sudo permissions
- Implement secure credential management instead of hardcoded passwords
Alternative Approaches:
Safer alternatives for privileged operations:
- Configure passwordless sudo for specific commands
- Use dedicated service accounts with limited privileges
- Implement proper credential management systems
- Use container-based isolation instead of sudo
- Deploy applications with appropriate user permissions
Command Construction:
The function builds a command in the format:

echo <password> | sudo -S <command>

This approach has inherent security vulnerabilities and should be replaced with more secure authentication mechanisms.
Example Usage:
// DANGEROUS - for development/testing only
ShellSudoExecute("mypassword", "apt update")
ShellSudoExecute("mypassword", "systemctl restart nginx")
Process Visibility Warning:
The constructed command line, including the password, may be visible to:
- Process monitoring tools (ps, top, htop)
- System administrators with process access
- Log files and audit systems
- Other users on multi-user systems
Recommended Replacements:
Instead of this function, consider:
- Sudo configuration: user ALL=(ALL) NOPASSWD: /specific/command
- Service accounts with appropriate permissions
- Container-based privilege isolation
- SSH key-based authentication for remote operations
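If a password must be supplied at all, piping it through the process's stdin avoids the command-line exposure described above. The helper below (`runWithStdin` is a hypothetical name, not part of this package) is demonstrated with `cat` since sudo needs a real password; with sudo it would be invoked as `runWithStdin(password+"\n", "sudo", "-S", "systemctl", "restart", "nginx")`, and passwordless sudo or a service account remains preferable:

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// runWithStdin pipes data to a command's standard input instead of placing
// it on the command line, so process-listing tools never see it.
func runWithStdin(stdin, name string, args ...string) (string, error) {
	var out bytes.Buffer
	cmd := exec.Command(name, args...)
	cmd.Stdin = strings.NewReader(stdin)
	cmd.Stdout = &out
	cmd.Stderr = &out
	err := cmd.Run()
	return out.String(), err
}

func main() {
	// Demonstrated with cat, which simply echoes its stdin.
	out, _ := runWithStdin("secret\n", "cat")
	fmt.Print(out)
}
```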
func URLToFilePath ¶
URLToFilePath converts a URL to a filesystem-safe filename. This function transforms URLs into valid filenames by removing protocol prefixes and replacing path separators with underscores, useful for caching, logging, and file storage based on URL identifiers.
Transformation Process:
- Removes "https://" prefix if present
- Removes "http://" prefix if present
- Replaces all forward slashes (/) with underscores (_)
- Returns the transformed string suitable for filesystem use
Parameters:
- url: URL string to convert to filesystem-safe format
Returns:
- string: Filesystem-safe filename derived from the URL
Transformation Examples:
"https://example.com/path/to/resource" → "example.com_path_to_resource"
"http://api.service.com/v1/users" → "api.service.com_v1_users"
"example.com/docs/guide.html" → "example.com_docs_guide.html"
"ftp://files.example.com/data" → "ftp:__files.example.com_data"
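A minimal Go sketch consistent with the transformation rules above (the package's actual implementation may differ in detail):

```go
package main

import (
	"fmt"
	"strings"
)

// urlToFilePath strips an https:// or http:// prefix, then replaces every
// "/" with "_". Other protocols (e.g. ftp://) are left untouched, so their
// "//" becomes "__", matching the examples above.
func urlToFilePath(url string) string {
	url = strings.TrimPrefix(url, "https://")
	url = strings.TrimPrefix(url, "http://")
	return strings.ReplaceAll(url, "/", "_")
}

func main() {
	fmt.Println(urlToFilePath("https://example.com/path/to/resource")) // example.com_path_to_resource
	fmt.Println(urlToFilePath("ftp://files.example.com/data"))         // ftp:__files.example.com_data
}
```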
Use Cases:
- Cache file naming based on URL
- Log file organization by endpoint
- Temporary file creation for URL-based downloads
- Configuration file naming for URL-specific settings
Filename Safety:
The function handles common URL-to-filename conversion needs but may not address all filesystem restrictions:
- Converts forward slashes to underscores
- Preserves other characters (including potentially problematic ones)
- Does not handle maximum filename length restrictions
- May not be safe for all filesystems (Windows reserved names, etc.)
Limitations:
- Only handles HTTP/HTTPS protocols explicitly
- Other protocols (ftp, ssh, etc.) are not processed
- Does not handle URL parameters or fragments
- May create long filenames that exceed filesystem limits
- Does not sanitize other potentially problematic characters
Enhanced Safety Considerations:
For production use, consider additional transformations:
- Replace or remove other special characters (:, ?, &, etc.)
- Implement filename length limits
- Handle Windows reserved filenames (CON, PRN, AUX, etc.)
- Add file extension based on content type
- Hash long URLs to prevent filesystem limitations
Example Usage:
// Basic URL to filename conversion
filename := URLToFilePath("https://api.example.com/v1/data")
// Result: "api.example.com_v1_data"
// Use for cache file creation
cacheFile := "/tmp/cache_" + URLToFilePath(apiURL) + ".json"
// Log file naming
logFile := URLToFilePath(endpoint) + ".log"
Alternative Implementations:
For more robust URL-to-filename conversion:
- Use URL parsing libraries for component extraction
- Implement comprehensive character sanitization
- Add hash-based truncation for long URLs
- Include file extension detection based on URL analysis
Thread Safety:
This function is safe for concurrent use as it only performs string operations without modifying shared state or resources.
Types ¶
type ContainerView ¶
type ContainerView struct {
ID string `json:"id"` // Docker container ID
Name string `json:"name"` // Container name
Status string `json:"status"` // Container status
Host string `json:"host"` // Docker host identifier
}
ContainerView represents a simplified view of a Docker container for API responses. This structure provides essential container information in a format suitable for JSON serialization and client consumption.
Fields:
- ID: Container's unique identifier (full Docker ID)
- Name: Human-readable container name
- Status: Current container status (running, stopped, etc.)
- Host: Hostname of the Docker host running the container
Use Cases:
- REST API responses for container listing
- Dashboard and monitoring interfaces
- Container inventory and tracking systems
- Multi-host container management
func ContainersList ¶
func ContainersList(ctx context.Context, cli *client.Client) []ContainerView
ContainersList converts raw container data into a simplified view format. This function transforms Docker's detailed container summaries into a more consumable format suitable for APIs, dashboards, and client applications.
Data Transformation:
- Extracts essential container information
- Adds host identification for multi-host environments
- Simplifies complex Docker metadata into basic fields
- Provides consistent JSON serialization format
Host Information:
Automatically detects and includes the local hostname, enabling container tracking across distributed Docker environments and multi-host orchestration scenarios.
Parameters:
- ctx: Context for operation cancellation and timeout
- cli: Docker API client instance
Returns:
- []ContainerView: Array of simplified container view objects
Data Structure:
Each ContainerView includes:
- ID: Full Docker container identifier
- Name: Primary container name (first from names array)
- Status: Current container status string
- Host: Local hostname for multi-host tracking
Use Cases:
- REST API responses for container management interfaces
- Dashboard data feeds for monitoring systems
- Container inventory for orchestration platforms
- Simplified container tracking across environments
Implementation Notes:
- Uses the first name from container.Names array
- Hostname detection uses os.Hostname() for local identification
- Results are suitable for JSON marshaling and API responses
func ContainersListWithClient ¶ added in v0.0.6
func ContainersListWithClient(ctx context.Context, cli DockerClient) []ContainerView
ContainersListWithClient is a testable version of ContainersList
type ContextLogger ¶ added in v0.0.28
type ContextLogger struct {
// contains filtered or unexported fields
}
ContextLogger provides context-aware logging utilities
func NewContextLogger ¶ added in v0.0.28
func NewContextLogger(logger *logrus.Logger, fields map[string]interface{}) *ContextLogger
NewContextLogger creates a new context-aware logger with base fields
func RequestLogger ¶ added in v0.0.28
func RequestLogger(serviceName, method, path, requestID string) *ContextLogger
RequestLogger creates a logger for HTTP request tracking
func ServiceLogger ¶ added in v0.0.28
func ServiceLogger(serviceName, serviceVersion string) *ContextLogger
ServiceLogger creates a logger pre-configured with service metadata. It automatically includes the EVE framework version for debugging purposes.
func (*ContextLogger) Debug ¶ added in v0.0.28
func (cl *ContextLogger) Debug(msg string)
Debug logs a debug message
func (*ContextLogger) Debugf ¶ added in v0.0.28
func (cl *ContextLogger) Debugf(format string, args ...interface{})
Debugf logs a formatted debug message
func (*ContextLogger) Error ¶ added in v0.0.28
func (cl *ContextLogger) Error(msg string)
Error logs an error message
func (*ContextLogger) Errorf ¶ added in v0.0.28
func (cl *ContextLogger) Errorf(format string, args ...interface{})
Errorf logs a formatted error message
func (*ContextLogger) Fatal ¶ added in v0.0.28
func (cl *ContextLogger) Fatal(msg string)
Fatal logs a fatal message and exits
func (*ContextLogger) Fatalf ¶ added in v0.0.28
func (cl *ContextLogger) Fatalf(format string, args ...interface{})
Fatalf logs a formatted fatal message and exits
func (*ContextLogger) Info ¶ added in v0.0.28
func (cl *ContextLogger) Info(msg string)
Info logs an info message
func (*ContextLogger) Infof ¶ added in v0.0.28
func (cl *ContextLogger) Infof(format string, args ...interface{})
Infof logs a formatted info message
func (*ContextLogger) Warn ¶ added in v0.0.28
func (cl *ContextLogger) Warn(msg string)
Warn logs a warning message
func (*ContextLogger) Warnf ¶ added in v0.0.28
func (cl *ContextLogger) Warnf(format string, args ...interface{})
Warnf logs a formatted warning message
func (*ContextLogger) WithContext ¶ added in v0.0.28
func (cl *ContextLogger) WithContext(ctx context.Context) *ContextLogger
WithContext extracts request/trace IDs from context
func (*ContextLogger) WithError ¶ added in v0.0.28
func (cl *ContextLogger) WithError(err error) *ContextLogger
WithError adds an error to the logger context
func (*ContextLogger) WithField ¶ added in v0.0.28
func (cl *ContextLogger) WithField(key string, value interface{}) *ContextLogger
WithField adds a single field to the logger context
func (*ContextLogger) WithFields ¶ added in v0.0.28
func (cl *ContextLogger) WithFields(fields map[string]interface{}) *ContextLogger
WithFields adds multiple fields to the logger context
type CopyToVolumeOptions ¶
type CopyToVolumeOptions struct {
Ctx context.Context // Operation context
Client *client.Client // Docker API client
Image string // Base image for temporary container
Volume string // Target volume name
LocalPath string // Source path on host
VolumePath string // Destination path in volume
}
CopyToVolumeOptions configures file copy operations from host to Docker volumes. This structure encapsulates all parameters needed for complex file transfer operations that involve temporary containers and volume mounting.
The operation creates a temporary container with both source (host) and target (volume) mounted, then performs file synchronization using rsync.
Fields:
- Ctx: Context for operation cancellation and timeout control
- Client: Docker API client for container operations
- Image: Base image for temporary container (should include rsync)
- Volume: Target Docker volume name
- LocalPath: Source path on the host filesystem
- VolumePath: Destination path within the volume
Workflow:
- Pull the specified base image
- Create Docker volume if it doesn't exist
- Create temporary container with host and volume mounts
- Execute rsync to copy files from host to volume
- Clean up temporary container (auto-remove enabled)
type DockerClient ¶ added in v0.0.6
type DockerClient interface {
// Container operations
ContainerList(ctx context.Context, options containertypes.ListOptions) ([]containertypes.Summary, error)
ContainerCreate(
ctx context.Context,
config *containertypes.Config,
hostConfig *containertypes.HostConfig,
networkingConfig *networktypes.NetworkingConfig,
platform *ocispec.Platform,
containerName string,
) (containertypes.CreateResponse, error)
ContainerStart(ctx context.Context, containerID string, options containertypes.StartOptions) error
ContainerStop(ctx context.Context, containerID string, options containertypes.StopOptions) error
ContainerWait(ctx context.Context, containerID string, condition containertypes.WaitCondition) (<-chan containertypes.WaitResponse, <-chan error)
ContainerLogs(ctx context.Context, containerID string, options containertypes.LogsOptions) (io.ReadCloser, error)
CopyToContainer(ctx context.Context, containerID, dstPath string, content io.Reader, options containertypes.CopyToContainerOptions) error
// Image operations
ImageList(ctx context.Context, options image.ListOptions) ([]image.Summary, error)
ImagePull(ctx context.Context, refStr string, options image.PullOptions) (io.ReadCloser, error)
ImageBuild(ctx context.Context, buildContext io.Reader, options build.ImageBuildOptions) (build.ImageBuildResponse, error)
ImagePush(ctx context.Context, image string, options image.PushOptions) (io.ReadCloser, error)
// Volume operations
VolumeCreate(ctx context.Context, options volume.CreateOptions) (volume.Volume, error)
VolumeList(ctx context.Context, options volume.ListOptions) (volume.ListResponse, error)
VolumeRemove(ctx context.Context, volumeID string, force bool) error
// Network operations
NetworkCreate(ctx context.Context, name string, options networktypes.CreateOptions) (networktypes.CreateResponse, error)
NetworkConnect(ctx context.Context, networkID, containerID string, config *networktypes.EndpointSettings) error
NetworkList(ctx context.Context, options networktypes.ListOptions) ([]networktypes.Summary, error)
// Container lifecycle
ContainerRemove(ctx context.Context, containerID string, options containertypes.RemoveOptions) error
ContainerInspect(ctx context.Context, containerID string) (containertypes.InspectResponse, error)
// Network lifecycle
NetworkInspect(ctx context.Context, networkID string, options networktypes.InspectOptions) (networktypes.Inspect, error)
NetworkRemove(ctx context.Context, networkID string) error
// Client lifecycle
Close() error
}
DockerClient defines the interface for Docker operations. This interface abstracts the Docker SDK client to enable dependency injection and testing with mock implementations.
type FlowConfig ¶ added in v0.0.2
type FlowConfig struct {
RabbitMQURL string // RabbitMQ connection URL with credentials
QueueName string // RabbitMQ queue name for process messages
CouchDBURL string // CouchDB server URL with credentials
DatabaseName string // CouchDB database name for process storage
ApiKey string // API key for authentication and JWT signing
}
FlowConfig holds the complete configuration for the EVE flow processing system. This structure centralizes all configuration parameters required for service coordination, message processing, and data persistence across the distributed system.
Configuration Management:
This structure supports various configuration sources including:
- Command-line flags and arguments
- Environment variables
- Configuration files (YAML, JSON, TOML)
- Remote configuration services
- Default values for development environments
Field Descriptions:
- RabbitMQURL: Complete connection URL for RabbitMQ message broker
- QueueName: Name of the queue for process state messages
- CouchDBURL: Base URL for CouchDB server (including protocol and port)
- DatabaseName: CouchDB database name for storing process documents
- ApiKey: Authentication key for API access and JWT token generation
Connection Strings:
- RabbitMQURL format: "amqp://username:password@hostname:port/vhost"
- CouchDBURL format: "http://username:password@hostname:port"
- Support for both local and remote service connections
Security Considerations:
- Store sensitive configuration (passwords, API keys) securely
- Use environment variables or secret management systems
- Avoid hardcoding credentials in source code
- Consider certificate-based authentication for production
Environment-Specific Configuration:
Different environments (development, staging, production) should use separate configuration instances with appropriate service endpoints, credentials, and resource names to prevent cross-environment interference.
Validation Requirements:
All fields should be validated for proper format and connectivity during service initialization to fail fast on configuration errors.
Example Configuration:
{
"RabbitMQURL": "amqp://guest:guest@localhost:5672/",
"QueueName": "flow_process_queue",
"CouchDBURL": "http://admin:password@localhost:5984",
"DatabaseName": "flow_processes",
"ApiKey": "your-secret-api-key"
}
type FlowConsumer ¶ added in v0.0.2
type FlowConsumer struct {
// contains filtered or unexported fields
}
FlowConsumer handles RabbitMQ message consumption and processing for the flow system. This structure encapsulates all the state and dependencies required for reliable message processing including connection management, HTTP client configuration, and process state coordination.
Consumer Architecture:
The consumer implements a robust message processing pattern with:
- Reliable message delivery through acknowledgments
- Automatic reconnection on connection failures
- Dead letter queue support for message failures
- Graceful shutdown with proper resource cleanup
Field Descriptions:
- config: Complete system configuration including service endpoints
- connection: Active RabbitMQ connection for message operations
- channel: RabbitMQ channel for message consumption and acknowledgment
- httpClient: HTTP client for CouchDB operations with appropriate timeouts
Connection Management:
- Maintains persistent connection to RabbitMQ for efficient message processing
- Implements connection recovery and retry logic for network resilience
- Configures appropriate timeouts and heartbeat intervals
- Supports both local and remote RabbitMQ deployments
Message Processing Flow:
- Receive message from RabbitMQ queue
- Deserialize and validate message structure
- Process state transition and update CouchDB document
- Acknowledge successful processing or reject with requeue
- Log operation results for monitoring and debugging
Error Handling Strategy:
- Transient errors: Reject message with requeue for retry
- Permanent errors: Reject message without requeue (dead letter)
- Processing errors: Log detailed error information for debugging
- Connection errors: Implement automatic reconnection with backoff
Performance Considerations:
- Configurable prefetch count for message batching
- HTTP connection pooling for CouchDB operations
- Concurrent message processing with appropriate limits
- Resource monitoring and cleanup for long-running operations
Monitoring and Observability:
- Message processing metrics (rate, latency, errors)
- Connection health monitoring and alerting
- Resource utilization tracking (memory, connections)
- Business metrics (process completion rates, state distributions)
Lifecycle Management:
- Initialize: Establish connections and validate configuration
- Start: Begin message consumption with proper error handling
- Stop: Graceful shutdown with message acknowledgment completion
- Cleanup: Release resources and close connections properly
Example Usage:
consumer := &FlowConsumer{
config: flowConfig,
httpClient: &http.Client{Timeout: 30 * time.Second},
}
err := consumer.Connect()
if err != nil {
log.Fatal("Failed to connect:", err)
}
defer consumer.Close()
consumer.StartConsuming()
type FlowCouchDBError ¶ added in v0.0.2
type FlowCouchDBError struct {
Error string `json:"error"` // Machine-readable error type
Reason string `json:"reason"` // Human-readable error description
}
FlowCouchDBError represents error responses from CouchDB operations. This structure provides structured error information for diagnosing and handling various failure conditions in CouchDB interactions.
CouchDB Error Model:
CouchDB returns structured error information with both machine-readable error codes and human-readable reason descriptions. This enables both programmatic error handling and useful error reporting.
Common Error Types:
- "not_found": Document or database doesn't exist
- "conflict": Document revision conflict (concurrent modification)
- "forbidden": Access denied or validation failure
- "bad_request": Invalid request format or parameters
- "internal_server_error": CouchDB server-side errors
Field Descriptions:
- Error: Machine-readable error type for programmatic handling
- Reason: Human-readable error description for logging and debugging
Error Handling Strategies:
- "not_found": Check document/database existence and create if needed
- "conflict": Retrieve latest document revision and retry operation
- "forbidden": Check permissions and document validation rules
- "bad_request": Validate request parameters and document structure
Retry Logic:
Some errors (like "conflict") are transient and should trigger retry logic with exponential backoff. Others (like "forbidden") indicate permanent failures requiring different handling approaches.
Logging and Monitoring:
Error information should be logged for operational monitoring and debugging purposes. Track error frequencies to identify systemic issues.
Example Error Response:
{
"error": "conflict",
"reason": "Document update conflict. Document was modified by another process."
}
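The retry guidance above can be encoded as a small classifier. Treating unknown error types as non-retryable is a design assumption here (surfacing them is usually safer than retrying blindly):

```go
package main

import "fmt"

type FlowCouchDBError struct {
	Error  string `json:"error"`
	Reason string `json:"reason"`
}

// isRetryable reflects the handling strategies above: revision conflicts
// are transient and worth retrying with backoff; existence, permission,
// and validation failures are permanent.
func isRetryable(e FlowCouchDBError) bool {
	switch e.Error {
	case "conflict":
		return true
	case "not_found", "forbidden", "bad_request":
		return false
	default:
		return false // assumption: surface unknown errors rather than retry
	}
}

func main() {
	e := FlowCouchDBError{Error: "conflict", Reason: "Document update conflict."}
	fmt.Println(isRetryable(e)) // true
}
```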
type FlowCouchDBResponse ¶ added in v0.0.2
type FlowCouchDBResponse struct {
OK bool `json:"ok"` // Operation success indicator
ID string `json:"id"` // Document ID that was operated on
Rev string `json:"rev"` // New document revision after operation
}
FlowCouchDBResponse represents the standard response format from CouchDB operations. This structure captures the essential information returned by CouchDB for successful document operations including creation, updates, and deletions.
CouchDB Operation Context:
CouchDB returns this response format for most document modification operations (PUT, POST, DELETE) to confirm success and provide the new document revision information required for subsequent operations.
Field Descriptions:
- OK: Boolean flag indicating operation success (should always be true in success responses)
- ID: Document identifier that was operated on (confirms target document)
- Rev: New revision ID after the operation (required for future updates)
Revision Management:
The Rev field is critical for CouchDB's MVCC (Multi-Version Concurrency Control). Applications must store and provide this revision ID for subsequent update operations to prevent conflicts and ensure data consistency.
Error Handling:
When OK is false or missing, check for CouchDB error responses instead of this structure. HTTP status codes provide additional error context.
Usage Patterns:
- Store Rev field for future document updates
- Verify ID matches expected document identifier
- Log successful operations for audit and monitoring
Example Response:
{
"ok": true,
"id": "flow_process_eval-2024-001",
"rev": "2-b7c8d9e0f1a2"
}
type FlowProcessDocument ¶ added in v0.0.2
type FlowProcessDocument struct {
ID string `json:"_id"` // CouchDB document ID
Rev string `json:"_rev,omitempty"` // CouchDB revision for MVCC
ProcessID string `json:"process_id"` // Business process identifier
State FlowProcessState `json:"state"` // Current process state
CreatedAt time.Time `json:"created_at"` // Process creation timestamp
UpdatedAt time.Time `json:"updated_at"` // Last update timestamp
History []FlowStateChange `json:"history"` // Complete state change history
Metadata map[string]interface{} `json:"metadata,omitempty"` // Aggregated process metadata
ErrorMsg string `json:"error_message,omitempty"` // Most recent error message
Description string `json:"description,omitempty"` // Most recent description
}
FlowProcessDocument represents the complete process state document stored in CouchDB. This document serves as the authoritative record of process state with full audit history and supports the system's eventual consistency model.
CouchDB Integration:
- Uses CouchDB's document-based storage with automatic revision management
- Document ID format: "flow_process_{ProcessID}" for consistent retrieval
- Revision field (_rev) enables optimistic concurrency control
- Supports CouchDB replication and conflict resolution mechanisms
Document Evolution:
Documents grow over time as state changes are appended to the history array. This provides a complete audit trail while maintaining current state in top-level fields for efficient querying and indexing.
Field Descriptions:
- ID: CouchDB document identifier (includes "_id" prefix)
- Rev: CouchDB revision for concurrency control and conflict resolution
- ProcessID: Business process identifier (extracted from document ID)
- State: Current process state (updated with each transition)
- CreatedAt: Document creation timestamp (process initiation time)
- UpdatedAt: Last modification timestamp (most recent state change)
- History: Complete chronological list of all state transitions
- Metadata: Merged metadata from all state changes (latest values win)
- ErrorMsg: Most recent error message (from latest failed state)
- Description: Most recent description (from latest state change)
Querying Patterns:
- By ProcessID: Direct document retrieval using computed document ID
- By State: Secondary index on current state for bulk operations
- By Timestamp: Range queries on CreatedAt/UpdatedAt for time-based analysis
- By History: Complex queries on state transition patterns and durations
Conflict Resolution:
CouchDB's MVCC handles concurrent updates through revision conflicts. Applications must handle 409 Conflict responses by retrieving the current document, merging changes, and retrying the update operation.
Data Retention:
Documents persist indefinitely for audit and compliance purposes. Consider implementing archival strategies for completed processes older than retention policy requirements.
Example Document:
{
"_id": "flow_process_eval-2024-001",
"_rev": "3-a1b2c3d4e5f6",
"process_id": "eval-2024-001",
"state": "successful",
"created_at": "2024-01-15T10:00:00Z",
"updated_at": "2024-01-15T10:45:00Z",
"history": [
{"state": "started", "timestamp": "2024-01-15T10:00:00Z"},
{"state": "running", "timestamp": "2024-01-15T10:15:00Z"},
{"state": "successful", "timestamp": "2024-01-15T10:45:00Z"}
],
"metadata": {"total_steps": 5, "completion_time": "00:45:00"}
}
type FlowProcessMessage ¶ added in v0.0.2
type FlowProcessMessage struct {
ProcessID string `json:"process_id"` // Unique process identifier
State FlowProcessState `json:"state"` // Target state for transition
Timestamp time.Time `json:"timestamp"` // When state change occurred
Metadata map[string]interface{} `json:"metadata,omitempty"` // Process-specific data
ErrorMsg string `json:"error_message,omitempty"` // Error details for failures
Description string `json:"description,omitempty"` // Human-readable description
}
FlowProcessMessage represents the structure of messages exchanged between services for communicating process state changes and coordination events. This message format serves as the primary communication protocol for the distributed workflow engine.
Message Flow Pattern:
Producer Service → RabbitMQ Queue → Consumer Service → State Update → CouchDB
The message contains all necessary information to process state transitions, update persistent storage, and trigger downstream actions without requiring additional service calls or data lookups.
Field Descriptions:
- ProcessID: Unique identifier for the workflow process (UUID recommended)
- State: Target state for this transition (must follow state machine rules)
- Timestamp: When the state change occurred (for ordering and audit)
- Metadata: Process-specific data and configuration (extensible JSON object)
- ErrorMsg: Error details for failed states (required for StateFailed)
- Description: Human-readable description of the state change
Message Validation:
- ProcessID must be non-empty and unique across the system
- State must be a valid FlowProcessState value
- Timestamp should be set by the producing service (auto-generated if missing)
- ErrorMsg is required when State is StateFailed
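The validation rules above can be sketched as a single check function; this is an illustrative implementation, not necessarily the package's consumer-side logic:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

type FlowProcessState string

const (
	StateStarted    FlowProcessState = "started"
	StateRunning    FlowProcessState = "running"
	StateSuccessful FlowProcessState = "successful"
	StateFailed     FlowProcessState = "failed"
)

type FlowProcessMessage struct {
	ProcessID   string
	State       FlowProcessState
	Timestamp   time.Time
	ErrorMsg    string
	Description string
}

// validate enforces the rules listed above, auto-generating a timestamp
// when the producer omitted one.
func validate(m *FlowProcessMessage) error {
	if m.ProcessID == "" {
		return errors.New("process_id must be non-empty")
	}
	switch m.State {
	case StateStarted, StateRunning, StateSuccessful, StateFailed:
	default:
		return fmt.Errorf("invalid state: %q", m.State)
	}
	if m.State == StateFailed && m.ErrorMsg == "" {
		return errors.New("error_message is required when state is failed")
	}
	if m.Timestamp.IsZero() {
		m.Timestamp = time.Now().UTC()
	}
	return nil
}

func main() {
	m := FlowProcessMessage{ProcessID: "eval-2024-001", State: StateFailed}
	fmt.Println(validate(&m)) // prints the validation error for the missing error_message
}
```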
Serialization:
Messages are JSON-serialized for RabbitMQ transport with proper field mapping and optional field handling. The structure supports both compact and verbose message formats depending on use case requirements.
Error Handling:
Invalid messages are rejected at the consumer level with appropriate error logging and optional dead letter queue routing for investigation.
Example Message:
{
"process_id": "eval-2024-001",
"state": "running",
"timestamp": "2024-01-15T10:30:00Z",
"metadata": {"step": "validation", "progress": 0.3},
"description": "Starting validation phase"
}
type FlowProcessState ¶ added in v0.0.2
type FlowProcessState string
FlowProcessState represents the possible states of a workflow process in the EVE system. This enumeration defines a finite state machine that governs process lifecycle management with clear state transitions and business logic rules.
State Transition Rules:
- StateStarted → StateRunning (process begins execution)
- StateRunning → StateSuccessful (normal completion)
- StateRunning → StateFailed (error termination)
- StateSuccessful → (terminal state, no further transitions)
- StateFailed → (terminal state, no further transitions)
State Semantics:
- StateStarted: Process has been initiated and queued for execution
- StateRunning: Process is actively executing (may have multiple updates)
- StateSuccessful: Process completed successfully with expected outcomes
- StateFailed: Process terminated due to errors or validation failures
Usage in System:
These states are used consistently across all services to ensure uniform process tracking and enable cross-service coordination. State changes trigger various system behaviors including notifications, cleanup, and dependent process initiation.
Audit and Monitoring:
Each state transition is logged with full context including timestamps, error messages, and metadata for comprehensive audit trails and system monitoring capabilities.
const (
	StateStarted    FlowProcessState = "started"    // Process initiated and queued
	StateRunning    FlowProcessState = "running"    // Process actively executing
	StateSuccessful FlowProcessState = "successful" // Process completed successfully
	StateFailed     FlowProcessState = "failed"     // Process terminated with errors
)
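The transition rules can be encoded as a guard function. This sketch covers only the edges listed above; whether repeated running→running updates are modeled as transitions is left open by the documentation:

```go
package main

import "fmt"

type FlowProcessState string

const (
	StateStarted    FlowProcessState = "started"
	StateRunning    FlowProcessState = "running"
	StateSuccessful FlowProcessState = "successful"
	StateFailed     FlowProcessState = "failed"
)

// canTransition encodes the state machine's allowed edges; terminal states
// have no outgoing transitions.
func canTransition(from, to FlowProcessState) bool {
	switch from {
	case StateStarted:
		return to == StateRunning
	case StateRunning:
		return to == StateSuccessful || to == StateFailed
	default: // StateSuccessful and StateFailed are terminal
		return false
	}
}

func main() {
	fmt.Println(canTransition(StateRunning, StateSuccessful)) // true
	fmt.Println(canTransition(StateSuccessful, StateRunning)) // false
}
```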
type FlowStateChange ¶ added in v0.0.2
type FlowStateChange struct {
State FlowProcessState `json:"state"` // Target state of transition
Timestamp time.Time `json:"timestamp"` // When transition occurred
ErrorMsg string `json:"error_message,omitempty"` // Error context for failures
}
FlowStateChange represents a single state transition in the process history. This immutable record captures the exact context of each state change for comprehensive audit trails and process analysis.
Immutability Principle:
Once created, state change records are never modified. This ensures audit trail integrity and enables reliable process analysis, debugging, and compliance reporting.
Field Descriptions:
- State: The process state that was transitioned to
- Timestamp: Exact time when the transition occurred (UTC recommended)
- ErrorMsg: Error context if the transition was due to a failure
Ordering and Consistency:
State changes are appended to the history array in chronological order. Services must ensure proper timestamp ordering when processing messages to maintain logical consistency in the audit trail.
Error Documentation:
When a process transitions to StateFailed, the ErrorMsg field provides detailed error information for debugging and analysis. This information is preserved in the history even if subsequent operations succeed.
Analysis Applications:
History records enable various analytical capabilities:

- Process duration calculation (time between started and terminal states)
- Failure pattern analysis (common error conditions and timing)
- Performance optimization (identifying bottlenecks and slow processes)
- SLA monitoring (tracking process completion times)
Storage Efficiency:
While each state change is a separate record, the lightweight structure minimizes storage overhead while maximizing analytical value. Consider implementing history compression for very long-running processes.
Example State Change:
{
"state": "failed",
"timestamp": "2024-01-15T10:30:00Z",
"error_message": "Validation failed: missing required field 'customer_id'"
}
type ImagePullOptions ¶ added in v0.0.5
type ImagePullOptions struct {
// Username for registry authentication (optional)
Username string
// Password for registry authentication (optional)
Password string
// Platform specifies target architecture (e.g., "linux/amd64", "linux/arm64")
Platform string
// Silent suppresses output to stdout (writes to io.Discard instead)
Silent bool
// Custom pull options (optional, overrides other settings if provided)
CustomOptions *image.PullOptions
}
ImagePullOptions contains options for pulling Docker images.
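For illustration, a typical options value for a private-registry pull might look like the following. The struct is mirrored here, omitting the Docker-SDK-typed CustomOptions field, so the sketch compiles on its own; the credentials and platform are placeholders.

```go
package main

import "fmt"

// ImagePullOptions mirrors the package struct, minus the
// *image.PullOptions CustomOptions field, for a self-contained sketch.
type ImagePullOptions struct {
	Username string // registry auth (optional)
	Password string // registry auth (optional)
	Platform string // target architecture
	Silent   bool   // suppress progress output
}

func main() {
	// Pull from a private registry for a specific architecture,
	// suppressing progress output. Credentials are placeholders.
	opts := ImagePullOptions{
		Username: "ci-bot",
		Password: "example-token",
		Platform: "linux/arm64",
		Silent:   true,
	}
	fmt.Printf("%+v\n", opts)
}
```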
type LoggerConfig ¶ added in v0.0.28
type LoggerConfig struct {
Level LogLevel // Minimum log level
Format string // "json" or "text"
Service string // Service name for all logs
Version string // Service version
AddCaller bool // Add caller information
TimeFormat string // Time format for logs
}
LoggerConfig contains configuration for creating a logger
func DefaultLoggerConfig ¶ added in v0.0.28
func DefaultLoggerConfig() LoggerConfig
DefaultLoggerConfig returns a logger config with sensible defaults
type MockDockerClient ¶ added in v0.0.6
type MockDockerClient struct {
// Containers to return from ContainerList
Containers []containertypes.Summary
// Images to return from ImageList
Images []image.Summary
// Volumes to return from VolumeCreate/VolumeList
Volumes map[string]*volume.Volume
// VolumeListResponse to return from VolumeList
VolumeListResponse *volume.ListResponse
// Networks to return from NetworkCreate/NetworkList
Networks map[string]string
// NetworkListResponse to return from NetworkList
NetworkListResponse []networktypes.Summary
// Error to return from operations
Err error
// Track function calls
ContainerListCalled bool
ContainerCreateCalled bool
ContainerStartCalled bool
ContainerStopCalled bool
ContainerRemoveCalled bool
ImageListCalled bool
ImagePullCalled bool
ImageBuildCalled bool
ImagePushCalled bool
VolumeCreateCalled bool
VolumeListCalled bool
VolumeRemoveCalled bool
NetworkCreateCalled bool
NetworkConnectCalled bool
NetworkListCalled bool
CopyToContainerCalled bool
ContainerWaitCalled bool
ContainerLogsCalled bool
// Store last call parameters
LastContainerID string
LastImageTag string
LastVolumeName string
LastNetworkName string
LastContainerName string
}
MockDockerClient is a mock implementation of Docker client for testing
func NewMockDockerClient ¶ added in v0.0.6
func NewMockDockerClient() *MockDockerClient
NewMockDockerClient creates a new mock Docker client
func (*MockDockerClient) Close ¶ added in v0.0.6
func (m *MockDockerClient) Close() error
Close mocks closing the client
func (*MockDockerClient) ContainerCreate ¶ added in v0.0.6
func (m *MockDockerClient) ContainerCreate(
	ctx context.Context,
	config *containertypes.Config,
	hostConfig *containertypes.HostConfig,
	networkingConfig *networktypes.NetworkingConfig,
	platform *ocispec.Platform,
	containerName string,
) (containertypes.CreateResponse, error)
ContainerCreate mocks creating a container with v28.x signature (includes Platform parameter)
func (*MockDockerClient) ContainerInspect ¶ added in v0.0.13
func (m *MockDockerClient) ContainerInspect(ctx context.Context, containerID string) (containertypes.InspectResponse, error)
ContainerInspect mocks inspecting a container
func (*MockDockerClient) ContainerList ¶ added in v0.0.6
func (m *MockDockerClient) ContainerList(ctx context.Context, options containertypes.ListOptions) ([]containertypes.Summary, error)
ContainerList mocks listing containers
func (*MockDockerClient) ContainerLogs ¶ added in v0.0.6
func (m *MockDockerClient) ContainerLogs(ctx context.Context, containerID string, options containertypes.LogsOptions) (io.ReadCloser, error)
ContainerLogs mocks getting container logs
func (*MockDockerClient) ContainerRemove ¶ added in v0.0.13
func (m *MockDockerClient) ContainerRemove(ctx context.Context, containerID string, options containertypes.RemoveOptions) error
ContainerRemove mocks removing a container
func (*MockDockerClient) ContainerStart ¶ added in v0.0.6
func (m *MockDockerClient) ContainerStart(ctx context.Context, containerID string, options containertypes.StartOptions) error
ContainerStart mocks starting a container
func (*MockDockerClient) ContainerStop ¶ added in v0.0.6
func (m *MockDockerClient) ContainerStop(ctx context.Context, containerID string, options containertypes.StopOptions) error
ContainerStop mocks stopping a container
func (*MockDockerClient) ContainerWait ¶ added in v0.0.6
func (m *MockDockerClient) ContainerWait(ctx context.Context, containerID string, condition containertypes.WaitCondition) (<-chan containertypes.WaitResponse, <-chan error)
ContainerWait mocks waiting for a container
func (*MockDockerClient) CopyToContainer ¶ added in v0.0.6
func (m *MockDockerClient) CopyToContainer(ctx context.Context, containerID, dstPath string, content io.Reader, options containertypes.CopyToContainerOptions) error
CopyToContainer mocks copying files to a container
func (*MockDockerClient) ImageBuild ¶ added in v0.0.6
func (m *MockDockerClient) ImageBuild(ctx context.Context, buildContext io.Reader, options build.ImageBuildOptions) (build.ImageBuildResponse, error)
ImageBuild mocks building an image
func (*MockDockerClient) ImageList ¶ added in v0.0.6
func (m *MockDockerClient) ImageList(ctx context.Context, options image.ListOptions) ([]image.Summary, error)
ImageList mocks listing images
func (*MockDockerClient) ImagePull ¶ added in v0.0.6
func (m *MockDockerClient) ImagePull(ctx context.Context, refStr string, options image.PullOptions) (io.ReadCloser, error)
ImagePull mocks pulling an image
func (*MockDockerClient) ImagePush ¶ added in v0.0.6
func (m *MockDockerClient) ImagePush(ctx context.Context, image string, options image.PushOptions) (io.ReadCloser, error)
ImagePush mocks pushing an image
func (*MockDockerClient) NetworkConnect ¶ added in v0.0.6
func (m *MockDockerClient) NetworkConnect(ctx context.Context, networkID, containerID string, config *networktypes.EndpointSettings) error
NetworkConnect mocks connecting a container to a network
func (*MockDockerClient) NetworkCreate ¶ added in v0.0.6
func (m *MockDockerClient) NetworkCreate(ctx context.Context, name string, options networktypes.CreateOptions) (networktypes.CreateResponse, error)
NetworkCreate mocks creating a network
func (*MockDockerClient) NetworkInspect ¶ added in v0.0.13
func (m *MockDockerClient) NetworkInspect(ctx context.Context, networkID string, options networktypes.InspectOptions) (networktypes.Inspect, error)
NetworkInspect mocks inspecting a network
func (*MockDockerClient) NetworkList ¶ added in v0.0.13
func (m *MockDockerClient) NetworkList(ctx context.Context, options networktypes.ListOptions) ([]networktypes.Summary, error)
NetworkList mocks listing networks
func (*MockDockerClient) NetworkRemove ¶ added in v0.0.13
func (m *MockDockerClient) NetworkRemove(ctx context.Context, networkID string) error
NetworkRemove mocks removing a network
func (*MockDockerClient) VolumeCreate ¶ added in v0.0.6
func (m *MockDockerClient) VolumeCreate(ctx context.Context, options volume.CreateOptions) (volume.Volume, error)
VolumeCreate mocks creating a volume
func (*MockDockerClient) VolumeList ¶ added in v0.0.13
func (m *MockDockerClient) VolumeList(ctx context.Context, options volume.ListOptions) (volume.ListResponse, error)
VolumeList mocks listing volumes
func (*MockDockerClient) VolumeRemove ¶ added in v0.0.13
func (m *MockDockerClient) VolumeRemove(ctx context.Context, volumeID string, force bool) error
VolumeRemove mocks removing a volume
type OutputSplitter ¶
type OutputSplitter struct{}
OutputSplitter implements intelligent log output routing based on log content analysis. This custom writer examines log messages and directs them to appropriate output streams (stdout vs stderr) based on their severity level, enabling proper log stream separation for containerized and production environments.
Routing Logic:
The splitter analyzes each log message for error indicators and routes it accordingly:

- Error messages (containing "level=error") → stderr
- All other messages (info, debug, warn) → stdout
Stream Separation Benefits:
- Monitoring systems can treat error streams with higher priority
- Container orchestrators can route error streams to alerting systems
- Log aggregation tools can apply different processing rules per stream
- Shell scripts can capture and handle error output separately
Container Compatibility:
Docker and Kubernetes environments can capture stdout and stderr independently, enabling sophisticated log processing pipelines where errors trigger immediate notifications while info logs are processed for analytics and debugging.
Performance Characteristics:
- Minimal overhead through simple byte pattern matching
- No regex processing or complex parsing for efficiency
- Direct stream writing without buffering delays
- Suitable for high-throughput logging scenarios
Integration with Logrus:
Works seamlessly with logrus formatters including JSON and text formats. The splitter operates on the final formatted output, ensuring compatibility with all logrus configuration options.
Example Usage:
splitter := &OutputSplitter{}
logger := logrus.New()
logger.SetOutput(splitter)
logger.Info("This goes to stdout")
logger.Error("This goes to stderr")
func (*OutputSplitter) Write ¶
func (splitter *OutputSplitter) Write(p []byte) (n int, err error)
Write implements the io.Writer interface for the OutputSplitter. This method analyzes incoming log data and routes it to the appropriate output stream based on content analysis, enabling intelligent log separation.
Routing Algorithm:
- Examines the byte content for error level indicators
- Routes messages containing "level=error" to stderr
- Routes all other messages to stdout
- Returns the number of bytes written and any I/O errors
Content Analysis:
Uses efficient byte searching to identify error-level messages without complex parsing or regular expressions. The pattern matching is designed to work with logrus's standard output format.
Error Detection Pattern:
Searches for the literal string "level=error" which is produced by logrus when formatting error-level log entries. This pattern is reliable across different logrus formatters and configurations.
Parameters:
- p: Byte slice containing the log message to be written
Returns:
- n: Number of bytes successfully written to the output stream
- err: Any error encountered during the write operation
Stream Selection:
- os.Stderr: Used for error messages requiring immediate attention
- os.Stdout: Used for informational messages and general logging
Error Handling:
Write errors from the underlying streams (stdout/stderr) are propagated back to the caller, maintaining proper error semantics for the io.Writer interface contract.
Concurrency Safety:
The method is safe for concurrent use as it only performs read operations on the input data and writes to thread-safe OS streams. Multiple goroutines can safely use the same OutputSplitter instance.
Performance Notes:
- Uses bytes.Contains for efficient pattern matching
- No memory allocation during normal operation
- Direct stream writing without intermediate buffering
- Minimal CPU overhead suitable for high-frequency logging
Example Message Routing:
Input:  `time="2024-01-15T10:30:00Z" level=error msg="Database connection failed"`
Output: Routed to stderr

Input:  `time="2024-01-15T10:30:00Z" level=info msg="Service started successfully"`
Output: Routed to stdout
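Given the description above (bytes.Contains on the literal "level=error"), the Write method can be sketched as follows; the actual implementation may differ in minor details.

```go
package main

import (
	"bytes"
	"fmt"
	"os"
)

// OutputSplitter routes formatted log lines to stdout or stderr.
type OutputSplitter struct{}

// Write sends lines containing "level=error" to stderr and everything
// else to stdout, matching logrus's text-format level marker.
func (splitter *OutputSplitter) Write(p []byte) (n int, err error) {
	if bytes.Contains(p, []byte("level=error")) {
		return os.Stderr.Write(p)
	}
	return os.Stdout.Write(p)
}

func main() {
	s := &OutputSplitter{}
	// An io.Writer, so it plugs directly into logger.SetOutput(s).
	fmt.Fprintf(s, `time="2024-01-15T10:30:00Z" level=info msg="Service started"`+"\n")
	fmt.Fprintf(s, `time="2024-01-15T10:30:00Z" level=error msg="Connection failed"`+"\n")
}
```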
type StructuredLog ¶ added in v0.0.28
type StructuredLog struct {
// contains filtered or unexported fields
}
StructuredLog provides a builder pattern for structured logging
func NewStructuredLog ¶ added in v0.0.28
func NewStructuredLog(logger *logrus.Logger) *StructuredLog
NewStructuredLog creates a new structured log builder
func (*StructuredLog) Level ¶ added in v0.0.28
func (sl *StructuredLog) Level(level LogLevel) *StructuredLog
Level sets the log level
func (*StructuredLog) Log ¶ added in v0.0.28
func (sl *StructuredLog) Log(msg string)
Log outputs the structured log
func (*StructuredLog) Logf ¶ added in v0.0.28
func (sl *StructuredLog) Logf(format string, args ...interface{})
Logf outputs a formatted structured log
func (*StructuredLog) WithError ¶ added in v0.0.28
func (sl *StructuredLog) WithError(err error) *StructuredLog
WithError adds an error to the structured log
func (*StructuredLog) WithField ¶ added in v0.0.28
func (sl *StructuredLog) WithField(key string, value interface{}) *StructuredLog
WithField adds a field to the structured log
func (*StructuredLog) WithFields ¶ added in v0.0.28
func (sl *StructuredLog) WithFields(fields map[string]interface{}) *StructuredLog
WithFields adds multiple fields to the structured log
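The chaining style implied by the methods above can be sketched without the logrus dependency. Method names follow the documented API, but the internals are assumed, and the real NewStructuredLog takes a *logrus.Logger rather than no arguments.

```go
package main

import "fmt"

// StructuredLog sketches the documented builder: each setter returns
// the builder so calls can be chained, and Log emits the final entry.
type StructuredLog struct {
	level  string
	fields map[string]interface{}
	err    error
}

func NewStructuredLog() *StructuredLog {
	return &StructuredLog{level: "info", fields: map[string]interface{}{}}
}

func (sl *StructuredLog) Level(level string) *StructuredLog {
	sl.level = level
	return sl
}

func (sl *StructuredLog) WithField(key string, value interface{}) *StructuredLog {
	sl.fields[key] = value
	return sl
}

func (sl *StructuredLog) WithError(err error) *StructuredLog {
	sl.err = err
	return sl
}

// Log assembles the entry from the accumulated level, fields, and error.
func (sl *StructuredLog) Log(msg string) {
	entry := map[string]interface{}{"level": sl.level, "msg": msg}
	for k, v := range sl.fields {
		entry[k] = v
	}
	if sl.err != nil {
		entry["error"] = sl.err.Error()
	}
	fmt.Println(entry)
}

func main() {
	NewStructuredLog().
		Level("warn").
		WithField("retries", 3).
		Log("upstream slow")
}
```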