Namespaced Logging for Nim
namespaced_logging provides a high-performance, thread-safe logging framework for Nim. It offers an interface similar to std/logging while adding namespace-scoped logging in the style of log4j or logback. It has four main motivating features:
- Hierarchical, namespaced logging
- Safe, straightforward use in multi-threaded applications
- Native support for structured logging
- A simple, autoconfigured usage pattern mirroring the std/logging interface
Getting Started
Install the package from nimble:
nimble install namespaced_logging
Usage Patterns
Simple, Autoconfigured Setup
import namespaced_logging/autoconfigured
# No configuration of the LogService is required; appender/logger configuration
# is immediately available
addLogAppender(initConsoleLogAppender())
info("Application started")
# Set global threshold
setRootLoggingThreshold(lvlWarn)
# Namespaced loggers, thresholds, and appenders supported
addLogAppender(initFileLogAppender(
  filePath = "/var/log/app_db.log",
  formatter = formatJsonStructuredLog, # provided in namespaced_logging
  namespace = "app/db",
  threshold = lvlInfo))
# in DB code
let dbLogger = getLogger("app/db/queryplanner")
dbLogger.debug("Beginning query plan...")
# native support for structured logs (import std/json)
dbLogger.debug(%*{
  "method": "parseParams",
  "message": "unrecognized param type",
  "invalidType": $params[idx].type
})
Manual Configuration
import namespaced_logging
# Manually create a LogService. This is an independent logging root, fully
# isolated from any subsequently initialized LogService.
var ls = initLogService()
# Configure logging
ls.addAppender(initConsoleLogAppender())
ls.addAppender(initFileLogAppender("app.log"))
ls.setThreshold("api", lvlWarn)
# Create loggers
let localLogSvc = threadLocalRef(ls)
let apiLogger = localLogSvc.getLogger("api")
let dbLogger = localLogSvc.getLogger("db")
Manual Multithreaded Application
import namespaced_logging
# Main thread setup
var logService = initLogService()
logService.addAppender(initConsoleLogAppender())
var localLogSvc = threadLocalRef(logService) # for use on main thread
# Worker thread function
proc worker(ls: LogService) {.thread.} =
  let localLogSvc = threadLocalRef(ls)
  let logger = localLogSvc.getLogger("worker")

  # Runtime configuration changes
  localLogSvc.setThreshold("worker", lvlDebug)

  logger.debug("Worker configured")
# Safe thread creation
var workerThread: Thread[LogService]
createThread(workerThread, worker, logService)
Dynamic Configuration
# Configuration can change at runtime
proc configureLogging(localLogSvc: ThreadLocalLogService, verbose: bool) =
  if verbose:
    localLogSvc.setRootThreshold(lvlDebug)
    localLogSvc.addAppender(initFileLogAppender("debug.log"))
  else:
    localLogSvc.setRootThreshold(lvlInfo)
# Changes automatically propagate to all threads
Loggers and Appenders
The logging system is composed of two main components: loggers and appenders. Loggers are used to create log events, which are then passed to the appenders. Appenders take log events and write them to some destination, such as the console, a file, or a network socket. Appenders also have a logging level threshold, which determines which log events are acted upon by the appender, and, optionally, a namespace filter, which determines from which loggers the appender accepts log events.
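For example, the following sketch (reusing the initFileLogAppender parameters shown in the autoconfigured example above; the file path and logger scopes are purely illustrative) configures an appender that only acts on events at lvlInfo or above from loggers under the app namespace:
import namespaced_logging

var ls = initLogService()

# Only accept events at lvlInfo or above from loggers whose scope starts with "app"
ls.addAppender(initFileLogAppender(
  filePath = "app.log",
  namespace = "app",
  threshold = lvlInfo))

let localLogSvc = threadLocalRef(ls)
let apiLogger = localLogSvc.getLogger("app/api")
let libLogger = localLogSvc.getLogger("vendor/http")

apiLogger.info("handled request")   # written by the appender
apiLogger.debug("request details")  # below the appender threshold, dropped
libLogger.info("connection opened") # outside the "app" namespace, dropped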
Hierarchical Logging Namespaces
Loggers are organized hierarchically, with the hierarchy defined by the logger scope. A logger with the scope app/service/example is conceptually a child of the logger with the scope app/service. By default, appenders accept log events from all loggers, but this can be restricted by setting a namespace filter on the appender. An appender with a namespace set will accept log events only from loggers whose scopes start with that namespace. For example, an appender with the namespace app will accept log events from the loggers app, app/service, and app/service/example, but not from api/service.
The other impact of the logger hierarchy is on the effective logging level of a logger. An explicit logging level threshold can be set for any scope. Any scope that does not have an explicit threshold inherits one from its nearest ancestor in the scope naming hierarchy that does. This pattern is explained in detail in the logback documentation and applies in the same manner to loggers in this library.
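For instance, in the sketch below only the app scope has an explicit threshold, so app/db/queryplanner inherits lvlWarn from it (assuming the logger exposes the usual level-named procs such as warn, mirroring std/logging):
import namespaced_logging

var ls = initLogService()
ls.addAppender(initConsoleLogAppender())
ls.setThreshold("app", lvlWarn)

let dbLogger = threadLocalRef(ls).getLogger("app/db/queryplanner")

# No explicit threshold is set for "app/db/queryplanner" or "app/db", so the
# effective threshold is inherited from the nearest configured ancestor: "app"
dbLogger.info("query plan built")        # filtered out (below lvlWarn)
dbLogger.warn("query plan is very deep") # logged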
Notes on Use in Multi-Threaded Applications
The loggers and appenders in this library are thread-safe and are intended to behave more intuitively in a multi-threaded environment than std/logging while presenting a similar API. This is particularly true in environments where the logging setup code may be separated from the thread-management code (in an HTTP server, for example).
The LogService object is the main entry point for the logging system and should be initialized on the main thread. The LogService contains the "source of truth" for logging configuration and is safe to be shared between all threads. Internally all access to the shared LogService configuration is protected by a mutex.
Individual threads should use the threadLocalRef proc to obtain a ThreadLocalLogService reference that can be used to create Logger objects. ThreadLocalLogService objects cache the global LogService state locally to avoid expensive locks on the shared state. Instead, an atomic configuration version number is maintained, allowing the thread-local state to detect global configuration changes via an inexpensive load call and synchronize automatically only when necessary.
This thread-local caching mechanism is the primary advantage of this logging system over std/logging in a multi-threaded environment: the logging system itself makes sure appenders are configured for every thread where loggers are used, even when the thread initialization context is separated from the logging setup code.
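As an illustration, an appender added on the main thread after a worker thread has already started is still picked up by that worker's loggers on their next log call (a sketch; the loop and timing values are arbitrary):
import std/os
import namespaced_logging

var ls = initLogService()
var workerThread: Thread[LogService]

proc worker(ls: LogService) {.thread.} =
  let logger = threadLocalRef(ls).getLogger("worker")
  for i in 1..10:
    logger.info("tick " & $i)  # sees appenders added later by the main thread
    sleep(100)

createThread(workerThread, worker, ls)

# Added after the worker has started; the worker's thread-local cache detects
# the configuration version change and synchronizes before its next event.
ls.addAppender(initConsoleLogAppender())

joinThread(workerThread)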
As described in the Getting Started section, you can use the namespaced_logging/autoconfigured import for a simplified interface that more closely matches the contract of std/logging. In this case all thread and state management is done for you. The only limitation is that you cannot create multiple global LogService instances; in practice this is an uncommon need.
If you do need or want the flexibility to manage the state yourself, import namespaced_logging directly. In this case, the thread which initializes a LogService must also be the longest-living thread that uses that LogService instance. If the initializing thread terminates or the LogService object in that thread goes out of scope while other threads are still running and using the LogService, the global state may be reclaimed by the garbage collector, leading to use-after-free errors when other threads attempt to log (likely causing segfaults).
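A sketch of the safe lifetime pattern: the LogService is owned by the main thread and every worker that uses it is joined before it can go out of scope.
import namespaced_logging

proc worker(ls: LogService) {.thread.} =
  let logger = threadLocalRef(ls).getLogger("worker")
  logger.info("working")

proc main() =
  # ls lives on the main thread for the whole program
  var ls = initLogService()
  ls.addAppender(initConsoleLogAppender())

  var workers: array[4, Thread[LogService]]
  for i in 0 ..< workers.len:
    createThread(workers[i], worker, ls)

  # Every thread using ls finishes before ls goes out of scope
  joinThreads(workers)

main()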
Architectural Design
Overview
The namespaced logging library is built around a thread-safe architecture that attempts to balance performance, safety, and usability in multithreaded environments. The design centers on two key types (LogService and ThreadLocalLogService) that work together to provide both thread-safe configuration management and efficient logging operations.
Core Architecture Components
GlobalLogService (Internal)
At the heart of the system is the GlobalLogService, a heap-allocated object that serves as the single source of truth for logging configuration. This internal type is not exposed to library users but manages:
- Shared configuration state: Appenders, thresholds, and root logging level
- Synchronization primitives: Locks and atomic variables for thread coordination
- Background I/O threads: Dedicated writer threads for console and file output
- Configuration versioning: Atomic version numbers for efficient change detection
The GlobalLogService ensures that configuration changes are safely propagated across all threads while maintaining high performance for logging operations.
LogService vs ThreadLocalLogService
The library exposes two distinct types for different usage patterns:
LogService (Value Type)
type LogService* = object
  configVersion: int
  global: GlobalLogService
  appenders: seq[LogAppender]
  thresholds: TableRef[string, Level]
The LogService object is intended to support use cases such as:
- Main thread initialization: a mutable LogService supports all of the configuration functions you would typically need when initializing logging for an application on the main thread.
- Cross-thread communication: being an object type, LogService follows value semantics and can be safely copied between threads.
- Service composition: independently initialized LogService objects are truly independent; multiple can be created and embedded in larger application contexts.
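For example, two independently initialized LogService instances can live side by side without sharing appenders or thresholds (a sketch; the wrapper types and file paths are purely illustrative):
import namespaced_logging

type
  HttpApi = object
    logService: LogService
  BatchJobs = object
    logService: LogService

var apiLogging = initLogService()
apiLogging.addAppender(initFileLogAppender("api.log"))

var jobLogging = initLogService()
jobLogging.addAppender(initFileLogAppender("jobs.log"))
jobLogging.setThreshold("jobs", lvlWarn)  # has no effect on apiLogging

let api = HttpApi(logService: apiLogging)
let jobs = BatchJobs(logService: jobLogging)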
Tip
The LogService object is the one intended to be shared across threads.
ThreadLocalLogService (Reference Type)
type ThreadLocalLogService* = ref LogService
The ThreadLocalLogService is a reference to a thread-local copy of a LogService and can be obtained via threadLocalRef. We purposefully use reference semantics within the context of a thread so that Logger objects created within the same thread context share the same ThreadLocalLogService reference, avoiding the need to synchronize every Logger individually.
The ThreadLocalLogService is the object that users are expected to interact with during regular operation; it supports both the configuration functions of LogService and the creation of Logger objects.
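For example, loggers created from the same ThreadLocalLogService see configuration changes made through that shared reference without any per-logger synchronization (a minimal sketch):
import namespaced_logging

var ls = initLogService()
ls.addAppender(initConsoleLogAppender())

let localLogSvc = threadLocalRef(ls)
let apiLogger = localLogSvc.getLogger("api")
let dbLogger = localLogSvc.getLogger("db")

# Both loggers share localLogSvc, so this change is visible to both of them
localLogSvc.setRootThreshold(lvlDebug)

apiLogger.debug("now visible")
dbLogger.debug("also visible")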
Caution
ThreadLocalLogService objects should never be shared outside the context of the thread in which they were initialized.
Thread Safety Model
Safe Cross-Thread Pattern
# Main thread setup
var logService = initLogService()
logService.addAppender(initConsoleLogAppender())
# Safe: value semantics allow crossing thread boundaries
proc workerThread(ls: LogService) {.thread.} =
  # Convert to thread-local reference for efficient operations
  let tlls = threadLocalRef(ls)
  let logger = tlls.getLogger("worker")
  logger.info("Worker thread started")
var worker: Thread[LogService]
createThread(worker, workerThread, logService)
Unsafe Pattern (Avoided by Design)
# DON'T DO THIS - unsafe reference sharing
# ThreadLocalLogService should not be shared across threads
let tlls = threadLocalRef(initLogService())
createThread(worker, someProc, tlls) # ❌ Potential GC issues
Configuration Synchronization
Atomic Version Checking
The library uses atomic version numbers to efficiently detect configuration changes:
proc ensureFreshness*(ls: var LogService) =
  # Cheap atomic check first
  if ls.configVersion == ls.global.configVersion.load():
    return # No changes, return immediately

  # Only acquire lock and copy if versions differ
  withLock ls.global.lock:
    ls.configVersion = ls.global.configVersion.load()
    # Sync state...
This design ensures that:
- Hot path is fast: Most logging operations skip expensive synchronization
- Changes propagate automatically: All threads see configuration updates
- Minimal lock contention: Locks only acquired when configuration changes
Thread-Local Caching
Each thread maintains its own copy of the logging configuration:
- Appenders: thread-local copies created via the clone() method
- Thresholds: complete copy of the namespace-to-level mappings
- Version tracking: local version number for change detection
This caching strategy provides:
- High performance: No locks needed for normal logging operations
- Consistency: All threads eventually see the same configuration
- Isolation: Thread-local state prevents cross-thread interference
Error Handling
Overview
The namespaced logging library implements a callback-based error handling system designed to gracefully handle failures that may occur during logging operations. Since logging is typically a non-critical operation, the library prioritizes application stability over guaranteed log delivery, but provides mechanisms for applications to monitor and respond to logging failures.
Error Handler Pattern
The library uses a callback-based error handling pattern where applications can register custom error handlers to be notified when logging operations fail. The error handler receives:
- error: the exception that caused the failure
- msg: a descriptive message providing context about where the error occurred
type ErrorHandlerFunc* = proc(error: ref Exception, msg: string) {.gcsafe, nimcall.}
Thread-Safe Error Reporting
All error handling is thread-safe and uses a separate lock to prevent deadlocks:
proc reportLoggingError(gls: GlobalLogService, err: ref Exception, msg: string) =
  var handler: ErrorHandlerFunc

  # Quickly grab the handler under lock
  withLock gls.errorHandlerLock:
    handler = gls.errorHandler

  # Call handler outside the lock to avoid blocking other threads
  if not handler.isNil:
    try: handler(err, msg)
    except:
      # If custom handler fails, fall back to default
      try: defaultErrorHandlerFunc(err, msg)
      except Exception: discard
Default Error Handling
Default Behavior
When no custom error handler is configured, the library uses defaultErrorHandlerFunc, which:
- Attempts to write to stderr: Most likely to be available and monitored
- Includes full context: Error message, stack trace, and context
- Fails silently: If stderr is unavailable, gives up gracefully
proc defaultErrorHandlerFunc*(err: ref Exception, msg: string) {.gcsafe, nimcall.} =
  try:
    stderr.writeLine("LOGGING ERROR [" & msg & "]: " & err.msg)
    stderr.writeLine($err.getStackTrace())
    stderr.flushFile()
  except Exception:
    discard # If we can't write to stderr, there's nothing else we can do
Configuration
Setting Custom Error Handlers
# During initialization
var logService = initLogService(errorHandler = myCustomErrorHandler)
# Or at runtime on either the LogService...
logService.setErrorHandler(myCustomErrorHandler)
# ... or on a ThreadLocalLogService
var localLogSvc = threadLocalRef(logService)
localLogSvc.setErrorHandler(myCustomErrorHandler)
Disabling Error Reporting
proc silentErrorHandler(err: ref Exception, msg: string) {.gcsafe, nimcall.} =
  discard # Do nothing

logService.setErrorHandler(silentErrorHandler)
Error Handling Examples
Example 1: Monitoring and Metrics
import std/atomics

var errorCount: Atomic[int]
var lastError: string

proc monitoringErrorHandler(err: ref Exception, msg: string) {.gcsafe, nimcall.} =
  errorCount.atomicInc()
  {.cast(gcsafe).}: # lastError is a GC'd global; the check is overridden for this example
    lastError = msg & ": " & err.msg

  # Still log to stderr for immediate visibility
  try:
    stderr.writeLine("LOGGING ERROR [" & msg & "]: " & err.msg)
    stderr.flushFile()
  except: discard
# Usage
let logService = initLogService(errorHandler = monitoringErrorHandler)
# Later, check error status
if errorCount.load() > 0:
  echo "Warning: ", errorCount.load(), " logging errors occurred"
  echo "Last error: ", lastError
Example 2: Alternative Logging Destination
import std/times  # for now()

proc fileErrorHandler(err: ref Exception, msg: string) {.gcsafe, nimcall.} =
  try:
    var f: File
    if open(f, "logging_errors.log", fmAppend):
      f.writeLine($now() & " [" & msg & "]: " & err.msg)
      f.writeLine($err.getStackTrace())
      f.writeLine("---")
      f.close()
  except:
    # If file logging fails, fall back to stderr
    try:
      stderr.writeLine("LOGGING ERROR [" & msg & "]: " & err.msg)
      stderr.flushFile()
    except: discard

let logService = initLogService(errorHandler = fileErrorHandler)
Example 3: Development vs Production
import std/times  # for now()

proc createErrorHandler(isDevelopment: bool): ErrorHandlerFunc =
  if isDevelopment:
    # Verbose error reporting for development
    proc devErrorHandler(err: ref Exception, msg: string) {.gcsafe, nimcall.} =
      stderr.writeLine("=== LOGGING ERROR ===")
      stderr.writeLine("Context: " & msg)
      stderr.writeLine("Error: " & err.msg)
      stderr.writeLine("Stack Trace:")
      stderr.writeLine($err.getStackTrace())
      stderr.writeLine("====================")
      stderr.flushFile()
    return devErrorHandler
  else:
    # Minimal error reporting for production
    proc prodErrorHandler(err: ref Exception, msg: string) {.gcsafe, nimcall.} =
      try:
        var f: File
        if open(f, "/var/log/myapp/logging_errors.log", fmAppend):
          f.writeLine($now() & " " & msg & ": " & err.msg)
          f.close()
      except: discard
    return prodErrorHandler
# Usage
when defined(release):
  let logService = initLogService(errorHandler = createErrorHandler(false))
else:
  let logService = initLogService(errorHandler = createErrorHandler(true))
Best Practices
Provide Fallbacks
proc robustErrorHandler(err: ref Exception, msg: string) {.gcsafe, nimcall.} =
  # Primary: send to a monitoring system (sendToMonitoring and writeToErrorLog
  # are application-defined and return false on failure)
  if not sendToMonitoring(err, msg):
    # Secondary: write to a dedicated error log
    if not writeToErrorLog(err, msg):
      # Tertiary: use stderr as a last resort
      try:
        stderr.writeLine("LOGGING ERROR [" & msg & "]: " & err.msg)
        stderr.flushFile()
      except: discard
Keep Error Handlers Simple
# Good: Simple and focused
proc simpleErrorHandler(err: ref Exception, msg: string) {.gcsafe, nimcall.} =
  errorCounter.atomicInc()  # errorCounter: a global Atomic[int], as in Example 1
  try:
    stderr.writeLine("LOG ERROR: " & msg)
    stderr.flushFile()
  except: discard

# Avoid: Complex operations that might themselves fail
proc complexErrorHandler(err: ref Exception, msg: string) {.gcsafe, nimcall.} =
  # Don't do heavy operations like database writes, complex network calls, or
  # filesystem operations that might fail and cause cascading errors
  discard