Logging and Log Analysis Questions

Covers operating system and application logging architecture, log collection, parsing, analysis, and security monitoring workflows. Topics include where logs are stored on Linux systems, system logging daemons and their configuration such as rsyslog, using the systemd journal and journalctl, and log rotation and retention strategies. Skills include parsing and inspecting logs with command line tools and regular expressions, extracting key fields such as timestamps, user identifiers, internet protocol addresses, actions performed, and error codes, and working with structured log formats such as JavaScript Object Notation. Also includes forwarding logs to centralized systems and agents, transport protocols and collectors, and upstream processing pipelines. For security and monitoring, this covers log aggregation, normalization, event correlation, alerting and thresholding, building searches and dashboards, and deriving forensic and operational insights for incident response and troubleshooting. Candidates may be evaluated on practical configuration tasks, example queries, interpreting log entries, designing log pipelines for reliability and scale, and applying best practices for retention, privacy, and performance.

Easy · Technical
Explain the benefits and risks of centralized logging compared to keeping logs only on each host. Discuss considerations for reliability (agent buffering), latency, privacy and compliance (PII), single points of failure, and mitigations such as multi-region replication and local buffering. Provide a short evaluation checklist to decide if a centralized solution meets enterprise requirements.
Medium · System Design
Design a reliable forwarding pipeline where rsyslog on each host forwards structured JSON logs to Kafka, which then feeds Elasticsearch for indexing. Include:
- a short rsyslog configuration snippet using omkafka or an appropriate module and a JSON template
- how to ensure retry/backpressure behavior when Kafka is overloaded
- a partitioning and keying strategy to preserve ordering per flow
- how to detect and handle message loss
Explain the trade-offs of using Kafka as the buffer layer.
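A minimal sketch of what the rsyslog side could look like, assuming the omkafka module is installed and brokers named kafka1/kafka2 are reachable; the topic name, template fields, and queue sizes are illustrative rather than prescriptive:

```
# Load the Kafka output module (shipped separately, e.g. as rsyslog-kafka)
module(load="omkafka")

# Render each event as a single-line JSON object
template(name="json_event" type="list") {
  constant(value="{\"timestamp\":\"")  property(name="timereported" dateFormat="rfc3339")
  constant(value="\",\"host\":\"")     property(name="hostname")
  constant(value="\",\"severity\":\"") property(name="syslogseverity-text")
  constant(value="\",\"message\":\"")  property(name="msg" format="json")
  constant(value="\"}")
}

action(
  type="omkafka"
  broker=["kafka1:9092","kafka2:9092"]
  topic="host-logs"
  template="json_event"
  # Disk-assisted action queue: buffers locally when Kafka is slow or down
  queue.type="LinkedList"
  queue.filename="kafka_fwd"
  queue.maxDiskSpace="1g"
  queue.saveOnShutdown="on"
  # Keep retrying instead of discarding when the broker rejects writes
  action.resumeRetryCount="-1"
)
```

The disk-assisted queue is what gives each host tolerance to backpressure; keying messages by host or flow identifier keeps per-source ordering within a Kafka partition, and loss detection typically relies on per-host sequence numbers or on comparing producer-side counters (for example from rsyslog's impstats module) against consumer-side metrics.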
Medium · Technical
Implement a robust function (in Python or Go) that parses timestamps found in logs which may appear as any of:
- 10/Oct/2024:13:55:36 -0700
- 2024-10-09T11:23:45Z
- 1700000000 (epoch seconds)
The function should normalize all to UTC ISO8601 strings, handle timezone offsets, and return a clear error for invalid formats. Describe your approach and show key code or algorithmic steps.
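One possible shape for such a function in Python (the name normalize_timestamp and the choice to treat an all-digit string as epoch seconds are assumptions made for this sketch):

```python
from datetime import datetime, timezone

def normalize_timestamp(raw: str) -> str:
    """Normalize a log timestamp to a UTC ISO 8601 string.

    Handles, for example:
      10/Oct/2024:13:55:36 -0700   (Apache/CLF style)
      2024-10-09T11:23:45Z         (ISO 8601)
      1700000000                   (epoch seconds)
    Raises ValueError for anything it cannot parse.
    """
    raw = raw.strip()

    # Epoch seconds: all digits
    if raw.isdigit():
        return datetime.fromtimestamp(int(raw), tz=timezone.utc).isoformat()

    # ISO 8601; map a trailing 'Z' to an explicit offset for older Pythons
    try:
        dt = datetime.fromisoformat(raw.replace("Z", "+00:00"))
    except ValueError:
        # Apache common-log style with a numeric UTC offset
        try:
            dt = datetime.strptime(raw, "%d/%b/%Y:%H:%M:%S %z")
        except ValueError:
            raise ValueError(f"unrecognized timestamp format: {raw!r}")

    if dt.tzinfo is None:
        # Policy choice: treat offset-less timestamps as UTC
        dt = dt.replace(tzinfo=timezone.utc)
    return dt.astimezone(timezone.utc).isoformat()
```

For example, normalize_timestamp("10/Oct/2024:13:55:36 -0700") returns "2024-10-10T20:55:36+00:00". A production version would usually register additional format patterns and make the handling of offset-less timestamps an explicit, documented decision.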
Hard · System Design
Design a scalable log ingestion architecture that accepts 100k log events per second from 10,000 hosts, supports near-real-time search (<=30s), multi-tenant isolation, encryption at rest and in transit, and retention tiers (3 days hot, 1 year cold). Include components (agents, messaging layer, stream processors, indexers, object storage), HA strategies, backpressure and buffering design, and an operational runbook for common failure modes.
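A rough capacity estimate often anchors this kind of answer. Assuming an average event size of about 1 KB (not stated in the question), the arithmetic looks roughly like:

```
100,000 events/s x ~1 KB   ~= 100 MB/s sustained ingest
100 MB/s x 86,400 s        ~= 8.6 TB/day raw
hot tier, 3 days           ~= 26 TB before replication and index overhead
cold tier, 1 year          ~= 3.2 PB, i.e. compressed object storage territory
```

Numbers in this range are what justify separating a replayable buffer (messaging layer) from the indexers and pushing long-term retention into object storage.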
Easy · Technical
Demonstrate practical journalctl usage: give commands and short explanations for the following tasks:
1) List all boots and show the last boot's ID
2) Show logs for unit nginx.service filtered to warning and above
3) Follow logs in real time for a unit
4) Export journal entries in JSON for downstream parsing
Also explain the significance of fields like _SYSTEMD_UNIT and _PID in the journal.
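One plausible set of commands (the unit name nginx.service comes from the question; exact output varies by systemd version):

```
# 1) List all boots; the current boot has index 0 and appears last
journalctl --list-boots

# 2) Logs for nginx.service at priority warning and above
journalctl -u nginx.service -p warning

# 3) Follow a unit's logs in real time
journalctl -u nginx.service -f

# 4) Export entries as line-delimited JSON for downstream parsing
journalctl -u nginx.service -o json
```

Fields that begin with an underscore, such as _SYSTEMD_UNIT and _PID, are trusted fields that journald attaches from the sending process's credentials rather than from the message text, which makes them reliable keys for filtering and correlation.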
