I Spent a Day Learning How the Linux Kernel Actually Works — Here's What Surprised Me Most
I never studied computer science. My degree is in Arts. I learned to code from YouTube videos, late nights, and a stubborn refusal to accept that I couldn't figure things out on my own. So when I say I sat down one afternoon and tried to understand the Linux kernel, I mean a self-taught developer from a small town in West Bengal decided to go down a rabbit hole that most CS grads skip past even in college.
It started with a simple question I couldn't shake: when I write fs.readFile() in Node.js, what actually happens? Like, what really happens, below the JavaScript, below Node, below everything I can see? I had been building backends for over a year, writing APIs, managing databases, deploying to servers — and I realized I had no real idea what was happening underneath any of it. That bothered me.
The first thing that genuinely surprised me was learning that modern CPUs are not one flat thing. They operate in different modes. When the kernel is running, the CPU is in what's called privileged mode — it can do anything, touch any memory, talk directly to hardware. But when your application runs, the CPU is in a restricted mode. Your Node process, your browser, your terminal — none of them can directly talk to the hardware. They have to ask. That "asking" is called a system call, and it's essentially the only door between your code and the actual machine. I had heard the word syscall before but never understood it was this literal — a controlled gate that switches the CPU from user mode to kernel mode and back.
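To make that "one door" idea concrete, here's a small sketch in Python (chosen for brevity; the shape is identical in C). The `os` module's `open`, `read`, and `close` are thin wrappers over the kernel's system calls of the same names, unlike Python's buffered built-in `open()` — every call below is a controlled switch into kernel mode and back:

```python
import os
import tempfile

def read_via_syscalls(path):
    """Read a whole file using the raw syscall wrappers in os.

    os.open / os.read / os.close map almost one-to-one onto the
    openat / read / close system calls. Each call crosses the
    user-mode/kernel-mode boundary once in each direction.
    """
    fd = os.open(path, os.O_RDONLY)   # syscall: openat
    chunks = []
    while True:
        chunk = os.read(fd, 4096)     # syscall: read
        if not chunk:
            break
        chunks.append(chunk)
    os.close(fd)                      # syscall: close
    return b"".join(chunks)

# Demo: write a file the ordinary way, read it back at syscall level.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello from user space")
    path = f.name

print(read_via_syscalls(path))  # b'hello from user space'
os.unlink(path)
```

If you run a script like this under `strace`, you can watch these exact transitions happen, which is the most direct way I know to see the kernel boundary with your own eyes.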
Then I learned about what happens when you turn on a computer. The kernel doesn't wait for anyone. It boots up, sets up everything it needs, and then manually creates the very first user-space process: PID 1. On most Linux systems today that's systemd. What's interesting is that the kernel doesn't use a system call to do this. It builds PID 1 by hand, in kernel code itself. System calls are for user processes talking to the kernel; the kernel doesn't call itself. PID 1 then becomes the ancestor of every other user-space process on that machine, and it even adopts orphaned processes whose parents exit before them. Every process you start, every app you open, everything traces back to that first process.
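You can verify this ancestry chain yourself. The sketch below (Linux-only, since it reads `/proc`) walks from the current process up through its parents until it reaches PID 1:

```python
import os

def ppid_of(pid):
    """Look up a process's parent PID from /proc/<pid>/status (Linux-only)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("PPid:"):
                return int(line.split()[1])
    raise RuntimeError(f"no PPid line for pid {pid}")

def ancestry(pid):
    """Walk the parent chain from pid all the way up to PID 1."""
    chain = [pid]
    while chain[-1] > 1:
        chain.append(ppid_of(chain[-1]))
    return chain

# Something like [12345, 4321, 987, 1]: this process, its shell,
# the shell's parent, ... ending at PID 1.
print(ancestry(os.getpid()))
```

On my machine the chain runs from the Python process through the shell and terminal emulator back to systemd, which is the claim above made visible.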
The part that connected the most for me was the fork and exec pattern. When you open an app from a desktop environment or a shell, here's what actually happens: the parent process clones itself using something called fork(). For a brief moment there are two identical copies of that process running. Then the child calls exec() (strictly, one of the exec family of functions, which all come down to the execve system call), which completely replaces its memory and code with the new program. It's like making a photocopy of yourself, and then the copy transforms into someone else entirely. I find it bizarre that this is how it works under the hood, but also kind of elegant once it clicks.
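The clone-then-transform pattern fits in a few lines. This sketch uses Python's `os.fork()` and `os.execvp()`, which are thin wrappers over the underlying fork and execve system calls, so the shape is exactly what a shell does in C (`os.waitstatus_to_exitcode` needs Python 3.9+):

```python
import os

def spawn(argv):
    """Run a program the way a shell does: fork, then exec in the child."""
    pid = os.fork()      # after this line, two identical copies are running
    if pid == 0:
        # Child: replace this process's memory and code with the new program.
        os.execvp(argv[0], argv)
        os._exit(127)    # execvp only returns if it failed
    # Parent: wait for the transformed child to finish.
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)

print(spawn(["true"]))   # 0: the 'true' utility exits successfully
```

Note that the parent survives the whole thing unchanged; only the copy is transformed. That's why your shell is still there after a command finishes.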
What tied everything together for me was understanding why the kernel exists at all. It's fundamentally a security layer. User processes can contain malicious code. A badly written app, or a deliberately evil one, should not be able to wipe your disk or intercept your network traffic just because it's running on your machine. The kernel sits between every process and the hardware, and it decides what's allowed. Memory is isolated too: process A cannot read process B's memory unless the two have explicitly arranged to share it. This is the kernel enforcing boundaries. Every time I write code that reads a file or opens a network connection, I'm asking the kernel for permission to do something I fundamentally cannot do on my own.
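You can see a toy version of that isolation with fork itself. After fork(), parent and child have separate address spaces (copy-on-write under the hood), so nothing the child writes to its own memory ever reaches the parent; the only way to communicate is through a channel the kernel provides, like a pipe. A sketch:

```python
import os

def fork_isolation():
    """Show that a forked child's writes never touch the parent's memory."""
    r, w = os.pipe()                     # kernel-provided shared channel
    value = [42]
    pid = os.fork()
    if pid == 0:
        value[0] = 99                    # mutates only the child's copy
        os.write(w, bytes([value[0]]))   # report it through the pipe
        os._exit(0)
    os.waitpid(pid, 0)
    child_saw = os.read(r, 1)[0]
    os.close(r)
    os.close(w)
    return value[0], child_saw

print(fork_isolation())  # (42, 99): child saw 99, parent's copy untouched
```

This only demonstrates isolation between a parent and its own child; the stronger guarantee, that an unrelated process can't read your memory at all, is the same mechanism (per-process page tables) enforced everywhere.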
I want to be honest about something though. I learned all of this on a day when I probably should have been doing DSA practice. I'm in the middle of a job search right now. I was laid off last month and I have interviews coming up. Kernel internals are not what anyone is going to ask me in a first round at a product company. And yet I couldn't help myself, because this is the kind of thing that makes me feel like I actually understand the tools I use instead of just operating them blindly. There's a difference between knowing how to drive and knowing how an engine works. I want to know both.
If you're a self-taught developer like me, I'd say this: you don't need to learn kernel internals to get a job or build great products. But if you're the kind of person who gets annoyed not knowing why something works, even one afternoon spent here is worth it. The mental model you build changes how you think about everything above it — your runtime, your OS, your containers, your servers. It's like finally seeing the floor that everything else is built on.

