Linux is the largest software development project on the planet: Greg Kroah-Hartman – CIO
Greg Kroah-Hartman is second in command in the Linux kernel community. In addition to doing great work on device drivers, he also maintains the stable tree of the Linux kernel.
In his keynote at CoreOS Fest in Berlin this week, Kroah-Hartman offered some inside perspective on just how massive the Linux kernel project is. I also had a chance to sit down with him to talk about the kernel and security.
Let’s start with the code base. Kroah-Hartman said the latest release (4.5), made two months ago, contains over 21 million lines of code.
That’s a huge amount of code, and it may lead people to think that Linux is becoming far too big to run on smaller devices. But vendors don’t run all 21 million lines of code on their devices; they choose what they need. As Kroah-Hartman said, “You don’t run all of this stuff. All the drivers for all the hardware are in the kernel all together. My laptop runs about 1.6 million lines of code. Your phone runs about 2.5 million lines of code.”
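In practice, “choosing what you need” happens in the kernel’s build configuration: every subsystem and driver sits behind a CONFIG option, and only what is enabled gets compiled. A hypothetical `.config` fragment (the option names are real; the particular selection is illustrative):

```
# Core networking is built in; only selected drivers are compiled.
CONFIG_NET=y
# One laptop's Intel NIC driver, built as a loadable module:
CONFIG_E1000E=m
# Anything left unset is never built at all:
# CONFIG_INFINIBAND is not set
```

Targets like `make defconfig` or `make menuconfig` generate such a file, which is why a laptop build ends up compiling only a couple of million of the tree’s 21 million lines.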
What everyone does run is the core of the kernel, and that core is about 5 percent of the total Linux code base — 35 percent is networking stuff and over 40 percent is drivers.
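Taken at face value against the roughly 21-million-line 4.5 tree, those percentages work out as follows (back-of-envelope arithmetic on the quoted figures, not a real count):

```shell
# Rough split of the ~21M-line Linux 4.5 tree, per the quoted percentages.
total=21000000
echo "core:       $(( total * 5  / 100 )) lines"   # 1050000
echo "networking: $(( total * 35 / 100 )) lines"   # 7350000
echo "drivers:    $(( total * 40 / 100 )) lines"   # 8400000
```

So the core that everyone runs is only about a million lines; the bulk of the tree is hardware support that any given device mostly ignores.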
More impressive than the amount of code, and what truly makes Linux the world’s largest software project, is the fact that last year around 4,000 developers from at least 440 different companies contributed to the kernel. Kroah-Hartman said, “It’s the largest software development project ever, in the history of computing — by the number of people using it, developing it, and now using it, and the number of companies involved. It’s a huge number of people.”
Over 10,800 lines of code are added, 5,300 lines removed, and over 1,875 lines modified. Every. Single. Day. That works out to roughly eight accepted changes every hour.
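A quick sanity check on those daily figures (simple arithmetic, assuming the quoted numbers):

```shell
# Lines touched per day in the kernel, per the figures above.
added=10800
removed=5300
modified=1875
per_day=$(( added + removed + modified ))
per_hour=$(( per_day / 24 ))
echo "$per_day lines touched per day, about $per_hour per hour"
# prints: 17975 lines touched per day, about 748 per hour
```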
That’s massive. It means the Linux kernel, unlike many other technologies, is constantly changing, evolving, and getting better.
Kroah-Hartman said, “When I first started doing this, we were doing two-and-a-half changes an hour. Everybody was like, ‘Huh, no way. We can never go faster; that’s insane.’ Microsoft and Apple said, ‘You guys win.’ Literally, they said that … ‘We cannot compete. You guys are going to go farther than anybody. There’s no way we can keep up.’ We are going faster, and we keep going faster every single time.”
But that pace of change can also look scary if your business relies on Linux. Kroah-Hartman explained why they make so many changes: “We make a lot of changes, and we’re not just making changes because we like to, because that’s more work. We’re really lazy. We’re making changes because we have to. We’re making changes because the world changes. The model of ‘you make a box and you make it static and you throw it in the corner’ doesn’t work, because that box has to touch the world and the world changes. Everything interacts, so you have to evolve. If your operating system does not change, it is dead. It’s that simple. If your device does not change based on the world it interacts with, it is dead. It’s that simple. So look at operating systems that don’t change, nobody uses them anymore.”
To keep things sane despite all the changes, the kernel community does two things. First, it makes time-based releases. Second, it relies on incremental changes. Once a release is made, the merge window for the next one opens, and developers throw everything at it — all the new stuff, new features. When that window closes, the first release candidate (rc1) is tagged and tested rigorously, and every rc after it is bug-fix only. There can be another seven or eight rcs on the same branch to beat out all the bugs.
Once it’s ready, they release the new version of the kernel, so it’s very well tested. But there is one more problem: people running the stable release still need bug fixes for that release, yet they don’t want to run release candidates in production, so how do they get those fixes? The kernel community found the answer some 15 years ago, and that’s Kroah-Hartman’s job. He forks the stable release, say 4.2, and maintains it with bug fixes, releasing 4.2.1, 4.2.2, 4.2.3, and so on.
“The rule is it has to be a bug fix, and it has to be obviously correct, or a new device ID, and it has to be in Linus’s tree. It has to be in Linus’s tree before I will take it into the stable tree. That ensures that the people running and relying on our stable trees, if they jump to the new one, it doesn’t break; nothing happens differently. That’s the rules. Those have worked out really well,” Kroah-Hartman said.
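That “Linus’s tree first” rule can be sketched with a toy git repository (the commit messages and paths here are hypothetical; the real stable trees live on kernel.org). A fix is committed to mainline first, then cherry-picked into the stable branch with `-x` so the stable commit records which mainline commit it came from:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"

# A stand-in for Linus's mainline tree.
git init -q mainline
cd mainline
git config user.email "dev@example.com"
git config user.name "Dev"
echo "core code" > kernel.c
git add kernel.c
git commit -qm "4.2 release"
git tag v4.2

# The stable tree forks from the release tag.
git branch linux-4.2.y v4.2

# A bug fix must land in mainline first...
echo "race fixed" >> kernel.c
git commit -qam "fix: close race in driver teardown"
fix=$(git rev-parse HEAD)

# ...and only then gets cherry-picked into stable; -x records the origin.
git checkout -q linux-4.2.y
git cherry-pick -x "$fix" >/dev/null
git tag v4.2.1

git log -1 --format=%B   # the stable commit message cites the mainline commit
```

The real trees work the same way at a much larger scale: each stable release (v4.2.1, v4.2.2, ...) is a tag on a `linux-4.2.y` branch that contains only fixes cherry-picked from mainline.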
But when the next version (4.3) comes out, Kroah-Hartman drops the current one (4.2) and moves to maintaining 4.3. One thing the kernel community is excellent at is ensuring that nothing breaks with such new releases.
“I do a release about once a week and in that, about every release, is about 100, 150 patches per week of stable fixes. That’s a lot. That’s a lot of stuff changing, a lot of stuff being fixed. That’s what we do — stable kernels. The nice thing when 4.3 comes out, I throw it away. I say, ‘Ah! 4.2, I don’t want that anymore,’ and move on because we guarantee that you can continue on and everybody’s happy.”
This article is published as part of the IDG Contributor Network.