Monday, 15 October 2007 17:53

An Ubuntu guide to taming the Linux kernel

By David M Williams
Although Linux is frequently referred to by the names of various distributions, what can properly be called “Linux” is really the core of the operating system, the kernel, which manages the computer’s hardware on behalf of everything else. Here’s how the kernel works in Ubuntu, and how to rebuild it.

The kernel is what Linus Torvalds produced in 1991 which, coupled with the GNU Project’s suite of tools, brought the power of UNIX freely to the PC world. Many people worldwide now contribute to the kernel alongside Torvalds, but it is still he who determines what is included in official releases.

Versioning
The kernel is constantly under development. Feature enhancements and bug fixes are made at a rapid pace. Theoretically, you could update your kernel every week; in practice, you cannot know how well tested the very latest changes are. Fortunately, the Linux kernel team use release management methodologies to provide periodic stable releases that are regarded as safe for production, and each release has a unique version number. These consist of a major number, a minor number and a sublevel number, and then for Ubuntu an additional number known as the extraversion level. This latter number reflects patches and add-ons made by Ubuntu’s team to make the kernel work in their distribution. You can determine the version of your kernel by executing the command uname -r.
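For example, on a stock Ubuntu 7.10 system the output might look something like this (the exact string depends on which kernel package you have installed):

  $ uname -r
  2.6.22-14-generic

Here 2 is the major number, 6 the minor number and 22 the sublevel, while -14-generic is Ubuntu’s extraversion level, identifying its own patch revision and kernel flavour.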

By convention, kernels with even-numbered minor versions are stable releases, and those with odd-numbered minor versions are development releases which should only be used by those prepared to experiment with code still under test.

The source tree
As you might expect, the kernel is free, open-source software. Its source code is freely available and is generally included with all Linux distros, though it may not be installed unless explicitly selected during setup. The source code is not needed at all for the ordinary running of Linux, but if it is installed (either at the time of system setup, or later), it will be found in Ubuntu distributions within the /usr/src directory under a folder named with its version number.
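If the source is not already present, it can be added later from the Ubuntu repositories. As a rough sketch (the version number in the tarball name will vary with your release):

  $ sudo apt-get install linux-source
  $ cd /usr/src
  $ tar xjf linux-source-2.6.22.tar.bz2

This unpacks the tree into a folder such as /usr/src/linux-source-2.6.22.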

The kernel source can be retrieved in a variety of ways, but the absolute latest will always be available via FTP from ftp.kernel.org in a compressed format.
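For example, to fetch a specific release directly (the version here is purely illustrative):

  $ wget ftp://ftp.kernel.org/pub/linux/kernel/v2.6/linux-2.6.23.tar.bz2
  $ tar xjf linux-2.6.23.tar.bz2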

The source code is structured in a complex directory hierarchy known as the source tree. As the kernel is such a large piece of software, scripts are used (and provided) to compile it and these expect to find files in specific places in the tree.

Browsing the code does give some important insights. One very helpful subdirectory is Documentation which, as you might expect, contains a cornucopia of text files describing the kernel’s operation and how it really works at a very low level. Many documents here are aimed at kernel programmers but you will find loads of generally useful information – for instance, the file devices.txt lists all the devices that are catered for in the system’s /dev directory along with a brief description. If you have received driver error messages citing device numbers, chances are you can get some clues here to what the error is about.
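For instance, assuming the 2.6.22 source is installed under /usr/src (adjust the path to match your version):

  $ less /usr/src/linux-source-2.6.22/Documentation/devices.txt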


When new releases are issued, there’s no need to download the whole source tree again. A script supplied with the source, scripts/patch-kernel, automates applying the incremental patches needed to bring the tree on your system up to date. Once again, this script expects the source tree to have a specific structure, so it is important never to move any of the folders around even if you prefer to organise directories differently.
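As a minimal sketch of patching by hand, assuming a vanilla 2.6.22 tree fetched from kernel.org with the 2.6.23 patch downloaded alongside it (version numbers are illustrative, and Ubuntu’s own patched source will not necessarily accept official patches cleanly):

  $ cd /usr/src/linux-2.6.22
  $ bzcat ../patch-2.6.23.bz2 | patch -p1 --dry-run    # confirm the patch applies cleanly
  $ bzcat ../patch-2.6.23.bz2 | patch -p1              # then apply it for real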

In the top-level folder of the source tree you may find a hidden file named .config. This is the configuration with which the default supplied Linux kernel was compiled for your distribution. As Ubuntu – like any other Linux distro – must work for the widest possible range of people and hardware, some configuration choices are inevitably compromises.

You can make performance gains by building a tailor-made kernel specific to your exact hardware, including the type of processor you have. You may make your own .config file, specifying appropriate options, or you may find one that suits you under the configs subdirectory of the source tree, where provided; if so, copy it to the root of the source tree and name it .config.
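On Ubuntu, another reliable starting point is the configuration of the kernel you are already running, which is stored under /boot. Copying it in and then running make oldconfig will prompt you only for options that are new to your source tree:

  $ cp /boot/config-$(uname -r) .config
  $ make oldconfig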

That said, it is important to qualify that Ubuntu uses a modular kernel. Years ago, Linux kernels were a single compiled entity which contained all hardware drivers built-in. This approach had two big problems: if your kernel didn’t include a driver for a necessary piece of hardware, it had to be recompiled with that driver added; and if the kernel included too many drivers just in case, it was needlessly bloated and consumed more resources than it ought. The same was true of most UNIX systems that preceded Linux.

Modularised drivers
The solution to this was to modularise the kernel, so that only those device drivers absolutely needed for the kernel to start up are compiled in; all others are generated as modules which can be optionally loaded into memory once the system has booted, and only if required.

If you choose to rebuild your kernel, you can opt whether drivers are compiled in to the kernel or built as modules. This is an example of where you can tweak the kernel to suit your specific needs and application, but the actual performance differences will be hard to measure and changes like this are generally the preserve of deeply technical kernel hackers. I personally advocate managing modules for greater flexibility.

Ubuntu helps look after modules with four important commands:
* lsmod lists all the modules presently loaded.
* insmod attempts to load a specified module from its file; the modules built for the running kernel are kept under the directory /lib/modules.
* rmmod performs the reverse, attempting to unload the specified module from the currently running kernel. We say “attempt” because the named module may refuse to unload in certain circumstances, for instance if it is in use or other modules depend on it.
* depmod creates a list of modules that depend on other modules and thus require them to be loaded before they themselves can be loaded. This command can be run at any time, but is also run at system startup, writing its output to /lib/modules/<kernel-version>/modules.dep. If you inspect this file, you will see which modules have dependencies and what they are.

Building on these, modprobe provides similar functionality to insmod and rmmod but with the smarts provided by depmod: it will load or unload (using the -r flag) a module while taking dependent modules into account.
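A short example session (the module name here is illustrative; substitute one relevant to your hardware):

  $ lsmod | less                   # see what is currently loaded
  $ sudo modprobe usb-storage      # load a module plus anything it depends on
  $ sudo modprobe -r usb-storage   # unload it again, provided nothing is using it
  $ sudo depmod -a                 # rebuild the modules.dep dependency list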


Tuning the kernel
Most pieces of software use configuration files to adjust how they function; the kernel is no different in this regard. One of its most important configuration files is /etc/sysctl.conf which sets the value of a wide range of kernel parameters.

Almost every article on tweaking performance out of Ubuntu will invariably direct you to edit /etc/sysctl.conf. For example, broadband users might tweak TCP/IP settings, or you might adjust how much of the system’s RAM is used for shared memory (which is utilised for inter-process communication between separate running apps). These articles, and many others, are well worth reading, along with the sysctl.conf man page itself.
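As an illustration of the format only (the values below are placeholders, not recommendations), entries in /etc/sysctl.conf are simple key = value pairs:

  # excerpt from /etc/sysctl.conf
  net.ipv4.tcp_window_scaling = 1
  kernel.shmmax = 268435456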

You don’t need to reboot to see your modifications take effect; Ubuntu handily provides a command called sysctl to adjust parameters directly on the running kernel. Note that although sysctl will act right away, it won’t persist the modification; if you are happy with it, you still need to modify /etc/sysctl.conf. An example of using sysctl to alter a running kernel is given in the Ubuntu Community Docs to adjust timer resolutions for MIDI purposes.
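For example (the parameter and value are purely illustrative):

  $ sudo sysctl -w kernel.shmmax=268435456   # change the running kernel immediately
  $ sysctl kernel.shmmax                     # read the current value back
  $ sudo sysctl -p                           # re-apply everything in /etc/sysctl.conf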

Compiling the kernel
All this said, there are times when you might want to install a new kernel. For most people this will not be necessary, as Ubuntu provides a range of kernels for different processor types as well as letting modules be manipulated and parameters be tweaked as we have discussed, but rebuilding does allow you to make use of the very latest compilers and libraries, and allows you to exercise full control.

If you wish to go down this route, be sure to make a working boot disk so you can boot back up to a known state should something go wrong. From here, the steps are basically a sequence of ‘make’ commands.

Begin with make mrproper to tidy up the source tree. This command will clear all previously compiled binaries as well as other intermediate files. Note carefully that this will also remove the essential .config file, so either save a copy first, or wait until after running mrproper to create it. You can then use make menuconfig (text-based menus) or make xconfig (a GUI) to adjust this configuration file.
On older 2.4-series sources, run make dep to create code dependencies (2.6 kernels no longer require this step), then make clean to remove stale object files, and finally make bzImage to actually generate a compressed kernel image.

Execute make modules to compile modules; make modules_install to install the resulting binaries into /lib/modules. Run make install to copy your new kernel to /boot and to perform other important tasks.
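Put together, a complete build might look something like this (run from the top of the source tree; the sudo steps assume you are building as an ordinary user on Ubuntu):

  $ make mrproper                        # clean the tree (removes .config, so back it up first)
  $ cp /boot/config-$(uname -r) .config  # or supply your own configuration
  $ make menuconfig                      # review and adjust options
  $ make bzImage                         # build the compressed kernel image
  $ make modules                         # build the modules
  $ sudo make modules_install            # install modules into /lib/modules
  $ sudo make install                    # copy the kernel to /boot and update the boot loader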

You’ve finished with the source code at this point, but before rebooting to test your new kernel, be sure to verify that your boot loader configuration has been updated to use it: for GRUB on Ubuntu this is /boot/grub/menu.lst (grub.conf on some other distributions), while LILO users should check /etc/lilo.conf and re-run lilo after any change.
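A GRUB entry for a custom kernel typically looks like the following sketch (the version, partition and root device are illustrative and must match your system):

  title   Ubuntu, custom kernel 2.6.23
  root    (hd0,0)
  kernel  /boot/vmlinuz-2.6.23 root=/dev/sda1 ro
  initrd  /boot/initrd.img-2.6.23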

While rebooting, monitor the screen for any errors and be sure to inspect /var/log/messages after successfully logging in.

With all of this understood, you now have a good grounding to dive into the kernel and take full control over how it operates on your own computer.


David M Williams

David has been computing since 1984 where he instantly gravitated to the family Commodore 64. He completed a Bachelor of Computer Science degree from 1990 to 1992, commencing full-time employment as a systems analyst at the end of that year. David subsequently worked as a UNIX Systems Manager, Asia-Pacific technical specialist for an international software company, Business Analyst, IT Manager, and other roles. David has been the Chief Information Officer for national public companies since 2007, delivering IT knowledge and business acumen, seeking to transform the industries within which he works. David is also involved in the user group community, the Australian Computer Society technical advisory boards, and education.
