Linux Device Drivers (3rd Edition) [Electronic resources]

Jonathan Corbet, Greg Kroah-Hartman, Alessandro Rubini

2.4. Compiling and Loading


The "hello world" example at the
beginning of this chapter included a brief demonstration of building
a module and loading it into the system. There is, of course, a lot
more to that whole process than we have seen so far. This section
provides more detail on how a module author turns source code into an
executing subsystem within the kernel.


2.4.1. Compiling Modules


As the first step, we need to look a bit at how modules must be
built. The build process for modules
differs significantly from that used for user-space applications; the
kernel is a large, standalone program with detailed and explicit
requirements on how its pieces are put together. The build process
also differs from how things were done with previous versions of the
kernel; the new build system is simpler to use and produces more
correct results, but it looks very different from what came before.
The kernel build system is a complex beast, and we just look at a
tiny piece of it. The files found in the
Documentation/kbuild directory in the kernel
source are required reading for anybody wanting to understand all
that is really going on beneath the surface.

There are some prerequisites that you must get out of the way before
you can build kernel modules. The first is to ensure that you have
sufficiently current versions of the compiler, module utilities, and
other necessary tools. The file
Documentation/Changes in the kernel
documentation directory always lists the required tool versions; you
should consult it before going any further. Trying to build a kernel
(and its modules) with the wrong tool versions can lead to no end of
subtle, difficult problems. Note that, occasionally, a version of the
compiler that is too new can be just as problematic as one that is
too old; the kernel source makes a great many assumptions about the
compiler, and new releases can sometimes break things for a while.

If you still do not have a kernel tree handy, or have not yet
configured and built that kernel, now is the time to go do it. You
cannot build loadable modules for a 2.6 kernel without this tree on
your filesystem. It is also helpful (though not required) to be
actually running the kernel that you are building for.

Once you have everything set up, creating a makefile for your module
is straightforward. In fact, for the "hello
world" example shown earlier in this chapter, a
single line will suffice:

obj-m := hello.o

Readers who are familiar with make, but not with
the 2.6 kernel build system, are likely to be wondering how this
makefile works. The above line is not how a traditional makefile
looks, after all. The answer, of course, is that the kernel build
system handles the rest. The assignment above (which takes advantage
of the extended syntax provided by GNU make)
states that there is one module to be built from the object file
hello.o. The resulting module is named
hello.ko after being built from the object file.

If, instead, you have a module called module.ko
that is generated from two source files (called, say,
file1.c and file2.c), the
correct incantation would be:

obj-m := module.o
module-objs := file1.o file2.o

For a makefile like those shown above to work, it must be invoked
within the context of the larger kernel build system. If your kernel
source tree is located in, say, your
~/kernel-2.6 directory, the
make command required to build your module
(typed in the directory containing the module source and makefile)
would be:

make -C ~/kernel-2.6 M=`pwd` modules

This command starts by changing its directory to the one provided
with the -C option (that is, your kernel source
directory). There it finds the kernel's top-level
makefile. The M= option causes that makefile to
move back into your module source directory before trying to build
the modules target. This target, in turn, refers
to the list of modules found in the obj-m
variable, which we've set to
module.o in our examples.

Typing the previous make command can get
tiresome after a while, so the kernel developers have developed a
sort of makefile idiom, which makes life easier for those building
modules outside of the kernel tree. The trick is to write your
makefile as
follows:

# If KERNELRELEASE is defined, we've been invoked from the
# kernel build system and can use its language.
ifneq ($(KERNELRELEASE),)
obj-m := hello.o
# Otherwise we were called directly from the command
# line; invoke the kernel build system.
else
KERNELDIR ?= /lib/modules/$(shell uname -r)/build
PWD := $(shell pwd)
default:
	$(MAKE) -C $(KERNELDIR) M=$(PWD) modules
endif

Once again, we are seeing the extended GNU make
syntax in action. This makefile is read twice on a typical build.
When the makefile is invoked from the command line, it notices that
the KERNELRELEASE variable has not been set. It
locates the kernel source directory by taking advantage of the fact
that the symbolic link build in the installed
modules directory points back at the kernel build tree. If you are
not actually running the kernel that you are building for, you can
supply a KERNELDIR= option on the command line,
set the KERNELDIR environment variable, or rewrite
the line that sets KERNELDIR in the makefile. Once
the kernel source tree has been found, the makefile invokes the
default: target, which runs a second
make command (parameterized in the makefile as
$(MAKE)) to invoke the kernel build system as
described previously. On the second reading, the makefile sets
obj-m, and the kernel makefiles take care of
actually building the module.

This mechanism for building modules may strike you as a bit unwieldy
and obscure. Once you get used to it, however, you will likely
appreciate the capabilities that have been programmed into the kernel
build system. Do note that the above is not a complete makefile; a
real makefile includes the usual sort of targets for cleaning up
unneeded files, installing modules, etc. See the makefiles in the example
source directory for a complete example.
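For orientation, here is a hedged sketch of what such a fuller makefile might look like. The install and clean targets simply delegate to the kernel build system (modules_install and clean are standard kbuild targets), and hello.o stands in for your module's objects:

```makefile
# Sketch of a more complete out-of-tree module makefile.
# The install and clean targets delegate to the kernel build
# system, which knows which files it generated.
ifneq ($(KERNELRELEASE),)
obj-m := hello.o
else
KERNELDIR ?= /lib/modules/$(shell uname -r)/build
PWD := $(shell pwd)

default:
	$(MAKE) -C $(KERNELDIR) M=$(PWD) modules

install:
	$(MAKE) -C $(KERNELDIR) M=$(PWD) modules_install

clean:
	$(MAKE) -C $(KERNELDIR) M=$(PWD) clean
endif
```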


2.4.2. Loading and Unloading Modules


After the module is built, the next step is loading it into the
kernel. As we've already
pointed out, insmod does the job for you. The
program loads the module code and data into the kernel, which, in
turn, performs a function similar to that of ld,
in that it links any unresolved symbol in the module to the symbol
table of the kernel. Unlike the linker, however, the kernel
doesn't modify the module's disk
file, but rather an in-memory copy. insmod
accepts a number of command-line options (for details, see the
manpage), and it can assign values to parameters in your module
before linking it to the current kernel. Thus, if a module is
correctly designed, it can be configured at load time; load-time
configuration gives the user more flexibility than compile-time
configuration, which is still used sometimes. Load-time configuration
is explained in Section 2.8 later in this chapter.

Interested readers may want to look at how
the kernel supports insmod: it relies on a
system call defined in kernel/module.c. The
function sys_init_module allocates kernel memory
to hold a module (this memory is allocated
with vmalloc; see Section 8.4 in Chapter 8); it then copies the module
text into that memory region, resolves kernel references in the
module via the kernel symbol table, and calls the
module's initialization function to get everything
going.

If you actually
look in the kernel source, you'll find that the
names of the system calls are prefixed with sys_.
This is true for all system calls and no other functions;
it's useful to keep this in mind when grepping for
the system calls in the sources.

The modprobe utility is worth a quick mention.
modprobe, like insmod,
loads a module into the kernel. It differs in that it will look at
the module to be loaded to see whether it references any symbols that
are not currently defined in the kernel. If any such references are
found, modprobe looks for other modules in the
current module search path that define the relevant symbols. When
modprobe finds those modules (which are needed
by the module being loaded), it loads them into the kernel as well.
If you use insmod in this situation instead, the
command fails with an "unresolved
symbols" message left in the system logfile.

As mentioned before, modules may be removed from the kernel with the
rmmod utility. Note that module removal fails if the kernel believes
that the module is still in use (e.g., a program still has an open
file for a device exported by the module), or if
the kernel has been configured to disallow module removal. It is
possible to configure the kernel to allow
"forced" removal of modules, even
when they appear to be busy. If you reach a point where you are
considering using this option, however, things are likely to have
gone wrong badly enough that a reboot may well be the better course
of action.

The lsmod program produces a list of the modules
currently loaded in the kernel. Some other information, such as any
other modules making use of a specific module, is also provided.
lsmod works by reading the
/proc/modules virtual file. Information on
currently loaded modules can also be found in the sysfs virtual
filesystem under /sys/module.


2.4.3. Version Dependency


Bear in mind that your module's code has to be recompiled for each
version of the kernel that it is linked to; at least, that is true in
the absence of modversions, which are not covered here as they are
more for distribution makers than developers. Modules are
strongly tied to the data structures and function prototypes defined
in a particular kernel version; the interface seen by a module can
change significantly from one kernel version to the next. This is
especially true of development kernels, of course.

The kernel does not just assume that a given module has been built
against the proper kernel version. One of the steps in the build
process is to link your module against a file (called
vermagic.o) from the current kernel tree; this
object contains a fair amount of information about the kernel the
module was built for, including the target kernel version, compiler
version, and the settings of a number of important configuration
variables. When an attempt is made to load a module, this information
can be tested for compatibility with the running kernel. If things
don't match, the module is not loaded; instead, you
see something like:

# insmod hello.ko
Error inserting './hello.ko': -1 Invalid module format

A look in the system log file (/var/log/messages
or whatever your system is configured to use) will reveal
the specific problem that caused the
module to fail to load.

If you need to compile a module for a specific kernel version, you
will need to use the build system and source tree for that particular
version. A simple change to the KERNELDIR variable
in the example makefile shown previously does the trick.

Kernel interfaces often change between releases. If you are
writing a module that is intended to work with multiple versions of
the kernel (especially if it must work across major releases), you
likely have to make use of macros and #ifdef
constructs to make your code build properly. This edition of this
book only concerns itself with one major version of the kernel, so
you do not often see version tests in our example code. But the need
for them does occasionally arise. In such cases, you want to make use
of the definitions found in linux/version.h.
This header file, automatically included by
linux/module.h, defines the
following macros:

UTS_RELEASE


This macro
expands to a string describing the version of this kernel tree. For
example, "2.6.10".


LINUX_VERSION_CODE


This macro expands to the binary
representation of the kernel version, one byte for each part of the
version release number. For example, the code for 2.6.10 is 132618
(i.e., 0x02060a).[2] With this
information, you can (almost) easily determine what version of the
kernel you are dealing with.

[2] This allows up to 256 development
versions between stable versions.



KERNEL_VERSION(major,minor,release)


This is the macro used to build an integer version code from the
individual numbers that build up a version number. For example,
KERNEL_VERSION(2,6,10) expands to 132618. This
macro is very useful when you need to compare the current version and
a known checkpoint.



Most dependencies based on the kernel version can be worked around
with preprocessor conditionals by exploiting
KERNEL_VERSION and
LINUX_VERSION_CODE. Version dependency should,
however, not clutter driver code with hairy #ifdef
conditionals; the best way to deal with incompatibilities is by
confining them to a specific header file. As a general rule, code
which is explicitly version (or platform) dependent should be hidden
behind a low-level macro or function. High-level code can then just
call those functions without concern for the low-level details. Code
written in this way tends to be easier to read and more robust.


2.4.4. Platform Dependency


Each computer platform has its peculiarities, and kernel designers
are free to exploit all the peculiarities to
achieve better performance in the target object file.

Unlike application developers, who must link their code with
precompiled libraries and stick to conventions on parameter passing,
kernel developers can dedicate some processor registers to specific
roles, and they have done so. Moreover, kernel code can be optimized
for a specific processor in a CPU family to get the best from the
target platform: unlike applications that are often distributed in
binary format, a custom compilation of the kernel can be optimized
for a specific computer set.

For example, the IA32 (x86) architecture has been subdivided into
several different processor types. The old 80386 processor is still
supported (for now), even though its instruction set is, by modern
standards, quite limited. The more modern processors in this
architecture have introduced a number of new capabilities, including
faster instructions for entering the kernel, interprocessor locking,
copying data, etc. Newer processors can also, when operated in the
correct mode, employ 36-bit (or larger) physical addresses, allowing
them to address more than 4 GB of physical memory. Other processor
families have seen similar improvements. The kernel, depending on
various configuration options, can be built to make use of these
additional features.

Clearly, if a module is to work with a given kernel, it must be built
with the same understanding of the target processor as that kernel
was. Once again, the vermagic.o object comes into play. When a
module is loaded, the kernel checks the
processor-specific configuration options for the module and makes
sure they match the running kernel. If the module was compiled with
different options, it is not loaded.

If you are planning to write a driver for
general distribution, you may well be
wondering just how you can possibly support all these different
variations. The best answer, of course, is to release your driver
under a GPL-compatible license and contribute it to the mainline
kernel. Failing that, distributing your driver in source form and a
set of scripts to compile it on the user's system
may be the best answer. Some vendors have released tools to make this
task easier. If you must distribute your driver in binary form, you
need to look at the different kernels provided by your target
distributions, and provide a version of the module for each. Be sure
to take into account any errata kernels that may have been released
since the distribution was produced. Then, there are licensing issues
to be considered, as we discussed in Section 1.6.
As a general rule,
distributing things in source form is an easier way to make your
way in the world.

