Building Open Source Network Security Tools: Components and Techniques [Electronic resources]

Mike D. Schiffman

Design Considerations

Libnet's journey through life has been more of a steady evolution than a series of discontinuous revolutions. While the current version, the 31st in four years, is a discontinuous jolt from all previous versions, the interface is far easier to use. The same core functionality found in earlier versions is still available, but the internal mechanisms have undergone a major overhaul. For the application programmer, this situation results in simpler usage and a modest change to the API.

Libnet 1.1.0 is smarter than its predecessors. In previous revisions of the API, the application programmer had to follow six steps to build and send a single packet:



Initialize packet memory—The application programmer had to determine and allocate the correct amount of memory for the packet that he or she wanted to send.



Initialize the network interface—The application programmer had to open the network interface by using the correct primitives for the injection layer (link-layer or raw socket layer) desired. Additionally, if the link-layer interface was employed, he or she had to specify a device.



Build the packet—The application programmer had to take specific care of memory offsets when calling the building functions. Because memory was allocated as one contiguous chunk, the programmer had to know where each packet header was in memory, which required an intimate knowledge of header byte counts.



Perform packet checksums—The application programmer had to perform a checksum for each header that included a checksum field. This process included the IP header when the link-layer interface was used.



Write the packet—The application programmer would then write the packet to the network by using the proper injection method, taking care to specify the proper packet size and a variety of other arguments to the writing function.



Clean up—The application programmer was then responsible for freeing up all allocated memory and closing down the network interface.
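
For reference, here is a rough sketch of those six steps using the classic 1.0.x-style primitives (libnet_init_packet(), libnet_open_raw_sock(), libnet_build_ip(), libnet_build_udp(), libnet_do_checksum(), libnet_write_ip()). The addresses and ports are placeholders, error checking is omitted, and exact prototypes varied slightly across 1.0.x releases, so treat this as an illustration of the workflow rather than copy-and-paste code.

    #include <libnet.h>    /* the libnet 1.0.x header */

    int
    main(void)
    {
        int sock, packet_size;
        u_char *packet;
        u_long src_ip, dst_ip;

        packet_size = LIBNET_IP_H + LIBNET_UDP_H;

        /* 1. Initialize packet memory. */
        libnet_init_packet(packet_size, &packet);

        /* 2. Initialize the network interface (raw socket injection). */
        sock = libnet_open_raw_sock(IPPROTO_RAW);

        src_ip = libnet_name_resolve("10.0.0.1", LIBNET_DONT_RESOLVE);
        dst_ip = libnet_name_resolve("10.0.0.2", LIBNET_DONT_RESOLVE);

        /* 3. Build the packet, minding the memory offsets by hand. */
        libnet_build_ip(LIBNET_UDP_H,          /* length beyond the IP header */
                0, 242, 0, 64, IPPROTO_UDP,    /* tos, id, frag, ttl, protocol */
                src_ip, dst_ip, NULL, 0, packet);
        libnet_build_udp(100, 200, NULL, 0, packet + LIBNET_IP_H);

        /* 4. Perform packet checksums (the raw socket fills in the IP sum). */
        libnet_do_checksum(packet, IPPROTO_UDP, LIBNET_UDP_H);

        /* 5. Write the packet. */
        libnet_write_ip(sock, packet, packet_size);

        /* 6. Clean up. */
        libnet_close_raw_sock(sock);
        libnet_destroy_packet(&packet);
        return (0);
    }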



While this scenario was a vast improvement over existing mechanisms at the time, it still felt a bit clunky. There were too many low-level issues placed in the hands of the application programmer (and in turn, too many opportunities for syntactic errors to creep into the process).

In order to remove many of these low-level responsibilities, libnet 1.1.0 moved a great deal of logic away from the exposed API and into the library's internals. The most obvious change from previous versions of libnet is the capability of state maintenance. In order for the API to make inferred decisions, libnet needed to remember certain parameters and keep track of what the application programmer was doing. Some of this state is based on how libnet is initialized and the settings of the control flags, while other data is derived from how the application programmer invokes library calls. Libnet maintains this state internally; it is not visible to the application programmer. The result is that libnet is far easier to use. Figure 3.1 illustrates the packet creation and injection process for libnet 1.1.0.



Initialize the library—The application programmer initializes the library, specifying the injection type and an optional network device.



Build the packet—The application programmer builds the packet.



Write the packet—The application programmer then writes the packet to the network.



Shut down the library—The application programmer makes a single call to clean everything up and shut down.




Figure 3.1: Libnet packet creation.

This resulting process is cleaner, more efficient, and much easier to handle. There are fewer places where the application programmer can accidentally taint memory locations and fewer places where something can go grievously wrong. All in all, it is a major improvement.
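
To make those four steps concrete, the following is a minimal sketch that builds and injects a bare UDP datagram over IPv4 with the 1.1.x API. The addresses and port numbers are placeholders, and error checking on the builder calls is omitted for brevity.

    #include <libnet.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        libnet_t *l;
        char errbuf[LIBNET_ERRBUF_SIZE];
        u_int32_t src_ip, dst_ip;

        /* 1. Initialize the library at the raw socket (IPv4) layer. */
        l = libnet_init(LIBNET_RAW4, NULL, errbuf);
        if (l == NULL)
        {
            fprintf(stderr, "libnet_init() failed: %s\n", errbuf);
            return (EXIT_FAILURE);
        }

        src_ip = libnet_name2addr4(l, "10.0.0.1", LIBNET_DONT_RESOLVE);
        dst_ip = libnet_name2addr4(l, "10.0.0.2", LIBNET_DONT_RESOLVE);

        /* 2. Build the packet, from the highest protocol layer downward. */
        libnet_build_udp(
            100, 200,                     /* source and destination ports */
            LIBNET_UDP_H, 0,              /* UDP length, checksum (0 = autofill) */
            NULL, 0,                      /* no payload */
            l, 0);                        /* context, ptag (0 = new header) */
        libnet_build_ipv4(
            LIBNET_IPV4_H + LIBNET_UDP_H, /* total packet length */
            0, 242, 0, 64, IPPROTO_UDP,   /* tos, id, frag, ttl, protocol */
            0,                            /* checksum (0 = autofill) */
            src_ip, dst_ip,               /* source and destination addresses */
            NULL, 0,                      /* no payload */
            l, 0);                        /* context, ptag (0 = new header) */

        /* 3. Write the packet to the network. */
        if (libnet_write(l) == -1)
        {
            fprintf(stderr, "libnet_write() failed: %s\n", libnet_geterror(l));
        }

        /* 4. Shut down the library. */
        libnet_destroy(l);
        return (EXIT_SUCCESS);
    }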


Libnet Wire Injection Methods


Libnet offers the application programmer the choice of writing packets to the network wire at either the raw socket layer or the link-layer. The injection type is specified at initialization, and the details of both interfaces (including startup, writing, and shutdown) are handled internally. Each has different benefits and drawbacks, as described next.
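
For illustration, both choices are made through the same initialization call; this fragment assumes the 1.1.x API, and the device name is a placeholder.

    char errbuf[LIBNET_ERRBUF_SIZE];

    /* Raw socket layer: the kernel supplies the link-layer framing. */
    libnet_t *raw4 = libnet_init(LIBNET_RAW4, NULL, errbuf);

    /* Link layer: full control over the frame, bound to a specific device. */
    libnet_t *link = libnet_init(LIBNET_LINK, "eth0", errbuf);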

Raw Socket Interface


The raw socket interface is a mid-level interface enabling the application programmer to build and insert packets at the IP layer and above. This interface is the easier of the two to use, because the application programmer does not have to worry about building a link-layer frame header. Additionally, he or she does not have to worry about determining the destination MAC address, which can be a hassle if the packet is ultimately destined for a host that is not on the local network (you might have to add code to perform ARP/routing table lookups to obtain the MAC address of the default gateway). Unfortunately, this simplicity comes at a price; raw sockets across many platforms tend to be "cooked" in that they do not offer a consistent granular level of control over certain IP header values. For instance, every raw socket implementation always computes a (correct) IP checksum before writing a packet out, regardless of whether or not the application programmer wants it to happen. Linux (and probably others) always sets the IP header length field. Solaris always sets the IP fragmentation DF (don't fragment) bit in an attempt to perform path MTU (maximum transmission unit) discovery. Some versions of OpenBSD and FreeBSD require the IP packet length and IP fragmentation fields to be in host-byte order, while others require network-byte order regardless of processor type.

Link-Layer Interface


The link-layer interface is a low-level interface giving the application programmer sovereign control over the entire packet, from the link layer up. The functionality here is quite simply more robust. The link-layer interface enables finer-grained control of packet header values because the OS kernel will not touch the packet before it is written out (the exception being that the interface code on some UNIX variants will try to stamp the source MAC address of the outgoing interface onto the packet; libnet handles this situation on several variants). This power comes at the cost of additional complexity. The application programmer is responsible for building a link-layer header and filling in all of its values (the IP checksum is optional; libnet can compute it, or the application programmer can set it to an arbitrary value).
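
As a brief sketch, a context initialized with LIBNET_LINK might prepend an Ethernet header to an already-built IPv4 packet as follows; the MAC addresses here are placeholders.

    #include <libnet.h>
    #include <stdio.h>

    /* Prepend an Ethernet header to a packet already built in context "l"
     * (the context must have been initialized with LIBNET_LINK).
     */
    static int
    add_ethernet_header(libnet_t *l)
    {
        u_int8_t dst_mac[6] = {0x00, 0x11, 0x22, 0x33, 0x44, 0x55};
        u_int8_t src_mac[6] = {0x66, 0x77, 0x88, 0x99, 0xaa, 0xbb};

        if (libnet_build_ethernet(dst_mac, src_mac, ETHERTYPE_IP,
                NULL, 0, l, 0) == -1)
        {
            fprintf(stderr, "libnet_build_ethernet() failed: %s\n",
                    libnet_geterror(l));
            return (-1);
        }
        return (0);
    }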


Packet Header Checksum Computation


For packet headers that have checksums, libnet handles them internally by default. The application programmer has the option of specifying one of three behaviors:



Setting the checksum field to 0 signals libnet to compute a checksum for the packet header in question (note that for protocols such as TCP, UDP, and ICMP, this checksum is computed over any additional data as well as the header).



Setting the field to any other value causes libnet to skip the checksum calculation for the packet header. This situation enables the application programmer to specify either a precomputed checksum or any arbitrary value for whatever reason.



You can override these two behaviors with a call to libnet_toggle_checksum(), as we describe later on.



Note that while the raw socket interface is in use, the IP header checksum is always calculated by the underlying raw socket implementation, regardless of what value the application programmer sets in the field or what behavior libnet_toggle_checksum() requests.
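
A short sketch of these options, assuming an already-initialized context "l" and using placeholder port numbers:

    /* Checksum field of 0: libnet computes the UDP checksum itself. */
    libnet_ptag_t udp_tag = libnet_build_udp(100, 200, LIBNET_UDP_H, 0,
            NULL, 0, l, 0);

    /* Any nonzero value is written verbatim (here, a deliberately bad sum). */
    udp_tag = libnet_build_udp(100, 200, LIBNET_UDP_H, 0xbeef,
            NULL, 0, l, udp_tag);

    /* Explicitly override the default behavior for this header. */
    libnet_toggle_checksum(l, udp_tag, LIBNET_ON);    /* always compute */
    libnet_toggle_checksum(l, udp_tag, LIBNET_OFF);   /* never compute */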
