Tuesday, December 16, 2008
build a 64-bit version of OpenSSL
In the main Makefile, change these definitions to:
SHARED_LDFLAGS=-mlp64 -shared -nodefaultlibs
EX_LIBS= -ldl -lgcc
and run make again.
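For context, here is a minimal sketch of the whole sequence, assuming GCC on an Itanium HP-UX box (which the -mlp64 flag suggests); the Configure target name is illustrative, so check ./Configure LIST on your copy:

./Configure hpux64-ia64-gcc shared
# edit the generated Makefile as described above:
#   SHARED_LDFLAGS=-mlp64 -shared -nodefaultlibs
#   EX_LIBS= -ldl -lgcc
make
make test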
Wednesday, December 10, 2008
Monday, November 24, 2008
TPROXY, take 6?
And it is in 2.6.28-rc6.
The corresponding iptables version (1.4.3-rc1) seems due for release soon.
This means no more out-of-tree patches are needed to get this feature into the kernel or iptables.
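For a taste of the mainline usage, here is a hedged sketch (the mangle-table TPROXY target plus a policy-routing rule to deliver the marked packets locally; the port numbers and mark value are illustrative):

# steer incoming port-80 TCP into a local transparent proxy on port 3128
iptables -t mangle -A PREROUTING -p tcp --dport 80 \
-j TPROXY --tproxy-mark 0x1/0x1 --on-port 3128
# route marked packets to the local machine
ip rule add fwmark 0x1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100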
Monday, November 17, 2008
Thursday, October 16, 2008
RPATH and $ORIGIN
HP-UX Linker and Libraries User's Guide
Linker Tools for Itanium-Based Systems, chatr
Determining How to Link Programs or Libraries (Linker Tasks)
For 32-bit HP-UX/PA-RISC, the SOM file format is used (very similar to the a.out format).
Everywhere else, ELF is used.
Linux:
ld.so, ld-linux.so* — dynamic linker/loader
UsingOrigin - SCons Wiki
ELF is used.
Solaris:
C. Recording Dependencies with $ORIGIN (Linker and Libraries Guide) - Sun Microsystems
ELF is used.
AIX:
There is no such thing: AIX uses XCOFF as its executable/shared-library file format.
It is possible to write a program to modify these settings.
links for ELF
elfedit — examine or edit ELF files - can be used to edit the rpath in Solaris ELF files.
Executable and Linkable Format - Wikipedia, the free encyclopedia
SourceForge.net: elftoolchain » home
ELF Program Loading tutorial, #2: Program Header Table
Understanding ELF using readelf and objdump - Linux Forums
[Blue Forest - Free Software] - A question about ELF dynamic linking
Analysis of executable file formats on UNIX/Linux platforms
How to modify the symbol table of a dynamic library
Working with the ELF Program Format
Loading, resolution, and case studies of ELF dynamic linking on Linux/Intel (Part 1): Loading
Loading, resolution, and case studies of ELF dynamic linking on Linux/Intel (Part 2): Function resolution and unloading
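As a minimal Linux/ELF sketch of what these links describe, this bakes a relative RPATH into a binary using $ORIGIN (the library name and paths are hypothetical; the single quotes keep the shell from expanding $ORIGIN):

gcc -o app app.c -L./lib -lfoo -Wl,-rpath,'$ORIGIN/../lib'
readelf -d app | grep -i rpath    # inspect the embedded RPATH entry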
Friday, October 3, 2008
Tuesday, September 16, 2008
Monday, September 8, 2008
Wednesday, September 3, 2008
Wednesday, August 27, 2008
Monday, August 18, 2008
CORE Security
Wednesday, July 16, 2008
Network management with Quake Engine
Leveraging 3D Game Engines (L3DGE): Novel techniques for anomalous traffic detection and collaborative network control. It includes L3DGEWorld (network monitoring), LCMON (supercomputer cluster monitoring) and LupsMON (remote UPS monitoring).
The OpenNMS project is also interested in this; see the [opennms-discuss] thread "Quake interface".
Sunday, July 13, 2008
Wednesday, July 9, 2008
shell parameter expansion
| parameter set and not null | parameter set and null | parameter unset |
${parameter:-word} | substitute parameter | substitute word | substitute word |
${parameter-word} | substitute parameter | substitute null | substitute word |
${parameter:=word} | substitute parameter | assign word | assign word |
${parameter=word} | substitute parameter | substitute null | assign word |
${parameter:?word} | substitute parameter | error, exit | error, exit |
${parameter?word} | substitute parameter | substitute null | error, exit |
${parameter:+word} | substitute word | substitute null | substitute null |
${parameter+word} | substitute word | substitute word | substitute null |
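A quick POSIX sh demo of a few rows from the table above:

unset v;  echo "${v:-word}"           # unset: prints "word"
v="";     echo "${v:-word}"           # set and null: prints "word"
v="";     echo "${v-word}"            # set and null: prints "" (substitute null)
v="x";    echo "${v:+word}"           # set and not null: prints "word"
unset v;  : "${v:=word}"; echo "$v"   # unset: assigns "word", then prints it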
Monday, July 7, 2008
python gtk glade3
Custom PyGTK Widgets in Glade3: Part 1 and 2: Custom widget adaptors
Geany at LinuxTOY
Geany is a text editor using the GTK2 toolkit with basic features of an integrated development environment. It was developed to provide a small and fast IDE, which has only a few dependencies from other packages. It supports many filetypes and has some nice features. For more details see About.
Notes on trying out Glade + PyGTK
Rapid Application Development with Python and Glade
PyGTK lets you easily create programs with a graphical user interface using the Python programming language. The underlying GTK+ library provides all kinds of visual elements and utilities for it and, if needed, you can develop full-featured applications for the GNOME Desktop.
Sunday, July 6, 2008
openajax
Saturday, July 5, 2008
dojo related
IBM: Technical library view on Ajax for Java developers:
WebSphere Application Server: Introducing WebSphere Application Server Version 6.1 Feature Packs
IBM pushes Dojo open source Ajax toolkit
SWiK - IBM +AJAX
Roland Barcia: Improving the initial download time of Dojo applications
Improving the performance of Dojo-based Web 2.0 applications
IBM: web development for dojo
Develop HTML widgets with Dojo
Tech Titans Contribute Browser-Boosting Ajax Technologies to Open Source Community "Open Ajax" Initiative Members to Drive Collaborative Innovation to Make the Web Easier to Use
Build enterprise SOA Ajax clients with the Dojo toolkit and JSON-RPC
IBM Mashup Starter Kit
Monday, June 30, 2008
OpenSer and HA/LoadBalancing
This page also mentions this feature.
Here is a sample configuration file:
# Made by Deillon Thomas
debug=3 # debug level (cmd line: -dddddddddd)
fork=yes
log_stderror=yes # (cmd line: -E)
children=4
check_via=no
dns=no
rev_dns=no
port=5060
#set module path
mpath="/usr/local/lib64/openser/modules/"
# ------------------ module loading ----------------------------------
loadmodule "maxfwd.so"
loadmodule "sl.so"
loadmodule "dispatcher.so"
loadmodule "tm.so"
loadmodule "mi_fifo.so"
loadmodule "textops.so"
loadmodule "xlog.so"
modparam("mi_fifo","fifo_name", "/tmp/openser_fifo")
#Timer which hits if no final reply for a request
#or ACK for a negative INVITE reply arrives
modparam("tm", "fr_timer", 5)
modparam("dispatcher", "list_file", "dispatcher.list")
modparam("dispatcher", "flags", 2) # fial-over mode
modparam("dispatcher", "dst_avp", "$avp(i:271)")
modparam("dispatcher", "grp_avp", "$avp(i:272)")
modparam("dispatcher", "cnt_avp", "$avp(i:273)")
modparam("dispatcher", "force_dst", 1)
route
{
[...]
if (method=="INVITE") # to be completed, of course
{
ds_select_dst("1","4"); # 4 = round-robin
t_on_failure("1"); # If there is no response after 5 sec
t_relay();
exit;
}
}
failure_route[1] {
if (t_check_status("408")) { # if timeout
ds_mark_dst(); # do not use this destination anymore
ds_next_dst(); # use the next one
t_on_failure("1"); # in case the next one is dead too!
t_relay();
}
else {
t_reply("501", "Not Implemented");
}
}
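For completeness, the dispatcher.list file named by list_file above holds one destination per line, keyed by set id. A sample with two hypothetical back-end proxies in set 1:

# <set id> <destination SIP URI>
1 sip:192.168.0.10:5060
1 sip:192.168.0.11:5060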
Tim Jones' articles on IBM developerWorks
Anatomy of Linux journaling file systems In recent history, journaling file systems were viewed as an oddity and thought of primarily in terms of research. But today, a journaling file system (ext3) is the default in Linux. Discover the ideas behind journaling file systems, and learn how they provide better integrity in the face of a power failure or system crash. Learn about the various journaling file systems in use today, and peek into the next generation of journaling file systems. | 04 Jun 2008 |
Anatomy of Linux flash file systems You've probably heard of Journaling Flash File System (JFFS) and Yet Another Flash File System (YAFFS), but do you know what it means to have a file system that assumes an underlying flash device? This article introduces you to flash file systems for Linux, and explores how they care for their underlying consumable devices (flash parts) through wear leveling, and identifies the various flash file systems available along with their fundamental designs. | 20 May 2008 |
Anatomy of real-time Linux architectures It's not that Linux isn't fast or efficient, but in some cases fast just isn't good enough. What's needed instead is the ability to deterministically meet scheduling deadlines with specific tolerances. Discover the various real-time Linux alternatives and how they achieve real time -- from the early architectures that mimic virtualization solutions to the options available today in the standard 2.6 kernel. | 15 Apr 2008 |
Desktop development for the OLPC laptop The XO laptop (of the One-Laptop-Per-Child initiative) is an inexpensive laptop project intended to help educate children around the world. The laptop includes many innovations, such as a novel, inexpensive, and durable hardware design and the use of GNU/Linux as the underlying operating system. The XO also includes an application environment written in Python with a human interface called Sugar, accessible to everyone (including kids). This article is excerpted from the developerWorks tutorial "Application development for the OLPC laptop," which takes a look at the Sugar APIs and shows how to develop and debug a graphical activity in Sugar using Python. | 26 Feb 2008 |
Anatomy of the Linux SCSI subsystem The Small Computer Systems Interface (SCSI) is a collection of standards that define the interface and protocols for communicating with a large number of devices (predominantly storage related). Linux provides a SCSI subsystem to permit communication with these devices. Linux is a great example of a layered architecture that joins high-level drivers, such as disk or CD-ROM drivers, to a physical interface such as Fibre Channel or Serial Attached SCSI (SAS). This article introduces you to the Linux SCSI subsystem and discusses where this subsystem is going in the future. | 14 Nov 2007 |
Anatomy of Linux synchronization methods In your Linux education, you may have learned about concurrency, critical sections, and locking, but how do you use these concepts within the kernel? This article reviews the locking mechanisms available within the 2.6 kernel, including atomic operators, spinlocks, reader/writer locks, and kernel semaphores. It also explores where each mechanism is most applicable for building safe and efficient kernel code. | 31 Oct 2007 |
Anatomy of the Linux file system When it comes to file systems, Linux is the Swiss Army knife of operating systems. Linux supports a large number of file systems, from journaling to clustering to cryptographic. Linux is a wonderful platform for using standard and more exotic file systems and also for developing file systems. This article explores the virtual file system (VFS) -- sometimes called the virtual filesystem switch -- in the Linux kernel and then reviews some of the major structures that tie file systems together. | 30 Oct 2007 |
System emulation with QEMU QEMU is an open source emulator for complete PC systems. In addition to emulating a processor, QEMU permits emulation of all necessary subsystems, such as networking and video hardware. It also permits emulation of advanced concepts, such as symmetric multiprocessing systems (up to 255 CPUs) and other processor architectures, such as ARM or PowerPC. This article explores QEMU and its architecture and shows how to emulate a guest operating system on a Linux host. | 25 Sep 2007 |
Anatomy of the Linux networking stack One of the greatest features of the Linux operating system is its networking stack. It was initially a derivative of the BSD stack and is well organized with a clean set of interfaces. Its interfaces range from the protocol agnostics, such as the common sockets layer interface or the device layer, to the specific interfaces of the individual networking protocols. This article explores the structure of the Linux networking stack from the perspective of its layers and also examines some of its major structures. | 27 Jun 2007 |
Anatomy of the Linux kernel The Linux kernel is the core of a large and complex operating system, and while it's huge, it is well organized in terms of subsystems and layers. In this article, you explore the general structure of the Linux kernel and get to know its major subsystems and core interfaces. Where possible, you get links to other IBM articles to help you dig deeper. | 06 Jun 2007 |
Anatomy of the Linux slab allocator Good operating system performance depends in part on the operating system's ability to efficiently manage resources. In the old days, heap memory managers were the norm, but performance suffered due to fragmentation and the need for memory reclamation. Today, the Linux kernel uses a method that originated in Solaris but has been used in embedded systems for quite some time, allocating memory as objects based on their size. This article explores the ideas behind the slab allocator and examines its interfaces and their use. | 15 May 2007 |
Sugar, the XO laptop, and One Laptop per Child One Laptop per Child (OLPC) is an organization whose mission is to develop a low-cost laptop (USD100) with accompanying software to spread computer literacy to children around the world. Because the device targets children, it must provide a novel user interface and applications that allow children to experiment with tools for expression and learning. The operating system for the OLPC is a port of the Linux kernel but with a unique interface called Sugar. In this article, learn about the Sugar human interface, see how to virtualize an OLPC laptop on a standard PC using QEMU, and take a tour of Sugar and the OLPC capabilities. | 24 Apr 2007 |
Discover the Linux Kernel Virtual Machine Linux and flexibility go hand in hand, and the options for virtualization are no different. But recently, a change in the Linux virtualization landscape has appeared with the introduction of the Kernel virtual Machine, or KVM. KVM is the first virtualization solution to be part of the mainline Linux kernel (V2.6.20). KVM supports the virtualization of Linux guest operating systems -- even Windows with hardware that is virtualization-aware. Learn about the architecture of the Linux KVM as well as why its tight integration with the kernel may change the way you use Linux. | 18 Apr 2007 |
Virtualization with coLinux Virtualization with VMware, Xen, and Kernel-based Virtual Machine (KVM) are all the rage these days. But did you know that you can run Linux cooperatively with Microsoft Windows? This article explores Cooperative Linux (coLinux), starting with a quick introduction to virtualization and then looking at the approach taken by coLinux. You'll also see how to get coLinux up and running on Windows. | 31 Mar 2007 |
Kernel command using Linux system calls Linux system calls -- we use them every day. But do you know how a system call is performed from user-space to the kernel? Explore the Linux system call interface (SCI), learn how to add new system calls (and alternatives for doing so), and discover utilities related to the SCI. | 21 Mar 2007 |
Linux and symmetric multiprocessing As evidenced by major central processing unit (CPU) vendors, multi-core processors are poised to dominate the desktop and embedded space. With multiprocessing comes greater performance but also new problems. This article explores the ideas behind multiprocessing and developing applications for Linux that exploit SMP. | 14 Mar 2007 |
Parallelize applications for faster Linux booting One of the biggest complaints about Linux, particularly from developers, is the speed with which Linux boots. By default, Linux is a general-purpose operating system that can serve as a client desktop or server right out of the box. Because of this flexibility, Linux serves a wide base but is suboptimal for any particular configuration. This article shows you options to increase the speed with which Linux boots, including two options for parallelizing the initialization process. It also shows you how to visualize graphically the performance of the boot process. | 07 Mar 2007 |
Virtual Linux Virtualization means many things to many people. A big focus of virtualization currently is server virtualization, or the hosting of multiple independent operating systems on a single host computer. This article explores the ideas behind virtualization and then discusses some of the many ways to implement virtualization. We also look at some of the other virtualization technologies out there, such as operating system virtualization on Linux. | 29 Dec 2006 |
Data visualization tools for Linux Applications for graphical visualization of data on Linux are varied, from simple 2-D plots to 3-D surfaces, scientific graphics programming, and graphical simulation. Luckily, there are many open source possibilities, including gnuplot, GNU Octave, Scilab, MayaVi, Maxima, OpenDX, and others. Each has its advantages and disadvantages and targets different applications. Explore a variety of open source graphical visualization tools to better decide which is best for your application. [This article has been updated to include coverage of OpenDX - Ed.] | 30 Nov 2006 |
Build a Web spider on Linux Web spiders are software agents that traverse the Internet gathering, filtering, and potentially aggregating information for a user. Using common scripting languages and their collection of Web modules, you can easily develop Web spiders. This article shows you how to build spiders and scrapers for Linux to crawl a Web site and gather information, stock data, in this case. | 14 Nov 2006 |
Version control for Linux Version control systems, or source management systems, are an important aspect of modern software development. Not using one is like driving a car too fast: it's fun and you might get to your destination faster, but an accident is inevitable. This article provides an overview of Software Configuration Management (SCM) systems and their benefits, including CVS, Subversion, Arch, and Git. It also reviews the most common SCM architectures. Finally, it explores some of the new approaches that are available and how they differ from the earlier methods. [Listing 4 has been updated to reflect improvements to Git's syntax. -Ed.] | 16 Oct 2006 |
Open source robotics toolkits Building a robot involves skills from many disciplines, including embedded firmware and hardware design, sensor selection, controls systems design, and mechanical design. But simulation environments can provide a virtual arena for testing, measuring, and visualizing robotics algorithms without the high cost (and time) of development. This article introduces you to some of the open source robotics toolkits for Linux, demonstrates their capabilities, and helps you decide which is best for you. | 05 Sep 2006 |
Boost application performance using asynchronous I/O The most common input/output (I/O) model used in Linux is synchronous I/O. After a request is made in this model, the application blocks until the request is satisfied. This is a great paradigm because the calling application requires no central processing unit (CPU) while it awaits the completion of the I/O request. But in some cases there's a need to overlap an I/O request with other processing. The Portable Operating System Interface (POSIX) asynchronous I/O (AIO) application program interface (API) provides this capability. In this article, get an overview of the API and see how to use it. | 29 Aug 2006 |
BusyBox simplifies embedded Linux systems BusyBox is a single executable implementation of many standard Linux utilities. BusyBox contains simple utilities, such as cat and echo, as well as larger, more complex tools, such as grep, find, mount, and telnet (albeit, with fewer options than the traditional version); some refer to BusyBox as the Swiss Army knife of utilities. This article explores the purpose of BusyBox, how it works, and why it's important for memory-constrained environments. | 15 Aug 2006 |
Linux initial RAM disk (initrd) overview The Linux initial RAM disk (initrd) is a temporary root file system that is mounted during system boot to support the two-state boot process. The initrd contains various executables and drivers that permit the real root file system to be mounted, after which the initrd RAM disk is unmounted and its memory freed. In many embedded Linux systems, the initrd is the final root file system. This article explores the initial RAM disk for Linux 2.6, including its creation and use in the Linux kernel. | 31 Jul 2006 |
Inside the Linux scheduler The Linux kernel continues to evolve, incorporating new technologies and gaining in reliability, scalability, and performance. One of the most important features of the 2.6 kernel is a scheduler implemented by Ingo Molnar. This scheduler is dynamic, supports load-balancing, and operates in constant time -- O(1). This article explores these attributes of the Linux 2.6 scheduler, and more. | 30 Jun 2006 |
Inside the Linux boot process The process of booting a Linux system consists of a number of stages. But whether you're booting a standard x86 desktop or a deeply embedded PowerPC target, much of the flow is surprisingly similar. This article explores the Linux boot process from the initial bootstrap to the start of the first user-space application. Along the way, you'll learn about various other boot-related topics such as the boot loaders, kernel decompression, the initial RAM disk, and other elements of Linux boot. | 31 May 2006 |
Better networking with SCTP The Stream Control Transmission Protocol (SCTP) is a reliable transport protocol that provides stable, ordered delivery of data between two endpoints (much like TCP) and also preserves data message boundaries (like UDP). However, unlike TCP and UDP, SCTP offers such advantages as multi-homing and multi-streaming capabilities, both of which increase availability. In this article, get to know the key features of SCTP in the Linux 2.6 kernel and take a look at the server and client source code that shows the protocol's ability to deliver multi-streaming. | 28 Feb 2006 |
Boost socket performance on Linux The Sockets API lets you develop client and server applications that can communicate across a local network or across the world via the Internet. Like any API, you can use the Sockets API in ways that promote high performance -- or inhibit it. This article explores four ways to use the Sockets API to squeeze the greatest performance out your application and to tune the GNU/Linux environment to achieve the best results. (Editor's note: we updated Tip 3 to correct an error in the calculation for Bandwidth Delay Product (BDP), spotted by an alert reader.) | 03 Feb 2006 |
Five pitfalls of Linux sockets programming The Sockets API is the de facto standard API for networking applications development. Although the API is simple, new developers can experience some common problems. This article identifies the most common of these pitfalls and shows you how to overcome them. | 20 Sep 2005 |
Anatomy Series in IBM developerWorks
Anatomy of Linux journaling file systems | 04 Jun 2008 |
Anatomy of Linux flash file systems | 20 May 2008 |
Anatomy of Security-Enhanced Linux (SELinux) | 29 Apr 2008 |
Anatomy of real-time Linux architectures | 15 Apr 2008 |
Anatomy of the Linux SCSI subsystem | 14 Nov 2007 |
Anatomy of Linux synchronization methods | 31 Oct 2007 |
Anatomy of the Linux file system | 30 Oct 2007 |
Anatomy of the Linux networking stack | 27 Jun 2007 |
Anatomy of the Linux kernel | 06 Jun 2007 |
Anatomy of the Linux slab allocator | 15 May 2007 |
Sunday, June 29, 2008
Add multitouch gesture support to a TouchPad-equipped laptop
Enable 'Three-Finger Swipe,' and open- and close-pinch gestures using synclient and synthetic X events
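A minimal sketch of driving the Synaptics driver with synclient, in the spirit of the linked how-to (parameter names vary between driver versions, so list them first):

synclient -l              # list all current touchpad parameters
synclient TapButton3=2    # e.g. map a three-finger tap to a middle click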
GUI for DOS
SEAL (also called XSeal [1]) is a 32-bit graphical user interface for DOS created by Michal Stencl.[2] SEAL requires at least an Intel 80486, 8MB of RAM (although it may be possible to run SEAL on less, doing so is not recommended), a video card supporting 640x480 with 256 colors, 1.6MB of hard drive space, and MS-DOS 3.0+ or equivalent (DR-DOS, PC-DOS, FreeDOS). SEAL is licensed under the GNU GPL. It hasn't been under development since 2003. The last release, 2.0.11, came out 2002-04-11.[3]
FreeGEM is a computer GUI based on Digital Research's GEM which was first released in 1985. GEM stands for "Graphical Environment Manager". FreeGEM is the free software/open source version of GEM developed after Digital Research released the GEM code under the GPL free software licence.
OpenGEM is a distribution of FreeGEM, a graphical user interface (GUI) for DOS. OpenGEM is a non-multitasking 16-bit GUI.
Ikon is a 32-bit graphical user interface for DOS-compatible systems, including FreeDOS. It is written from scratch using DJGPP and Allegro.
QubeOS is a multiplatform, multitasking desktop GUI system developed in Slovakia by Michal Stencl, who also created the SEAL GUI system.
WINE, ReactOS, Linux Unified kernel
Wine,
ReactOS and
Linux Unified Kernel.
WINE is used to run unmodified Windows applications on Linux.
ReactOS aims to provide a free and open-source Windows OS clone.
The Linux Unified Kernel is a project to import certain key features of the Microsoft Windows operating system into the Linux kernel. The project is hosted by Insigma, and is issued under the GPL.
According to its developers, the current version (0.2.1) provides several Windows mechanism implementations, including process/thread management, object management, virtual memory management, and synchronization. The relevant system calls are implemented to replace the corresponding services that would otherwise run in user space.
E/OS (Emulator Operating System) is a virtual machine emulation system.
E/OS is primarily based on the Linux kernel, QEMU, XFree86, and Wine, and is intended to be a replacement for operating systems such as Microsoft Windows, Mac OS, BeOS, OS/2, DOS, and Linux.
HX DOS-Extender is a free DOS extender with built-in Win32 PE file format support. Usually the purpose of a DOS extender is to make protected-mode features, especially large memory and 32-bit addressing, available for DOS applications.
Here is his home page.
run Linux applications in Windows
There are some other useful links in LinuxLinks for similar programs.
A Free Computer Ebook Site
Thursday, June 26, 2008
Interoperating with Windows Media Player using P/Invoke and C#
quote from the article:
This article hopes to demonstrate:
- How to use P/Invoke to call unmanaged code.
- How to use Spy++ to log Windows messages and get wParam and lParam values.
- How to implement FindWindow() and SendMessage() in C#.
- How to interoperate with Windows Media Player.
Monday, June 23, 2008
Replication/Cluster solution for PostgreSQL
Continuent™ uni/cluster for PostgreSQL is the leading middleware high availability and scalability solution for use with PostgreSQL. Continuent uni/cluster for PostgreSQL supports PostgreSQL’s native SQL and is integrated with native database functions for seamless integration and easy management.
SkypeGarage/DbProjects - Skype Developer Zone
During the development of Skype's backend infrastructure, we have enhanced the PostgreSQL database in several ways, which we wish to give back to the community. (A little overview of using PostgreSQL at Skype can be found here: /SkypePostgresqlWhitepaper.) Londiste is a PostgreSQL replication engine written in Python.
Slony-I: seems good, but it is single-master only; the master is a single point of failure, and there is no good failover system for electing a new master or having a failed master rejoin the cluster. Slave databases are mostly for safety or for parallelizing queries for performance. It suffers from O(N^2) communications (N = cluster size). With reasonable sysadmin effort you can implement a failover system yourself.
Slony is powerful, trigger-based, and highly configurable.
pgPool-I / pgPool-II: pgpool 1/2 is a reasonable solution. It does statement-level replication, which has some downsides but is good for certain things. pgpool 2 has a neat distributed-table mechanism, which is interesting. You might want to look here if you have an extremely high read-to-write ratio but need to service a huge transaction volume. It supports load balancing and replication by implementing a proxy that duplicates all updates to all slaves. It can partition data this way, and it can semi-intelligently route queries to the appropriate servers.
PGCluster: does synchronous multi-master replication. It has two single-point-of-failure spots, the load balancer and the data replicator. The project has historically looked a bit dead, but they just released a new version and moved to a Trac-based web site at http://www.pgcluster.org/, and http://pgfoundry.org/projects/pgcluster is up to date (at least the downloads page). One major downside to PGCluster is that it uses a modified version of PostgreSQL, and it usually lags a few releases behind.
Cybercluster is a PostgreSQL replication solution which makes sure that the database cluster is consistent at every point in time. We rely on a shared-nothing architecture which is perfectly suitable for synchronous multimaster replication.
For more info, refer to
Replication, Clustering, and Connection Pooling
Wednesday, April 30, 2008
patch kernel without reboot
Ksplice allows system administrators to apply security patches to the Linux kernel without having to reboot. Ksplice takes as input a source code change in unified diff format and the kernel source code to be patched, and it applies the patch to the corresponding running kernel. The running kernel does not need to have been prepared in advance in any way.
To be fully automatic, Ksplice's design is limited to patches that do not introduce semantic changes to data structures, but most Linux kernel security patches don't make these kinds of changes. An evaluation against Linux kernel security patches from May 2005 to December 2007 finds that Ksplice can automatically apply 84% of the 50 significant kernel vulnerabilities from this interval.
...
...
bridge on linux
ebtables - Ethernet bridge tables: a filtering tool for Ethernet bridges, analogous to what iptables is for IP traffic.
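A minimal usage sketch (the MAC address is hypothetical):

ebtables -A FORWARD -s 00:11:22:33:44:55 -j DROP   # drop bridged frames from this MAC
ebtables -L                                        # list the current rules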
Tuesday, April 29, 2008
zorp I/II
At first glance, stateful packet filtering appears to have conquered the firewall world, both in terms of market share and mind share. The list of products based on stateful packet filtering is a long one, and it includes both the proprietary industry leader, Check Point Firewall-1, and Linux's excellent Netfilter kernel code.
But what about application-layer proxies? Professional firewall engineers have long insisted there's nothing like an application-aware proxy for blocking the widest possible range of network attacks. Indeed, being such a person myself, I've been disheartened to see application-layer proxies increasingly marginalized. In some circles they've even been written off as obsolete for reasons that simply don't warrant, in my opinion, the loss of a powerful security tool. Marketing is at least as big a reason as any other.
Apparently I'm not alone in my opinion. Balazs Scheidler, creator of the essential logging facility Syslog-NG, has created Zorp, an open-source proxy firewall product that is simply brilliant. This month I explain why Zorp has helped resuscitate my faith in the application-layer proxy firewall, and what this means for anyone charged with protecting highly sensitive networks.
At this point, some of you may be asking, “What are application-layer proxying and stateful inspection? And why do I care which is better?” I can explain. Feel free to skip ahead to the next section if you're a grizzled firewall veteran.
A firewall, of course, is a computer or embedded hardware device that separates different networks from one another and regulates what traffic may pass between them. The instructions that determine which network nodes may send what type of network packets and where are called firewall rules or, collectively, the firewall policy.
These rules are what make a firewall different from an ordinary router. Routers must be programmed to know how to move packets from one network to another, but not necessarily whether to allow them to move in any given way. A firewall, on the other hand, discriminates.
One very simple way to categorize packets is by the Internet information in packets' Internet Protocol (IP) headers. An IP header contains basic information, most importantly, protocol type, source and destination addresses, and, if applicable, source and destination ports. The ports actually are part of the next header down in the packet, the UDP header or TCP header. A firewall that looks only at this basic information is called a simple packet filter. Because simple packet filters don't look deeply into each packet, they tend to be quite fast.
However, the IP header of a packet plus its TCP or UDP port number tells us nothing about that packet's relationship to other packets. For example, if we examine the IP header of an HTTP packet, we know it's a TCP packet (thanks to the protocol field in its IP header), where it's from and where it's going (source and destination IP address fields) and what type of application sent it (from the destination port, TCP 80). Table 1 shows an example simple packet-filtering rule.
Table 1. Simple Packet Filter Rules for HTTP
Source IP | Destination IP | Protocol | Source Port | Destination Port | Action |
Any | 192.168.1.1 | TCP | Any | 80 | Allow |
192.168.1.1 | Any | TCP | 80 | Any | Allow |
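As a sketch, Table 1 could be expressed as stateless iptables rules like these (filtering in the FORWARD chain is an assumption):

iptables -A FORWARD -d 192.168.1.1 -p tcp --dport 80 -j ACCEPT
iptables -A FORWARD -s 192.168.1.1 -p tcp --sport 80 -j ACCEPT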
But that level of inspection leaves out some key pieces of information about the HTTP connection: whether the packet is establishing a new HTTP session, whether it's part of a session in progress or whether it's simply a random, possibly hostile, packet not correlating to anything at all. This information is left out because crucial session-related information such as TCP flags, TCP sequence numbers and application-level commands, all are contained deeper within the packet than a packet filter digs. That's where stateful packet filtering comes in.
A stateful packet filter, like a simple packet filter, begins by examining each packet's source and destination IP addresses, and source and destination ports. But it also digs deeper into the packet's UDP or TCP header to determine whether the packet is initiating a new connection. If it is, the firewall creates an entry for the new connection in a state table. If it isn't, the stateful packet filter checks the packet against the state table to see if it belongs to an existing connection. A stateful packet filter will block packets that pretend to be part of an existing connection, but aren't. Actually, UDP is connectionless, but a good stateful firewall can guess that an outbound DNS query to a given server on UDP 53 should be followed by an inbound response from that server's UDP port 53. Stateful packet filtering has two main benefits over simple packet filtering.
First, firewall rules can be simpler. Rather than needing to describe both directions of each bi-directional transaction, such as HTTP, firewall rules need address only the initiation of each allowed transaction. Subsequent packets belonging to established, allowed connections can be handled by the firewall's state table, independently of explicit rules. In Table 2 we see that only one rule is needed to allow the same HTTP transaction for which we needed two rules in Table 1.
Table 2. Stateful Packet Filter Rule for HTTP
Source IP | Destination IP | Protocol | Source Port | Destination Port | State | Action |
Any | 192.168.1.1 | TCP | Any | 80 | New | Allow |
The second main benefit of stateful packet filtering is we don't have to do such distasteful things as allowing all inbound TCP and UDP packets from the Internet to enter our internal network if they have a destination port higher than 1024. This is the sort of thing you sometimes must do if you don't have a better way to correlate packets with allowed transactions. In other words, stateful packet filtering provides better security than simple packet filtering.
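As a sketch, Table 2 maps onto Netfilter's state match like this (again assuming the FORWARD chain):

iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -d 192.168.1.1 -p tcp --dport 80 \
-m state --state NEW -j ACCEPT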
“Cool”, you say, “stateful packet filters are more efficient and secure”, which is true. But what about the things even stateful packet filters don't consider? What about things like potentially malformed HTTP commands or intentionally overlapping IP fragments? Might there be a type of firewall that examines each packet in its entirety or that has some other means of propagating the fewest anomalous packets possible?
Indeed there is, and it's called an application-layer proxy or application-layer gateway. Whereas packet filters, whether simple or stateful, examine all packets and pass those that are allowed, an application-layer proxy breaks each attempted connection into two, inserting itself in the middle of each transaction as an equal participant. To the client or initiator in each transaction, the firewall acts as the server. To the intended destination, or server, the firewall acts as the client.
Figures 1 and 2 illustrate this difference. In Figure 1, we see that the stateful packet filter passes or blocks transactions but ultimately is an observer in that it passes allowed packets more or less intact, unless, for example, it performs network address translation (NAT). In contrast, in Figure 2 we see that the firewall terminates each allowed connection to itself and initiates a new, proxied connection to each allowed connection's desired actual endpoint.
Proxying comes in two flavors, transparent and nontransparent. In a transparently proxied connection, both parties are unaware that the connection is being proxied; the client system addresses its packets as though there were no firewall, with their true destination IP address. By contrast, in a nontransparently proxied connection the client must address its packets to the firewall rather than to their true destinations. Because the client must, in that case, somehow tell the firewall where to proxy the connection, nontransparent proxying requires clients to run proxy-aware applications. Although most Web browsers and FTP clients can be configured to use a nontransparent proxy, transparent proxies are easier for end users to live with than are nontransparent proxies. Modern application-layer proxies, such as Zorp, are transparent.
Transparent or not, proxying has several important ramifications. First, low-level anomalies, such as strange flags in the IP header, generally are not propagated by the firewall. The firewall initiates the secondary connection in the way that it, not the client system, considers an acceptable manner. Second, because the firewall is re-creating the client connection in its entirety and not merely propagating or trivially rewriting individual packets, the firewall is well positioned to examine the connection at the application layer. This is not a given, however; if the firewall is, say, a SOCKS firewall and not a true application-layer proxy, it simply could copy the data payloads of the client connection packets into those of the new, proxied packets. But if the firewall is application-aware, like Zorp is, the firewall not only examines but makes decisions about the data payloads of all client packets.
Let's look at an example: suppose your public Web server is vulnerable to a buffer-overflow exploit that involves a malformed HTTP GET command containing, say, an abnormally long URL. Your application-layer proxy firewall initially accepts the connection from the client, but upon examining the long URL, closes the connection with an error message to the client and a reset to the server, without ever forwarding the attack payload, the long URL.
The third ramification isn't a positive one: by definition, proxying is more resource-intensive than is packet filtering, and application-aware proxying is especially so. This strike against application-layer proxies is, however, generally overstated. Zorp, for example, can proxy 88Mbps worth of HTTP traffic, nearly twice the capacity of a T-3 WAN connection, running on only a 700MHz Celeron system with 128MB of RAM. Zorp, on a dual-processor Pentium system with 512MB of RAM and SCSI RAID hard drives, can handle around 480Mbps, according to the Zorp Professional v2 Product Description, available at www.balabit.com [3].
In summary, application-layer proxies provide superior protection by inserting themselves in the middle of each network transaction they allow by re-creating all packets from scratch and by making intelligent decisions on what application-layer commands and data to propagate. They accomplish this based on their knowledge about how those applications are supposed to work, not merely on how their container packets ought to look. The main strike against application-layer proxies is performance, but thanks primarily to Moore's Law, this shortcoming is mitigated amply by fast but not necessarily expensive hardware.
In the interest of full disclosure, I should mention one other shortcoming that many people perceive in application-layer proxies, greater complexity. It stands to reason that because application-layer proxies are more sophisticated than packet filters, it should take more sophistication to configure them, in the same way that you need to know more to operate a Mosler safe than to operate your typical bus station locker. It's more work to configure a firewall running Zorp or Secure Computing Sidewinder than it is to configure one running Check Point Firewall-1 or Linux Netfilter/iptables.
But isn't better security worth a little extra work? Like everything else in information security, it's up to you to choose your own trade-off. Maybe the extra work is worth it to you, and maybe it isn't. Either way, I hope this column makes you glad you've got the choice in the first place. The remainder of this article, which continues with at least one more installment, explains precisely what's involved in configuring and using Zorp.
The proxy dæmons that comprise Zorp run on top of the Linux kernel concurrently with the standard Netfilter and Balabit-provided TPROXY kernel modules. In theory, this makes Zorp distribution-agnostic, and it's designed to compile cleanly on any Linux distribution that meets certain requirements (see below). Zorp is developed on Debian Linux, however, and the vast majority of Zorp documentation assumes that you're running Debian too. In fact, Zorp GPL is an official Debian package (as of this writing, in Debian's testing and unstable releases).
Zorp is available in three versions: Zorp GPL, the free GPLed version; Zorp Unofficial, a cutting-edge or beta version of Zorp GPL; and Zorp Professional (or simply Zorp Pro), a commercial product based on but with more features than Zorp GPL. If you purchase Zorp Pro, you get a bootable CD-ROM that installs not only Zorp Pro but ZorpOS, a stripped-down Debian distribution optimized for Zorp. With Zorp Pro, a bare-metal Zorp installation takes less than 15 minutes, excluding subsequent configuration, of course. Anyone who's suffered through lengthy dselect sessions while trying to install just enough Debian for one's needs can appreciate the beauty of this.
Zorp Pro also includes the new Zorp Management Server (ZMS), which allows you to manage multiple Zorp firewalls from a central management host. The host in turn can be operated remotely with ZMC, a GUI client available in both Debian Linux and Windows versions. ZMS is functionally equivalent to Check Point Firewall-1's management module, arguably the biggest reason Check Point has conquered the enterprise firewall world. ZMS has the potential to make Zorp very attractive indeed to sites with a lot of firewalls to manage.
ZMS/ZMC is still a little rough around the edges—Balabit isn't expecting to release a consumer-installable version of that part of Zorp Pro in March 2004 (though at the time of this writing it is being used, successfully, by paying customers). Even if you don't use ZMS/ZMC, Zorp Pro's smooth installation and wide range of features, including several application proxies not supported in Zorp GPL, make Zorp Professional worthwhile.
Unlike Zorp Pro, Zorp GPL and Zorp Unofficial require a working Linux installation that includes the following: glib 2.0, Python 2.1, libcap 1.10 and openssl 0.9.6g. It also requires either a Linux 2.2 kernel compiled with IP, firewalling and transparent proxy support or a Linux 2.4 kernel compiled with iptables, iptables connection tracking, iptables NAT and, using Balabit's TPROXY kernel patch (www.balabit.com/products/oss/tproxy [4]), iptables transparent proxying. All of these features should be compiled as modules.
Once your OS is ready, you either can install Zorp GPL from binary deb packages or compile Zorp GPL from source code (available at www.balabit.com/downloads [5]). Compiling Zorp GPL is a little more involved than your typical ./configure; make; make install routine; see the Zorp GPL Tutorial at www.balabit.com/products/zorp_gpl/tutorial [6] for detailed instructions.
Next time, I'll describe how to set up Zorp GPL to protect a typical Internet—DMZ—Trusted Network topology.
Mick Bauer, CISSP, is Linux Journal's security editor and an IS security consultant in Minneapolis, Minnesota. He's the author of Building Secure Servers With Linux (O'Reilly & Associates, 2002).
[1] http://www.linuxjournal.com/files/linuxjournal.com/linuxjournal/articles/072/7296/7296f1.png
[2] http://www.linuxjournal.com/files/linuxjournal.com/linuxjournal/articles/072/7296/7296f2.png
[3] http://www.balabit.com
[4] http://www.balabit.com/products/oss/tproxy
[5] http://www.balabit.com/downloads
[6] http://www.balabit.com/products/zorp_gpl/tutorial
In my last column, I sang the praises of application-layer proxy firewalls and introduced Balazs Scheidler's Zorp firewall suite, available in both commercial and free-of-charge versions. This column continues where we left off, discussing basic Zorp configuration for a simple inside-DMZ-outside scenario. We are going to configure only a couple of services, but this should be enough to help prospective Zorp users begin building their own intelligent firewall systems.
To review, application-layer proxies broker rather than merely pass the traffic that flows through them. For example, when a user on one network initiates an HTTP session on the other side of a proxying firewall, the firewall intercepts and breaks the connection, acting both as the server (from the client's viewpoint) and as the client (from the destination server's standpoint).
Zorp uses transparent proxies, which means that users behind a Zorp firewall need not be aware that the firewall is there; they may target foreign addresses and hostnames without configuring their software to communicate with the proxy. This is an important mitigator against the ugly fact that proxies are inherently more complicated than other kinds of firewalls. With Zorp, all the complexity is in the back end, resulting in much happier end users.
But that doesn't mean Zorp is painful for its administrators, either. I'd rate its complexity as being higher than iptables but lower than sendmail.cf. So without further ado, let's configure ourselves a Zorp firewall.
This article assumes that, per my last column, you've successfully patched your Linux 2.4 kernel and your iptables binary to support the TPROXY module (see www.balabit.com/products/oss/tproxy [1]). It also assumes you have compiled and/or installed packages for libzorpll, zorp and zorp-modules; source code and deb packages are available at www.balabit.com/products/zorp_gpl [2]. My examples further assume you're running Zorp GPL version 2.0, though the examples should apply equally to Zorp Pro 2.0. Zorp Pro has some proxy modules not included with Zorp GPL, but the modules common to both behave the same.
Zorp supports many more than three interfaces per firewall, but the most common firewall architecture nowadays is the three-homed-host architecture shown in Figure 1. This is the architecture I cover here.
Similarly, as you can see in Figure 1, we've got only three data flows: HTTP from the Internet to a DMZed Web server; HTTP from the internal network to the Internet; and HTTP and SSH from the internal network to the DMZ. Absent are things like IMAP, NNTP, FTP and other services that even simple setups commonly use. If you understand how to configure Zorp to accommodate these, though, you should be able to figure out others. I do, however, discuss DNS and SMTP, even though I omitted them from Figure 1.
The first thing we need to do doesn't directly involve Zorp but rather the TPROXY kernel module. In transparent proxying, TPROXY needs a dummy network interface to bind to whenever it splits a data flow in two. This needs to be an interface whose IP address is neither Internet-routable nor associated with any network connected to the firewall.
Linux 2.4 kernels compile with support for dummy network interfaces by default. You should have one, unless you intentionally compiled your kernel without dummy driver support. If so, compile a new kernel with dummy support. All you need to do for TPROXY's purposes, therefore, is explicitly configure dummy0 with a nonroutable and unused address. In Debian, you should add the following lines to /etc/network/interfaces:
auto dummy0
iface dummy0 inet static
address 1.2.3.4
netmask 255.255.255.255
Other distributions handle network configuration differently—Red Hat and SuSE use ifcfg- files in /etc/sysconfig/network—but hopefully you get the picture. Notice the 32-bit network mask: I repeat, this address must not belong to a real network.
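The same configuration can be applied on the fly with iproute2, assuming the dummy driver is built as a module:

modprobe dummy
ip addr add 1.2.3.4/32 dev dummy0
ip link set dummy0 up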
You may be wondering, isn't this article about Zorp and not iptables? Yes, but Zorp runs in conjunction with iptables, not in place of it. TPROXY, in fact, is specifically a Netfilter patch. To use TPROXY, we need to configure it with the iptables command, as we do for the rest of Netfilter. (Netfilter is the proper name for Linux 2.4's firewall code—iptables is its front-end command.)
In addition, it's recommended that you run certain services, namely DNS and SMTP, on the firewall as self-contained proxies. If you do, you need to use iptables to configure your firewall to accept those connections directly. For example, BIND v9 supports split-horizon DNS, in which external clients are served from different zone files than are internal clients. Similarly, Postfix is easy to configure to act as a relay on behalf of internal hosts, but strictly as a local deliverer when dealing with external hosts. It makes sense to run such proxy-like services on a firewall, as long as you configure them extremely carefully.
If you're new to Netfilter/iptables, what follows may make little sense, and space doesn't permit me to explain it all in detail. Zorp is, after all, an advanced tool. In a nutshell, what we're going to do with iptables is run all packets through some simple checks against spoofed IP addresses. We then are going to intercept packets that need to be proxied transparently and process them in custom chains rather than by using the normal FORWARD chain. Technically, nothing is forwarded. Finally, we pass some packets that are destined for the firewall itself.
Zorp Pro includes a group of scripts collectively called iptables-utils, which simplify iptables management for Zorp. A free version of iptables-utils for Zorp GPL 2.0 is available at www.balabit.com/downloads/zorp/zorp-os/pool/i/iptables-utils [4]. I highly recommend iptables-utils, as it makes it much easier to test a new iptables configuration before actually committing it.
Because it uses a syntax that I don't have space here to explain, the following example is instead a conventional iptables startup script. Here are the most important parts of such a script. First should come rules for the special tproxy table that the TPROXY module adds to Netfilter (Listing 1). This is where we define a custom proxy chain for each of our networks: PRblue for proxied connections initiated from our internal network; PRpurple for proxied connections initiated from our DMZ (none, in this scenario); and PRred for proxied connections originating from the Internet.
Listing 1. TPROXY Rules
iptables -t tproxy -P PREROUTING ACCEPT
iptables -t tproxy -A PREROUTING -i eth1 -j PRblue
iptables -t tproxy -A PREROUTING -i eth2 -j PRpurple
iptables -t tproxy -A PREROUTING -i eth0 -j PRred
iptables -t tproxy -P OUTPUT ACCEPT
iptables -t tproxy -N PRblue
iptables -t tproxy -A PRblue -p tcp --dport 80 \
-j TPROXY --on-port 50080
iptables -t tproxy -A PRblue -p tcp --dport 22 \
! -d firewall.example.net -j TPROXY --on-port 50022
iptables -t tproxy -N PRpurple
iptables -t tproxy -N PRred
iptables -t tproxy -A PRred -p tcp --dport 80 \
-j TPROXY --on-port 50080
Several things are worth pointing out in Listing 1. First, notice that the tproxy table contains its own PREROUTING and OUTPUT chains. In Zorp, we use the tproxy/PREROUTING chain to route packets to the proper custom proxy chain (PRblue), based on the interface each packet enters. As with any custom iptables chain, if a packet passes through one of these without matching a rule, it's sent back to the line immediately following the rule that sent the packet to the custom chain. This is why custom chains don't have default targets.
In the PRblue chain, we've got two rules, one for each type of transaction allowed to originate from the internal network. All outbound HTTP material is proxied, that is, handed to a proxy process listening on port 50080. But in the SSH rule, we tell Netfilter to proxy all outbound SSH traffic unless it's destined for the firewall itself. Although Figure 1 doesn't show such a data flow (Blue→SSH→firewall), we need it in order to administer the firewall. This flow also requires a rule in the regular filter table's INPUT chain. In this example scenario, our DMZed Web server isn't permitted to initiate any connections itself, so we've created a PRpurple chain without actually populating it.
Now we move on to the regular filter table. This is the Netfilter table most of us are used to dealing with; it's the default when you omit the -t option with iptables. Listing 2 shows our example firewall's filter table's INPUT rules.
Listing 2. Filter Table INPUT Chain
iptables -P INPUT DROP
iptables -A INPUT -j noise
iptables -A INPUT -j spoof
iptables -A INPUT -m tproxy -j ACCEPT
iptables -A INPUT -m state \
--state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -i eth1 -j LOblue
iptables -A INPUT -i eth0 -j LOred
iptables -A INPUT -i eth2 -j LOpurple
iptables -A INPUT -j LOG --log-prefix "INPUT DROP: "
iptables -A INPUT -j DROP
The first few lines check packets against some custom chains that check for spoofed IP addresses; if they pass those checks, they continue down the INPUT chain. Packets generated by the TPROXY module itself are accepted, as are packets belonging to established allowed transactions and loopback packets (lines 4–6, respectively). Next, as with the tproxy table's PREROUTING chain, we route packets to custom chains based on ingress interface. This time, the custom chains are for packets with local destinations, as opposed to proxied ones, so I've named them LOblue and so forth. Next come our filter table's custom chains (Listing 3).
Listing 3. Custom Chains in the Filter Table
iptables -N LOblue
iptables -A LOblue -p tcp --dport 22 --syn -j ACCEPT
iptables -A LOblue -p udp --dport 53 -j ACCEPT
iptables -A LOblue -p tcp --dport 25 --syn -j ACCEPT
iptables -A LOblue -j LOG --log-prefix "LOblue DROP: "
iptables -A LOblue -j DROP
iptables -N LOpurple
iptables -A LOpurple -p udp --dport 53 -j ACCEPT
iptables -A LOpurple -j LOG \
--log-prefix "LOpurple DROP: "
iptables -A LOpurple -j DROP
iptables -N LOred
iptables -A LOred -p udp -s upstream.dns.server \
--sport 53 -j ACCEPT
iptables -A LOred -p tcp --dport 25 --syn -j ACCEPT
iptables -A LOred -j LOG --log-prefix "LOred DROP: "
iptables -A LOred -j DROP
iptables -N noise
iptables -A noise -p udp --dport 137:139 -j DROP
iptables -A noise -j RETURN
iptables -N spoof
iptables -A spoof -i lo -j RETURN
iptables -A spoof ! -i lo -s 127.0.0.0/8 -j spoofdrop
iptables -A spoof -i eth1 ! -s 10.0.1.0/24 \
-j spoofdrop
iptables -A spoof ! -i eth1 -s 10.0.1.0/24 \
-j spoofdrop
iptables -A spoof -i eth2 ! -s 192.168.1.0/24 \
-j spoofdrop
iptables -A spoof ! -i eth2 -s 192.168.1.0/24 \
-j spoofdrop
iptables -A spoof -j RETURN
iptables -N spoofdrop
iptables -A spoofdrop -j LOG \
--log-prefix "Spoofed packet: "
iptables -A spoofdrop -j DROP
The first three of these custom chains are the most important: LOblue, LOpurple and LOred tell Netfilter how to process packets destined for the firewall itself, based on in which interface the packets arrive. In LOblue, we're accepting DNS queries, SSH connections and SMTP connections. In LOpurple, we're accepting only DNS queries. And in LOred, we're accepting DNS replies from our ISP's DNS server (upstream.dns.server) and SMTP connections. The last three of these custom chains are the simplest: noise filters NETBIOS packets, those notorious clutterers of Linux firewall logs; spoof filters for packets with obviously spoofed, that is, impossible, source IP addresses; and spoofdrop logs and drops packets caught by the spoof chain.
Listing 4 shows the remainder of our example iptables script, an essentially empty FORWARD chain with a default DROP policy and an empty OUTPUT chain with a default ACCEPT policy. Again, this is a proxying firewall, so it won't forward anything. You may be uneasy with the default ACCEPT policy for firewall-originated packets, but this is both necessary and safe on a Zorp firewall.
Listing 4. The Filter Table's FORWARD and OUTPUT Chains
iptables -P FORWARD DROP
iptables -A FORWARD -j LOG \
--log-prefix "FORWARD DROP: "
iptables -A FORWARD -j DROP
iptables -P OUTPUT ACCEPT
Finally, we come to actual Zorp configuration files. These are stored in /etc/zorp, and the first one we tackle is instances.conf, which defines and controls Zorp's instances. Usually, the rule of thumb is to define one instance per network zone, so in our example scenario we have, you guessed it, one instance each for our red, purple and blue zones. Listing 5 shows what such an instances.conf file would look like.
Listing 5. instances.conf
blue -v3 -p /etc/zorp/policy.py \
--autobind-ip 1.2.3.4
purple -v3 -p /etc/zorp/policy.py \
--autobind-ip 1.2.3.4
red -v3 -p /etc/zorp/policy.py \
--autobind-ip 1.2.3.4
The first field in each line is the name of the instance. This is user-definable, but we need to refer to it verbatim in the Zorp configuration file proper, policy.py. Speaking of which, you may use separate configuration files for each instance if you wish, or you may configure multiple zones within a single file. Regardless, the -p option in instances.conf tells Zorp which file to use for each instance.
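If you did split the policy into per-instance files, instances.conf might look something like this instead (the per-instance file names here are invented for illustration):
blue -v3 -p /etc/zorp/policy-blue.py --autobind-ip 1.2.3.4
purple -v3 -p /etc/zorp/policy-purple.py --autobind-ip 1.2.3.4
red -v3 -p /etc/zorp/policy-red.py --autobind-ip 1.2.3.4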
The -v parameter sets log message verbosity: 3 is the medium setting, and 5 is useful for debugging. This parameter controls only Zorp-generated log messages and has no effect whatsoever on Netfilter/iptables logging. Finally, each line ends with an --autobind-ip setting that determines which dummy IP Zorp should bind to when proxying connections via TPROXY. This IP address can and should be shared by all instances, and it should be the one you set earlier (see Configuring a Dummy Interface, above).
Your iptables script determines how packets get routed to proxies, and /etc/zorp/instances.conf determines how Zorp starts up. But to tell Zorp's proxies how to behave, you need to set up /etc/zorp/policy.py, or whatever you called the configuration file(s) referenced in instances.conf—policy.py is conventional but not mandatory. This policy file contains two parts. The first is a global section, in which zones are defined by network address and allowed services. The second is a service-instance definition section, in which each instance listed in instances.conf is fleshed out: which services originate in it, and which application proxies handle those services.
Listing 6 shows a complete global section from our example policy.py. It begins with some import statements, which pull in the essential Zorp Python modules. Next come our zone definitions. If you set up instances.conf to run one Zorp instance per zone, your zone names here can be similar to or even the same as your instance names. In Listing 6 I've chosen different names in order to illustrate that, technically, zone names are distinct from instance names.
Listing 6. policy.py, Part I (Global Settings)
from Zorp.Core import *
from Zorp.Plug import *
from Zorp.Http import *
InetZone("bluezone", "10.0.1.0/24",
outbound_services=["blue_http", "blue_ssh"],
InetZone("purplezone", "192.168.1.0/24",
inbound_services=["blue_http", "blue_ssh",
"red_http"])
InetZone("redzone", "0.0.0.0/0",
outbound_services=["red_http"],
inbound_services=["*"])
InetZone("localzone", "127.0.0.0/8",
inbound_services=["*"])
# end global section
In each zone definition, you can see a network address that corresponds to those in Figure 1, along with specifications of which services are allowed. These service names are user-definable and are fleshed out in the subsequent service-instance definitions. The important thing to understand about these statements is that inbound and outbound are relative to the zone/network, not to the firewall.
Figure 2 shows what the internal-to-Internet HTTP data flow looks like as a proxied connection. In this illustration, we see that this data flow exists both as an outbound connection from the Internal (blue) zone and as an inbound connection to the Internet (red) zone. This is borne out in the respective bluezone and redzone definitions in Listing 6. It's also important to use the same service name in both zone definitions that a given data flow traverses (blue_http in the case of Figure 2 and Listing 6).
The last point to make about Listing 6 is that the * wildcard signifies all defined services. This is narrower than it might seem; * includes only those services defined in policy.py's service-instance definitions, not all possible services. Remember, Zorp processes only those packets that Netfilter and TPROXY send to it. If a given zone is to allow no outbound or inbound services, the inbound_services or outbound_services parameter may be either omitted or set to [] (empty brackets), as in the sketch that follows.
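For example, a zone that accepts a single inbound service but originates nothing could be declared as follows. The zone name, network and service name here are invented for illustration, and omitting outbound_services entirely would have the same effect as the empty list:
# Hypothetical zone: one inbound service, no outbound services.
InetZone("greenzone", "172.16.1.0/24",
         outbound_services=[],
         inbound_services=["green_dns"])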
Listing 7 shows our policy.py file's service-instance definitions. The first line of each definition must reference an instance name specified in instances.conf, and the lines that follow must be indented, because these rules are processed by Python, which is strict about indentation. A definition can't be empty: if no services originate in a given instance, the token pass may be used, as in the purple() definition in Listing 7.
Listing 7. policy.py, Part II (Instance Definitions)
def blue():
    Service("blue_http", HttpProxy,
            router=TransparentRouter())
    Service("blue_ssh", PlugProxy,
            router=TransparentRouter())
    Listener(SockAddrInet('10.0.1.254', 50080),
             "blue_http")
    Listener(SockAddrInet('10.0.1.254', 50022),
             "blue_ssh")

def purple():
    pass

def red():
    Service("red_http", HttpProxy,
            router=DirectedRouter(SockAddrInet('192.168.1.242', 80),
                                  forge_addr=TRUE))
    Listener(SockAddrInet('169.254.1.254', 50080),
             "red_http")
Otherwise, the definition should consist of one or more Service lines, each specifying a service name referenced in one or more zone definitions and a Zorp proxy module (either a built-in proxy pulled in by the global import statements or one defined in a custom class). The last field in a Service line is a router, which specifies where proxied packets should be sent. You can see in Listing 7 that for the red_http service, we've used the forge_addr=TRUE option to pass the source IPs of Web clients intact from the Internet to our Web server. Without this option, all Web traffic hitting the DMZ would appear to originate from the firewall itself.
Although in Listing 7 we're using only the HttpProxy and the PlugProxy (a general-service UDP and TCP proxy that copies application data verbatim), Zorp GPL also has proxies for FTP, whois, SSL, telnet and finger. As I mentioned before, you also can create custom classes to alter or augment these proxies. It's easy to create, for example, an HTTP proxy that performs URL filtering or an SSL proxy stacked on an HTTP proxy so HTTPS traffic can be proxied intelligently. Unfortunately, these are advanced topics I can't cover here; fortunately, all of Zorp's Python proxy modules are heavily commented.
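To make the custom-class idea concrete, here's a minimal sketch of an HttpProxy subclass that rejects requests for one particular host. The class name and blocked hostname are invented, and the request-policy hooks shown (the self.request hash, HTTP_REQ_POLICY, HTTP_REQ_ACCEPT and HTTP_REQ_REJECT) reflect my reading of Zorp's HttpProxy module; check it against the module's own comments before relying on it:
# Sketch only: derives from Zorp GPL's built-in HttpProxy.
class FilteringHttpProxy(HttpProxy):
    def config(self):
        HttpProxy.config(self)
        # Run every GET request through the check defined below.
        self.request["GET"] = (HTTP_REQ_POLICY, self.filterURL)

    def filterURL(self, method, url, version):
        # Reject requests for a hypothetical banned host;
        # accept everything else.
        if url.find("ads.example.com") != -1:
            return HTTP_REQ_REJECT
        return HTTP_REQ_ACCEPT
Referencing FilteringHttpProxy instead of HttpProxy in a Service line is then all it takes to put the filter in play.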
The TransparentRouter referenced in Listing 7 simply proxies packets to the destination IP and port specified by the client. But in the red instance's red_http service, we see that a DirectedRouter, which takes an explicit destination IP and port of your choosing, may be specified instead.
Each Service line in a service-instance definition must have a corresponding Listener line. This line tells Zorp to which local (firewall) IP address and port the service should be bound. It may seem counterintuitive that the ports specified in Listing 7's Listener statements are high ports: 50080 instead of 80 and 50022 instead of 22. But remember, each proxy receives its packets from the kernel through Netfilter, not directly from clients. Accordingly, these high ports must correspond to those specified in your tproxy table Netfilter rules (Listing 1).
I mentioned that unlike HttpProxy, which is a fully application-aware proxy that enforces all relevant Internet RFCs for proper HTTP behavior, PlugProxy is a general-service proxy (GSP). Using PlugProxy still gives better protection than packet filtering alone, because the very act of proxying, even without application intelligence, insulates your systems from low-level attacks that Netfilter might not catch on its own.
And with that, we've scratched the dense surface of Zorp GPL. This is by far the most complex tool I've covered in these pages, but I think you'll find Zorp to be well worth the time you invest in learning how to use it.
Resources
The English-language home for Balabit, creators of Zorp: www.balabit.com [6].
The root download directory for ZorpOS contains some tools that make using Zorp GPL much easier, including iptables-utils, a TPROXY-enabled Linux kernel and iptables command. In fact, these are the free parts of the Debian distribution included with Zorp Pro, which is why everything in ZorpOS is in the form of Debian packages. If you aren't a Debian user, everything you want is in the subdirectories of pool; at the top of each package's subdirectory are tar.gz files containing source code. If you are a Debian user, you can use the URL as an apt-get source: www.balabit.com/downloads/zorp/zorp-os [7].
The Zorp Users' Mailing List is an amazingly quick and easy way to get help using Zorp, whether Pro or GPL. This URL is the site for subscribing to it or browsing its archives. Note that Balabit is a Hungarian company and its engineers (and some of the most helpful Zorp users) operate in the CET (GMT+1) time zone: https://lists.balabit.hu/mailman/listinfo/zorp [8].
Mick Bauer, CISSP, is Linux Journal's security editor and an IS security consultant in Minneapolis, Minnesota. He's the author of Building Secure Servers With Linux (O'Reilly & Associates, 2002).
[1] http://www.balabit.com/products/oss/tproxy
[2] http://www.balabit.com/products/zorp_gpl
[3] http://www.linuxjournal.com/files/linuxjournal.com/linuxjournal/articles/073/7347/7347f1.png
[4] http://www.balabit.com/downloads/zorp/zorp-os/pool/i/iptables-utils
[5] http://www.linuxjournal.com/files/linuxjournal.com/linuxjournal/articles/073/7347/7347f2.png
[6] http://www.balabit.com
[7] http://www.balabit.com/downloads/zorp/zorp-os
[8] https://lists.balabit.hu/mailman/listinfo/zorp