{ "pages": [ { "page_number": 1, "text": "S\nE\nC\nU\nR\nI\nT\nY\nT\nO\nO\nL\nS\nO\nN\nC\nD\n-\nR\nO\nM\n®\nP\nR\nE\nS\nS\n®\nLinux Solutions from the Experts at Red Hat\nM o h a m m e d J . K a b i r\n®\n®\n™\n" }, { "page_number": 2, "text": "Red HatLinux\nSecurity and\nOptimization \nMohammed J. Kabir\nHungry Minds, Inc.\nNew York, NY G Indianapolis, IN G Cleveland, OH\n" }, { "page_number": 3, "text": "Trademarks: are trademarks or registered trademarks of Hungry Minds, Inc. All other trademarks are the\nproperty of their respective owners. Hungry Minds, Inc., is not associated with any product or vendor\nmentioned in this book.\nLIMIT OF LIABILITY/DISCLAIMER OF WARRANTY: THE PUBLISHER AND AUTHOR HAVE USED THEIR\nBEST EFFORTS IN PREPARING THIS BOOK. THE PUBLISHER AND AUTHOR MAKE NO REPRESENTATIONS\nOR WARRANTIES WITH RESPECT TO THE ACCURACY OR COMPLETENESS OF THE CONTENTS OF THIS\nBOOK AND SPECIFICALLY DISCLAIM ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS\nFOR A PARTICULAR PURPOSE. THERE ARE NO WARRANTIES WHICH EXTEND BEYOND THE\nDESCRIPTIONS CONTAINED IN THIS PARAGRAPH. NO WARRANTY MAY BE CREATED OR EXTENDED BY\nSALES REPRESENTATIVES OR WRITTEN SALES MATERIALS. THE ACCURACY AND COMPLETENESS OF\nTHE INFORMATION PROVIDED HEREIN AND THE OPINIONS STATED HEREIN ARE NOT GUARANTEED OR\nWARRANTED TO PRODUCE ANY PARTICULAR RESULTS, AND THE ADVICE AND STRATEGIES\nCONTAINED HEREIN MAY NOT BE SUITABLE FOR EVERY INDIVIDUAL. NEITHER THE PUBLISHER NOR\nAUTHOR SHALL BE LIABLE FOR ANY LOSS OF PROFIT OR ANY OTHER COMMERCIAL DAMAGES,\nINCLUDING BUT NOT LIMITED TO SPECIAL, INCIDENTAL, CONSEQUENTIAL, OR OTHER DAMAGES.\nFULFILLMENT OF EACH COUPON OFFER IS THE SOLE RESPONSIBILITY OF THE OFFEROR.\nRed HatLinuxSecurity and Optimization\nPublished by\nHungry Minds, Inc.\n909 Third Avenue\nNew York, NY 10022\nwww.hungryminds.com\nCopyright © 2002 Hungry Minds, Inc. All rights\nreserved. No part of this book, including interior\ndesign, cover design, and icons, may be reproduced\nor transmitted in any form, by any means\n(electronic, photocopying, recording, or otherwise)\nwithout the prior written permission of the publisher.\nLibrary of Congress Control Number: 2001092938\nISBN: 0-7645-4754-2\nPrinted in the United States of America\n10 9 8 7 6 5 4 3 2 1\n1B/SX/RR/QR/IN\nDistributed in the United States by Hungry Minds,\nInc.\nDistributed by CDG Books Canada Inc. for Canada;\nby Transworld Publishers Limited in the United\nKingdom; by IDG Norge Books for Norway; by IDG\nSweden Books for Sweden; by IDG Books Australia\nPublishing Corporation Pty. Ltd. for Australia and\nNew Zealand; by TransQuest Publishers Pte Ltd. for\nSingapore, Malaysia, Thailand, Indonesia, and Hong\nKong; by Gotop Information Inc. for Taiwan; by ICG\nMuse, Inc. for Japan; by Intersoft for South Africa;\nby Eyrolles for France; by International Thomson\nPublishing for Germany, Austria, and Switzerland;\nby Distribuidora Cuspide for Argentina; by LR\nInternational for Brazil; by Galileo Libros for Chile;\nby Ediciones ZETA S.C.R. Ltda. for Peru; by WS\nComputer Publishing Corporation, Inc., for the\nPhilippines; by Contemporanea de Ediciones for\nVenezuela; by Express Computer Distributors for the\nCaribbean and West Indies; by Micronesia Media\nDistributor, Inc. for Micronesia; by Chips\nComputadoras S.A. de C.V. for Mexico; by Editorial\nNorma de Panama S.A. 
for Panama; by American\nBookshops for Finland.\nFor general information on Hungry Minds’ products\nand services please contact our Customer Care\ndepartment within the U.S. at 800-762-2974, outside\nthe U.S. at 317-572-3993 or fax 317-572-4002.\nFor sales inquiries and reseller information,\nincluding discounts, premium and bulk quantity\nsales, and foreign-language translations, please\ncontact our Customer Care department at\n800-434-3422, fax 317-572-4002 or write to Hungry\nMinds, Inc., Attn: Customer Care Department, 10475\nCrosspoint Boulevard, Indianapolis, IN 46256.\nFor information on licensing foreign or domestic\nrights, please contact our Sub-Rights Customer Care\ndepartment at 212-884-5000.\nFor information on using Hungry Minds’ products\nand services in the classroom or for ordering\nexamination copies, please contact our Educational\nSales department at 800-434-2086 or fax\n317-572-4005.\nFor press review copies, author interviews, or other\npublicity information, please contact our Public\nRelations department at 317-572-3168 or fax\n317-572-4168.\nFor authorization to photocopy items for corporate,\npersonal, or educational use, please contact\nCopyright Clearance Center, 222 Rosewood Drive,\nDanvers, MA 01923, or fax 978-750-4470.\nis a trademark of Hungry Minds, Inc.\n" }, { "page_number": 4, "text": "About the Author\nMohammed Kabir is the founder and CEO of Evoknow, Inc. His company specializes\nin open-source solutions and customer relationship management software develop-\nment. When he is not busy managing software projects or writing books, he enjoys\ntraveling around the world. Kabir studied computer engineering at California State\nUniversity, Sacramento. He is also the author of Red Hat Linux Server and Apache\nServer Bible. He can be reached at kabir@evoknow.com.\nCredits\nACQUISITIONS EDITOR\nDebra Williams Cauley\nPROJECT EDITOR\nPat O’Brien\nTECHNICAL EDITORS\nMatthew Hayden\nSandra “Sam” Moore\nCOPY EDITORS\nBarry Childs-Helton\nStephanie Provines\nEDITORIAL MANAGER\nKyle Looper\nRED HAT PRESS LIAISON\nLorien Golaski, Red Hat\nCommunications Manager\nSENIOR VICE PRESIDENT, TECHNICAL\nPUBLISHING\nRichard Swadley\nVICE PRESIDENT AND PUBLISHER\nMary Bednarek\nPROJECT COORDINATOR\nMaridee Ennis\nGRAPHICS AND PRODUCTION\nSPECIALISTS\nKarl Brandt \nStephanie Jumper \nLaurie Petrone \nBrian Torwelle \nErin Zeltner\nQUALITY CONTROL TECHNICIANS\nLaura Albert \nAndy Hollandbeck\nCarl Pierce\nPERMISSIONS EDITOR\nCarmen Krikorian\nMEDIA DEVELOPMENT SPECIALIST\nMarisa Pearman\nPROOFREADING AND INDEXING\nTECHBOOKS Production Services\n" }, { "page_number": 5, "text": "" }, { "page_number": 6, "text": "This book is dedicated to my wife, who proofs my writing, checks my facts,\nand writes my dedications.\n" }, { "page_number": 7, "text": "Preface\nThis book is focused on two major aspects of Red Hat Linux system administration:\nperformance tuning and security. The tuning solutions discussed in this book will\nhelp your Red Hat Linux system to have better performance. At the same time, the\npractical security solutions discussed in the second half of the book will allow you\nto enhance your system security a great deal. 
If you are looking for time-saving, practical solutions to performance and security issues, read on!
How This Book Is Organized
The book has five parts, plus several appendixes.
Part I: System Performance
This part of the book explains the basics of measuring system performance, customizing your Red Hat Linux kernel to tune the operating system, tuning your hard disks, and journaling your filesystem to increase file system reliability and robustness.
Part II: Network and Service Performance
This part of the book explains how to tune your important network services, including the Apache Web server, the Sendmail and Postfix mail servers, and the Samba and NFS file and printer sharing services.
Part III: System Security
This part of the book covers how to secure your system using the kernel-based Linux Intrusion Detection System (LIDS) and the Libsafe buffer overflow protection mechanism. Once you have learned to secure your Red Hat Linux kernel, you can secure your file system using various tools. After securing the kernel and the file system, you can secure user access to your system using such tools as Pluggable Authentication Module (PAM), Open Source Secure Socket Layer (OpenSSL), Secure Remote Password (SRP), and xinetd.
Part IV: Network Service Security
This part of the book shows how to secure your Apache Web server, BIND DNS server, Sendmail and Postfix SMTP servers, POP3 mail server, Wu-FTPD and ProFTPD FTP servers, and Samba and NFS servers. 
" }, { "page_number": 8, "text": "Part V: Firewalls
This part of the book shows how to create a packet-filtering firewall using iptables, how to create virtual private networks, and how to use SSL-based tunnels to secure access to systems and services. Finally, you will be introduced to a wide array of security tools such as security assessment (audit) tools, port scanners, log monitoring and analysis tools, CGI scanners, password crackers, intrusion detection tools, packet filter tools, and various other security administration utilities.
Appendixes
These elements include important references for Linux network users, plus an explanation of the attached CD-ROM.
Conventions of This Book
You don’t have to learn any new conventions to read this book. Just remember the usual rules:
N When you are asked to enter a command, you need to press the Enter or the Return key after you type the command at your command prompt.
N A monospaced font is used to denote configuration or code segments.
N Text in italic needs to be replaced with relevant information.
Watch for these icons that occasionally highlight paragraphs.
The Note icon indicates that something needs a bit more explanation.
The Tip icon tells you something that is likely to save you some time and effort.
" }, { "page_number": 9, "text": "The Caution icon makes you aware of a potential danger.
The cross-reference icon tells you that you can find additional information in another chapter.
Tell Us What You Think of This Book
Both Hungry Minds and I want to know what you think of this book. Give us your feedback. If you are interested in communicating with me directly, send e-mail messages to kabir@evoknow.com. I will do my best to respond promptly.
" }, { "page_number": 10, "text": "Acknowledgments
While writing this book, I often needed to consult with many developers whose tools I covered in this book. 
I want to specially thank a few such developers who\nhave generously helped me present some of their great work.\nHuagang Xie is the creator and chief developer of the LIDS project. Special\nthanks to him for responding to my email queries and also providing me with a\ngreat deal of information on the topic.\nTimothy K. Tsai, Navjot Singh, and Arash Baratloo are the three members of the\nLibsafe team who greatly helped in presenting the Libsafe information. Very special\nthanks to Tim for taking the time to promptly respond to my emails and providing\nme with a great deal of information on the topic.\nI thank both the Red Hat Press and Hungry Minds teams who made this book a\nreality. It is impossible to list everyone involved but I must mention the following\nkind individuals.\nDebra Williams Cauley provided me with this book opportunity and made sure I\nsaw it through to the end. Thanks, Debra.\nTerri Varveris, the acquisitions editor, took over in Debra’s absence. She made\nsure I had all the help needed to get this done. Thanks, Terri.\nPat O’Brien, the project development editor, kept this project going. I don’t know\nhow I could have done this book without his generous help and suggestions every\nstep of the way. Thanks, Pat.\nMatt Hayden, the technical reviewer, provided numerous technical suggestions,\ntips, and tricks — many of which have been incorporated in the book. Thanks, Matt.\nSheila Kabir, my wife, had to put up with many long work hours during the few\nmonths it took to write this book. Thank you, sweetheart.\nix\n" }, { "page_number": 11, "text": "" }, { "page_number": 12, "text": "Contents at a Glance\nPreface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi\nAcknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . ix\nPart I\nSystem Performance \nChapter 1\nPerformance Basics . . . . . . . . . . . . . . . . . . . . . . . . . 3\nChapter 2\nKernel Tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11\nChapter 3 \nFilesystem Tuning . . . . . . . . . . . . . . . . . . . . . . . . . 39\nPart II\nNetwork and Service Performance \nChapter 4 \nNetwork Performance . . . . . . . . . . . . . . . . . . . . . . 75\nChapter 5 \nWeb Server Performance . . . . . . . . . . . . . . . . . . . . 89\nChapter 6 \nE-Mail Server Performance . . . . . . . . . . . . . . . . . 125\nChapter 7 \nNFS and Samba Server Performance . . . . . . . . . . 141\nPart III\nSystem Security \nChapter 8 \nKernel Security . . . . . . . . . . . . . . . . . . . . . . . . . . 155\nChapter 9 \nSecuring Files and Filesystems . . . . . . . . . . . . . . 179\nChapter 10 \nPAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241\nChapter 11 \nOpenSSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263\nChapter 12 \nShadow Passwords and OpenSSH . . . . . . . . . . . . 277\nChapter 13 \nSecure Remote Passwords . . . . . . . . . . . . . . . . . . 313\nChapter 14 \nxinetd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323\nPart IV\nNetwork Service Security \nChapter 15 \nWeb Server Security . . . . . . . . . . . . . . . . . . . . . . 351\nChapter 16 \nDNS Server Security . . . . . . . . . . . . . . . . . . . . . . 399\nChapter 17 \nE-Mail Server Security . . . . . . . . . . . . . . . . . . . . 415\nChapter 18 \nFTP Server Security . . . . . . . . . . . . . . . . . . . . . . . 443\nChapter 19 \nSamba and NFS Server Security . . . . . . . . . . . . . 
473\n" }, { "page_number": 13, "text": "Part V\nFirewalls \nChapter 20 \nFirewalls, VPNs, and SSL Tunnels . . . . . . . . . . . . 491\nChapter 21 \nFirewall Security Tools . . . . . . . . . . . . . . . . . . . . 541\nAppendix A \nIP Network Address Classification . . . . . . . . . . . . 589\nAppendix B \nCommon Linux Commands . . . . . . . . . . . . . . . . . 593\nAppendix C \nInternet Resources . . . . . . . . . . . . . . . . . . . . . . . . 655\nAppendix D \nDealing with Compromised Systems . . . . . . . . . . 661\nAppendix E \nWhat’s On the CD-ROM? . . . . . . . . . . . . . . . . . . . 665\nIndex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 669\nEnd-User License Agreement . . . . . . . . . . . . . . . . 691\n" }, { "page_number": 14, "text": "Contents\nPreface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi\nAcknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . ix\nPart I\nSystem Performance \nChapter 1 \nPerformance Basics . . . . . . . . . . . . . . . . . . . . . . . . . 3\nMeasuring System Performance . . . . . . . . . . . . . . . . . . . . . . . 4\nMonitoring system performance with ps . . . . . . . . . . . . . . . . . . . . . 4\nTracking system activity with top . . . . . . . . . . . . . . . . . . . . . . . . . . 6\nChecking memory and I/O with vmstat . . . . . . . . . . . . . . . . . . . . . . 8\nRunning Vtad to analyze your system . . . . . . . . . . . . . . . . . . . . . . 9\nChapter 2 \nKernel Tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11\nCompiling and Installing a Custom Kernel . . . . . . . . . . . . . . 11\nDownloading kernel source code (latest distribution) . . . . . . . . . . 11\nCreating the /usr/src/linux symbolic link . . . . . . . . . . . . . . . . . . . 12\nSelecting a kernel-configuration method . . . . . . . . . . . . . . . . . . . 13\nUsing menuconfig . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14\nCompiling the kernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31\nBooting the new kernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32\nRunning Demanding Applications . . . . . . . . . . . . . . . . . . . . 35\nChapter 3 \nFilesystem Tuning . . . . . . . . . . . . . . . . . . . . . . . . . 39\nTuning your hard disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39\nTuning ext2 Filesystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44\nChanging the block size of the ext2 filesystem . . . . . . . . . . . . . . . 44\nUsing e2fsprogs to tune ext2 filesystem . . . . . . . . . . . . . . . . . . . . 45\nUsing a Journaling Filesystem . . . . . . . . . . . . . . . . . . . . . . . 48\nCompiling and installing ReiserFS . . . . . . . . . . . . . . . . . . . . . . . . 50\nUsing ReiserFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51\nBenchmarking ReiserFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51\nManaging Logical Volumes . . . . . . . . . . . . . . . . . . . . . . . . . 54\nCompiling and installing the LVM module for kernel . . . . . . . . . . 54\nCreating a logical volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56\nAdding a new disk or partition to a logical volume . . . . . . . . . . . 62\nRemoving a disk or partition from a volume group . . . . . . . . . . . 65\n" }, { "page_number": 15, "text": "Using RAID, SAN, or Storage Appliances . . . . . . . . . . . . . . 66\nUsing Linux Software RAID . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . 66\nUsing Hardware RAID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67\nUsing Storage-Area Networks (SANs) . . . . . . . . . . . . . . . . . . . . . . 67\nUsing Storage Appliances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67\nUsing a RAM-Based Filesystem . . . . . . . . . . . . . . . . . . . . . . 68\nPart II\nNetwork and Service Performance \nChapter 4 \nNetwork Performance . . . . . . . . . . . . . . . . . . . . . . 75\nTuning an Ethernet LAN or WAN . . . . . . . . . . . . . . . . . . . . 75\nUsing network segmentation technique for performance . . . . . . . 77\nUsing switches in place of hubs . . . . . . . . . . . . . . . . . . . . . . . . . . 80\nUsing fast Ethernet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81\nUsing a network backbone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82\nUnderstanding and controlling network traffic flow . . . . . . . . . . . 83\nBalancing the traffic load using the DNS server . . . . . . . . . . . . . . 85\nIP Accounting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85\nIP accounting on a Linux network gateway . . . . . . . . . . . . . . . . . 86\nChapter 5 \nWeb Server Performance . . . . . . . . . . . . . . . . . . . . 89\nCompiling a Lean and Mean Apache . . . . . . . . . . . . . . . . . . 89\nTuning Apache Configuration . . . . . . . . . . . . . . . . . . . . . . . 95\nControlling Apache processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96\nControlling system resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100\nUsing dynamic modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103\nSpeeding Up Static Web Pages . . . . . . . . . . . . . . . . . . . . . . 103\nReducing disk I/O for faster static page delivery . . . . . . . . . . . . . 104\nUsing Kernel HTTP daemon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105\nSpeeding Up Web Applications . . . . . . . . . . . . . . . . . . . . . 105\nUsing mod_perl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106\nUsing FastCGI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114\nInstalling and configuring FastCGI module for Apache . . . . . . . . 115\nUsing Java servlets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117\nUsing Squid proxy-caching server . . . . . . . . . . . . . . . . . . . . . . . . 118\nChapter 6 \nE-Mail Server Performance . . . . . . . . . . . . . . . . . 125\nChoosing Your MTA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125\nTuning Sendmail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126\nControlling the maximum size of messages . . . . . . . . . . . . . . . . 127\nCaching Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127\nControlling simultaneous connections . . . . . . . . . . . . . . . . . . . . 130\nLimiting the load placed by Sendmail . . . . . . . . . . . . . . . . . . . . . 131\nxiv\nContents\n" }, { "page_number": 16, "text": "Saving memory when processing the mail queue . . . . . . . . . . . . 131\nControlling number of messages in a queue run . . . . . . . . . . . . . 132\nHandling the full queue situation . . . . . . . . . . . . . . . . . . . . . . . . 132\nTuning Postfix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133\nInstalling Postfix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . 133\nLimiting number of processes used . . . . . . . . . . . . . . . . . . . . . . . 134\nLimiting maximum message size . . . . . . . . . . . . . . . . . . . . . . . . . 135\nLimiting number of messages in queue . . . . . . . . . . . . . . . . . . . . 135\nLimiting number of simultaneous delivery to a single site . . . . . 135\nControlling queue full situation . . . . . . . . . . . . . . . . . . . . . . . . . 135\nControlling the length a message stays in the queue . . . . . . . . . . 136\nControlling the frequency of the queue . . . . . . . . . . . . . . . . . . . . 136\nUsing PowerMTA for High-Volume Outbound Mail . . . . . . 136\nUsing multiple spool directories for speed . . . . . . . . . . . . . . . . . . 137\nSetting the maximum number of file descriptors . . . . . . . . . . . . 137\nSetting a maximum number of user processes . . . . . . . . . . . . . . 138\nSetting maximum concurrent SMTP connections . . . . . . . . . . . . 138\nMonitoring performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139\nChapter 7 \nNFS and Samba Server Performance . . . . . . . . . . 141\nTuning Samba Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142\nControlling TCP socket options . . . . . . . . . . . . . . . . . . . . . . . . . . 142\nTuning Samba Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145\nTuning NFS Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145\nOptimizing read/write block size . . . . . . . . . . . . . . . . . . . . . . . . . 146\nSetting the appropriate Maximum Transmission Unit . . . . . . . . . 149\nRunning optimal number of NFS daemons . . . . . . . . . . . . . . . . . 149\nMonitoring packet fragments . . . . . . . . . . . . . . . . . . . . . . . . . . . 150\nPart III\nSystem Security \nChapter 8 \nKernel Security . . . . . . . . . . . . . . . . . . . . . . . . . . 155\nUsing Linux Intrusion Detection System (LIDS) . . . . . . . . . 155\nBuilding a LIDS-based Linux system . . . . . . . . . . . . . . . . . . . . . . 156\nAdministering LIDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163\nUsing libsafe to Protect Program Stacks . . . . . . . . . . . . . . 173\nCompiling and installing libsafe . . . . . . . . . . . . . . . . . . . . . . . . . 175\nlibsafe in action . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178\nChapter 9 \nSecuring Files and Filesystems . . . . . . . . . . . . . . 179\nManaging Files, Directories, and \nUser Group Permissions . . . . . . . . . . . . . . . . . . . . . . . . . 179\nUnderstanding file ownership & permissions . . . . . . . . . . . . . . . 180\nChanging ownership of files and directories using chown . . . . . . 181\nContents\nxv\n" }, { "page_number": 17, "text": "Changing group ownership of files and \ndirectories with chgrp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182\nUsing octal numbers to set file and directory permissions . . . . . 182\nUsing permission strings to set access permissions . . . . . . . . . . 185\nChanging access privileges of files and \ndirectories using chmod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185\nManaging symbolic links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186\nManaging user group permission . . . . . . . . . . . . . . . . . . . . . . . . 188\nChecking Consistency of Users and Groups . . . . . . . . . . . . 190\nSecuring Files and Directories . . . . . . . . . . . . . . . . . . . . . . 
198\nUnderstanding filesystem hierarchy structure . . . . . . . . . . . . . . . 198\nSetting system-wide default permission model using umask . . . . 201\nDealing with world-accessible files . . . . . . . . . . . . . . . . . . . . . . . 203\nDealing with set-UID and set-GID programs . . . . . . . . . . . . . . . . 204\nUsing ext2 Filesystem Security Features . . . . . . . . . . . . . . 208\nUsing chattr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209\nUsing lsattr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210\nUsing a File Integrity Checker . . . . . . . . . . . . . . . . . . . . . . 210\nUsing a home-grown file integrity checker . . . . . . . . . . . . . . . . . 210\nUsing Tripwire Open Source, Linux Edition . . . . . . . . . . . . . . . . . 215\nSetting up Integrity-Checkers . . . . . . . . . . . . . . . . . . . . . . 230\nSetting up AIDE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230\nSetting up ICU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231\nCreating a Permission Policy . . . . . . . . . . . . . . . . . . . . . . . 239\nSetting configuration file permissions for users . . . . . . . . . . . . . 239\nSetting default file permissions for users . . . . . . . . . . . . . . . . . . . 240\nSetting executable file permissions . . . . . . . . . . . . . . . . . . . . . . . 240\nChapter 10 \nPAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241\nWhat is PAM? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241\nWorking with a PAM configuration file . . . . . . . . . . . . . . . . . . . 243\nEstablishing a PAM-aware Application . . . . . . . . . . . . . . . 245\nUsing Various PAM Modules to Enhance Security . . . . . . . 248\nControlling access by time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255\nRestricting access to everyone but root . . . . . . . . . . . . . . . . . . . . 257\nManaging system resources among users . . . . . . . . . . . . . . . . . . 258\nSecuring console access using mod_console . . . . . . . . . . . . . . . . 260\nChapter 11 \nOpenSSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263\nUnderstanding How SSL Works . . . . . . . . . . . . . . . . . . . . . 263\nSymmetric encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264\nAsymmetric encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264\nSSL as a protocol for data encryption . . . . . . . . . . . . . . . . . . . . . 264\nUnderstanding OpenSSL . . . . . . . . . . . . . . . . . . . . . . . . . . 266\nUses of OpenSSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266\nGetting OpenSSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267\nxvi\nContents\n" }, { "page_number": 18, "text": "Installing and Configuring OpenSSL . . . . . . . . . . . . . . . . . 267\nOpenSSL prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267\nCompiling and installing OpenSSL . . . . . . . . . . . . . . . . . . . . . . . 268\nUnderstanding Server Certificates . . . . . . . . . . . . . . . . . . . 270\nWhat is a certificate? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270\nWhat is a Certificate Authority (CA)? . . . . . . . . . . . . . . . . . . . . . 271\nCommercial CA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272\nSelf-certified, private CA . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . 272\nGetting a Server Certificate from a Commercial CA . . . . . . 273\nCreating a Private Certificate Authority . . . . . . . . . . . . . . . 275\nChapter 12 \nShadow Passwords and OpenSSH . . . . . . . . . . . . 277\nUnderstanding User Account Risks . . . . . . . . . . . . . . . . . . 278\nSecuring User Accounts . . . . . . . . . . . . . . . . . . . . . . . . . . . 279\nUsing shadow passwords and groups . . . . . . . . . . . . . . . . . . . . . 280\nChecking password consistency . . . . . . . . . . . . . . . . . . . . . . . . . 282\nEliminating risky shell services . . . . . . . . . . . . . . . . . . . . . . . . . . 283\nUsing OpenSSH for Secured Remote Access . . . . . . . . . . . . 285\nGetting and installing OpenSSH . . . . . . . . . . . . . . . . . . . . . . . . . 285\nConfiguring OpenSSH service . . . . . . . . . . . . . . . . . . . . . . . . . . . 286\nConnecting to an OpenSSH server . . . . . . . . . . . . . . . . . . . . . . . . 293\nManaging the root Account . . . . . . . . . . . . . . . . . . . . . . . . 298\nLimiting root access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299\nUsing su to become root or another user . . . . . . . . . . . . . . . . . . . 300\nUsing sudo to delegate root access . . . . . . . . . . . . . . . . . . . . . . . 302\nMonitoring Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307\nFinding who is on the system . . . . . . . . . . . . . . . . . . . . . . . . . . . 308\nFinding who was on the system . . . . . . . . . . . . . . . . . . . . . . . . . 309\nCreating a User-Access Security Policy . . . . . . . . . . . . . . . 309\nCreating a User-Termination Security Policy . . . . . . . . . . . 310\nChapter 13 \nSecure Remote Passwords . . . . . . . . . . . . . . . . . . 313\nSetting Up Secure Remote Password Support . . . . . . . . . . . 313\nEstablishing Exponential Password System (EPS) . . . . . . . 314\nUsing the EPS PAM module for password authentication . . . . . . 315\nConverting standard passwords to EPS format . . . . . . . . . . . . . . 316\nUsing SRP-Enabled Telnet Service . . . . . . . . . . . . . . . . . . . 317\nUsing SRP-enabled Telnet clients \nfrom non-Linux platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319\nUsing SRP-Enabled FTP Service . . . . . . . . . . . . . . . . . . . . . 319\nUsing SRP-enabled FTP clients \nfrom non-Linux platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322\nContents\nxvii\n" }, { "page_number": 19, "text": "Chapter 14 \nxinetd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323\nWhat Is xinetd? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323\nSetting Up xinetd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325\nGetting xinetd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325\nCompiling and installing xinetd . . . . . . . . . . . . . . . . . . . . . . . . . 325\nConfiguring xinetd for services . . . . . . . . . . . . . . . . . . . . . . . . . . 329\nStarting, Reloading, and Stopping xinetd . . . . . . . . . . . . . 333\nStrengthening the Defaults in /etc/xinetd.conf . . . . . . . . . 334\nRunning an Internet Daemon Using xinetd . . . . . . . . . . . . 335\nControlling Access by Name or IP Address . . . . . . . . . . . . 337\nControlling Access by Time of Day . . . . . . . . . . . . . . . . . . 338\nReducing Risks of Denial-of-Service Attacks . . . . . . . . . . . 
338\nLimiting the number of servers . . . . . . . . . . . . . . . . . . . . . . . . . . 338\nLimiting log file size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339\nLimiting load . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339\nLimiting the rate of connections . . . . . . . . . . . . . . . . . . . . . . . . . 340\nCreating an Access-Discriminative Service . . . . . . . . . . . . 341\nRedirecting and Forwarding Clients . . . . . . . . . . . . . . . . . . 342\nUsing TCP Wrapper with xinetd . . . . . . . . . . . . . . . . . . . . . 345\nRunning sshd as xinetd . . . . . . . . . . . . . . . . . . . . . . . . . . . 345\nUsing xadmin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346\nPart IV\nNetwork Service Security \nChapter 15 \nWeb Server Security . . . . . . . . . . . . . . . . . . . . . . 351\nUnderstanding Web Risks . . . . . . . . . . . . . . . . . . . . . . . . . 351\nConfiguring Sensible Security for Apache . . . . . . . . . . . . . 352\nUsing a dedicated user and group for Apache . . . . . . . . . . . . . . . 352\nUsing a safe directory structure . . . . . . . . . . . . . . . . . . . . . . . . . . 352\nUsing appropriate file and directory permissions . . . . . . . . . . . . 354\nUsing directory index file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356\nDisabling default access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358\nDisabling user overrides . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358\nUsing Paranoid Configuration . . . . . . . . . . . . . . . . . . . . . . 359\nReducing CGI Risks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360\nInformation leaks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360\nConsumption of system resources . . . . . . . . . . . . . . . . . . . . . . . . 360\nSpoofing of system commands via CGI scripts . . . . . . . . . . . . . . 361\nKeeping user input from making system calls unsafe . . . . . . . . . 361\nUser modification of hidden data in HTML pages . . . . . . . . . . . . 366\nWrapping CGI Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372\nsuEXEC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372\nCGIWrap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375\nHide clues about your CGI scripts . . . . . . . . . . . . . . . . . . . . . . . . 377\nxviii\nContents\n" }, { "page_number": 20, "text": "Reducing SSI Risks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378\nLogging Everything . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379\nRestricting Access to Sensitive Contents . . . . . . . . . . . . . . 382\nUsing IP or hostname . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382\nUsing an HTTP authentication scheme . . . . . . . . . . . . . . . . . . . . 385\nControlling Web Robots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390\nContent Publishing Guidelines . . . . . . . . . . . . . . . . . . . . . . 392\nUsing Apache-SSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394\nCompiling and installing Apache-SSL patches . . . . . . . . . . . . . . 394\nCreating a certificate for your Apache-SSL server . . . . . . . . . . . . 395\nConfiguring Apache for SSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396\nTesting the SSL connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
398\nChapter 16 \nDNS Server Security . . . . . . . . . . . . . . . . . . . . . . 399\nUnderstanding DNS Spoofing . . . . . . . . . . . . . . . . . . . . . . 399\nChecking DNS Configuring Using Dlint . . . . . . . . . . . . . . . 400\nGetting Dlint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401\nInstalling Dlint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401\nRunning Dlint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402\nSecuring BIND . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405\nUsing Transaction Signatures (TSIG) for zone transfers . . . . . . . . 405\nRunning BIND as a non-root user . . . . . . . . . . . . . . . . . . . . . . . . 409\nHiding the BIND version number . . . . . . . . . . . . . . . . . . . . . . . . 409\nLimiting Queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410\nTurning off glue fetching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411\nchrooting the DNS server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412\nUsing DNSSEC (signed zones) . . . . . . . . . . . . . . . . . . . . . . . . . . . 412\nChapter 17 \nE-Mail Server Security . . . . . . . . . . . . . . . . . . . . 415\nWhat Is Open Mail Relay? . . . . . . . . . . . . . . . . . . . . . . . . . 415\nIs My Mail Server Vulnerable? . . . . . . . . . . . . . . . . . . . . . . 417\nSecuring Sendmail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419\nControlling mail relay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422\nEnabling MAPS Realtime Blackhole List (RBL) support . . . . . . . . 425\nSanitizing incoming e-mail using procmail . . . . . . . . . . . . . . . . 429\nOutbound-only Sendmail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437\nRunning Sendmail without root privileges . . . . . . . . . . . . . . . . . 438\nSecuring Postfix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440\nKeeping out spam . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440\nHiding internal e-mail addresses by masquerading . . . . . . . . . . . 442\nChapter 18 \nFTP Server Security . . . . . . . . . . . . . . . . . . . . . . . 443\nSecuring WU-FTPD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443\nRestricting FTP access by username . . . . . . . . . . . . . . . . . . . . . . 445\nSetting default file permissions for FTP . . . . . . . . . . . . . . . . . . . 447\nContents\nxix\n" }, { "page_number": 21, "text": "Using a chroot jail for FTP sessions . . . . . . . . . . . . . . . . . . . . . . 448\nSecuring WU-FTPD using options in /etc/ftpaccess . . . . . . . . . . . 452\nUsing ProFTPD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455\nDownloading, compiling, and installing ProFTPD . . . . . . . . . . . . 456\nConfiguring ProFTPD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456\nMonitoring ProFTPD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462\nSecuring ProFTPD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462\nChapter 19 \nSamba and NFS Server Security . . . . . . . . . . . . . 473\nSecuring Samba Server . . . . . . . . . . . . . . . . . . . . . . . . . . . 473\nChoosing an appropriate security level . . . . . . . . . . . . . . . . . . . . 473\nAvoiding plain-text passwords . . . . . . . . . . . . . . . . . . . . . . . . . . 
476\nAllowing access to users from trusted domains . . . . . . . . . . . . . . 477\nControlling Samba access by network interface . . . . . . . . . . . . . 477\nControlling Samba access by hostname or IP addresses . . . . . . . 478\nUsing pam_smb to authenticate all users \nvia a Windows NT server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479\nUsing OpenSSL with Samba . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481\nSecuring NFS Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483\nUsing Cryptographic Filesystems . . . . . . . . . . . . . . . . . . . . 487\nPart V\nFirewalls \nChapter 20 \nFirewalls, VPNs, and SSL Tunnels . . . . . . . . . . . . 491\nPacket-Filtering Firewalls . . . . . . . . . . . . . . . . . . . . . . . . . 491\nEnabling netfilter in the kernel . . . . . . . . . . . . . . . . . . . . . . . . . . 496\nCreating Packet-Filtering Rules with iptables . . . . . . . . . . . 498\nCreating a default policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498\nAppending a rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498\nListing the rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499\nDeleting a rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500\nInserting a new rule within a chain . . . . . . . . . . . . . . . . . . . . . . . 500\nReplacing a rule within a chain . . . . . . . . . . . . . . . . . . . . . . . . . . 500\nCreating SOHO Packet-Filtering Firewalls . . . . . . . . . . . . . 501\nAllowing users at private network access \nto external Web servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504\nAllowing external Web browsers access to a Web server \non your firewall . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505\nDNS client and cache-only services . . . . . . . . . . . . . . . . . . . . . . 506\nSMTP client service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508\nPOP3 client service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508\nPassive-mode FTP client service . . . . . . . . . . . . . . . . . . . . . . . . . 509\nSSH client service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510\nOther new client service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510\nxx\nContents\n" }, { "page_number": 22, "text": "Creating a Simple Firewall . . . . . . . . . . . . . . . . . . . . . . . . . 511\nCreating Transparent, proxy-arp Firewalls . . . . . . . . . . . . . 512\nCreating Corporate Firewalls . . . . . . . . . . . . . . . . . . . . . . . 514\nPurpose of the internal firewall . . . . . . . . . . . . . . . . . . . . . . . . . . 515\nPurpose of the primary firewall . . . . . . . . . . . . . . . . . . . . . . . . . . 515\nSetting up the internal firewall . . . . . . . . . . . . . . . . . . . . . . . . . . 516\nSetting up the primary firewall . . . . . . . . . . . . . . . . . . . . . . . . . . 518\nSecure Virtual Private Network . . . . . . . . . . . . . . . . . . . . . 528\nCompiling and installing FreeS/WAN . . . . . . . . . . . . . . . . . . . . . 529\nCreating a VPN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530\nStunnel: A Universal SSL Wrapper . . . . . . . . . . . . . . . . . . 536\nCompiling and installing Stunnel . . . . . . . . . . . . . . . . . . . . . . . . 536\nSecuring IMAP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. 536\nSecuring POP3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538\nSecuring SMTP for special scenarios . . . . . . . . . . . . . . . . . . . . . . 539\nChapter 21 \nFirewall Security Tools . . . . . . . . . . . . . . . . . . . . 541\nUsing Security Assessment (Audit) Tools . . . . . . . . . . . . . . 541\nUsing SAINT to Perform a Security Audit . . . . . . . . . . . . . . . . . . 541\nSARA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549\nVetesCan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550\nUsing Port Scanners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550\nPerforming Footprint Analysis Using nmap . . . . . . . . . . . . . . . . 550\nUsing PortSentry to Monitor Connections . . . . . . . . . . . . . . . . . . 552\nUsing Nessus Security Scanner . . . . . . . . . . . . . . . . . . . . . . . . . . 558\nUsing Strobe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561\nUsing Log Monitoring and Analysis Tools . . . . . . . . . . . . . 562\nUsing logcheck for detecting unusual log entries . . . . . . . . . . . . 562\nSwatch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565\nIPTraf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565\nUsing CGI Scanners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566\nUsing cgichk.pl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566\nUsing Whisker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 568\nUsing Malice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 569\nUsing Password Crackers . . . . . . . . . . . . . . . . . . . . . . . . . . 569\nJohn The Ripper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570\nCrack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571\nUsing Intrusion Detection Tools . . . . . . . . . . . . . . . . . . . . . 571\nTripwire . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571\nLIDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571\nUsing Packet Filters and Sniffers . . . . . . . . . . . . . . . . . . . . 572\nSnort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572\nGShield . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575\nContents\nxxi\n" }, { "page_number": 23, "text": "Useful Utilities for Security Administrators . . . . . . . . . . . . 575\nUsing Netcat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575\nTcpdump . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580\nLSOF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581\nNgrep . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586\nAppendix A \nIP Network Address Classification . . . . . . . . . . . . 589\nAppendix B \nCommon Linux Commands . . . . . . . . . . . . . . . . . 593\nAppendix C \nInternet Resources . . . . . . . . . . . . . . . . . . . . . . . . 655\nAppendix D \nDealing with Compromised Systems . . . . . . . . . . 661\nAppendix E \nWhat’s On the CD-ROM? . . . . . . . . . . . . . . . . . . . 665\nIndex. . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 669
End-User License Agreement . . . . . . . . . . . . . . . . . . . . 691
" }, { "page_number": 24, "text": "Part I
System Performance
CHAPTER 1
Performance Basics
CHAPTER 2
Kernel Tuning
CHAPTER 3
Filesystem Tuning
" }, { "page_number": 25, "text": "" }, { "page_number": 26, "text": "Chapter 1
Performance Basics
IN THIS CHAPTER
N Assessing system performance accurately
N Taking your system’s pulse with ps
N Measuring system activity with top
N Checking memory, input, and output with vmstat
N Analyzing with Vtad
RED HAT LINUX is a great operating system for extracting the last bit of performance from your computer system, whether it’s a desktop unit or a massive corporate network. In a networked environment, optimal performance takes on a whole new dimension — the efficient delivery of security services — and the system administrator is the person expected to deliver. If you’re like most system administrators, you’re probably itching to start tweaking — but before you do, you may want to take a critical look at the whole concept of “high performance.”
Today’s hardware and bandwidth — fast and relatively cheap — have spoiled many of us. The long-running craze to buy the latest computer “toy” has lowered hardware pricing; the push to browse the Web faster has lowered bandwidth pricing while increasing its carrying capacity. Today, you can buy 1.5GHz systems with 4GB of RAM and hundreds of GB of disk space (ultra-wide SCSI 160, at that) without taking a second mortgage on your house. Similarly, about $50 to $300 per month can buy you a huge amount of bandwidth in the U.S. — even in most metropolitan homes.
Hardware and bandwidth have become commodities in the last few years — but are we all happy with the performance of our systems? Most users are likely to agree that even with phenomenal hardware and bandwidth, their computers just don’t seem that fast anymore — but how many people distinguish between two systems that seem exactly the same except for processor speed? Unless you play demanding computer games, you probably wouldn’t notice much difference between 300MHz and 500MHz when you run your favorite word processor or Web browser.
Actually, much of what most people accept as “high performance” is based on their human perception of how fast the downloads take place or how crisp the video on-screen looks. Real measurement of performance requires accurate tools and repeated sampling of system activity. In a networked environment, the need for such measurement increases dramatically; for a network administrator, it’s indispensable.
" }, { "page_number": 27, "text": "Accordingly, this chapter introduces a few simple but useful tools that measure and monitor system performance. Using their data, you can build a more sophisticated perception of how well your hardware actually performs. When you’ve established a reliable baseline for your system’s performance, you can tune it to do just what you want done — starting with the flexibility of the Red Hat Linux operating system, and using its advantages as you configure your network to be fast, efficient, and secure.
Measuring System Performance
A good introduction to the use of Linux tools to measure and monitor system performance is to start with ps, top, vmstat, and Vtad. 
These programs are easy to find, easy to use, and illustrate the kinds of information an administrator needs to keep an eye on.
Monitoring system performance with ps
Having a realistic idea of what’s running is always the first step in monitoring system performance. The ps Linux utility monitors the processes that are running on your system; you can tell the utility how many (or how few) to monitor.
The ps utility shows not only each process, but also how much memory it’s using — as well as how much CPU time, which user owns the process, and many other handy bits of data. A sample of the ps command’s output looks like this:
PID TTY TIME CMD
4406 pts/1 00:00:00 su
4407 pts/1 00:00:00 bash
4480 pts/1 00:00:00 ps
Here ps reports that three programs are running under the current user ID: su, bash, and ps itself. If you want a list of all the processes running on your system, you can run ps aux to get one. A sample of the ps aux command’s output (abbreviated, of course) looks like this:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.1 0.1 1324 532 ? S 10:58 0:06 init [3]
root 2 0.0 0.0 0 0 ? SW 10:58 0:00 [kflushd]
root 3 0.0 0.0 0 0 ? SW 10:58 0:00 [kupdate]
root 4 0.0 0.0 0 0 ? SW 10:58 0:00 [kpiod]
root 5 0.0 0.0 0 0 ? SW 10:58 0:00 [kswapd]
root 6 0.0 0.0 0 0 ? SW< 10:58 0:00 [mdrecoveryd]
root 45 0.0 0.0 0 0 ? SW 10:58 0:00 [khubd]
root 349 0.0 0.1 1384 612 ? S 10:58 0:00 syslogd -m 0
root 359 0.0 0.1 1340 480 ? S 10:58 0:00 klogd
rpc 374 0.0 0.1 1468 576 ? S 10:58 0:00 portmap
[Remaining lines omitted]
" }, { "page_number": 28, "text": "Sometimes you may want to run ps to monitor a specific process for a certain length of time. For example, say you installed a new Sendmail mail-server patch and want to make sure the server is up and running — and you also want to know whether it uses more than its share of system resources. In such a case, you can combine a few Linux commands to get your answers — like this:
watch --interval=n “ps auxw | grep process_you_want_to_monitor”
For example, you can run watch --interval=30 “ps auxw | grep sendmail”. By running the ps program every 30 seconds, you can see how much of the system’s resources sendmail is using.
A related utility, pstree, displays a tree structure of all the processes running on your system. A sample output of pstree looks like this:
init-+-apmd
|-atd
|-crond
|-identd---identd---3*[identd]
|-kflushd
|-khubd
|-klogd
|-kpiod
|-kswapd
|-kupdate
|-lockd---rpciod
|-lpd
|-mdrecoveryd
|-6*[mingetty]
|-named
|-nmbd
|-portmap
|-rhnsd
|-rpc.statd
|-safe_mysqld---mysqld---mysqld---mysqld
|-sendmail
|-smbd---smbd
|-sshd-+-sshd---bash---su---bash---man---sh---sh-+-groff---grotty
| | `-less
| `-sshd---bash---su---bash---pstree
|-syslogd
|-xfs
`-xinetd
" }, { "page_number": 29, "text": "You can see that the parent of all processes is init. One branch of the tree is created by safe_mysqld, spawning three mysqld daemon processes. The sshd branch shows that the sshd daemon has forked two child daemon processes — which have open bash shells and launched still other processes. The pstree output was generated by one of the sub-branches of the sshd daemon.
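If you want a record that you can review later rather than a live watch display, you can wrap ps in a small shell script that samples a process at regular intervals. The following is a minimal sketch; the process name (sendmail), the sampling interval, and the log file path are assumptions that you should adjust for your own system:
#!/bin/sh
# Sample the CPU and memory usage of a process once a minute
# and append the readings to a log file for later review.
LOG=/tmp/sendmail-usage.log
while true
do
    date >> $LOG
    ps -C sendmail -o pid,%cpu,%mem,rss,args >> $LOG
    sleep 60
done
Leave the script running in the background during a busy period, then inspect the log to see how the process behaved over time.
Tracking system activity with top
This utility monitors system activity interactively. 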
When you run top from a shell\nwindow or an xterm, it displays all the active processes and updates the screen\n(using a user-configurable interval). A sample top session is shown here:\n12:13pm up 1:15, 2 users, load average: 0.05, 0.07, 0.01\n48 processes: 47 sleeping, 1 running, 0 zombie, 0 stopped\nCPU states: 1.1% user, 2.1% system, 0.0% nice, 96.7% idle\nMem: 387312K av, 96876K used, 290436K free, 27192K shrd, 36040K buff\nSwap: 265064K av, 0K used, 265064K free 34236K cached\nPID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND\n6748 kabir 15 0 1032 1032 832 R 0.9 0.2 0:00 top\n1 root 0 0 532 532 468 S 0.0 0.1 0:06 init\n2 root 0 0 0 0 0 SW 0.0 0.0 0:00 kflushd\n3 root 0 0 0 0 0 SW 0.0 0.0 0:00 kupdate\n4 root 0 0 0 0 0 SW 0.0 0.0 0:00 kpiod\n5 root 0 0 0 0 0 SW 0.0 0.0 0:00 kswapd\n6 root -20 -20 0 0 0 SW< 0.0 0.0 0:00 mdrecoveryd\n45 root 0 0 0 0 0 SW 0.0 0.0 0:00 khubd\n349 root 0 0 612 612 512 S 0.0 0.1 0:00 syslogd\n359 root 0 0 480 480 408 S 0.0 0.1 0:00 klogd\n374 rpc 0 0 576 576 484 S 0.0 0.1 0:00 portmap\n390 root 0 0 0 0 0 SW 0.0 0.0 0:00 lockd\n391 root 0 0 0 0 0 SW 0.0 0.0 0:00 rpciod\n401 rpcuser 0 0 768 768 656 S 0.0 0.1 0:00 rpc.statd\n416 root 0 0 524 524 460 S 0.0 0.1 0:00 apmd\n470 nobody 0 0 720 720 608 S 0.0 0.1 0:00 identd\n477 nobody 0 0 720 720 608 S 0.0 0.1 0:00 identd\n478 nobody 0 0 720 720 608 S 0.0 0.1 0:00 identd\n480 nobody 0 0 720 720 608 S 0.0 0.1 0:00 identd\n482 nobody 0 0 720 720 608 S 0.0 0.1 0:00 identd\n489 daemon 0 0 576 576 500 S 0.0 0.1 0:00 atd\n504 named 0 0 1928 1928 1152 S 0.0 0.4 0:00 named\n535 root 0 0 1040 1040 832 S 0.0 0.2 0:00 xinetd\n550 root 0 0 1168 1168 1040 S 0.0 0.3 0:00 sshd\n571 lp 0 0 888 888 764 S 0.0 0.2 0:00 lpd\n615 root 0 0 1480 1480 1084 S 0.0 0.3 0:00 sendmail\n650 root 0 0 744 744 640 S 0.0 0.1 0:00 crond\n6\nPart I: System Performance\n" }, { "page_number": 30, "text": "657 root 0 0 912 912 756 S 0.0 0.2 0:00 safe_mysqld\n683 mysql 0 0 1376 1376 1008 S 0.0 0.3 0:00 mysqld\n696 xfs 0 0 2528 2528 808 S 0.0 0.6 0:00 xfs\n704 mysql 0 0 1376 1376 1008 S 0.0 0.3 0:00 mysqld\nBy default, top updates its screen every second — an interval you can change by\nusing the d seconds option. For example, to update the screen every 5 seconds, run\nthe top d 5 command. A 5- or 10-second interval is, in fact, more useful than the\ndefault setting. (If you let top update the screen every second, it lists itself in its\nown output as the main resource consumer.) Properly configured, top can perform\ninteractive tasks on processes.\nIf you press the h key while top is running, you will see the following output\nscreen:\nProc-Top Revision 1.2\nSecure mode off; cumulative mode off; noidle mode off\nInteractive commands are:\nspace Update display\n^L Redraw the screen\nfF add and remove fields\noO Change order of displayed fields\nh or ? 
Print this list
S Toggle cumulative mode
i Toggle display of idle processes
I Toggle between Irix and Solaris views (SMP-only)
c Toggle display of command name/line
l Toggle display of load average
m Toggle display of memory information
t Toggle display of summary information
k Kill a task (with any signal)
r Renice a task
N Sort by pid (Numerically)
A Sort by age
P Sort by CPU usage
M Sort by resident memory usage
T Sort by time / cumulative time
u Show only a specific user
n or # Set the number of process to show
s Set the delay in seconds between updates
W Write configuration file ~/.toprc
q Quit
Press any key to continue
" }, { "page_number": 31, "text": "Using the keyboard options listed in the output shown here, you can
N Control how top displays its output
N Kill a process or task (if you have permission)
Checking memory and I/O with vmstat
The vmstat utility also provides interesting information about processes, memory, I/O, and CPU activity. When you run this utility without any arguments, the output looks similar to the following:
procs memory swap io system cpu
r b w swpd free buff cache si so bi bo in cs us sy id
0 0 0 8 8412 45956 52820 0 0 0 0 104 11 66 0 33
N The procs fields show the number of processes
I Waiting for run time (r)
I Blocked (b)
I Swapped out (w)
N The memory fields show the kilobytes of
I Swap memory
I Free memory
I Buffered memory
I Cached memory
N The swap fields show the kilobytes per second of memory
I Swapped in from disk (si)
I Swapped out to disk (so)
N The io fields show the number of blocks per second
I Sent to block devices (bi)
I Received from block devices (bo)
N The system field shows the number of
I Interrupts per second (in)
I Context switches per second (cs)
" }, { "page_number": 32, "text": "N The cpu field shows the percentage of total CPU time as
I User time (us)
I System time (sy)
I Idle (id) time
If you want vmstat to update information automatically, you can run it as vmstat nsec, where nsec is the number of seconds you want it to wait before another update.
Running Vtad to analyze your system
Vtad is a Perl-based system-analysis tool that uses the /proc filesystem to determine system configuration. You can download Vtad from the following Web address:
www.blakeley.com/resources/vtad
Vtad periodically checks your system performance and prescribes remedies. It uses a default ruleset that provides the following analysis:
N Compare /proc/sys/kernel/shmmax with /proc/meminfo/Mem (physical memory)
If the shared memory takes up less than 10 percent of physical memory, Vtad recommends that you increase your system’s shared memory — usually to 25 percent for a typical system. Doing so helps Web servers like Apache perform file caching.
N Compare the /proc/sys/fs/file-max value against /proc/sys/fs/inode-max
You’re warned if the current values are not ideal. Typically, the Linux kernel allows three to four times as many open inodes as open files.
N Check the /proc/sys/net/ipv4/ip_local_port_range file to confirm that the system has 10,000 to 28,000 local ports available.
This can boost performance if you have many proxy server connections to your server.
The default ruleset also checks for free memory limits, fork rates, disk I/O rates, and IP packet rates.
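You can also spot-check the same /proc values by hand before (or instead of) running Vtad. Here is a minimal sketch of such a manual check; the echo remedy for the port range is the one Vtad itself recommends, it requires root privileges, and you should adjust the numbers to your own needs:
# Shared memory limit versus physical memory
cat /proc/sys/kernel/shmmax
grep MemTotal /proc/meminfo
# Open-file limit versus inode limit
cat /proc/sys/fs/file-max /proc/sys/fs/inode-max
# Local port range; widen it (as root) if too few ports are available
cat /proc/sys/net/ipv4/ip_local_port_range
echo 32768 61000 > /proc/sys/net/ipv4/ip_local_port_range
Once you have downloaded Vtad, you can run it quite easily in a shell or xterm window by using the perl vtad.pl command. 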
Here is a sample output of the script.\nChapter 1: Performance Basics\n9\n" }, { "page_number": 33, "text": "Checking recommendations for /proc/sys/fs/file-max /proc/sys/kernel/osrelease\n/proc/sys/kernel/shmmax /proc/sys/net/ipv4/ip_local_port_range\napache/conf/httpd.conf/MaxRequestsPerChild\nSun May 20 11:15:14 2001 RED (/proc/sys/kernel/shmmax)\nshmmax-to-physical-memory ratio here 0.1\nREMEDY: raise shmmax (echo 8030208 > /proc/kernel/shmmax)\nVTad 1.0b2 running on Linux 2.2\nSun May 20 11:15:14 2001 RED (/proc/sys/net/ipv4/ip_local_port_range)\nrange of local IP port numbers here 28000\nREMEDY: echo 32768 61000 > /proc/sys/net/ip_local_port_range\nChecking /proc/meminfo/MemFree /proc/meminfo/SwapFree /proc/net/snmp/Ip\n/proc/stat/cpu /proc/stat/disk /proc/stat/processes /proc/sys/fs/file-nr\n/proc/sys/fs/inode-nr every 30 seconds.\nSummary\nKnowing how to measure system performance is critical in understanding bottle-\nnecks and performance issues. Using standard Red Hat Linux tools, you can mea-\nsure many aspects of your system’s performance. Tools such as ps, top, and vmstat\ntell you a lot of how a system is performing. Mastering these tools is an important\nstep for anyone interested in higher performance.\n10\nPart I: System Performance\n" }, { "page_number": 34, "text": "Chapter 2\nKernel Tuning\nIN THIS CHAPTER\nN Configuring kernel source\nN Compiling a new kernel\nN Configuring LILO to load the new kernel\nN Allocating file handles for demanding applications \nIF YOU HAVE INSTALLED THE BASIC Linux kernel that Red Hat supplied, probably it\nisn’t optimized for your system. Usually the vendor-provided kernel of any OS is a\n“generalist” rather than a “specialist” — it has to support most installation scenarios.\nFor example, a run-of-the-mill kernel may support both EIDE and SCSI disks (when\nyou need only SCSI or EIDE support). Granted, using a vendor-provided kernel is\nthe straightforward way to boot up your system — you can custom-compile your\nown kernel and tweak the installation process when you find the time. When you\ndo reach that point, however, the topics discussed in this chapter come in handy.\nCompiling and Installing \na Custom Kernel\nThanks to the Linux kernel developers, creating a custom kernel in Linux is a piece\nof cake. A Linux kernel is modular — the features and functions you want can be\ninstalled individually (as modules). Before you pick and choose the functionality of\nyour OS, however, you build a kernel from source code.\nDownloading kernel source code\n(latest distribution)\nThe first step to a customized kernel is to obtain a firm foundation — the stable\nsource code contained in the Linux kernel. \n1. Download the source code from www.kernel.org or one of its mirror sites\n(listed at the main site itself).\n11\n" }, { "page_number": 35, "text": "2. Extract the source in the /usr/src directory.\nKernel source distributions are named linux-version.tar.gz, where\nversion is the version number of the kernel (for example, linux-2.4.1.\ntar.gz).\nIn this chapter,I assume that you have downloaded and extracted (using the\ntar xvzf linux-2.4.1.tar.gz command) the kernel 2.4.1 source dis-\ntribution from the www.kernel.org site.\nCreating the /usr/src/linux symbolic link\nWhen you extract the kernel source (as discussed in the previous section), a new\ndirectory is created. This new directory must be symbolically linked to\n/usr/src/linux. (A symbolic link is a directory entry that points another directory\nentry to another existing directory.) 
The source code expects the /usr/src/linux symbolic link entry to point to the real, top-level source code directory. Here is how you create this symbolic link:
1. Run the ls -l command.
The result shows where /usr/src/linux currently points. The -> in the ls output points to linux-2.4.0. Typically, /usr/src/linux is a symbolic link to the current source distribution of the kernel. For example, on my system, ls -l reports this:
lrwxrwxrwx 1 root root 11 Feb 13 16:21 linux -> linux-2.4.0
If /usr/src/linux is a real directory instead of a symbolic link, its entry begins with drwxrwxrwx — not lrwxrwxrwx — in the ls -l output.
2. Run one of these commands:
I If /usr/src/linux is a symbolic link, run the rm -f linux command.
This removes the symbolic link.
I If /usr/src/linux is a directory, run the command mv linux linux.oldversion (oldversion is the version number of the current kernel).
This renames the old kernel source directory, clearing the way for the installation of the new kernel source.
3. Run the command ln -s /usr/src/linux-2.4.1 linux.
This creates a new symbolic link, linux, that points to the /usr/src/linux-2.4.1 directory.
4. Change your directory path to /usr/src/linux.
At this point you have the kernel source distribution ready for configuration. Now you are ready to select a kernel configuration method.

Distribution versus kernel — what's the "real" version?
New Linux users often get confused when the version numbers of the distribution and the kernel mismatch. Why (they ask) do I keep talking about Linux 2.4 when what they see on the market is (apparently) 7.x? The answer lies in the nature of the open-source concept: Working independently, various programmers have developed the basic kernel of Linux code in diverse directions — like variations on a theme. Each variation has a series of distributions and a body of users to whom it is distributed. Thanks to popular, easy-to-recognize distributions like Red Hat Linux, many newcomers think distribution 7.x of Linux is the "only" — or the "latest" — version (and that everything in it is uniformly "version 7.x" as if it were marketed by Microsoft or Apple). These days (and in this book) I try to overturn that mistaken notion; when I refer to Linux 2.4, I say "Linux kernel 2.4, in distribution 7.x" to be as clear as possible.

Selecting a kernel-configuration method
You can configure a Linux kernel by using one of three commands:
N make config. This method uses the bash shell; you configure the kernel by answering a series of questions prompted on the screen. (This approach may be too slow for advanced users; you can't go back or skip forward.)
N make menuconfig. You use a screen-based menu system (a much more flexible method) to configure the kernel. (This chapter assumes that you use this method.)
N make xconfig. This method, which uses the X Window System (a Linux graphical interface), is geared to the individual user's desktop environment. I do not recommend it for server administrators; the X Window System is too resource-intensive to use on servers (which already have enough to do).
If this isn't the first time you are configuring the kernel, run make mrproper from the /usr/src/linux directory to remove all the existing object files and clean up the source distribution.
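A minimal sketch of that cleanup step, assuming the 2.4.1 source tree and the symbolic link created above:

cd /usr/src/linux
make mrproper     # removes old object files and any previous configuration

Keep in mind that make mrproper also deletes an existing .config file, so save a copy first if you want to reuse an old configuration.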
Then, from the /usr/src/linux\ndirectory — which is a symbolic link to the Linux kernel (in this example,\n/usr/src/linux-2.4.1) — run the make menuconfig command to\nconfigure Linux.\nUsing menuconfig\nWhen you run the make menuconfig command, it displays a list of submenus in a\nmain menu screen. The result looks like this:\nCode maturity level options --->\nLoadable module support --->\nProcessor type and features --->\nGeneral setup --->\nMemory Technology Devices (MTD) --->\nParallel port support --->\nPlug and Play configuration --->\nBlock devices --->\nMulti-device support (RAID and LVM) --->\nNetworking options --->\nTelephony Support --->\nATA/IDE/MFM/RLL support --->\nSCSI support --->\nI2O device support --->\nNetwork device support --->\nAmateur Radio support --->\nIrDA (infrared) support --->\nISDN subsystem --->\nOld CD-ROM drivers (not SCSI, not IDE) --->\nInput core support --->\nCharacter devices --->\nMultimedia devices --->\nFile systems --->\nConsole drivers --->\nSound --->\nUSB support --->\nKernel hacking --->\n---\nLoad an Alternate Configuration File\nSave Configuration to an Alternate File\n14\nPart I: System Performance\n" }, { "page_number": 38, "text": "In the preceding list, ---> indicates a submenu, which you may also find within\na top-level submenu (such as Network device support menu). \nN Use Up and Down arrow keys on your keyboard to navigate the sub-\nmenus. Press the Enter key to select a menu. \nN Press the space bar to toggle a highlighted option on or off.\nCODE MATURITY LEVEL OPTIONS\nThe very first submenu, Code maturity level options, is the first one to set. This\noption instructs the menuconfig program to hide or display experimental kernel\nfeatures. Though often interesting to the programmer, experimental features are not\nyet considered mature (stable) code.\nSelecting Prompt for development and/or incomplete code/drivers (by pressing\nthe spacebar to put an asterisk between the square brackets next to the option) dis-\nplays many experimental — potentially unreliable — features of the latest kernel.\nThen they show up in other submenu options. If you don’t plan to implement these\nrisky options, why display them?\nMaking this call is harder than it may seem. Experimental features could offer\ninteresting new capabilities; at the same time, you don’t want to put anything\nunreliable on your system. So here’s the rule that I use:\nN Don’t select this option if the system is \nI\nA production server\nI\nThe only system in your home or organization \nUse only mature code if a system must be reliable.\nN If the machine you’re configuring isn’t critical to your home or business,\nyou can enable this option to experiment with new kernel features.\nAny organization that depends on Linux should have at least one separate\nexperimental Linux system so administrators can try new Linux features\nwithout fearing data losses or downtime.\nChapter 2: Kernel Tuning\n15\n" }, { "page_number": 39, "text": "LOADABLE MODULE SUPPORT\nLoadable module support should have all options selected by default, because you\nwill take advantage of Linux kernel’s modular design. \nIn this chapter, I show you how you can build certain features in two forms:\nN Modules \nWhen you compile a feature as a kernel module, it is only loaded when\nneeded. 
\nThe make menuconfig based kernel configuration interface shows this\noption as [M] next to a feature when you use the space bar to select the\noption.\nN Within the kernel binary\nWhen you choose to compile a feature part of the kernel, it becomes part\nof the kernel image. This means that this feature is always loaded in the\nkernel. \nThe make menuconfig based kernel configuration interface shows this\noption as [*] next to a feature when you use the space bar to select the\noption.\nHARDWARE\nThink of kernel as the interface to your hardware. The better it is tuned to your\nhardware, the better your system works. The following hardware-specific options\nprovide optimal configuration for your system.\nBecause most Linux users run Intel hardware, I focus on Intel-specific\noptions throughout the chapter. I also assume that you use fairly modern\nhardware (less than two years old).\n16\nPart I: System Performance\n" }, { "page_number": 40, "text": "CPU SUPPORT\nLinux kernel can be configured for the Intel x86 instruction set on\nthese CPUs: \nN “386” for \nI\nAMD/Cyrix/Intel 386DX/DXL/SL/SLC/SX\nI\nCyrix/TI486DLC/DLC2\nI\nUMC 486SX-S \nI\nNexGen Nx586\nOnly “386”kernels run on a 386-class machine.\nN “486” for \nI AMD/Cyrix/IBM/Intel 486DX/DX2/DX4 \nI AMD/Cyrix/IBM/Intel SL/SLC/SLC2/SLC3/SX/SX2\nI UMC U5D or U5S\nN “586” for generic Pentium CPUs, possibly lacking the TSC (time stamp\ncounter) register.\nN “Pentium-Classic” for the Intel Pentium.\nN “Pentium-MMX” for the Intel Pentium MMX.\nN “Pentium-Pro” for the Intel Pentium Pro/Celeron/Pentium II.\nN “Pentium-III” for the Intel Pentium III.\nN “Pentium-4” for the Intel Pentium 4\nN “K6” for the AMD K6, K6-II and K6-III (also known as K6-3D).\nN “Athlon” for the AMD Athlon (K7).\nN “Crusoe” for the Transmeta Crusoe series.\nN “Winchip-C6” for original IDT Winchip.\nN “Winchip-2” for IDT Winchip 2.\nN “Winchip-2A” for IDT Winchips with 3dNow! capabilities.\nChapter 2: Kernel Tuning\n17\n" }, { "page_number": 41, "text": "You can find your processor by running the command cat /proc/cpuinfo in\nanother window. The following code is a sample output from this command.\nprocessor : 0\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 8\nmodel name : Pentium III (Coppermine)\nstepping : 1\ncpu MHz : 548.742\ncache size : 256 KB\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 2\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov pat\npse36 mmx fxsr sse\nbogomips : 1094.45\nThe first line in the preceding code shows how many processors you have in the\nsystem. (0 represents a single processor, 1 is two processors, and so on.) “Model\nname” is the processor name that should be selected for the kernel.\nChoosing a specific processor prevents this kernel from running on an x86\nsystem without the same processor.If you compile the kernel to support the\ndefault x386 processor, just about any modern x86 machine (386 or higher)\ncan run the kernel but not necessarily as efficiently as possible. Unless you\nare compiling the kernel for wide use,choosing a particular CPU is best.\nFollow these steps to select the appropriate CPU support: \n1. Select the Processor type and features submenu from the main menu. \nThe first option in the submenu is the currently chosen processor for your\nsystem. If the chosen processor isn’t your exact CPU model, press the\nenter key to see the list of supported processors. \n2. Select the math emulation support. 
\nIf you use a Pentium-class machine, math emulation is unnecessary. Your\nsystem has a math co-processor. \n18\nPart I: System Performance\n" }, { "page_number": 42, "text": "If you don’t know whether your system has a math co-processor,run the cat\n/proc/cpuinfo and find the fpu column.If you see ‘yes’next to fpu,you have a\nmath coprocessor (also known as an fpu,or floating-point unit).\nIf you have a Pentium Pro; Pentium II or later model Intel CPU; or an Intel\nclone such as Cyrix 6x86, 6x86MX AMD K6-2 (stepping 8 and above), and\nK6-3, enable the Memory Type Range Register (MTRR) support by choosing\nthe Enable MTRR for PentiumPro/II/III and newer AMD K6-2/3 systems\noption.MTRR support can enhance your video performance.\n3. If you have a system with multiple CPUs and want to use multiple CPUs\nusing the symmetric multiprocessing support in the kernel, enable the\nSymmetric multi-processing (SMP) support option.\nWhen you use SMP support,you can’t use the advanced power manage-\nment option.\nMEMORY MODEL\nThis tells the new kernel how much RAM you have or plan on\nadding in the future.\nThe Intel 32-bit address space enables a maximum of 4GB of memory to be used.\nHowever, Linux can use up to 64GB by turning on Intel Physical Address Extension\n(PAE) mode on Intel Architecture 32-bit (IA32) processors such as Pentium Pro,\nPentium II, and Pentium III. In Linux terms, memory above 4GB is high memory.\nTo enable appropriate memory support, follow these steps:\n1. From the main menu, select Processor type and features submenu\n2. Select High Memory Support option.\nTo determine which option is right for your system,you must know the\namount of physical RAM you currently have and will add (if any).\nChapter 2: Kernel Tuning\n19\n" }, { "page_number": 43, "text": "You have three choices:\nI If you never plan on getting more than 1GB for your machine, you\ndon’t need high memory support. Choose the off option.\nI If the machine will have a maximum of 4GB of RAM and currently has\n1GB or more, choose the 4GB option.\nI If the machine has more than 4GB of RAM now and you plan on\nadding more in the future, choose the 64GB option.\nWhen the new kernel is built, memory should be auto-detected. To find how\nmuch RAM is seen by the kernel, run cat /proc/meminfo, which displays output\nas shown below.\nMem: 393277440 308809728 84467712 0 64643072 111517696\nSwap: 271392768 0 271392768\nMemTotal: 384060 kB\nMemFree: 82488 kB\nMemShared: 0 kB\nBuffers: 63128 kB\nCached: 108904 kB\nActive: 5516 kB\nInact_dirty: 166516 kB\nInact_clean: 0 kB\nInact_target: 16 kB\nHighTotal: 0 kB\nHighFree: 0 kB\nLowTotal: 384060 kB\nLowFree: 82488 kB\nSwapTotal: 265032 kB\nSwapFree: 265032 kB\nIn the preceding list, MemTotal shows the total memory seen by kernel. In this\ncase, it’s 384060 kilobytes (384MB). Make sure your new kernel reports the amount\nof memory you have installed. If you see a very different number, try rebooting the\nkernel and supplying mem=“nnnMB” at the boot prompt (nnn is the amount of\nmemory in MB). For example, if you have 2GB of RAM, you can enter\nmem=“2048MB” at the LILO prompt. Here’s an example of such a prompt:\nLilo: linux mem=”2048MB”\nDISK SUPPORT\nHard disks are generally the limiting factor in a system’s perfor-\nmance. Therefore, choosing the right disk for your system is quite important.\nGenerally, there are three disk technologies to consider:\n20\nPart I: System Performance\n" }, { "page_number": 44, "text": "N EIDE/IDE/ATA\nEIDE/IDE/ATA are the most common disk drives. 
\nI They’re cheaper than the other two types. \nI They’re slower than the other two types, so they’re usually used in\nhome or desktop environments where massive disk I/O isn’t common.\nFortunately, EIDE disks are becoming faster.\nN SCSI\nSCSI rules in the server market. A server system without SCSI disks is\nunthinkable to me and many other server administrators.\nN Fiber Channel \nFiber Channel disk is the hottest, youngest disk technology and not widely\nused for reasons such as extremely high price and interconnectivity issues\nassociated with fiver technology. However, Fiber Channel disks are taking\nmarket share from SCSI in the enterprise or high-end storage area networks.\nIf you need Fiber Channel disks, you need to consider a very high-end disk\nsubsystem such as a storage area network (SAN) or a storage appliance. \nChoosing a disk for a system (desktop or server) becomes harder due to the buzz-\nwords in the disk technology market. Table 2-1 explains common acronyms.\nTABLE 2-1: COMMON DISK TECHNOLOGY\nCommon Terms \nMeaning\nStandard Name\nIDE\nIntegrated Disk Electronics.\nATA -1 \nATA\nAT Attachment. \nATA is the superset of the\nIDE specifications.\nFast-IDE or Fast-ATA\nSecond generation IDE.\nATA-2\nEIDE\nEnhanced IDE. It provides support \nATA-3\nfor larger disks, more disks \n(4 instead of 2), and for other \nmass storage units such as tapes \nand CDs.\nUltraDMA/33 or UDMA/33\nUsing fast direct memory access \nATA-4\n(DMA) controller, this type of disk \nprovides faster and more CPU \nnon-intensive transfer rates.\nContinued\nChapter 2: Kernel Tuning\n21\n" }, { "page_number": 45, "text": "TABLE 2-1: COMMON DISK TECHNOLOGY (Continued)\nCommon Terms \nMeaning\nStandard Name\nATAPI\nATA Packet Interface. It’s a protocol \nused by EIDE tape and CD-ROM \ndrives, similar in many respects to \nthe SCSI protocol.\nSCSI or narrow SCSI\nSmall Computer System Interface. \nSCSI-1\nThe initial implementation of SCSI \nwas designed primarily for narrow \n(8-bit), single-ended, synchronous \nor asynchronous disk drives and \nwas very limited relative to today’s \nSCSI. It includes synchronous and \nasynchronous data transfers at \nspeeds up to 5MB per second.\nFast SCSI or Fast-10\nFast SCSI uses 10 MHz bus instead \nSCSI-2\nof 5 MHz bus used in narrow SCSI. \nOn an 8-bit (narrow) SCSI-bus, this \nincreases the theoretical maximum \nspeed from 5MB per second to 10MB \nper second. A 16-bit (wide) bus can \nhave a transfer rate up to 20MB \nper second.\nUltra or Fast-20 SCSI\nSynchronous data transfer option, \nSCSI-3\nwhich enables up to 20 MHz data \nclocking on the bus. 40MB per \nsecond for 16-bit (wide) bus \n(Ultra Wide SCSI).\nUltra 2 or Fast-40 SCSI\nSynchronous data transfer option, \nSCSI-3\nwhich enables up to 40 MHz data \nclocking on the bus. 80MB per \nsecond for 16-bit (wide) bus \n(Ultra2 Wide SCSI)\nUltra 3 or Ultra160 \n160MB per second for wide bus.\nSCSI-3\nor Fast-80\nMost people either use IDE/EIDE hard disks or SCSI disks. Only a few keep both\ntypes in the same machine, which isn’t a problem. If you only have one of these\n22\nPart I: System Performance\n" }, { "page_number": 46, "text": "two in your system, enable support for only the type you need unless you plan on\nadding the other type in the future.\nIf you use at least one EIDE/IDE/ATA hard disk, follow these steps:\n1. Select the ATA/IDE/MFM/RLL support option from the main menu and\nenable the ATA/IDE/MFM/RLL support option by including it as a module.\n2. 
If you use at least one EIDE/IDE/ATA hard disk, follow these steps:
1. Select the ATA/IDE/MFM/RLL support option from the main menu and enable the ATA/IDE/MFM/RLL support option by including it as a module.
2. Select the IDE, ATA, and ATAPI Block devices submenu and enable the Generic PCI IDE chipset support option.
3. If your disk has direct memory access (DMA) capability, then:
I Select the Generic PCI bus-master DMA support option.
I Select the Use PCI DMA by default when available option to make use of direct memory access automatically.
Chapter 3 details how to tune EIDE/IDE/ATA disks with hdparm.
You see a lot of options for chipset support. Unless you know your chipset and find it in the list, ignore these options.
If you use at least one SCSI disk, follow these steps:
1. Select the SCSI support submenu and choose SCSI support from the submenu as a module.
2. Select the SCSI disk support option as a module.
3. Select support for any other type of SCSI device you have, such as a tape drive or CD.
4. Select the SCSI low-level drivers submenu, and then select the appropriate driver for your SCSI host adapter.
5. Disable Probe all LUNs because it can hang the kernel with some SCSI hardware.
6. Disable Verbose SCSI error reporting.
7. Disable SCSI logging facility.
If you will use only one type of disk (either EIDE/IDE/ATA or SCSI), disabling kernel support for the other disk type saves memory.
PLUG AND PLAY DEVICE SUPPORT
If you have Plug and Play (PNP) devices in your system, follow these steps to enable PNP support in the kernel:
1. Select the Plug and Play configuration submenu.
2. Select all options in the submenu to enable Plug and Play hardware support.
BLOCK DEVICE SUPPORT
To enable support for block devices in the kernel, follow these steps:
1. Select the Block devices submenu.
2. Select the appropriate block devices you have.
For most systems, the Normal PC floppy disk support is sufficient.
If you want to use RAM as a filesystem, RAM disk support isn't best. Instead, enable Simple RAM-based filesystem support under the File systems submenu.
3. If a regular file will be a filesystem, enable the loopback device support.
A loopback device, such as loop0, enables you to mount an ISO 9660 image file (CD filesystem), then explore it from a normal filesystem (such as ext2).
NETWORK DEVICE SUPPORT
To enable network device support in the kernel, select the Network device support submenu and choose the Network device support option for your network.
N If you connect your system to an Ethernet (10 or 100 Mbps), select the Ethernet (10 or 100 Mbps) submenu, choose Ethernet (10 or 100 Mbps) support, and implement one of these options:
I If your network interface card vendor is listed in the Ethernet (10 or 100 Mbps) support menu, select the vendor from that menu.
I If your PCI-based NIC vendor isn't listed in the Ethernet (10 or 100 Mbps) support menu, select your vendor in the EISA, VLB, PCI and on-board controllers option list.
\nIf you don’t find your PCI NIC vendor in the Ethernet (10 or 100 Mbps) sup-\nport menu or the EISA,VLB, PCI and on-board controllers option list, choose\nthe PCI NE2000 and clones support option.\nI If your ISA NIC vendor isn’t listed in the Ethernet (10 or 100 Mbps)\nsupport menu, select your vendor in the Other ISA cards option.\nIf you don’t find your ISA NIC vendor in the Ethernet (10 or 100 Mbps) sup-\nport menu or the Other ISA cards option list, choose the NE2000/NE1000\nsupport option.\nN If you have at least one gigabit (1000 Mbps) adapter, choose the Ethernet\n(1000 Mbps) submenu and select your gigabit NIC vendor.\nN If you have the hardware to create a wireless LAN, select the Wireless LAN\nsupport and choose appropriate wireless hardware.\nUSB SUPPORT\nIf you have at least one USB device to connect to your Linux sys-\ntem, select the USB support and choose the appropriate options for such features as\nUSB audio/multimedia, modem, and imaging devices.\nUNIVERSAL SYSTEM OPTIONS \nThese configuration options apply for servers, desktops, and laptops.\nNETWORKING SUPPORT\nEven if you don’t want to network the system, you must\nconfigure the networking support from the General setup submenu using the\nNetworking support option. (Some programs assume that kernel has networking\nsupport. By default, networking support is built into the kernel.)\nChapter 2: Kernel Tuning\n25\n" }, { "page_number": 49, "text": "26\nPart I: System Performance\nCheck the Networking options submenu to confirm that these options are\nenabled; enable them if they aren’t already enabled:\nN TCP/IP networking \nN Unix domain socket support \nPCI SUPPORT\nMost modern systems use PCI bus to connect to many devices. If\nPCI support isn’t enabled on the General setup submenu, enable it.\nSYSTEM V IPC AND SYSCTL SUPPORT\nInter Process Communication (IPC) is a\nmechanism that many Linux applications use to communicate with one another. If\nthe System V IPC option isn’t enabled on the General setup submenu, enable it. \nThe sysctl interface is used to dynamically manipulate many kernel parameters.\nIf the Sysctl support option isn’t enabled on the General setup menu, enable it.\nCONSOLE SUPPORT\nThe system console is necessary for a Linux system that\nneeds to be managed by a human, whether the system is a server, desktop, or lap-\ntop. The system console\nN Receives all kernel messages and warnings \nN Enables logins in single-user mode\nTo customize console support, apply these options:\nN Choose the Console drivers submenu, then select the VGA text console\noption.\nN If you want to choose video mode during boot up, apply these steps:\nI Select Video mode selection support option \nI Enter vga=ask option to the LILO prompt during the boot up process\nYou can add this option to the /etc/lilo.conf file and rerun LILO using the\n/sbin/lilo command.\nCHARACTER DEVICE SUPPORT\nYou need virtual terminals on the console to\naccess your system via shell programs. Select virtual terminal support from the\ncharacter devices submenu.\nN Select the character devices submenu and enable Virtual terminals option. 
\n" }, { "page_number": 50, "text": "Most users want to enable the Support for console on virtual terminal\noption.\nN If you have serial devices (such as mouse or external terminal devices) to\nattach to a serial port, enable serial port support using the Standard/generic\n(8250/16550 and compatible UARTs) serial support option.\nFILESYSTEM SUPPORT\nIt is generally a good idea to enable only the following\nfilesystems support in the kernel:\nN Second Extended Filesystem (ext2) \nThis is the default filesystem for Linux.\nN ISO 9660 CD\nThis is the filesystem for most CD-ROMs.\nN /proc \nThis is the pseudo filesystem used by the kernel and other programs.\nThese should be enabled by default. To ensure that these filesystems are supported,\nselect the File systems submenu and choose these filesystem types from the list.\nDESKTOP/LAPTOP SYSTEM OPTIONS\nIf you are running Linux on desktop or a laptop system, you want such capabilities\nas printing, playing music, and using the The X Window System. Hence, the set-\ntings discussed here enable the kernel level options needed for such goals.\nMOUSE SUPPORT\nIf you have a non-serial, non-USB mouse such as bus-mouse\nor a PS/2 mouse or another non-standard mouse, follow these steps:\n1. Select the Character devices submenu, followed by the Mice submenu.\n2. Select the appropriate mouse support.\nPARALLEL PORT SUPPORT\nTo use a parallel port printer or other parallel port\ndevices, you must enable parallel port support from the Parallel port support sub-\nmenu from the main menu. Follow these steps:\n1. Choose the parallel port support. \n2. Choose Use FIFO/DMA if available from the PC-style hardware option.\nChapter 2: Kernel Tuning\n27\n" }, { "page_number": 51, "text": "MULTIMEDIA SUPPORT\nMost multimedia include sound. To enable sound from\nyour Linux system:\n1. Select the Sound submenu.\n2. Choose the appropriate sound card for your system.\nIf you have audio/video capture hardware or radio cards, follow these steps to\nenable support:\n1. Select the Multimedia devices submenu.\n2. Choose Video For Linux to locate video adapter(s) or FM radio tuner(s)\nyou have on your system.\nJOYSTICK SUPPORT\nJoystick support depends on the Input core support. Follow\nthese steps for joystick support:\n1. Select Input core support submenu, then enable input core support. \n2. Choose Joystick support, then select the Character devices menu.\n3. On the the Joysticks submenu, choose the appropriate joystick controller\nfor your joystick vendor.\nPOWER MANAGEMENT SUPPORT\nLaptop users need to enable power manage-\nment for maximum battery life. For power management, select these options:\nN Select the General setup submenu and choose the Power Management\nsupport option. \nN If your system has Advanced Power Management BIOS, choose Advanced\nPower Management BIOS support.\nDIRECT RENDERING INFRASTRUCTURE (DRI) FOR THE X WINDOW SYSTEM\nIf\nyou have a high-end video card (16 MB or more video memory and chip-level sup-\nport of direct rendering), find whether it can take advantage of the DRI support now\navailable in the X Window System. \n1. Choose the Character devices submenu and select Direct Rendering\nManager (XFree86 DRI support) option.\n2. If you see your video card listed, select it to enable the DRI support.\n28\nPart I: System Performance\n" }, { "page_number": 52, "text": "PCMCIA/CARDBUS SUPPORT\nTo enable PCMCIA/CardBus support, follow these\nsteps:\n1. Select the PCMCIA/CardBus support submenu from the General setup \nsubmenu. \n2. 
Select the CardBus support option.
To use PCMCIA serial devices, follow these steps:
1. Enable PCMCIA device support from the Character devices submenu.
2. Select either
I PCMCIA serial device support
I CardBus serial device support
If you have PCMCIA network devices, follow these steps to support them:
1. Select the PCMCIA network device support option from the Network device support submenu.
2. Select the appropriate vendor from the list.
PPP SUPPORT
Most desktop or laptop systems use the Point-to-Point Protocol (PPP) for dialup network communication. To enable PPP support, select the PPP (point-to-point protocol) support option from the Network device support submenu.
SERVER OPTIONS
Usually, a server system doesn't need support for such features as sound, power management, multimedia, and infrared connectivity, so you shouldn't enable any of these features in the kernel.
A few very important kernel configuration options can turn your system into a highly reliable server. These options are discussed in the following sections.
LOGICAL VOLUME MANAGEMENT SUPPORT
Logical volume management is a new feature in Linux and can be very useful for a server system with multiple disks. Follow these steps to enable LVM support:
1. Select the Multi-device support (RAID and LVM) submenu.
2. Choose the Logical volume manager (LVM) support option.
Chapter 3 explains how to use logical volume management.
SOFTWARE RAID SUPPORT
If you will use software RAID for your server, follow these steps to enable it:
1. Select the Multi-device support (RAID) submenu.
2. Choose the RAID support option.
3. Choose the type of RAID you want to use:
I Linear (append) mode
I RAID-0 (striping)
I RAID-1 (mirroring of similar size disks)
I RAID 4/5
PSEUDO TERMINAL (PTY) SUPPORT
If you use the server to enable many users to connect via SSH or telnet, you need pseudo terminal (PTY) support. Follow these steps:
1. Enable PTY support from the Character devices submenu by selecting the Maximum number of Unix98 PTYs in use (0-2048) option.
By default, the system has 256 PTYs. Each login requires a single PTY.
2. If you expect more than 256 simultaneous login sessions, set a value between 257 and 2048.
Each PTY uses at least 2 MB of RAM. Make sure you have plenty of RAM for the number of simultaneous login sessions you select.
REAL-TIME CLOCK SUPPORT FOR SMP SYSTEM
If you use multiple CPUs (with Symmetric Multiprocessing support enabled), enable the enhanced Real Time Clock (RTC) so that it is set in an SMP-compatible fashion. To enable RTC support, choose the Enhanced Real Time Clock Support option from the Character devices submenu.
IP PACKET FILTERING (FIREWALL) OPTIONS
If your server will use the firewall features of Linux, see Chapter 20.
Although there are many other options that you can configure in the kernel, the options discussed so far should be a good start for a lean, mean custom kernel for your system. Save the configuration you have created and proceed to compile the kernel as discussed in the following sections.
Compiling the kernel
Compiling a configured kernel requires checking source code dependencies, then compiling the kernel and module images. The source dependency checks make sure that all source code files are available for the features that you choose.
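In practice, the whole build boils down to a short command sequence; each step is detailed in the sections that follow. A sketch, run from /usr/src/linux as root:

make depend            # check source dependencies
make bzImage           # compile the kernel image
make modules           # compile the kernel modules
make modules_install   # install the modules under /lib/modules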
The image\ncreation process compiles the source code and builds binaries for the kernel and the\nmodules.\nCHECKING SOURCE DEPENDENCIES\nBefore you can compile the kernel, you need to ensure that all the source depen-\ndencies are in good shape. \nTo do that, you can run the make depend command from /usr/src/linux as root.\nThis command \nN Performs dependency checks \nN Prepares the source for image compilation\nIf you get any error messages from the preceding command,you might have\na source distribution integrity problem. In such cases, you must download a\nnew copy of the latest stable kernel source and reconfigure it from the\nbeginning.\nAfter you have run this command, you are ready to compile the kernel and its\nmodules.\nCOMPILING IMAGES AND MODULES\nThe kernel compilation involves building an image (binary) file of \nN The kernel itself \nN The necessary kernel modules images\nThe following sections explain how to compile both the kernel image and the\nmodules images.\nChapter 2: Kernel Tuning\n31\n" }, { "page_number": 55, "text": "COMPILING THE KERNEL IMAGE\nTo create the kernel image file, run the make\nbzImage command from /usr/src/linux as root. \nDepending on your processor speed, the compile time can vary from a few\nminutes to hours. On my Pentium III 500 MHz system with 384MB of RAM,\nthe kernel compiles in less than five minutes.\nOnce the make bzImage command is finished, a kernel image file called bzImage\nis created in a directory specific to your system architecture. For example, an x86\nsystem’s new kernel bzImage file is in /usr/src/linux/arch/i386/boot.\nCOMPILING AND INSTALLING THE MODULES\nIn the process of the kernel con-\nfiguration, you have set up at least one feature as kernel modules and, therefore,\nyou need to compile and install the modules. \nUse the following commands to compile and install the kernel modules.\nmake modules\nmake modules_install\nIf you are compiling the same version of the kernel that is currently running\non your system, first back up your modules from /lib/modules/x.y.z\n(where x.y.z is the version number of the current kernel).You can simply run\ncp -r /lib/modules/x.y.z /lib/modules/x.y.z.current (by\nreplacing x.y.z with appropriate version number) to create a backup module\ndirectory with current modules.\nOnce the preceding commands are done, all new modules will be installed in a\nnew subdirectory in the /lib directory.\nBooting the new kernel\nBefore you can boot the new kernel, it must be installed. \nThis is a very important step.You must take great care so you can still boot\nthe old kernel if something goes wrong with the new kernel.\n32\nPart I: System Performance\n" }, { "page_number": 56, "text": "Now you can install the new kernel and configure LILO to boot either kernel.\nINSTALLING THE NEW KERNEL\nThe Linux kernel is kept in /boot directory. If you open your /etc/lilo.conf file\nand look for a line like image=/path/to/kernel, then you see that this usually is\nsomething like image=/boot/vmlinuz-x.y.z (where x.y.z is the version number).\nCopy the new kernel using the cp /usr/src/linux/arch/i386/boot/bzImage\n/boot/vmlinuz-x.y.z (don’t forget to replace x.y.z. with the version number).\nFor example, to install a new 2.4.1 kernel, the copy command is\ncp /usr/src/linux/arch/i386/boot/bzImage /boot/vmlinuz-2.4.1\nCONFIGURING LILO\nLILO is the boot loader program and it must be configured before you can boot the\nnew kernel.\nEdit the LILO configuration file called /etc/lilo.conf as follows:\n1. 
Copy the current lilo section that defines the current image and its settings.
For example, Listing 2-1 shows a sample /etc/lilo.conf file with a single kernel definition. As it stands right now, lilo boots the kernel labeled linux (because default=linux is set).
Listing 2-1: /etc/lilo.conf
boot=/dev/hda
map=/boot/map
install=/boot/boot.b
prompt
timeout=50
message=/boot/message
linear
default=linux
image=/boot/vmlinuz-2.4.0-0.99.11
label=linux
read-only
root=/dev/hda1
2. Copy the following lines and append them to the end of the current /etc/lilo.conf file.
image=/boot/vmlinuz-2.4.0-0.99.11
label=linux
read-only
root=/dev/hda1
3. Change the image path to the new kernel image you copied. For example, if you copied the new kernel image /usr/src/linux/arch/i386/boot/bzImage to /boot/vmlinuz-2.4.1, then set image=/boot/vmlinuz-2.4.1.
4. Change the label for this new segment to linux2. The resulting file is shown in Listing 2-2.
Listing 2-2: /etc/lilo.conf (updated)
boot=/dev/hda
map=/boot/map
install=/boot/boot.b
prompt
timeout=50
message=/boot/message
linear
default=linux
image=/boot/vmlinuz-2.4.0-0.99.11
label=linux
read-only
root=/dev/hda1
image=/boot/vmlinuz-2.4.1
label=linux2
read-only
root=/dev/hda1
5. Run /sbin/lilo to reconfigure lilo using the updated /etc/lilo.conf file.
Never experiment with a new kernel from a remote location. Always restart the system from the system console to load a new kernel for the first time.
REBOOTING NEW KERNEL
After installing the new kernel, follow these steps to reboot for the first time:
1. Reboot the system from the console, using the /sbin/shutdown -r now command.
During the reboot process, you see the lilo prompt.
2. At the lilo prompt, enter linux2. (The default label, linux, would load the old kernel.)
With the new label linux2 associated with the new kernel, your system attempts to load the new kernel. Assuming everything goes well, it should boot up normally and the login prompt should appear.
3. At the login prompt, log in as root from the console.
4. When you are logged in, run the uname -a command, which should display the kernel version number along with other information.
Here's a sample output:
Linux rhat.nitec.com 2.4.1 #2 SMP Wed Feb 14 17:14:02 PST 2001 i686 unknown
The version number (2.4.1) appears right after the hostname; the #2 reflects the number of times I built this kernel.
Run the new kernel for several days before making it the default for your system. If the kernel runs for that period without problems — provided you are ready to make this your default kernel — simply edit the /etc/lilo.conf file, change default=linux to default=linux2, and rerun /sbin/lilo to reconfigure lilo.
To keep default=linux, simply switch label=linux2 to label=linux, then remove the old kernel image definition from the /etc/lilo.conf file or change the label of the old kernel's image definition to something else.
You must run /sbin/lilo after you modify the /etc/lilo.conf file.
Running Demanding Applications
A lean kernel is a good base for demanding applications that make heavy use of your resources; such applications often need resource limits beyond the default configuration. Multi-threaded mail servers have a couple of common problems. Follow these steps to fix them:
N Running out of filehandles
Thousands of files can be opened from the message queue.
These steps allow extra filehandles to accommodate them:
1. Determine the number of filehandles for the entire system.
To find the number of filehandles, run the cat /proc/sys/fs/file-max command. You should see a number like 4096 or 8192.
2. To increase the number of filehandles (often called file descriptors), add the following line in your /etc/rc.d/rc.local script (replace nnnn with the number of filehandles you need):
echo nnnn > /proc/sys/fs/file-max
The following line makes the system-wide filehandle total 10240 (10K):
echo 10240 > /proc/sys/fs/file-max
N Starting too many threads
Starting too many threads can exhaust the system's simultaneous process capacity. To set the per-process filehandle limit, follow these steps:
1. Edit the /etc/security/limits.conf file and add the following lines:
* soft nofile 1024
* hard nofile 8192
The preceding lines make the filehandle limit 8192.
2. Make sure that /etc/pam.d/system-auth has a line like the following:
session required /lib/security/pam_limits.so
This ensures that a user can open up to 8,192 files simultaneously when she logs in. To see what kind of system resources a user can consume, run ulimit -a (assuming you use the bash shell). Here's a sample output:
core file size (blocks) 1000000
data seg size (kbytes) unlimited
file size (blocks) unlimited
max locked memory (kbytes) unlimited
max memory size (kbytes) unlimited
open files 1024
pipe size (512 bytes) 8
stack size (kbytes) 8192
cpu time (seconds) unlimited
max user processes 12287
virtual memory (kbytes) unlimited
In the preceding output, note the open files (filehandles) and max user processes lines. To restrict users to fewer processes (here, at most 8,192), add the following lines to the /etc/security/limits.conf file:
* soft nproc 4096
* hard nproc 8192
This setting applies to both processes and the child threads that each process opens.
You can also configure how much memory a user can consume by using soft and hard limit settings in the same file. The memory consumption is controlled using such directives as data, memlock, rss, and stack. You can also control the CPU usage of a user. Comments in the file provide details on how to configure such limits.
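To see how close the whole system is getting to its filehandle ceiling, you can also read /proc/sys/fs/file-nr. This is a quick check, not a tuning step:

cat /proc/sys/fs/file-max   # the current system-wide limit
cat /proc/sys/fs/file-nr    # allocated, free, and maximum handle counts

The exact meaning of the three file-nr fields varies slightly between kernel versions, so treat them as a rough gauge of filehandle pressure rather than an absolute figure.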
Summary
Configuring a custom kernel suits your system needs. A custom kernel is a great way to keep your system lean and mean, because it won't have unnecessary kernel modules or potential crashes due to untested code in the kernel.

Chapter 3
Filesystem Tuning
IN THIS CHAPTER
N Tuning your hard disks
N Tuning your ext2 filesystem
N Using a ReiserFS journaling filesystem
N Using logical volume management
N Using a RAM-based filesystem for high-speed access
A WISE ENGINEER ONCE TOLD ME that anyone you can see moving with your naked eye isn't fast enough. I like to spin that around and say that anything in your computer system that has moving parts isn't fast enough. Disks, with moving platters, are the slowest devices, even today. The filesystems that provide a civilized interface to your disks are, therefore, inherently slow. Most of the time, the disk is the bottleneck of a system.
In this chapter, you tune disks and filesystems for speed, reliability, and easy administration.
Tuning your hard disks
SCSI and IDE are the most common types of hard disk today. SCSI disks and SCSI controllers are much more expensive because they provide more performance and flexibility. IDE drives, and the enhanced version of IDE called EIDE, are more commonplace in personal and disk I/O non-intensive computing.
SCSI PERFORMANCE
If you have a modern, ultra-wide SCSI disk set up for your Red Hat Linux system, you are already ahead of the curve and should be getting good performance from your disks. If not (even if so), the difference between SCSI and IDE is useful to explore:
N SCSI disk controllers handle most of the work of transferring data to and from the disks; IDE disks are controlled directly by the CPU itself. On a busy system, SCSI disks don't put as much load on the CPU as IDE drives do.
N SCSI disks have wider data transfer capabilities, whereas IDE disks are still connected to the system via a 16-bit bus.
If you need high performance, SCSI is the way to go. Buy brand-name SCSI adapters and ultra-wide, 10K-RPM or larger SCSI disks and you have done pretty much all you can do to improve your disk subsystem.
Whether you choose SCSI or IDE disks, multiple disks are a must if you are serious about performance.
N At minimum, use two disks — one for operating systems and software, the other for data.
N For Web servers, I generally recommend a minimum of three disks. The third disk is for the logs generated by the Web sites hosted on the machine. Keeping disk I/O spread across multiple devices minimizes wait time.
Of course, if you have the budget for it, you can use fiber channel disks or a storage-area network (SAN) solution. Enterprises with high data-storage demands often use SANs. A less expensive option is a hardware/software RAID solution, which is also discussed in this chapter.
EIDE PERFORMANCE
You can get better performance from your modern EIDE drive. Before doing any tinkering and tuning, however, you must determine how well your drive is performing. You need a tool to measure the performance of your disk subsystem. The hdparm tool is just right for the job; you can download the source distribution of this tool from metalab.unc.edu/pub/Linux/system/hardware/ and compile and install it as follows:
1. Become root using su.
2. Extract the source distribution in a suitable directory such as /usr/local/src.
For example, I ran the tar xvzf hdparm-3.9.tar.gz command in /usr/local/src to extract the hdparm version 3.9 source distribution.
3. Change to the newly created subdirectory and run the make install command to compile and install the hdparm binary and the manual page.
The binary is by default installed in the /usr/local/sbin directory. It's called hdparm.
Back up your data before using hdparm. Because hdparm enables you to change the behavior of your IDE/EIDE disk subsystem — and Murphy's Law always lurks in the details of any human undertaking — a misconfiguration could cause your system to hang. Also, to make such an event less likely, experiment with hdparm in single-user mode before you use it.
You can reboot your system and force it into single-user mode by entering linux single at the lilo prompt during bootup.
After you have installed the hdparm tool, you are ready to investigate the performance of your disk subsystem. Assuming your IDE or EIDE hard disk is /dev/hda, run the following command to see the state of your hard disk configuration:
hdparm /dev/hda
You should see output like the following:
/dev/hda:
multcount = 0 (off)
I/O support = 0 (default 16-bit)
unmaskirq = 0 (off)
using_dma = 0 (off)
keepsettings = 0 (off)
nowerr = 0 (off)
readonly = 0 (off)
readahead = 8 (on)
geometry = 2494/255/63, sectors = 40079088, start = 0
As you can see, almost everything in this default mode is turned off; changing some defaults may enhance your disk performance. Before proceeding, however, we need more information from the hard disk. Run the following command:
hdparm -i /dev/hda
This command returns information like the following:
/dev/hda:
Model=WDC WD205AA, FwRev=05.05B05, SerialNo=WD-WMA0W1516037
Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs FmtGapReq }
RawCHS=16383/16/63, TrkSize=57600, SectSize=600, ECCbytes=40
BuffType=DualPortCache, BuffSize=2048kB, MaxMultSect=16, MultSect=16
CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=40079088
IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
PIO modes: pio0 pio1 pio2 pio3 pio4
DMA modes: mdma0 mdma1 *mdma2 udma0 udma1 udma2 udma3 udma4
The preceding command displays the drive identification information (if any) that was available the last time you booted the system — for example, the model, configuration, drive geometry (cylinders, heads, sectors), track size, sector size, buffer size, supported DMA mode, and PIO mode. Some of this information will come in handy later; you may want to print this screen so you have it in hard copy. For now, test the disk subsystem by using the following command:
/usr/local/sbin/hdparm -Tt /dev/hda
You see results like the following:
/dev/hda:
Timing buffer-cache reads: 128 MB in 1.01 seconds = 126.73 MB/sec
Timing buffered disk reads: 64 MB in 17.27 seconds = 3.71 MB/sec
The actual numbers you see reflect the untuned state of your disk subsystem. The -T option tells hdparm to test the cache subsystem (that is, the memory, CPU, and buffer cache). The -t option tells hdparm to report stats on the disk (/dev/hda), reading data not in the cache. Run this command a few times and figure an average of the MB per second reported for your disk. This is roughly the performance state of your disk subsystem. In this example, the 3.71MB per second is the read performance, which is low.
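Because an individual run can be skewed by caching and whatever else the system is doing, it's worth scripting a few back-to-back runs. A small sketch:

for run in 1 2 3
do
    /usr/local/sbin/hdparm -Tt /dev/hda
done

Average the buffered disk read figures from the runs to get a stable baseline before you change any settings.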
Now improve the performance of your disk. Go back to the hdparm -i /dev/hda command output and look for the MaxMultSect value. In this example, it's 16. Remember that the hdparm /dev/hda command showed the multcount value to be 0 (off). This means that multiple-sector mode (that is, IDE block mode) is turned off.
The multiple-sector mode is a feature of most modern IDE hard drives. It enables the drive to transfer multiple disk sectors per I/O interrupt. By default, it's turned off. However, most modern drives can perform 2, 4, 8, or 16 sector transfers per I/O interrupt. If you set this mode to the maximum possible value for your drive (the MaxMultSect value), you should see your system's throughput increase from 5 to 50 percent (or more) — while reducing the operating system overhead by 30 to 50 percent. In this example, the MaxMultSect value is 16, so use the -m option of the hdparm tool to set it and see how performance increases. Run the following command:
/usr/local/sbin/hdparm -m16 /dev/hda
Running the performance test using the hdparm -tT /dev/hda command demonstrates the change. For the example system, the change looks like this:
/dev/hda:
Timing buffer-cache reads: 128 MB in 1.01 seconds = 126.73 MB/sec
Timing buffered disk reads: 64 MB in 16.53 seconds = 3.87 MB/sec
The performance of the drive has gone up from 3.71MB per second to 3.87MB per second. Not much, but not bad. Probably your drive can do better than that if your disk and controller are fairly new. You can probably achieve 20 to 30MB per second.
If hdparm reported that your system's I/O support setting is 16-bit, and you have a fairly new (one or two years old) disk subsystem, try enabling 32-bit I/O support. You can do so by using the -c option of hdparm and selecting one of its three values:
N 0 enables default 16-bit I/O support
N 1 enables 32-bit support
N 3 enables 32-bit support with a special synchronization sequence required by many IDE/EIDE processors. (This value works well with most systems.)
Set the options as follows:
/usr/local/sbin/hdparm -m16 -c3 /dev/hda
The command uses the -m16 option (mentioned earlier) and adds -c3 to enable 32-bit I/O support. Now running the program with the -t option shows the following results:
/dev/hda:
Timing buffered disk reads: 64 MB in 8.96 seconds = 7.14 MB/sec
The performance of the disk subsystem has improved — practically doubled — and you should be able to get even more.
N If your drive supports direct memory access (DMA), you may be able to use the -d option, which enables DMA mode.
N Typically, the -d1 -X32 options or the -d1 -X66 options are used together to apply the DMA capabilities of your disk subsystem.
I The first set of options (-d1 -X32) enables multiword DMA mode2 for the drive.
I The next set of options (-d1 -X66) enables UltraDMA mode2 for drives that support the UltraDMA burst timing feature.
These options can dramatically increase your disk performance. (I have seen a 20MB per second transfer rate with these options on various new EIDE/ATA drives.)
N -u1 can boost overall system performance by enabling the disk driver to unmask other interrupts during the processing of a disk interrupt. That means the operating system can attend to other interrupts (such as network I/O and serial I/O) while waiting for a disk-based data transfer to finish.
hdparm offers many other options — but be careful with them. Most of them can corrupt data if used incorrectly. Always back up your data before playing with the hdparm tool. Also, after you have found a set of options that works well, you should put the hdparm command with those options in the /etc/rc.d/rc.local script so that they are set every time you boot the system.
For example, I have added the following line in the /etc/rc.d/rc.local file on one of my newer Red Hat Linux systems.
hdparm -m16 -c3 -u1 -d1 -X66 /dev/hda
Tuning ext2 Filesystem
For years the ext2 filesystem has been the de facto filesystem for Linux. It isn't the greatest filesystem in the world but it works reasonably well. One of the ways you can improve its performance is by changing the default block size from 1024 to a multiple of 1024 (usually no more than 4096) for servers with mostly large files. Here's how you can change the block size.
Changing the block size of the ext2 filesystem
To find out what kind of files you have on an ext2 partition, do the following:
1. Become root using su, then change to the top directory of the ext2 partition.
2. Run the following command (actually a small command-line script using find and the awk utility). The script lists all files with their sizes, then reports the total and the average file size for the partition.
find . -type f -exec ls -l {} \; | \
awk 'BEGIN {tsize=0; fcnt=0;} \
{ printf("%03d File: %-60s size: %d bytes\n", ++fcnt, $9, $5); \
tsize += $5; } \
END { printf("Total size = %d\nAverage file size = %.02f\n", \
tsize, tsize/fcnt); }'
3. After you know the average file size of the filesystem, you can determine whether to change the block size. Say you find out your average file size is 8192, which is 2 × 4096. You can change the block size to 4096, so that a typical file fits in just two blocks.
4. Unfortunately, you can't alter the block size of an existing ext2 filesystem without rebuilding it. So you must back up all your files from the filesystem and then rebuild it using the following command:
/sbin/mke2fs /dev/partition -b 4096
For example, if you have backed up the /dev/hda7 partition and want to change the block size to 4096, the command would look like this:
/sbin/mke2fs /dev/hda7 -b 4096
Changing the block size to a higher number than the default (1024) may yield significant gains in raw read speed (by reducing the number of seeks), along with a potentially faster fsck session during boot and less file fragmentation.
However, increasing the block size blindly (that is, without knowing the average file size) can result in wasted space. For example, if the average file size is 2010 bytes on a system with 4096-byte blocks, each file wastes on average 4096 – 2010 = 2086 bytes! Know your file size before you alter the block size.
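If you want to estimate that waste before rebuilding anything, a short variation of the same idea adds up the slack a candidate block size would leave. A sketch; the -printf option assumes GNU find:

find . -type f -printf "%s\n" | \
awk -v bs=4096 '{ files++; waste += (bs - ($1 % bs)) % bs } \
END { printf("%d files, %.0f bytes of slack at %d-byte blocks\n", files, waste, bs); }'

Change the bs value to try other block sizes, and pick the one that balances read speed against wasted space.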
Using e2fsprogs to tune ext2 filesystem

To tune the ext2 filesystem, install the e2fsprogs utility package as
follows:

1. Download the e2fsprogs-version.src.rpm (replace version with the latest
version number) source distribution from www.rpmfind.net. I downloaded the
e2fsprogs-1.19-0.src.rpm package. You can also get the source from the
e2fsprogs project site at e2fsprogs.sourceforge.net. When the download is
complete, su to root.

2. Run the rpm -ivh e2fsprogs-version.src.rpm command to extract the source
into the /usr/src/redhat/SOURCES/ directory. The source RPM drops an
e2fsprogs-version.tar.gz file there. Use the tar xvzf
e2fsprogs-version.tar.gz command to extract the file and create a
subdirectory called e2fsprogs-version.

3. Change to the new e2fsprogs-version subdirectory.

4. Run mkdir build to create a new subdirectory, and then change to that
directory.

5. Run the ../configure script to configure the source tree.

6. Run the make utility to create the binaries.

7. Run make check to ensure that everything is built correctly.

8. Run the make install command to install the binaries.

After you have installed the e2fsprogs utilities, you can start using them as
discussed in the following section.

USING THE TUNE2FS UTILITY FOR FILESYSTEM TUNING

You can use the tune2fs utility to tune various aspects of an ext2
filesystem. However, never apply the ext2 utilities to a mounted ext2
filesystem, and always back up your data before you modify anything belonging
to a filesystem. In the following section I use the tune2fs utility (part of
the e2fsprogs package) to tune an unmounted ext2 filesystem called /dev/hda7.
If you try any of the settings discussed below, don't forget to replace the
partition name (/dev/hda7) with the appropriate name for your system. First,
let's look at what tune2fs reports as the current settings for the unmounted
/dev/hda7. Run the following command:

/sbin/tune2fs -l /dev/hda7

The output should look like the following:

tune2fs 1.19, 13-Jul-2000 for EXT2 FS 0.5b, 95/08/09
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          5d06c65b-dd11-4df4-9230-a10f2da783f8
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      filetype sparse_super
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              1684480
Block count:              13470471
Reserved block count:     673523
Free blocks:              13225778
Free inodes:              1674469
First block:              1
Block size:               1024
Fragment size:            1024
Blocks per group:         8192
Fragments per group:      8192
Inodes per group:         1024
Inode blocks per group:   128
Last mount time:          Thu Feb 15 17:51:19 2001
Last write time:          Thu Feb 15 17:51:51 2001
Mount count:              1
Maximum mount count:      20
Last checked:             Thu Feb 15 17:50:23 2001
Check interval:           15552000 (6 months)
Next check after:         Tue Aug 14 18:50:23 2001
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               128

The first setting I would like you to understand is the errors behavior. This
setting dictates how the kernel behaves when errors are detected on the
filesystem. There are three possible values:

- Continue (the default): keep going even when an error is found.
- Remount-ro: remount the filesystem read-only.
- Panic: halt the system with a kernel panic.

The mount count setting is the number of times you have mounted this
filesystem, and the maximum mount count (20) means that after that many
read/write-mode mounts the filesystem is subject to an fsck session during
the next boot cycle. The last checked setting shows the date of the last fsck
check, and the check interval sets the maximum time allowed between two
consecutive fsck sessions. The check interval applies only if the maximum
read/write mount count isn't reached first: if you don't unmount the
filesystem for six months, an fsck check is forced even though the mount
count may be only 2, because the filesystem has exceeded the check interval.
The date of the next forced check appears in the next check after setting.
The reserved blocks uid and gid settings show which user and group own the
reserved portion of this filesystem; by default, the reserved portion is for
the super user (UID = 0, GID = 0).
On an unmounted filesystem such as /dev/hda7, you can change the maximum
read/write mount count to something more suitable for your needs, using the
-c option with tune2fs. For example, /sbin/tune2fs -c 1 /dev/hda7 forces an
fsck check on the filesystem every time you boot the system. You can also use
the -i option to change the time-based fsck enforcement schedule. For
example, the /sbin/tune2fs -i 7d /dev/hda7 command ensures that fsck checks
are enforced if the filesystem is remounted in read/write mode after a week.
Similarly, the /sbin/tune2fs -i 0 /dev/hda7 command disables the time-based
fsck checks altogether.
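To keep an eye on how close your filesystems are to their next forced fsck,
you can pull the relevant fields out of tune2fs -l with a short script. This
is a hedged sketch; the partition list is an assumption, so edit it to match
your system and run it as root:

#!/bin/bash
# Report mount counts versus the forced-fsck threshold for each
# listed ext2 partition. /dev/hda5 and /dev/hda7 are examples only.
for DEV in /dev/hda5 /dev/hda7; do
    COUNT=$(/sbin/tune2fs -l $DEV | awk -F: '/^Mount count/ {gsub(/ /,"",$2); print $2}')
    MAX=$(/sbin/tune2fs -l $DEV | awk -F: '/^Maximum mount count/ {gsub(/ /,"",$2); print $2}')
    echo "$DEV: mounted $COUNT of $MAX times before fsck is forced"
done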
USING THE E2FSCK UTILITY FOR CHECKING AND REPAIRING FILESYSTEMS

If you have a corrupt ext2 filesystem, you can use the e2fsck utility to fix
it. To check a partition with e2fsck, you must unmount it first and then run
the /sbin/e2fsck /dev/device command, where /dev/device is your disk drive.
For example, to force an fsck check on a device called /dev/hda7, I can use
the /sbin/e2fsck -f /dev/hda7 command. Such a check may display output like
the following:

e2fsck 1.19, 13-Jul-2000 for EXT2 FS 0.5b, 95/08/09
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/hda7: 12/1684256 files (0.0% non-contiguous), 52897/3367617 blocks

The e2fsck utility asks you repair questions, which you can avoid by using
the -p option.

Using a Journaling Filesystem

A journaling filesystem is simply a transaction-based filesystem. Each
activity that changes the filesystem is recorded in a transaction log. In the
event of a crash, the filesystem can replay the necessary transactions to
return to a stable state in a very short time. This is the same technique
that many database engines, such as IBM DB2 and Oracle, use to ensure that
the system is always in a known, recoverable state.

The problem with the ext2 filesystem is that in the unfortunate event of a
crash it can be left in such an unclean state that it is corrupt beyond any
meaningful recovery. The fsck program used to check and potentially repair
the filesystem often can't do much to fix such problems. With a journaling
filesystem, such a nightmare is a thing of the past! Because the transaction
log records all activity in the filesystem, crash recovery is fast and data
loss is minimal.

Note: A journaling filesystem doesn't log file data; it logs only the
meta-data related to disk operations, so replaying the log makes the
filesystem consistent only from the structural-relationship and
resource-allocation point of view. Some small data loss is still possible.
Also, logging is subject to media errors like all other disk activity, so if
the media is bad, journaling won't help much.

Journaling filesystems are new to Linux but have been around for other
platforms. Several flavors of experimental journaling filesystem are
available today:

- JFS, developed by IBM and released as open source for Linux. JFS has been
ported from AIX, IBM's own operating-system platform, and is still not ready
for production use. You can find more information on JFS at
http://oss.software.ibm.com/developerworks/opensource/jfs.

- Red Hat's own ext3 filesystem, which is ext2 plus journaling capabilities.
It's also not yet ready for prime time. You can download the alpha release of
ext3 from ftp://ftp.linux.org.uk/pub/linux/sct/fs/jfs/.

- ReiserFS, developed by Namesys and currently included in the Linux kernel
source distribution. It has been used more widely than the other journaling
filesystems for Linux, and so far it leads the journaling-filesystem arena.
ReiserFS was developed by Hans Reiser, who has secured funding from
commercial companies such as MP3, BigStorage.com, SuSE, and Ecila.com. These
companies all need better, more flexible filesystems and can channel early
beta-user experience back to the developers. You can find more information on
ReiserFS at http://www.namesys.com. I discuss how you can use ReiserFS today
in a later section.

- XFS, a journaling filesystem developed by Silicon Graphics, Inc. (SGI). XFS
is a fast, solid 64-bit filesystem, which means that it can support large
files (9 million terabytes) and even larger filesystems (18 million
terabytes). You can find more information on XFS at
http://oss.sgi.com/projects/xfs/.

Because ReiserFS is included with Linux kernel 2.4.1 (or above), I discuss
how you can use it in the following section.

Note: As of this writing, the ReiserFS filesystem can't be used with NFS
without patches, which aren't yet officially available for kernel 2.4.1 or
above.

Compiling and installing ReiserFS

Here's how you can compile and install ReiserFS (reiserfs) support in Linux
kernel 2.4.1 or above.

1. Get the latest Linux kernel source from http://www.kernel.org and, as
root, extract it into the /usr/src/linux-version directory, where version is
the current version of the kernel. Here I assume this to be 2.4.1.

2. Run make menuconfig from /usr/src/linux-2.4.1.

3. Select the Code maturity level options submenu and, using the spacebar,
select the Prompt for development and/or incomplete code/drivers option. Exit
the submenu.

4. Select the File systems submenu. Using the spacebar, select Reiserfs
support to be included as a kernel module, and exit the submenu.

Caution: Don't choose the Have reiserfs do extra internal checking option
under ReiserFS support. If you set this option, reiserfs performs extensive
internal-consistency checks throughout its operation, which makes it very
slow.

5. Ensure that all the other kernel features you use are also selected as
usual (see the kernel-tuning chapter for details).

6. Exit the main menu and save the kernel configuration.

7. Run the make dep command, as suggested by the menuconfig program.

8. Run make bzImage to create the new kernel. Then run make modules and make
modules_install to install the new modules in the appropriate locations.

9. Change to the arch/i386/boot directory. If your hardware architecture
isn't Intel x86, replace i386 accordingly; you may need further instructions
from a kernel HOWTO document to compile and install your flavor of the
kernel. I assume that most readers are i386-based.

10. Copy the bzImage to /boot/vmlinuz-2.4.1 and edit the /etc/lilo.conf file
to include a new configuration such as the following:

image=/boot/vmlinuz-2.4.1
label=linux2
read-only
root=/dev/hda1
11. Run the /sbin/lilo command to reconfigure LILO, and reboot your system.
At the lilo prompt, enter linux2 to boot the new kernel. If you have any
problems, you should still be able to reboot to your standard Linux kernel,
which should remain the default automatically.

12. After you have booted the new kernel, you are ready to use ReiserFS
(reiserfs).

Using ReiserFS

Because ReiserFS (reiserfs) is still in the "experimental" category, I highly
recommend restricting it to a noncritical part of your system. Ideally,
dedicate an entire disk, or at least one partition, to ReiserFS; use it for a
while and see how you like it.

To use ReiserFS with a new partition called /dev/hda7, simply do the
following:

1. As root, ensure that the partition is set as Linux native (type 83), using
fdisk or another disk-partitioning tool.

2. Create a ReiserFS (reiserfs) filesystem on the new partition, using the
/sbin/mkreiserfs /dev/hda7 command.

3. Create a mount point for the new filesystem. For example, I can create a
mount point called /jfs, using the mkdir /jfs command.

4. Mount the filesystem, using the mount -t reiserfs /dev/hda7 /jfs command.
Now you can access it from the /jfs mount point.
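If you decide to keep the new filesystem, you can have it mounted
automatically at boot by adding a line to /etc/fstab. This is a sketch using
the example names from above (/dev/hda7 and /jfs); adjust both for your
system:

/dev/hda7    /jfs    reiserfs    defaults    0 0

The trailing 0 0 skips dump and the boot-time fsck pass for this partition;
the journal replay at mount time takes the place of the traditional check.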
Benchmarking ReiserFS

To see how a journaling filesystem stacks up against the ext2 filesystem,
here's a little benchmark you can run on your own.

Caution: I assume that you have created a brand-new ReiserFS filesystem on
/dev/hda7 and can mount it on /jfs. Don't store any data on this partition
during the benchmark: everything on /jfs is erased in the process, so back up
anything you keep there first.

Create a shell script called reiserfs_vs_ext2.bash in your /tmp directory, as
shown in Listing 3-1.

Listing 3-1: /tmp/reiserfs_vs_ext2.bash

#!/bin/bash
#
# This script is based on the file_test script found in the
# home-grown benchmark at http://www.namesys.com
#
if [ $# -lt 6 ]
then
  echo "Usage: file_test dir_name device nfiles size1 size2 log_name"
  exit
fi

TESTDIR=$1
DEVICE=$2
LOGFILE=$6

/bin/umount $TESTDIR
/sbin/mkreiserfs $DEVICE
mount -t reiserfs $DEVICE $TESTDIR

echo 1. reiserfs 4KB creating files ...
echo "reiserfs 4KB create" $3 "files of size: from" $4 "to" $5 > $LOGFILE
(time -p ./mkfile $TESTDIR $3 $4 $5) >> $LOGFILE 2>&1
echo done.
sync
df >> $LOGFILE
/bin/umount $TESTDIR

/sbin/mke2fs $DEVICE -b 4096
mount -t ext2 $DEVICE $TESTDIR

echo 2. ext2fs 4KB creating files ...
echo "ext2fs 4KB create" $3 "files of size: from" $4 "to" $5 >> $LOGFILE
(time -p ./mkfile $TESTDIR $3 $4 $5) >> $LOGFILE 2>&1
echo done.
sync
df >> $LOGFILE
/bin/umount $TESTDIR

Download a small C program called mkfile.c into /tmp. This program, developed
by the ReiserFS team, is available at www.namesys.com/filetest/mkfile.c. From
the /tmp directory, compile the program using the gcc -o mkfile mkfile.c
command. Then make both programs executable, using the
chmod 755 reiserfs_vs_ext2.bash mkfile command.

Now you are ready to run the benchmark. Run the following command from the
/tmp directory as root:

./reiserfs_vs_ext2.bash /jfs /dev/hda7 100000 1024 4096 log

You are asked to confirm that you want to lose all data on /dev/hda7. Because
you have already emptied this partition for testing, answer yes and continue.
This test creates 100,000 files ranging in size from 1K to 4K on both the
ReiserFS (reiserfs) and ext2 filesystems, creating each of the two
filesystems on /dev/hda7 in turn. The results are recorded in the /tmp/log
file. Here is a sample /tmp/log file:

reiserfs 4KB create 100000 files of size: from 1024 to 4096
real 338.68
user 2.83
sys 227.83
Filesystem         1k-blocks    Used  Available Use% Mounted on
/dev/hda1            1035660  135600     847452  14% /
/dev/hda5            4134868 2318896    1605928  60% /usr
/dev/hda7           13470048  332940   13137108   3% /jfs
ext2fs 4KB create 100000 files of size: from 1024 to 4096
real 3230.40
user 2.87
sys 3119.12
Filesystem         1k-blocks    Used  Available Use% Mounted on
/dev/hda1            1035660  135608     847444  14% /
/dev/hda5            4134868 2318896    1605928  60% /usr
/dev/hda7           13259032  401584   12183928   4% /jfs

The report shows that to create 100,000 files of size 1K-4K, ReiserFS
(reiserfs) took 338.68 real-time seconds, while ext2 took 3230.40 real-time
seconds: an impressive difference.

Although journaling-filesystem support is very new to Linux, it has received
a lot of attention from an industry interested in using Linux in the
enterprise, so journaling filesystems should mature on a fast track. I
recommend that you use this flavor of journaling filesystem at an
experimental level for now and become accustomed to its quirks.

Managing Logical Volumes

Logical Volume Management (LVM) and journaling filesystems together ensure
Linux a significant place in the enterprise-computing world. You don't need
the budget of a large enterprise to get the high reliability and flexibility
of an LVM-based disk subsystem. Here's how you can use LVM today.

Traditionally, installing Linux meant partitioning the hard drive(s) into
/ (root), /usr, /home, and swap space. Problems cropped up if you ran out of
disk space in one of these partitions. In most cases, the system
administrator would then create a /usr2 (or /home2) partition and tweak
scripts, or create symbolic links, to fool programs into using the new space.
Although this practice creates unnecessary busywork and makes the system more
"customized," it has been acceptable in the administrative scenarios of
small-to-mid-size systems. A larger, enterprise-class environment sets a
different standard: such disk administration wastes too many resources, which
dictates a different solution. Grow the needed partitions (such as /usr and
/home) by adding new disk media without changing the mount points. This is
possible by means of a concept called logical volumes, now available to
anyone using Linux.

Think of logical volumes as a high-level storage layer that encapsulates the
underlying physical storage layer. A logical volume can consist of one
physical disk, and sometimes several, made available as a single mount point
such as /usr, /home, or /whatever. The benefit is easier administration.
Adding storage to a logical volume means simply adding physical disks to the
definition of the volume; reducing the storage area is a matter of removing
physical disks from the logical volume.

Note: You can find out more about LVM at http://sistina.com/lvm/.

Compiling and installing the LVM module for the kernel

The latest Linux kernel 2.4.1 (or above) source distribution ships with the
LVM source code. Enabling LVM support is a simple matter of compiling and
installing a new kernel, as follows:

1. su to root and change to the top-level kernel source directory (for
example, /usr/src/linux-2.4.1). Run make menuconfig to configure the kernel.

2. Select the Multi-device support (RAID and LVM) submenu. Press the spacebar
once to include Multiple devices driver support (RAID and LVM) in the kernel;
then select Logical volume manager (LVM) support as a kernel module. Save the
kernel configuration as usual.

3. Compile and install the kernel as usual (see the kernel-tuning chapter for
details).

4. Run /sbin/modprobe lvm-mod to load the LVM kernel module. To verify that
the module loaded properly, run the /sbin/lsmod command; you should see
lvm-mod listed among the loaded kernel modules. Add the following lines to
/etc/modules.conf so the lvm-mod module loads automatically when needed in
the future:

alias block-major-58 lvm-mod
alias char-major-109 lvm-mod

5. Create a script called /etc/rc.d/init.d/lvm (as shown in Listing 3-2) to
start and stop LVM support automatically during the boot and shutdown cycles.

Listing 3-2: /etc/rc.d/init.d/lvm

#!/bin/bash
#
# lvm    This shell script takes care of starting and
#        stopping LVM-managed volumes.
#
# chkconfig: - 25 2
# description: LVM is Logical Volume Management

# Source function library.
. /etc/rc.d/init.d/functions

[ -f /sbin/vgscan ] || exit 0
[ -f /sbin/vgchange ] || exit 0

RETVAL=0

start() {
    # Start LVM.
    echo -n "Starting LVM: "
    /sbin/vgscan
    /sbin/vgchange -ay
}

stop() {
    # Stop LVM.
    echo -n "Shutting down LVM: "
    /sbin/vgchange -an
}

restart() {
    stop
    start
}

# See how we were called.
case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  restart)
    restart
    ;;
  *)
    echo "Usage: lvm {start|stop|restart}"
    exit 1
esac
exit $?

6. Create two symbolic links to the /etc/rc.d/init.d/lvm script, one to start
LVM in your default runlevel and one to stop it at shutdown, using the
following commands:

ln -s /etc/rc.d/init.d/lvm /etc/rc.d/rc3.d/S25lvm
ln -s /etc/rc.d/init.d/lvm /etc/rc.d/rc0.d/K25lvm
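Alternatively, on a Red Hat system you can let chkconfig create and manage
these links for you, because the script in Listing 3-2 already carries a
chkconfig header. This is an assumed-equivalent shortcut for the manual ln
commands above:

/sbin/chkconfig --add lvm
/sbin/chkconfig --level 35 lvm on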
Creating a logical volume

In this example, assume that you have two hard disks, /dev/hda and /dev/hdc,
and that each one has a free partition: /dev/hda7 (on /dev/hda) and /dev/hdc1
(on /dev/hdc). You want to create a logical volume from these two partitions.

1. Run the /sbin/fdisk /dev/hda command. Using fdisk commands, toggle the
appropriate partition's ID to 8e (Linux LVM). The following listing shows an
example fdisk session (edited for brevity) that changes the ID of the
/dev/hda7 partition; the necessary user input follows each prompt.

Command (m for help): p
Disk /dev/hda: 255 heads, 63 sectors, 2494 cylinders
Units = cylinders of 16065 * 512 bytes
Device Boot    Start    End    Blocks     Id  System
/dev/hda1 *        1    131    1052226    83  Linux
/dev/hda2        262   2494   17936572+    f  Win95
/dev/hda5        262    784    4200934+   83  Linux
/dev/hda6        785    817     265041    82  Linux swap
/dev/hda7        818   2494   13470471    83  Linux
Command (m for help): t
Partition number (1-7): 7
Hex code (type L to list codes): 8e
Changed system type of partition 7 to 8e (Linux LVM)
Command (m for help): p
Disk /dev/hda: 255 heads, 63 sectors, 2494 cylinders
Units = cylinders of 16065 * 512 bytes
Device Boot    Start    End    Blocks     Id  System
/dev/hda1 *        1    131    1052226    83  Linux
/dev/hda2        262   2494   17936572+    f  Win95
/dev/hda5        262    784    4200934+   83  Linux
/dev/hda6        785    817     265041    82  Linux swap
/dev/hda7        818   2494   13470471    8e  Linux LVM
Command (m for help): w
The partition table has been altered!

Repeat this step for the /dev/hdc1 partition.

2. Run the /sbin/pvcreate /dev/hda7 /dev/hdc1 command to create two physical
volumes.

3. Run the /sbin/vgcreate big_disk /dev/hda7 /dev/hdc1 command to create a
new volume group called big_disk. The command shows the following output:

vgcreate -- INFO: using default physical extent size 4 MB
vgcreate -- INFO: maximum logical volume size is 255.99 Gigabyte
vgcreate -- doing automatic backup of volume group "big_disk"
vgcreate -- volume group "big_disk" successfully created and activated

4. To confirm that the volume group was created using the /dev/hda7 physical
volume, run the /sbin/pvdisplay /dev/hda7 command, which displays stats like
the following:

--- Physical volume ---
PV Name               /dev/hda7
VG Name               big_disk
PV Size               12.85 GB / NOT usable 2.76 MB [LVM: 133 KB]
PV#                   1
PV Status             available
Allocatable           yes
Cur LV                0
PE Size (KByte)       4096
Total PE              3288
Free PE               3288
Allocated PE          0
PV UUID               2IKjJh-MBys-FI6R-JZgl-80ul-uLrc-PTah0a

As you can see, the VG Name (volume group name) for /dev/hda7 is big_disk,
which is exactly what we want. You can run the same command for /dev/hdc1, as
shown here:

--- Physical volume ---
PV Name               /dev/hdc1
VG Name               big_disk
PV Size               3.91 GB / NOT usable 543 KB [LVM: 124 KB]
PV#                   2
PV Status             available
Allocatable           yes
Cur LV                0
PE Size (KByte)       4096
Total PE              1000
Free PE               1000
Allocated PE          0
PV UUID               RmxH4b-BSfX-ypN1-cfwO-pZHg-obMz-JKkNK5

5. You can also display the volume group information by using the
/sbin/vgdisplay command, which shows output like the following:

--- Volume group ---
VG Name               big_disk
VG Access             read/write
VG Status             available/resizable
VG #                  0
MAX LV                256
Cur LV                0
Open LV               0
MAX LV Size           255.99 GB
Max PV                256
Cur PV                2
Act PV                2
VG Size               16.75 GB
PE Size               4 MB
Total PE              4288
Alloc PE / Size       0 / 0
Free PE / Size        4288 / 16.75 GB
VG UUID               VMttR1-Tl0e-I4js-oXmi-uE1e-hprD-iqhCIX

In this report, the total volume group size (VG Size) is roughly the sum of
the two physical volumes we added to it.
6. Run the /sbin/lvcreate -L10G -n vol1 big_disk command to create a 10GB
logical volume called /dev/big_disk/vol1 from the big_disk volume group.

Note: If you want disk striping, use the -i option to specify the number of
physical volumes across which to scatter the logical volume, and the -I
option to specify the granularity of the stripes in kilobytes; the stripe
size must be a power of 2 (2^n for n = 0 to 7). For example, to create a
striped version of the logical volume using the two physical volumes you
added to the volume group earlier, you can run the
/sbin/lvcreate -i2 -I4 -L10G -n vol1 big_disk command. I don't recommend
striping, because currently you can't add new physical volumes to a striped
logical volume, which rather defeats the purpose of LVM.

7. Decide whether to use a journaling filesystem (such as reiserfs) or ext2
for the newly created logical volume, vol1. To create a reiserfs filesystem,
run the /sbin/mkreiserfs /dev/big_disk/vol1 command; to create an ext2
filesystem, run the /sbin/mke2fs -b 4096 /dev/big_disk/vol1 command. If you
want a different block size, change 4096 to 1024, 2048, or your custom block
size as needed. I prefer the reiserfs filesystem when using logical volumes.

8. Create a mount point called /vol1, using the mkdir /vol1 command. Then
mount the filesystem, using the mount /dev/big_disk/vol1 /vol1 command. (You
may have to add the -t reiserfs option when mounting a reiserfs filesystem.)
Run df to see the volume listed in the output. Here's a sample:

Filesystem           1k-blocks    Used  Available Use% Mounted on
/dev/hda1              1035660  243792     739260  25% /
/dev/hda5              4134868 2574004    1350820  66% /usr
/dev/big_disk/vol1    10485436   32840   10452596   1% /vol1

You are all set with a new logical volume called vol1!
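Before you start loading data, it's worth confirming that the new volume is
active and sized as expected. A quick check using two of the LVM utilities
discussed in the following sections:

/sbin/lvscan                          # lists all logical volumes
/sbin/lvdisplay /dev/big_disk/vol1    # shows size, status, and extents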
The LVM package includes a set of tools to help you manage your volumes.

USING PHYSICAL VOLUME MANAGEMENT UTILITIES

Several utilities manage the physical volumes used by your logical volumes:

- The /sbin/pvscan utility lists all physical volumes in your system.

- The /sbin/pvchange utility changes attributes of a physical volume.

- The /sbin/pvcreate utility creates a new physical volume.

- The /sbin/pvdata utility displays debugging information for a physical
volume.

- The /sbin/pvdisplay utility displays attribute information for a physical
volume.

- The /sbin/pvmove utility moves data from one physical volume to another
within a volume group. For example, say you have a logical volume called vol1
in a volume group called big_disk, which has two physical volumes, /dev/hda7
and /dev/hdc1. You want to move the data off /dev/hda7 so you can replace it
with a new disk (or partition). In that case, first ensure that /dev/hdc1 has
enough space to hold all the data from /dev/hda7, then run the
/sbin/pvmove /dev/hda7 /dev/hdc1 command. This operation takes a considerable
amount of time (depending on the amount of data) and shouldn't be
interrupted.

USING VOLUME GROUP MANAGEMENT UTILITIES

To manage a volume group, which consists of at least one physical volume, you
can use the following utilities:

- The /sbin/vgscan utility scans all disks for volume groups, lists them, and
builds the /etc/lvmtab file and the other files in the /etc/lvmtab.d
directory, which are used by the LVM module.

- The /sbin/vgcfgbackup utility backs up a volume group descriptor area.

- The /sbin/vgcfgrestore utility restores a volume group descriptor area.

- The /sbin/vgchange utility changes attributes of a volume group. For
example, you can activate a volume group with the -a y option and deactivate
it with the -a n option.

- The /sbin/vgck utility checks the consistency of a volume group descriptor
area.

- The /sbin/vgcreate utility creates a new volume group.

- The /sbin/vgdisplay utility displays volume group information.

- The /sbin/vgexport utility makes an inactive volume group unknown to the
system so that you can remove its physical volumes.

- The /sbin/vgextend utility adds physical volumes to a volume group.

- The /sbin/vgimport utility imports a volume group that was previously
exported with the vgexport utility.

- The /sbin/vgmerge utility merges two volume groups.

- The /sbin/vgmknodes utility creates volume group directories and special
files.

- The /sbin/vgreduce utility shrinks a volume group by removing at least one
unused physical volume from the group.

- The /sbin/vgremove utility removes a volume group that has no logical
volumes and is inactive. If the volume group contains at least one logical
volume, you must deactivate and remove that volume first.

- The /sbin/vgrename utility renames a volume group.

- The /sbin/vgsplit utility splits a volume group.

USING LOGICAL VOLUME MANAGEMENT UTILITIES

The following utilities enable you to manage logical volumes:

- The /sbin/lvchange utility changes the attributes of a logical volume. For
example, you can activate a logical volume with the -a y option and
deactivate it with the -a n option. Once the volume is deactivated, you may
have to use the vgchange command before you can activate the volume group
again.

- The /sbin/lvcreate utility creates a new logical volume in an existing
volume group.

- The /sbin/lvdisplay utility displays the attributes of a logical volume.

- The /sbin/lvextend utility extends the size of a logical volume.

- The /sbin/lvreduce utility reduces the size of an existing, active logical
volume.

- The /sbin/lvremove utility removes an inactive logical volume.

- The /sbin/lvrename utility renames an existing logical volume.

- The /sbin/lvscan utility locates all logical volumes on your system.
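For example, the activation flags mentioned above can be exercised as
follows; this is a hedged sketch using this chapter's example volume name:

/sbin/lvchange -a n /dev/big_disk/vol1    # deactivate the logical volume
/sbin/lvchange -a y /dev/big_disk/vol1    # reactivate it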
USING LOGICAL VOLUME MANAGER UTILITIES

The following utilities give you control of the logical volume management
module itself:

- The /sbin/lvmchange utility changes attributes of the logical volume
manager. You shouldn't need this utility in normal operation.

- The /sbin/lvmcreate_initrd utility creates a bootable initial RAM disk
using a logical volume.

- The /sbin/lvmdiskscan utility scans all storage devices in your system that
can be used in logical volumes.

- The /sbin/lvmsadc utility collects read/write statistics of a logical
volume.

- The /sbin/lvmsar utility reports those read/write statistics to a log file.

Adding a new disk or partition to a logical volume

After a logical volume has been in use for a while, you eventually have to
add new disks to it as your system needs more space. Here I add a new disk
partition, /dev/hdc2, to the logical volume /dev/big_disk/vol1 created
earlier.

1. su to root and run /sbin/pvscan to view the state of all your physical
volumes. Here's a sample output:

pvscan -- reading all physical volumes (this may take a while...)
pvscan -- ACTIVE PV "/dev/hda7" of VG "big_disk" [12.84 GB / 2.84 GB free]
pvscan -- ACTIVE PV "/dev/hdc1" of VG "big_disk" [3.91 GB / 3.91 GB free]
pvscan -- total: 2 [16.75 GB] / in use: 2 [16.75 GB] / in no VG: 0 [0]

2. Run /sbin/vgdisplay big_disk to view the current settings for the big_disk
volume group. Here's a sample output:

--- Volume group ---
VG Name               big_disk
VG Access             read/write
VG Status             available/resizable
VG #                  0
MAX LV                256
Cur LV                0
Open LV               0
MAX LV Size           255.99 GB
Max PV                256
Cur PV                2
Act PV                2
VG Size               16.75 GB
PE Size               4 MB
Total PE              4288
Alloc PE / Size       0 / 0
Free PE / Size        4288 / 16.75 GB
VG UUID               p3N102-z7nM-xH86-DWw8-yn2J-Mw3Y-lshq62

As you can see here, the total volume group size is about 16GB.

3. Using the fdisk utility, change the new partition's system ID to 8e (Linux
LVM). Here's a sample /sbin/fdisk /dev/hdc session on my system:

Command (m for help): p
Disk /dev/hdc: 255 heads, 63 sectors, 1583 cylinders
Units = cylinders of 16065 * 512 bytes
Device Boot    Start    End    Blocks     Id  System
/dev/hdc1          1    510    4096543+   8e  Linux LVM
/dev/hdc2        511   1583    8618872+   83  Linux
Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): 8e
Changed system type of partition 2 to 8e (Linux LVM)
Command (m for help): p
Disk /dev/hdc: 255 heads, 63 sectors, 1583 cylinders
Units = cylinders of 16065 * 512 bytes
Device Boot    Start    End    Blocks     Id  System
/dev/hdc1          1    510    4096543+   8e  Linux LVM
/dev/hdc2        511   1583    8618872+   8e  Linux LVM
Command (m for help): v
62 unallocated sectors
Command (m for help): w
The partition table has been altered!

4. Run /sbin/pvcreate /dev/hdc2 to create a new physical volume from the
/dev/hdc2 partition. (Don't create a filesystem on the partition itself; the
filesystem lives on the logical volume, not on the underlying physical
volumes, and is resized in a later step.)

5. Run /sbin/vgextend big_disk /dev/hdc2 to add the partition to the big_disk
volume group. To verify that the disk partition was added to the volume
group, run the /sbin/vgdisplay big_disk command.
You should see output like the following:

--- Volume group ---
VG Name               big_disk
VG Access             read/write
VG Status             available/resizable
VG #                  0
MAX LV                256
Cur LV                1
Open LV               0
MAX LV Size           255.99 GB
Max PV                256
Cur PV                3
Act PV                3
VG Size               24.97 GB
PE Size               4 MB
Total PE              6392
Alloc PE / Size       4608 / 18 GB
Free PE / Size        1784 / 6.97 GB
VG UUID               VMttR1-Tl0e-I4js-oXmi-uE1e-hprD-iqhCIX

In this report, the volume group size has increased to about 25GB, because we
added approximately 8GB to the 16GB of existing volume space.

6. Unmount any logical volumes that use the volume group. In this example, I
run umount /dev/big_disk/vol1 to unmount the logical volume that uses the
big_disk volume group.

Note: If you get a "device busy" error when you try to unmount the
filesystem, you are either inside the filesystem mount point or at least one
user (or program) is currently using the filesystem. The best way out of such
a scenario is to take the system down to single-user mode from the system
console, using the /etc/rc.d/rc 1 command, and to stay out of the mount point
you are trying to unmount.

7. Increase the size of the logical volume. If the new disk partition is
(say) 8GB and you want to extend the logical volume by that amount, use the
/sbin/lvextend -L +8G /dev/big_disk/vol1 command. You should see output like
the following:

lvextend -- extending logical volume "/dev/big_disk/vol1" to 18 GB
lvextend -- doing automatic backup of volume group "big_disk"
lvextend -- logical volume "/dev/big_disk/vol1" successfully extended

8. After the logical volume has been successfully extended, resize the
filesystem accordingly:

- If you use a reiserfs filesystem, run
/sbin/resize_reiserfs -f /dev/big_disk/vol1. If the filesystem is already
mounted, run the same command without the -f option.

- If you use an ext2 filesystem, the following command resizes both the
filesystem and the volume itself:
/sbin/e2fsadm -L +8G /dev/big_disk/vol1

9. Mount the logical volume as usual. For example, if you use a reiserfs
filesystem, the following command mounts the logical volume:

mount /dev/big_disk/vol1 /vol1 -t reiserfs

If you use an ext2 filesystem, use -t ext2 instead.
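Putting the preceding steps together, the whole grow operation can be
captured in a short script. This is a hedged recap, not a general-purpose
tool: it hard-codes this chapter's example names (/dev/hdc2, big_disk, vol1,
a reiserfs filesystem, an 8GB extension), so adapt every value before use:

#!/bin/bash
# Grow /dev/big_disk/vol1 by 8GB using the new /dev/hdc2 partition.
set -e    # stop immediately if any step fails
/sbin/pvcreate /dev/hdc2
/sbin/vgextend big_disk /dev/hdc2
umount /dev/big_disk/vol1
/sbin/lvextend -L +8G /dev/big_disk/vol1
/sbin/resize_reiserfs -f /dev/big_disk/vol1
mount -t reiserfs /dev/big_disk/vol1 /vol1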
To reduce it\nby 2GB, first reduce the filesystem size:\nN If you use reiserfs filesystem, run the following commands:\n/sbin/resize_reiserfs -s -2G /dev/big_disk/vol1\n/sbin/lvreduce -L -1G /dev/big_disk/vol1\nN If you use ext2 filesystem for the logical volume, run the following\ncommand:\n/sbin/e2fsadm -L -2G /dev/big_disk/vol1 .\nLVM when matured and supported as a mainstream disk management solution\nunder Linux, increases storage reliability and eases storage administration under\nLinux’s belt of capabilities; thus, making good inroads towards enterprise comput-\ning. Because the enterprise IT managers are already looking at Linux, the technolo-\ngies that are required by them are likely to be fast tracked automatically because of\ncommercial interests. Supporting Linux for the enterprise is going to be a big busi-\nness in the future, so technologies like LVM will mature quickly. Being on the front\nwith such technology today, ensures that your skill set is high on demand. So don’t\nput off LVM if it isn’t yet mainstream in Linux; it’s simply coming to a job near you.\nUsing RAID, SAN, or Storage\nAppliances\nNo storage-management discussion can be complete with talking about Redundant\nArray of Independent Disks (RAID), Storage-Area Networks (SANs), or the storage\nappliance solutions available today. Most of these solutions involve vendor-specific\nhardware that isn’t Linux-specific, so I won’t go in-depth on those issues. \nUsing Linux Software RAID\nI have never got around to using the software RAID capabilities of Linux because\nsomething about a software RAID bothers me. I just can’t convince myself to play\n66\nPart I: System Performance\n" }, { "page_number": 90, "text": "with software RAID because I have used hardware RAID devices extensively and\nfound them to be very suitable solutions. In almost all situations where RAID is a\nsolution, someone is willing to pay for the hardware. Therefore, I can’t recommend\nsoftware RAID as a tested solution with anywhere near the confidence I have in\nhardware RAID.\nUsing Hardware RAID\nHardware RAID has been around long enough to become very reliable and many\nhardware raid solutions exist for Linux. One of my favorite solutions is IBM’s\nServerRAID controller that can interface with IBM’s external disk storage devices\nsuch as EXP 15 and EXP 200. Similar solutions are available from other vendors.\nA hardware RAID solution typically uses ultra-wide SCSI disks and an internal\nRAID controller card for Linux. (Most RAID vendors now support native Linux \ndrivers.)\nNo matter which RAID (hardware or software) you use, you must pick a RAID\nlevel that is suitable for your needs. Most common RAID levels are 1 and 5. RAID 1\nis purely disk mirroring. To use disk mirroring RAID 1 with 100 GB of total space,\nyou need 200 GB of disk space.\nRAID 5 is almost always the best choice. If you use N devices where the smallest\nhas size S, the size of the entire array is (N-1)*S. This “missing” space is used for\nparity (redundancy) information. (Use same-size media to ensure that your disk\nspace isn’t wasted.)\nUsing Storage-Area Networks (SANs)\nStorage-Area Networking (SAN) is the new Holy Grail of storage solutions.\nCompanies like EMC, IBM, Compaq, and Storage Networks are the SAN experts.\nTypically, a SAN solution consists of dedicated storage devices that you place in a\nfiver channel network and the storage is made available to your Linux systems via\ndedicated switching hardware and fiber channel interface cards. 
Using Storage-Area Networks (SANs)

Storage-Area Networking (SAN) is the new Holy Grail of storage solutions.
Companies like EMC, IBM, Compaq, and Storage Networks are the SAN experts.
Typically, a SAN solution consists of dedicated storage devices placed on a
Fibre Channel network; the storage is made available to your Linux systems
via dedicated switching hardware and Fibre Channel interface cards. Generally
speaking, SAN is for the enterprise world and isn't yet practical for small-
to mid-range organizations.

If you co-locate your Linux systems in a well-known data center, such as
those provided by large ISPs like Exodus, Global Center, and Globix, chances
are you will find SAN offered as a value-added service. This may be one way
to avoid paying for expensive SAN hardware and still have access to it. I
know that Storage Networks provides such services in major ISP locations.
They also have fiber rings throughout the US, which means you can make your
disks in New York appear in California with negligible latency.

Using Storage Appliances

Storage appliances are no strangers to network and system administrators.
Today you can buy dedicated storage appliances that hook up to your 10, 100,
or 1000 Mb Ethernet and provide RAIDed storage services. These devices are
usually managed remotely over the Web. They are good for small- to mid-range
organizations and are often very easy to configure and manage.

Using a RAM-Based Filesystem

If you are creating storage space for a small system, a temporary, small
filesystem in RAM (a ramfs for short) can provide high-speed access. This
filesystem is relatively small because, by default, the maximum RAM a ramfs
can use is one-half the total RAM on your system. So if you have 2GB of RAM,
a ramfs can use only 1GB. Because I haven't yet seen systems with more than
4GB of RAM, even a 2GB ramfs is really small compared with today's large
disk-based filesystems. The ramfs is perfect for many small files that must
be accessed fast. For example, I use a ramfs for a set of small images used
on a heavily accessed Web site.

To use a ramfs, you must enable ramfs support in the kernel:

1. Get the latest Linux kernel source from www.kernel.org and, as root,
extract it into the /usr/src/linux-version directory, where version is the
current version of the kernel. Here I assume this to be 2.4.1. Run make
menuconfig from that directory to configure the kernel.

2. Select the File systems submenu. Using the spacebar, select Simple
RAM-based file system support to be included as a kernel module, and exit the
submenu.

3. Ensure that all the other kernel features you use are also selected as
usual (see the kernel-tuning chapter for details).

4. Exit the main menu and save the kernel configuration.

5. Run the make dep command, as suggested by the menuconfig program.

6. Run make bzImage to create the new kernel. Then run make modules and make
modules_install to install the new modules in the appropriate locations.

7. Change to the arch/i386/boot directory. If your hardware architecture
isn't Intel x86, replace i386 accordingly; you may need further instructions
from a kernel HOWTO document to compile and install your flavor of the
kernel. I assume that most readers are i386-based.

8. Copy the bzImage to /boot/vmlinuz-2.4.1 and edit the /etc/lilo.conf file
to include a new configuration such as the following:

image=/boot/vmlinuz-2.4.1
label=linux3
read-only
root=/dev/hda1

9. Run the /sbin/lilo command to reconfigure LILO, and reboot your system. At
the lilo prompt, enter linux3 to boot the new kernel. If you have any
problems, you should still be able to reboot to your standard Linux kernel,
which should remain the default automatically.

10. After you have booted the new kernel, you are ready to use the ramfs
capability.
Create a directory called ramdrive, using the mkdir /ramdrive command.

11. Mount the ramfs filesystem, using the mount -t ramfs none /ramdrive
command.

You are all set to write files to /ramdrive as usual.

Caution: When the system is rebooted or you unmount the filesystem, all its
contents are lost. That is why a ramfs should be only a temporary space for
high-speed access.

Note: Because ramfs isn't really a block device, programs such as df and du
can't see it. You can verify that you are really using RAM by running the
cat /proc/mounts command and looking for an entry such as the following:

none /ram ramfs rw 0 0

You can pass options with -o when mounting the filesystem, just as when
mounting a regular disk-based filesystem. For example, to mount the ramfs
filesystem read-only, you can use the -o ro option. You can also specify the
special options maxsize=n, where n is the number of kilobytes to allocate for
the filesystem in RAM; maxfiles=n, where n is the maximum number of files
allowed in the filesystem; and maxinodes=n, where n is the maximum number of
inodes (the default, 0, means no limit).
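For example, to apply the options just described and then confirm the mount,
you might do the following; the size and file limits are arbitrary
illustrations:

mount -t ramfs none /ramdrive -o maxsize=65536,maxfiles=10000
grep ramfs /proc/mounts    # verify the mount really is a ramfs

This caps the filesystem at roughly 64MB of RAM and 10,000 files.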
If you run a Web server, you should find many uses for a RAM-based
filesystem. Elements of your Web site that aren't too big, such as commonly
used images and files, can be kept in the ramfs filesystem. You can write a
simple shell script to copy them from their original location on each reboot;
Listing 3-3 shows one.

Listing 3-3: make_ramfs.sh

#!/bin/sh
#
# Simple script to create a ramfs filesystem
# on $MOUNTPOINT (which must exist).
#
# It copies files from $ORIG_DIR to $MOUNTPOINT
# and changes ownership of $MOUNTPOINT to
# $USER and $GROUP.
#
# Change the values of these variables to suit
# your needs.
MOUNTPOINT=/ram
ORIG_DIR=/www/commonfiles
USER=httpd
GROUP=httpd
MOUNTCMD=/bin/mount
CHOWN=/bin/chown
CP=/bin/cp

echo -n "Creating ramfs filesystem in $MOUNTPOINT "
$MOUNTCMD -t ramfs none $MOUNTPOINT
echo "done."

echo -n "Copying $ORIG_DIR to $MOUNTPOINT ... "
$CP -r $ORIG_DIR $MOUNTPOINT
echo "done."

echo -n "Changing ownership to $USER:$GROUP for $MOUNTPOINT ..."
$CHOWN -R $USER:$GROUP $MOUNTPOINT
echo "done."

To use this script on your system, do the following:

1. Create the make_ramfs.sh script in the /usr/local/scripts directory.
(Create /usr/local/scripts if you don't have one.)

2. Edit the /etc/rc.d/rc.local file and append the following line to it:

/usr/local/scripts/make_ramfs.sh

3. Create a directory called ram, using the mkdir /ram command. If the files
you want to load into RAM live anywhere other than /www/commonfiles, modify
the value of the ORIG_DIR variable in the script. For example, if your files
are in the /www/mydomain/htdocs/common directory, set the variable to that
directory.

4. If you run your Web server under any username and group other than httpd,
change the USER and GROUP variable values accordingly. For example, if you
run Apache as nobody (user and group), set USER=nobody and GROUP=nobody.

5. Assuming you use the Apache Web server, create an alias in your httpd.conf
file such as the following:

Alias /commonfiles/ "/ram/commonfiles/"

Whenever the Apache Web server needs to access /commonfiles/*, it now uses
the version in RAM, which should be substantially faster than the files
stored in the original location. Remember, the RAM-based version disappears
whenever you reboot or unmount the filesystem, so never update anything there
unless you also copy the contents back to a disk-based directory.

Caution: If you mount a ramfs filesystem with a command such as
mount -t ramfs none /ram, copy contents to it, and later rerun the same mount
command, the contents are wiped out and the filesystem is remounted. The
/proc/mounts file then shows multiple entries for the same mount point, which
causes problems in unmounting the device. If you must regain the memory for
other use, you must reboot. Watch for this problem to be fixed in later Linux
releases.

Summary

In this chapter you learned how to tune your disks and filesystems. You
learned to tune your IDE/EIDE drives for better performance, and to enhance
ext2 performance along with using journaling filesystems like ReiserFS,
logical volume management, and RAM-based filesystems.

Part II: Network and Service Performance

CHAPTER 4: Network Performance
CHAPTER 5: Web Server Performance
CHAPTER 6: E-Mail Server Performance
CHAPTER 7: NFS and Samba Server Performance

Chapter 4

Network Performance

IN THIS CHAPTER

- Tuning your network
- Segmenting your network
- Balancing the traffic load using round-robin DNS
- Using IP accounting

THE NETWORK DEVICES (such as network interface cards, hubs, switches, and
routers) that you choose for your network have a big effect on its
performance, so it's important to choose appropriate network hardware.
Because network hardware is cheap today, using high-performance PCI-based
NICs or 100Mb switches is no longer a pipe dream for network administrators.
Like the hardware, high-speed bandwidth is also reasonably cheap; having a T1
connection at the office is no longer a status symbol for network
administrators, and burstable T3 lines are now available in many places. So
what is left for network tuning? The very design of the network, of course!
In this chapter I discuss how you can design high-performance networks for
both office and public use. However, the Ethernet Local Area Network (LAN)
tuning discussion is limited to small- to mid-range offices where the maximum
number of users is fewer than a thousand or so; for larger-scale networks,
you should consult books dedicated to large networking concepts and
implementations. This chapter also covers a Web network design that is
scalable and performs well under heavy load.

Tuning an Ethernet LAN or WAN

Most Ethernet LANs start with at least one hub, with a handful of PCs
connected to it.

Figure 4-1: A small Ethernet LAN

As the company grows bigger, the small LAN starts to look like the one shown
in Figure 4-2, with a second hub and a few more machines (and perhaps a Mac
or two).

Figure 4-2: A small but growing Ethernet LAN

As the company prospers, the number of people and computers grows, and
eventually you have a network of many cascading hubs, as shown in Figure 4-3.

Figure 4-3: A not-so-small Ethernet LAN

In my experience, when a network of cascading Ethernet hubs reaches about 25
or more users, it typically has enough diverse users and tasks that
performance starts to degrade. For example, I have been called in many times
to analyze networks that started degrading after adding only a few more
machines.
Often those "few more machines" were run by "network-heavy" users, such as
graphic artists who shared or downloaded huge art and graphics files
throughout the day as part of their work or research. Today it's even easier
to saturate a 10Mb Ethernet with live audio/video feeds (or other
bandwidth-hungry applications) that office users sometimes run on their
desktops. So it's very important to design a LAN that performs well under
heavy load, so everyone's work gets done fast.

Although commonly used, Ethernet hubs are not the best way to expand a LAN to
support more users. Network expansion should be well planned and implemented
with appropriate hardware; the following sections discuss how you can do
that.

Using network segmentation technique for performance

The network shown in Figure 4-3 has a major problem: it's a single Ethernet
segment, put together by placing a group of hubs in cascade, which means that
all the computers on the network see all the traffic. So when a user in the
production department copies a large file from another user next to her in
the same department, a computer in the marketing department is deprived of
the bandwidth to do something else. Figure 4-4 shows a better version of the
same network.

Figure 4-4: A segmented Ethernet LAN, in which the management/administration
(192.168.1.0), development/production (192.168.2.0), and marketing/sales
(192.168.3.0) departments connect to the eth0, eth1, and eth2 interfaces of a
network gateway.

Here the departments are segmented into different IP networks and
interconnected by a network gateway. This gateway can be a Red Hat Linux
system with IP forwarding turned on and a few static routing rules to
implement the following standard routing policy:

IF the source and destination of a packet are within the same network THEN
    DO NOT FORWARD the traffic to any other attached network
ELSE
    FORWARD the traffic to the appropriate attached network only
END

Here's an example. John in the marketing department wants to access a file
from Jennifer, who works in the same department. When John accesses
Jennifer's shared drive, the IP packets his system transmits and receives
aren't seen by anyone in the management/administration or
development/production departments. So if the file is huge, requiring three
minutes to transfer, no one in the other departments suffers network
degradation. Of course, marketing personnel who are accessing the network at
the time of the transfer do see performance degrade, but you can reduce even
that degradation by using switching Ethernet hardware instead of simple
Ethernet hubs (I cover switches in a later section).

The network gateway computer in Figure 4-4 has three Ethernet interface (NIC)
cards; each card is connected to a different department (that is, network).
The marketing and sales department is on a Class C (192.168.3.0) network,
which means this department can have 254 host computers. Similarly, the other
departments have their own Class C networks.
Here are the steps needed to create such a setup.

Note: There are many ways to do this configuration. For example, instead of
using different Class C networks to create departmental segments, you can use
a set of Class B subnets (or even a set of Class C subnets, depending on the
size of your departments). In this example, I use different Class C networks
to keep things simple to understand.

1. For each department in your organization, create a Class C network.
Remember that a Class C network gives you a total of 254 usable IP addresses.
If a department has more than 254 computers, consider breaking it up into
multiple networks, or use a Class B network instead. In this example, I
assume that each of your departments has fewer than 254 computers and that
you have the three departmental segments shown in Figure 4-4, using the
192.168.1.0, 192.168.2.0, and 192.168.3.0 networks.

2. On the Red Hat Linux system designated as the network gateway, turn on IP
forwarding: run the /sbin/sysctl -w net.ipv4.ip_forward=1 command as root,
and add the same command at the end of your /etc/rc.d/rc.local script so that
IP forwarding is turned on whenever you reboot the system.

Note: You may already have IP forwarding turned on; to check, run the
cat /proc/sys/net/ipv4/ip_forward command. 1 means that IP forwarding is on;
0 means that it's off.

3. Create the /etc/sysconfig/network-scripts/ifcfg-eth0,
/etc/sysconfig/network-scripts/ifcfg-eth1, and
/etc/sysconfig/network-scripts/ifcfg-eth2 files, as shown here:

# Contents of the ifcfg-eth0 file
DEVICE=eth0
BROADCAST=192.168.1.255
IPADDR=192.168.1.254
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes

# Contents of the ifcfg-eth1 file
DEVICE=eth1
BROADCAST=192.168.2.255
IPADDR=192.168.2.254
NETMASK=255.255.255.0
NETWORK=192.168.2.0
ONBOOT=yes

# Contents of the ifcfg-eth2 file
DEVICE=eth2
BROADCAST=192.168.3.255
IPADDR=192.168.3.254
NETMASK=255.255.255.0
NETWORK=192.168.3.0
ONBOOT=yes

4. Connect each network to the proper Ethernet NIC on the gateway computer:
192.168.1.0 to eth0, 192.168.2.0 to eth1, and 192.168.3.0 to eth2. Once
they're connected, you can simply restart the machine, or bring up the
interfaces using the following commands from the console:

/sbin/ifconfig eth0 up
/sbin/ifconfig eth1 up
/sbin/ifconfig eth2 up

5. Set the default gateway for each of the networks. For example, all the
computers on the 192.168.1.0 network should set their default route to
192.168.1.254, the IP address associated with the eth0 device of the gateway
computer.

That's all there is to isolating each department into its own network. Now
traffic flows from one network to another only when needed, leaving each
department's bandwidth available for its own use most of the time.
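At this point you can verify the gateway configuration from its console.
These checks use only standard commands and the example addresses above:

cat /proc/sys/net/ipv4/ip_forward    # should print 1
/sbin/ifconfig | grep '^eth'         # eth0, eth1, and eth2 should be up
/sbin/route -n                       # one route per 192.168.x.0 network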
In such a case an Ethernet switch can work\nlike magic.\nThe major difference between an Ethernet hub and switch is that each port on a\nswitch is its own logical segment. A computer connected to a port on an Ethernet\nswitch has a full bandwidth to it and need not contend with other computers for\ncollisions. One of the main reasons you purchase a switch over a hub is for its\naddress-handling capabilities. Whereas a hub doesn’t look at the address of a data\npacket and just forwards data to all devices on the network, a switch should read\nthe address of each data packet and correctly forward the data to the intended\nrecipients. If the switch doesn’t correctly read the packet address and correctly for-\nward the data, it has no advantage over a hub. Table 4-1 lists the major differences\nbetween hub and switch.\nTABLE 4-1: DIFFERENCES BETWEEN AN ETHERNET HUB AND A SWITCH\nHub\nSwitch\nTotal network bandwidth is limited to \nTotal network bandwidth is determined by the \nthe speed of the hub; that is, A 10Base-T \nnumber of ports on the switch; that is, a 12 port \nhub provides 10Mb bandwidth, no matter \n100Mb switch can support up to 1200 Mbps \nhow many ports.\nbandwidth — this is referred to as the switch’s\nmaximum aggregate bandwidth.\n80\nPart II: Network and Service Performance\n" }, { "page_number": 104, "text": "Hub\nSwitch\nSupports half duplex communications \nSwitches that support full duplex \nlimiting the connection to the speed of \ncommunications offer the capability to double \nthe port; that is, 10Mb port provides a \nthe speed of each link; that is, from 100Mb \n10Mb link.\nto 200Mb.\nHop count rules limit the number of \nEnables users to greatly expand networks; there \nhubs that can be interconnected \nare no limits to the number of switches that \nbetween two computers.\ncan be interconnected between two computers.\nCheaper than switches\nSlightly more expensive than hubs.\nNo special hardware is needed on the devices that connect to an Ethernet switch.\nThe same network interface used for shared media 10Base-T hubs works with an\nEthernet switch. From that device’s perspective, connecting to a switched port is\njust like being the only computer on the network segment.\nOne common use for an Ethernet switch is to break a large network into seg-\nments. While it’s possible to attach a single computer to each port on an\nEthernet switch, it’s also possible to connect other devices such as a hub. If\nyour network is large enough to require multiple hubs, you could connect\neach of those hubs to a switch port so that each hub is a separate segment.\nRemember that if you simply cascade the hubs directly, the combined net-\nwork is a single logical Ethernet segment.\nUsing fast Ethernet\nThe traditional Ethernet is 10 Mbps, which simply isn’t enough in a modern busi-\nness environment where e-mail-based communication, Internet access, video con-\nferencing, and other bandwidth-intensive operations are more commonplace. The\n100 Mbps Ethernet is the way to go. However, 100 Mbps or “fast” Ethernet is still\nexpensive if you decide to use fast switches, too. I highly recommend that you\nmove towards a switched fast Ethernet. The migration path from 10 Mbps to 100\nMbps can be expensive if you have a lot of computers in your network. Each com-\nputer in your network must have 100 Mbps-capable NIC installed, which can be\nexpensive in cost, staff, and time. For a large LAN with hundreds of users, upgrade\none segment at a time. 
You can start by buying ten 100 Mbps dual-speed NICs; these cards support your existing 10 Mbps infrastructure and the upcoming 100 Mbps infrastructure seamlessly.

The fast Ethernet with switching hardware can bring a high degree of performance to your LAN. Consider this option if possible. If you have multiple departments to interconnect, consider an even faster solution between the departments. The emerging gigabit Ethernet is very suitable for connecting local area networks to form a wide area network (WAN).

Using a network backbone

If you are dealing with a mid-size network environment where hundreds of computers and multiple physical locations are involved, design a network backbone that carries network traffic between locations. Figure 4-5 shows one such network.

Figure 4-5: A WAN with a gigabit/fiber switched backbone

Here the four locations A, B, C, and D are interconnected using either a gigabit or fiber switched backbone. A large bandwidth capacity in the backbone has two benefits:

- It accommodates worst-case scenarios. A typical example is when the entire WAN is busy because most of the computers are transmitting and receiving data to and from the network. If the backbone is 10 Mb (or even 100 Mb), performance can degrade — and user perception of the slowdown varies widely.
- It makes your network amenable to expansion. For example, if location A decides to increase its load, the high bandwidth available at the backbone can handle the load.

Fiber optics work very well in enterprise networks as a backbone infrastructure. Fiber offers exceptional performance for high-bandwidth applications, and is extremely reliable and secure. Fiber isn’t susceptible to many of the sources of interference that can play havoc with copper-based cabling systems. Fiber is also considered to be more secure because it can’t be tapped unless you cut and splice the fiber strands — a task that is virtually impossible without detection. If you need to connect a set of buildings within a corporate complex or academic campus, then fiber optics offers the very best solution. While it’s possible to use fiber optics to connect PCs and printers in a LAN, only organizations with serious security concerns and extremely data-intensive applications regularly do so. Fiber-optic networks are expensive to implement, and their installation and maintenance demand a higher level of expertise. At a time when we can achieve 100 Mbps speed over copper cabling, it’s seldom cost-effective to use fiber optics for a small office network.

If you have mission-critical applications in your network that are accessed via the backbone, you must consider adding redundancy to your backbone so that if one route goes down because of an equipment failure or any other problem, an alternative path is available. Adding redundancy doesn’t come cheap, but it’s a must for those needing a high uptime percentage.

Understanding and controlling network traffic flow

Understanding how your network traffic flows is the key to determining how you can tune it for better performance.
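Before redesigning a network, it helps to measure the flows you already have. One quick way to do this (assuming the tcpdump package is installed, as it is on most Red Hat Linux systems) is to watch a segment from the gateway and note which hosts and services dominate:

/usr/sbin/tcpdump -i eth0 -n

Even a few minutes of output usually reveals the heavy talkers on a segment and whether their traffic really needs to cross into other segments.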
Take a look at the network segment shown in Figure 4-6. Here three Web servers are providing Web services to the Internet, and they share a network with an NFS server and a database server. What’s wrong with this picture? Well, several things are wrong. First of all, these machines are still using a dumb hub instead of a switch. Second, the NFS and database traffic is competing with the incoming and outgoing Web traffic. If a Web application needs database access, it generates database requests in response to a Web request from the Internet; this in turn reduces the bandwidth available for other incoming or outgoing Web requests, effectively making the network unnecessarily busy and less responsive. How can you solve such a problem? Using a traffic-control mechanism, of course! First determine what traffic can be isolated in this network. Naturally, the database and NFS traffic is only needed to service the Web servers. In such a case, NFS and database traffic should be isolated so that they don’t compete with Web traffic. Figure 4-7 shows a modified network diagram for the same network.

Figure 4-6: An inefficient Web network

Figure 4-7: An improved Web network

Here the database and the NFS server are connected to a switch that is connected to the second NIC of each Web server. The other NIC of each Web server is connected to a switch that is in turn connected to the load-balancing hardware. Now, when a Web request comes to a Web server, it’s serviced by the server without taking away from the bandwidth of other Web servers. The result is a tremendous increase in network efficiency, which trickles down to a more positive user experience.

After you have a good network design, your tuning focus should be shifted to the applications and services that you provide. In many cases, depending on your network load, you may have to consider deploying multiple servers of the same kind to implement a more responsive service. This is certainly true for the Web. In the following section I show you a simple-to-use load-balancing scheme using a DNS trick.

Balancing the traffic load using the DNS server

The idea is to share the load among multiple servers of a kind. This typically is used for balancing the Web load over multiple Web servers. This trick is called round-robin Domain Name Service.

Suppose you have two Web servers, www1.yourdomain.com (192.168.1.10) and www2.yourdomain.com (192.168.1.20), and you want to balance the load for www.yourdomain.com on these two servers by using the round-robin DNS trick. Add the following lines to your yourdomain.com zone file:

www1 IN A 192.168.1.10
www2 IN A 192.168.1.20
www IN CNAME www1
www IN CNAME www2

Restart your name server and ping the www.yourdomain.com host. You see the 192.168.1.10 address in the ping output. Stop and restart pinging the same host, and you’ll see the second IP address being pinged, because the preceding configuration tells the name server to cycle through the CNAME records for www.
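One caveat before you adopt this trick: the DNS specification forbids multiple CNAME records for the same name, and newer versions of BIND reject such zones. If your name server refuses the preceding zone file, you can get the same round-robin behavior with multiple A records instead:

www IN A 192.168.1.10
www IN A 192.168.1.20

BIND rotates the order of the addresses in its responses, so clients still cycle between the two servers.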
The www.\nyourdomain.com host is both www1.yourdomain.com and www2.yourdomain.com.\nNow, when someone enters www.yourdomain.com, the name server sends the\nfirst address once, then sends the second address for the next request, and keeps\ncycling between these addresses.\nOne of the disadvantages of the round-robin DNS trick is that the name server\ncan’t know which system is heavily loaded and which isn’t — it just blindly cycles.\nIf a server crashes or becomes unavailable for some reason, the round-robin DNS\ntrick still returns the broken server’s IP on a regular basis. This could be chaotic,\nbecause some people get to the sites and some won’t.\nIf your load demands better management and your server’s health is essential to\nyour operation, then your best choice is to get a hardware solution that uses the\nnew director products such as Web Director (www.radware.com/), Ace Director\n(www.alteon.com/), or Local Director (www.cisco.com/). I have used both Local\nDirector and Web Director with great success.\nIP Accounting\nAs you make headway in tuning your network, you also have a greater need to\ndetermine how your bandwidth is used. Under Linux, you can use the IP account-\ning scheme to get that information.\nChapter 4: Network Performance \n85\n" }, { "page_number": 109, "text": "Knowing how your IP bandwidth is used helps you determine how to make\nchanges in your network to make it more efficient. For example, if you discover\nthat one segment of your network has 70 percent of its traffic going to a different\nsegment on average, you may find a way to isolate that traffic by providing a direct\nlink between the two networks. IP accounting helps you determine how IP packets\nare passed around in your network.\nTo use IP accounting, you must configure and compile the kernel with network\npacket-filtering support. If you use the make menuconfig command to configure\nthe kernel, you can find the Network packet filtering (replaces ipchains)\nfeature under the Networking optionssubmenu. Build and install the new kernel\nwith packet filtering support (See the Tuning Kernel chapter for details on compil-\ning and installing a custom kernel).\nIP accounting on a Linux network gateway\nHere I assume that you want to have a network gateway among three networks —\n192.168.1.0 (eth0), 192.168.2.0 (eth1), and 207.183.15.0 (eth2). Here, the first\ntwo networks are your internal department and the third one is the uplink network\nto your Internet service provider.\nNow you want to set up IP accounting rules that tell you how many packets\ntravel between the 192.168.1.0 network and the Internet. The IP accounting rules\nthat you need are as follows:\n/sbin/iptables -A FORWARD -i eth2 -d 192.168.1.0/24\n/sbin/iptables -A FORWARD -o eth2 -s 192.168.1.0/24\nHere the first states that a new rule be appended (-A) to the FORWARD chain such\nthat all packets destined for the 192.168.1.0 network be counted when the pack-\nets travel via the eth2 interface of the gateway machine. Remember, the eth2 inter-\nface is connected to the ISP network (possibly via a router, DSL device, or Cable\nmodem). The second rule states that another rule be appended to the FORWARD chain\nsuch that any IP packet originated from the 192.168.1.0 network and passing\nthrough the eth2 interface be counted. These two rules effectively count all IP\n86\nPart II: Network and Service Performance\nIP accounting on a Linux system that isn’t a network gateway?\nYes, technically you can do it. 
If your system is not a gateway — it doesn’t do IP\nforwarding and /proc/sys/net/ipv4/ip_forward is set to 0 — you can run IP\naccounting if you place the NIC in promiscuous mode, use the /sbin/ifconfig\neth0 up promisc command, and then apply IP accounting rules. For the sake of\nnetwork efficiency (and your sanity), however, I highly recommend that you try IP\naccounting on a Linux network gateway system instead.\n" }, { "page_number": 110, "text": "packets (whether incoming or outgoing) that move between the 192.168.1.0 net-\nwork and the Internet. To do the same for the 192.168.2.0 network, use the fol-\nlowing rules:\n/sbin/iptables -A FORWARD -i eth2 -d 192.168.2.0/24\n/sbin/iptables -A FORWARD -o eth2 -s 192.168.2.0/24\nAfter you have set up the preceding rules, you can view the results from time to\ntime by using the /sbin/iptables -L –v -n command. I usually open an SSH ses-\nsion to the network gateway and run /usr/bin/watch –n 3600 /sbin/iptables\n-L –v -n to monitor the traffic on an hourly basis.\nIf you are interested in finding out what type of network services are requested\nby the departments that interact with the Internet, you can do accounting on that,\ntoo. For example, if you want to know how much of the traffic passing through the\neth2 interface is Web traffic, you can implement a rule such as the following:\n/sbin/iptables -A FORWARD -o eth0 -m tcp -p tcp --dport www\nThis records traffic meant for port 80 (www port in /etc/services). You can\nadd similar rules for other network services found in the /etc/services files.\nSummary\nThe state of your network performance is the combined effect of your operating\nsystem, network devices, bandwidth, and the overall network design you choose to\nimplement. \nChapter 4: Network Performance \n87\n" }, { "page_number": 111, "text": "" }, { "page_number": 112, "text": "Chapter 5\nWeb Server Performance\nIN THIS CHAPTER\nN Controlling Apache\nN Accelerating Web performance\nTHE DEFAULT WEB SERVER software for Red Hat Linux is Apache — the most popular\nWeb server in the world. According to Apache Group (its makers), the primary mis-\nsion for Apache is accuracy as an HTTP protocol server first; performance (per se)\nis second. Even so, Apache offers good performance in real-world situations — and\nit continues to get better. As with many items of technology, proper tuning can give\nan Apache Web server excellent performance and flexibility. In this chapter, I focus\non Apache tuning issues — and introduce you to the new kernel-level HTTP daemon\n(available for the 2.4 and later kernels) that can speed the process of Web design. \nApache architecture makes the product extremely flexible. Almost all of its pro-\ncessing — except for core functionality that handles requests and responses — hap-\npens in individual modules. This approach makes Apache easy to compile and\ncustomize.\nIn this book (as in my other books),a common thread running through all of\nthe advice that bears repeating: Always compile your server software if you\nhave access to the source code. I believe that the best way to run Apache is to\ncompile and install it yourself.Therefore my other recommendations in this\nsection assume that you have the latest distribution of the Apache source\ncode on your system.\nCompiling a Lean and Mean Apache\nCompiling an efficient server means removing everything you don’t need and\nretaining only the functions you want Apache to perform. 
Fortunately, the module-\nbased Apache architecture makes an efficient — and highly customized — installa-\ntion relatively easy. Here’s how:\n89\n" }, { "page_number": 113, "text": "1. Know what Apache modules you currently have; decide whether you\nreally need them all. To find out what modules you currently have\ninstalled in Apache binary code (httpd), run the following command\nwhile logged in as root:\n/usr/local/apache/bin/httpd –l\nChange the path (/usr/local/apache) if you have installed Apache in\nanother location. This command displays all the Apache modules cur-\nrently compiled in the httpd binary. For example, the default Apache\ninstallation compiles the following modules:\nCompiled-in modules:\nhttp_core.c\nmod_env.c\nmod_log_config.c\nmod_mime.c\nmod_negotiation.c\nmod_status.c\nmod_include.c\nmod_autoindex.c\nmod_dir.c\nmod_cgi.c\nmod_asis.c\nmod_imap.c\nmod_actions.c\nmod_userdir.c\nmod_alias.c\nmod_access.c\nmod_auth.c\nmod_setenvif.c\nsuexec: disabled; invalid wrapper /workspace/h1/bin/suexec\nIf you installed a default Apache binary, you can also find out what mod-\nules are installed by default by running the configuration script using the\nfollowing command:\n./configure --help\nThis command displays command-line help, which are explained in\nTable 5-1.\nTABLE 5-1: THE OPTIONS FOR THE CONFIGURE SCRIPT\nOption\nMeaning\n--cache-file=FILE\nCache test results in FILE\n--help\nPrint this message\n90\nPart II: Network and Service Performance\n" }, { "page_number": 114, "text": "Option\nMeaning\n--no-create\nDo not create output files\n--quiet or --silent\nDo not print ‘checking...’ messages\n--version\nPrint the version of autoconf that created configure\nDirectory and filenames:\n--prefix=PREFIX\nInstall architecture-independent files in PREFIX\n[/usr/local/apache2]\n--exec-prefix=EPREFIX\nInstall architecture-dependent files in EPREFIX [same\nas prefix]\n--bindir=DIR\nUser executables in DIR [EPREFIX/bin]\n--sbindir=DIR \nSystem admin executables in DIR [EPREFIX/sbin]\n--libexecdir=DIR\nProgram executables in DIR [EPREFIX/libexec]\n--datadir=DIR\nRead-only architecture-independent data in DIR\n[PREFIX/share]\n--sysconfdir=DIR\nRead-only single-machine data in DIR [PREFIX/etc]\n--sharedstatedir=DIR\nModifiable architecture-independent data in DIR\n[PREFIX/com]\n--localstatedir=DIR\nModifiable single-machine data in DIR [PREFIX/var]\n--libdir=DIR\nObject code libraries in DIR [EPREFIX/lib]\n--includedir=DIR\nC header files in DIR [PREFIX/include]\n--oldincludedir=DIR\nC header files for non-GCC in DIR [/usr/include]\n--infodir=DIR\nInfo documentation in DIR [PREFIX/info]\n--mandir=DIR\nman documentation in DIR [PREFIX/man]\n--srcdir=DIR\nFind the sources in DIR [configure dir or ...]\n--program-prefix=PREFIX\nPrepend PREFIX to installed program names\n--program-suffix=SUFFIX\nAppend SUFFIX to installed program names\n--program-transform-\nRun sed PROGRAM on installed program names\nname=PROGRAM\n--build=BUILD\nConfigure for building on BUILD [BUILD=HOST]\n--host=HOST\nConfigure for HOST\n--target=TARGET\nConfigure for TARGET [TARGET=HOST]\n--disable-FEATURE\nDo not include FEATURE (same as --enable-\nFEATURE=no)\n--enable-FEATURE[=ARG]\nInclude FEATURE [ARG=yes]\nContinued\nChapter 5: Web Server Performance\n91\n" }, { "page_number": 115, "text": "TABLE 5-1: THE OPTIONS FOR THE CONFIGURE SCRIPT (Continued)\nOption\nMeaning\n--with-PACKAGE[=ARG]\nUse PACKAGE [ARG=yes]\n--without-PACKAGE\nDo not use PACKAGE (same as --with-PACKAGE=no)\n--x-includes=DIR\nX include files are 
in DIR\n--x-libraries=DIR\nX library files are in DIR\n--with-optim=FLAG\nObsolete (use OPTIM environment variable)\n--with-port=PORT\nPort on which to listen (default is 80)\n--enable-debug\nTurn on debugging and compile-time warnings\n--enable-maintainer-mode\nTurn on debugging and compile-time warnings\n--enable-layout=LAYOUT\nUse the select directory layout\n--enable-modules=\nEnable the list of modules specified\nMODULE-LIST\n--enable-mods-\nEnable the list of modules as shared objects\nshared=MODULE-LIST\n--disable-access\nHost-based access control\n--disable-auth\nUser-based access control\n--enable-auth-anon\nAnonymous user access\n--enable-auth-dbm\nDBM-based access databases\n--enable-auth-db\nDB-based access databases\n--enable-auth-digest\nRFC2617 Digest authentication\n--enable-file-cache\nFile cache\n--enable-dav-fs\nDAV provider for the filesystem\n--enable-dav\nWebDAV protocol handling\n--enable-echo\nECHO server\n--enable-charset-lite\nCharacter set translation\n--enable-cache\nDynamic file caching\n--enable-disk-cache\nDisk caching module\n--enable-ext-filter\nExternal filter module\n--enable-case-filter\nExample uppercase conversion filter\n--enable-generic-\nExample of hook exporter\nhook-export\n--enable-generic-\nExample of hook importer\nhook-import\n92\nPart II: Network and Service Performance\n" }, { "page_number": 116, "text": "Option\nMeaning\n--enable-optional-\nExample of optional function importer\nfn-import\n--enable-optional-\nExample of optional function exporter\nfn-export\n--disable-include\nServer-Side Includes\n--disable-http\nHTTP protocol handling\n--disable-mime\nMapping of file-extension to MIME\n--disable-log-config\nLogging configuration\n--enable-vhost-alias\nMass -hosting module\n--disable-negotiation\nContent negotiation\n--disable-dir\nDirectory request handling\n--disable-imap\nInternal imagemaps\n--disable-actions\nAction triggering on requests\n--enable-speling\nCorrect common URL misspellings\n--disable-userdir\nMapping of user requests\n--disable-alias\nTranslation of requests\n--enable-rewrite\nRegex URL translation\n--disable-so\nDSO capability\n--enable-so\nDSO capability\n--disable-env\nClearing/setting of ENV vars\n--enable-mime-magic\nAutomatically determine MIME type\n--enable-cern-meta\nCERN-type meta files\n--enable-expires\nExpires header control\n--enable-headers\nHTTP header control\n--enable-usertrack\nUser-session tracking\n--enable-unique-id\nPer-request unique IDs\n--disable-setenvif\nBase ENV vars on headers\n--enable-tls\nTLS/SSL support\n--with-ssl\nUse a specific SSL library installation\n--with-mpm=MPM\nChoose the process model for Apache to use: \nMPM={beos threaded prefork \nspmt_os2 perchild}\nContinued\nChapter 5: Web Server Performance\n93\n" }, { "page_number": 117, "text": "TABLE 5-1: THE OPTIONS FOR THE CONFIGURE SCRIPT (Continued)\nOption\nMeaning\n--disable-status\nProcess/thread monitoring\n--disable-autoindex\nDirectory listing\n--disable-asis\nAs-is filetypes\n--enable-info\nServer information\n--enable-suexec\nSet UID and GID for spawned processes\n--disable-cgid\nCGI scripts\n--enable-cgi\nCGI scripts\n--disable-cgi\nCGI scripts\n--enable-cgid\nCGI scripts\n--enable-shared[=PKGS]\nBuild shared libraries [default=no]\n--enable-static[=PKGS]\nBuild static libraries [default=yes]\n--enable-fast-\nOptimize for fast installation [default=yes]\ninstall[=PKGS]\n--with-gnu-ld\nAssume the C compiler uses GNU lD [default=no]\n--disable-libtool-lock\nAvoid locking (might break parallel 
builds)
--with-program-name
Alternate executable name
--with-suexec-caller
User allowed to call SuExec
--with-suexec-userdir
User subdirectory
--with-suexec-docroot
SuExec root directory
--with-suexec-uidmin
Minimal allowed UID
--with-suexec-gidmin
Minimal allowed GID
--with-suexec-logfile
Set the logfile
--with-suexec-safepath
Set the safepath
--with-suexec-umask
umask for suexec’d process

2. Determine whether you need the modules that you have compiled in the Apache binary (httpd). By removing unnecessary modules, you achieve a performance boost (because of the reduced size of the binary code file) and — potentially, at least — greater security.

For example, if you plan never to run CGI programs or scripts, you can remove the mod_cgi module — which reduces the size of the binary file and also shuts out potential CGI attacks, making a more secure Apache environment. If Apache can’t service CGI requests, all CGI risk goes to zero. To know which modules to keep and which ones to remove, know how each module functions; you can obtain this information at the www.apache.org Web site. Reading the Apache documentation for each module can help you determine whether you have any use for a module or not.

Make a list of modules that you can do without and continue to the next step.

3. After you decide which default modules you don’t want to keep, simply run the configuration script from the top Apache directory, specifying the --disable-module option for each module you want to remove. Here’s an example:

./configure --prefix=/usr/local/apache \
--disable-cgi \
--disable-imap \
--disable-userdir \
--disable-autoindex \
--disable-status

In this list, the configure script must install Apache in /usr/local/apache, using the --prefix option; it’s also told to disable the CGI module (mod_cgi), the server-side image-mapping module (mod_imap), the module that supports the user/public_html directory (mod_userdir), the automatic directory-indexing module (mod_autoindex), and the server-status module (mod_status).

4. After you have run the appropriate configuration command in the previous step, you can run the make; make install commands to build and install the lean and mean Apache server.

Tuning Apache Configuration

When you configure an Apache source using the configure script with the --prefix option, this process specifies the primary configuration file as the httpd.conf file (stored in the conf directory of your Apache installation directory). The httpd.conf file consists of a set of Apache directives, some of which are designed to help you fine-tune Apache performance. This section covers those Apache directives.

Controlling Apache processes

Use the following directives to control how Apache executes in your system. Using these directives also gives you control of how Apache uses resources on your system.
For example, you can decide how many child server processes to run on your system, or how many threads you should enable Apache to use on a Windows platform.

A few things to remember when configuring these directives:

- The more processes you run, the more load your CPUs must handle.
- The more processes you run, the more RAM you need.
- The more processes you run, the more operating-system resources (such as file descriptors and shared buffers) you use.

Of course, more processes could also mean more requests serviced — hence more hits for your site. So set these directives by balancing experimentation, requirements, and available resources.

StartServers
StartServers is set to 3 by default, which tells Apache to start three child servers as it starts.

Syntax: StartServers number
Default setting: StartServers 3
Context: Server config

You can start more servers if you want, but Apache is pretty good at increasing the number of child processes as needed based on load. So, changing this is not required.

SENDBUFFERSIZE
This directive sets the size of the TCP send buffer to the number of bytes specified.

Syntax: SendBufferSize bytes
Context: Server config

On a high-performance network, you may increase server performance if you set this directive to a higher value than the operating-system defaults.

LISTENBACKLOG
This directive defends against a known type of security attack called denial of service (DoS) by enabling you to set the maximum length of the queue that handles pending connections.

Syntax: ListenBacklog backlog
Default setting: ListenBacklog 511
Context: Server config

Increase this value if you detect that you are under a TCP SYN flood attack (a type of DoS attack); otherwise you can leave it alone.

TIMEOUT
In effect, the Web is really a big client/server system in which the Apache server responds to requests. The requests and responses are transmitted via packets of data. Apache must know how long to wait for a certain packet. This directive configures the time in seconds.

Syntax: TimeOut number
Default setting: TimeOut 300
Context: Server config

The time you specify here is the maximum time Apache waits before it breaks a connection. The default setting enables Apache to wait for 300 seconds before it disconnects itself from the client. If you are on a slow network, you may want to increase the timeout value to decrease the number of disconnects.

Currently, this timeout setting applies to:

- The total amount of time it takes to receive a GET request
- The amount of time between receipt of TCP packets on a POST or PUT request
- The amount of time between ACKs on transmissions of TCP packets in responses

MAXCLIENTS
This directive limits the number of simultaneous requests that Apache can service.

Syntax: MaxClients number
Default setting: MaxClients 256
Context: Server config

When you use the default MPM module (threaded), the number of simultaneous requests is equal to the value of this directive multiplied by the value of the ThreadsPerChild directive. For example, if you have MaxClients set to the default (256) and ThreadsPerChild set to the default (50), the Apache server can service a total of 12800 (256 x 50) requests. When using the prefork MPM, the maximum number of requests is limited by only the value of MaxClients.
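To make the multiplication concrete, here is a purely illustrative httpd.conf fragment for the threaded MPM that would allow 4,000 simultaneous requests; the numbers are examples, not recommendations:

# 80 child processes x 50 threads per child = 4,000 simultaneous requests
ThreadsPerChild  50
MaxClients       80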
The default value (256) is the\nmaximum setting for this directive. If you wish to change this to a higher number,\nyou will have to modify the HARD_SERVER_LIMIT constant in mpm_default.h file\nin the source distribution of Apache and recompile and reinstall it.\nMAXREQUESTSPERCHILD\nThis directive sets the number of requests a child process can serve before getting\nkilled.\nSyntax: MaxRequestsPerChild number\nDefault setting: MaxRequestsPerChild 0\nContext: Server config\nThe default value of 0 makes the child process serve requests forever. I do not\nlike the default value because it allows Apache processes to slowly consume large\namounts of memory when a faulty mod_perl script or even a faulty third-party\nApache module leaks memory. If you do not plan to run any third-party Apache\nmodules or mod_perl scripts, you can keep the default setting or else set it to a rea-\nsonable number. A setting of 30 ensures that the child process is killed after pro-\ncessing 30 requests. Of course, new child processes are created as needed.\nMAXSPARESERVERS\nThis directive lets you set the number of idle Apache child processes that you want\non your server.\nSyntax: MaxSpareServers number\nDefault setting: MaxSpareServers 10\nContext: Server config\n98\nPart II: Network and Service Performance\n" }, { "page_number": 122, "text": "If the number of idle Apache child processes exceeds the maximum number\nspecified by the MaxSpareServers directive, then the parent process kills off the\nexcess processes. Tuning of this parameter should only be necessary for very busy\nsites. Unless you know what you are doing, do not change the default.\nMINSPARESERVERS\nThe MinSpareServers directive sets the desired minimum number of idle child\nserver processes. An idle process is one that is not handling a request. If there are\nfewer idle Apache processes than the number specified by the MinSpareServers\ndirective, then the parent process creates new children at a maximum rate of 1 per\nsecond. Tuning of this parameter should only be necessary on very busy sites.\nUnless you know what you are doing, do not change the default.\nSyntax: MinSpareServers number\nDefault setting: MinSpareServers 5\nContext: Server config\nKEEPALIVE\nThe KeepAlive directive enables you to activate/deactivate persistent use of TCP\nconnections in Apache.\nSyntax: KeepAlive On | Off\nDefault setting: KeepAlive On\nContext: Server config\nOlder Apache servers (prior to version 1.2) may require a numeric value\ninstead of On/Off when using KeepAlive This value corresponds to the\nmaximum number of requests you want Apache to entertain per request. A\nlimit is imposed to prevent a client from taking over all your server\nresources.To disable KeepAlive in the older Apache versions, use 0 (zero)\nas the value.\nKEEPALIVETIMEOUT\nIf you have the KeepAlive directive set to on, you can use the KeepAliveTimeout\ndirective to limit the number of seconds Apache will wait for a subsequent request\nbefore closing a connection. After a request is received, the timeout value specified\nby the Timeout directive applies.\nSyntax: KeepAliveTimeout seconds\nChapter 5: Web Server Performance\n99\n" }, { "page_number": 123, "text": "Default setting: KeepAliveTimeout 15\nContext: Server config\nKEEPALIVETIMEOUT\nIf you have the KeepAlive directive set to on, you can use the KeepAliveTimeout\ndirective to limit the number of seconds Apache will wait for a subsequent request\nbefore closing a connection. 
After a request is received, the timeout value specified by the Timeout directive applies.

Syntax: KeepAliveTimeout seconds
Default setting: KeepAliveTimeout 15
Context: Server config

Controlling system resources

Apache is flexible in enabling you to control the amount of system resources (such as CPU time and memory) it consumes. These control features come in handy for making your Web server system more reliable and responsive. Often a typical hack attempts to make a Web server consume all available system resources until the system becomes unresponsive — in effect, halted. Apache provides a set of directives to combat such a situation.

RLIMITCPU
The RLimitCPU directive enables you to control the CPU usage of processes spawned by the Apache children, such as CGI scripts. The limit does not apply to the Apache children themselves or to any process created by the parent Apache server.

Syntax: RLimitCPU n | 'max' [ n | 'max' ]
Default setting: Not set; uses operating system defaults
Context: Server config, virtual host

The RLimitCPU directive takes two parameters. The first parameter sets a soft resource limit for all processes, and the second parameter, which is optional, sets the maximum resource limit. Note that raising the maximum resource limit requires that the server be running as root or in the initial startup phase. For each of these parameters, there are two possible values:

- n is the number of seconds per process.
- max is the maximum resource limit allowed by the operating system.

RLIMITMEM
The RLimitMEM directive limits the memory (RAM) usage of processes spawned by the Apache children, such as CGI scripts. The limit does not apply to the Apache children themselves or to any process created by the parent Apache server.

Syntax: RLimitMEM n | 'max' [ n | 'max' ]
Default setting: Not set; uses operating system defaults
Context: Server config, virtual host

The RLimitMEM directive takes two parameters. The first parameter sets a soft resource limit for all processes, and the second parameter, which is optional, sets the maximum resource limit. Note that raising the maximum resource limit requires that the server be started by the root user. For each of these parameters, there are two possible values:

- n is the number of bytes per process.
- max is the maximum resource limit allowed by the operating system.

RLIMITNPROC
The RLimitNPROC directive sets the maximum number of simultaneous processes per user ID that the Apache children can spawn.

Syntax: RLimitNPROC n | 'max' [ n | 'max' ]
Default setting: Unset; uses operating system defaults
Context: Server config, virtual host

The RLimitNPROC directive takes two parameters. The first parameter sets the soft resource limit for all processes, and the second parameter, which is optional, sets the maximum resource limit. Raising the maximum resource limit requires that the server be running as root or in the initial startup phase. For each of these parameters, there are two possible values:

- n is the number of processes per user ID.
- max is the maximum resource limit allowed by the operating system.

If your CGI processes are run under the same user ID as the server process, use of RLimitNPROC limits the number of processes the server can launch (or “fork”). If the limit is too low, you will receive a “Cannot fork process” type of message in the error log file.
In such a case, you should increase the limit\nor just leave it as the default.\nChapter 5: Web Server Performance\n101\n" }, { "page_number": 125, "text": "LIMITREQUESTBODY\nThe LimitRequestBody directive enables you to set a limit on the size of the HTTP\nrequest that Apache will service. The default limit is 0, which means unlimited. You\ncan set this limit from 0 to 2147483647 (2GB).\nSyntax: LimitRequestBody bytes\nDefault setting: LimitRequestBody 0\nContext: Server, virtual host, directory, .htaccess\nSetting a limit is recommended only if you have experienced HTTP-based denial\nof service attacks that try to overwhelm the server with large HTTP requests. This is\na useful directive to enhance server security.\nLIMITREQUESTFIELDS\nThe LimitRequestFields directive allows you to limit number of request header\nfields allowed in a single HTTP request. This limit can be 0 to 32767 (32K). This\ndirective can help you implement a security measure against large request based\ndenial of service attacks.\nSyntax: LimitRequestFields number\nDefault setting: LimitRequestFields 100\nContext: Server config\nLIMITREQUESTFIELDSIZE\nThe LimitRequestFieldsize directive enables you to limit the size (in bytes) of a\nrequest header field. The default size of 8190 (8K) is more than enough for most sit-\nuations. However, if you experience a large HTTP request-based denial of service\nattack, you can change this to a smaller number to deny requests that exceed the\nlimit. A value of 0 sets the limit to unlimited.\nSyntax: LimitRequestFieldsize bytes\nDefault setting: LimitRequestFieldsize 8190\nContext: Server config\nLIMITREQUESTLINE\nThe LimitRequestLine directive sets the limit on the size of the request line. This\neffectively limits the size of the URL that can be sent to the server. The default limit\nshould be sufficient for most situations. If you experience a denial of service attack\nthat uses long URLs designed to waste resources on your server, you can reduce the\nlimit to reject such requests.\n102\nPart II: Network and Service Performance\n" }, { "page_number": 126, "text": "Syntax: LimitRequestLine bytes\nDefault setting: LimitRequestLine 8190\nContext: Server config\nUsing dynamic modules\nApache loads all the precompiled modules when it starts up; however, it also pro-\nvides a dynamic module-loading and -unloading feature that may be useful on cer-\ntain occasions. When you use the following dynamic module directives, you can\nchange the list of active modules without recompiling the server.\nCLEARMODULELIST\nYou can use the ClearModuleList directive to clear the list of active modules and\nto enable the dynamic module-loading feature. Then use the AddModule directive to\nadd modules that you want to activate.\nSyntax: ClearModuleList\nDefault setting: None\nContext: Server config\nADDMODULE\nThe AddModule directive can be used to enable a precompiled module that is cur-\nrently not active. The server can have modules compiled that are not actively in\nuse. This directive can be used to enable these modules. The server comes with a\npreloaded list of active modules; this list can be cleared with the ClearModuleList\ndirective. Then new modules can be added using the AddModule directive.\nSyntax: AddModule module module ...\nDefault setting: None\nContext: Server config\nAfter you have configured Apache using a combination of the mentioned direc-\ntives, you can focus on tuning your static and dynamic contents delivery mecha-\nnisms. 
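To tie the preceding directives together, here is a hypothetical httpd.conf fragment that combines the process-control, keep-alive, and request-limit settings discussed in this section; every value is illustrative only and should be adjusted for your own hardware and traffic:

# Process control
StartServers          3
MinSpareServers       5
MaxSpareServers      10
MaxClients          150
MaxRequestsPerChild  30

# Persistent connections
KeepAlive On
KeepAliveTimeout 15

# Defensive request limits (see the LimitRequest* directives above)
LimitRequestBody      1048576
LimitRequestFields    50
LimitRequestFieldsize 8190
LimitRequestLine      8190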
In the following sections I show just that.

Speeding Up Static Web Pages

Although everyone is screaming about dynamic Web contents that are database-driven or served by fancy application servers, the static Web pages are still there. In fact, static Web pages aren’t likely to be completely replaced by dynamic content in the near future. Some dynamic contents systems even create dynamically and periodically generated static Web pages as cache contents for faster delivery. Because serving a static page usually is faster than serving a dynamic page, the static page is not going away soon. In this section I improve the speed of static page delivery using Apache and the new kernel HTTP module.

Reducing disk I/O for faster static page delivery

When Apache gets a request for a static Web page, it performs a directory tree search for .htaccess files to ensure that the requested page can be delivered to the Web browser. For example, say that an Apache server running on www.nitec.com receives a request such as http://www.nitec.com/training/linux/sysad/intro.html. Apache performs the following checks:

/.htaccess
%DocRoot%/.htaccess
%DocRoot%/training/.htaccess
%DocRoot%/training/linux/.htaccess
%DocRoot%/training/linux/sysad/.htaccess

where %DocRoot% is the document root directory set by the DocumentRoot directive in the httpd.conf file. So if this directory is /www/nitec/htdocs, then the following checks are made:

/.htaccess
/www/.htaccess
/www/nitec/.htaccess
/www/nitec/htdocs/.htaccess
/www/nitec/htdocs/training/.htaccess
/www/nitec/htdocs/training/linux/.htaccess
/www/nitec/htdocs/training/linux/sysad/.htaccess

Apache looks for the .htaccess file in each directory of the translated (from the requested URL) path of the requested file (intro.html). As you can see, a URL that requests a single file can result in multiple disk I/O requests to read multiple files. This can be a performance drain for high-volume sites. In such a case, your best choice is to disable .htaccess file checks altogether. For example, when the following configuration directives are placed within the main server section (that is, not within a VirtualHost directive) of the httpd.conf file, they disable checking for .htaccess for every URL request:

<Directory />
AllowOverride None
</Directory>

When the preceding configuration is used, Apache simply performs a single disk I/O to read the requested static file and therefore gains performance in high-volume access scenarios.

Using Kernel HTTP daemon

The new Linux 2.4 kernel ships with a kernel module called khttpd, which is a kernel-space HTTP server. This kernel module can serve static contents, such as an HTML file or an image, faster than Apache. This is because the module operates in kernel space and directly accesses the network without needing to operate in user space like other Web servers, such as Apache. However, this module isn’t a replacement for Apache or any other Web server, because it can only serve static contents. It intercepts requests for static contents and passes through requests that it can’t service to a Web server, such as Apache, running on the same machine. You can learn more about this module at www.fenrus.demon.nl.
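As a rough sketch of how khttpd is typically brought up (the /proc entry names below are reconstructed from memory of the module’s documentation, so verify them against the documentation at the URL above before relying on this):

# Load the kernel-space HTTP daemon
/sbin/modprobe khttpd
# Serve static files from this directory on port 80
echo 80 > /proc/sys/net/khttpd/serverport
echo /usr/local/apache/htdocs > /proc/sys/net/khttpd/documentroot
# Hand anything khttpd can't serve to a Web server on port 8080
echo 8080 > /proc/sys/net/khttpd/clientport
echo 1 > /proc/sys/net/khttpd/start

In such an arrangement, Apache would be configured to listen on port 8080 and would handle only the dynamic requests that khttpd passes through.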
I only recommend this module for those who need a dedicated static contents server, such as an image server.

Speeding Up Web Applications

Dynamic contents for the Web are typically generated three ways: server-side scripts/applications, client-side scripts, or a combination of both server-side scripts/applications and client-side scripts. The client-side scripts have nothing to do with your Linux server and therefore are not covered in this chapter. However, the server-side scripts/applications run on the Linux server, so their performance problems are addressed in this section.

Typically, Perl and Java are the primary languages for Web contents development under the Linux platform. Perl is more common than Java because the Java run-time environment has had a lot of performance problems on Linux platforms (although these are likely to be resolved in the near future). In this section I focus primarily on Perl-based Web application performance.

The Perl-based Common Gateway Interface (CGI) script is the granddaddy of server-side Web scripting. However, as the Web matured and the number of people browsing the Web grew, the shortcomings of CGI scripts became evident. Here are the primary reasons CGI scripts don’t cut it any more:

- A CGI script is started every time a request is made, which means that if the Apache server receives 100 requests for the same script, there are 100 copies of the same script running on the server. This makes CGI a very unscalable solution.
- A CGI script can’t maintain a persistent connection to a back-end database, which means a connection needs to be established every time a script needs to access a database server. This effectively makes CGI scripts slow and resource hungry.
- CGI scripts are often hacks that are quickly put together by a system-inexperienced developer and therefore pose great security risks. Unfortunately, many Web sites still use CGI scripts because they are easy to develop and often freely available. Stay away from CGI scripts and use more scalable and robust solutions such as mod_perl, mod_fastcgi, or even Java servlets (discussed in the following sections).

Using mod_perl

The Apache mod_perl module alone keeps Perl in the mainstream of Web development. This module for Apache enables you to create highly scalable, Perl-based Web applications that build on the following facts:

- A scalable Web application isn’t a CGI script. A mod_perl-based script isn’t a CGI script; a new process isn’t created every time a mod_perl script is requested, which enables the platform to be scalable and robust.
- A scalable Web application can use persistent store and database connections. A mod_perl-based script can use shared memory or keep persistent connections opened to local or remote database servers.

Fortunately, switching your Perl-based CGI scripts to mod_perl isn’t hard at all. In the following section I show how you can install mod_perl for Apache and also develop performance-friendly mod_perl scripts.

INSTALLING MOD_PERL

1. Extract mod_perl-x.y_z.tar.gz (where x.y_z is the latest version number for the mod_perl source distribution) using the tar xvzf mod_perl-x.y_z.tar.gz command in the parent directory of your Apache source distribution.
If you have extracted the Apache source distribution in the /usr/src/redhat/SOURCES/apache_x.y.z directory, then you must extract the mod_perl source distribution in the /usr/src/redhat/SOURCES directory.

2. Change directory to mod_perl-x.y_z and run:

perl Makefile.PL \
APACHE_SRC=../apache_x.y.z/src \
DO_HTTPD=1 \
USE_APACI=1 \
PREP_HTTPD=1 \
EVERYTHING=1

3. Run the make; make install commands to build the mod_perl binaries and Perl modules.

4. Change directory to ../apache_x.y.z and run:

./configure --prefix=/usr/local/apache \
--activate-module=src/modules/perl/libperl.a

If you want to enable or disable other Apache modules, make sure you add the appropriate --enable-module and --disable-module options in the preceding command line. For example, the following configuration creates a very lean and mean Apache server with mod_perl support:

./configure --prefix=/usr/local/apache \
--disable-module=cgi \
--disable-module=imap \
--disable-module=userdir \
--disable-module=autoindex \
--disable-module=status \
--activate-module=src/modules/perl/libperl.a

5. Run the make; make install commands to build and install the Apache Web server.

CONFIGURING MOD_PERL

Here’s how you can configure mod_perl for Apache:

1. First determine where you want to keep your mod_perl scripts.
Keep your mod_perl scripts outside your document root tree (that is, the directory pointed to by the DocumentRoot directive). This ensures that the mod_perl script source isn’t accidentally exposed to the world. Also ensure that the file permissions for the mod_perl script directory are set only for the Apache Web server user. For example, for a Web site whose DocumentRoot is set to /www/mysite/htdocs, the ideal mod_perl script directory can be /www/mysite/perl-bin. After you have determined what this directory is, create a file in this directory called startup.pl (or use any other name) that contains the following lines:

#!/usr/bin/perl
# If you installed perl in another location
# make sure you change /usr/bin/perl to the
# appropriate path.
use strict;
# extend @INC to include the new mod_perl script
# location(s)
use lib qw(/www/mysite/perl-bin);
# Following line is required.
1;

To keep your mod_perl scripts in multiple locations, simply type in the additional path in the use lib line. For example, to add another mod_perl script location called /www/mysite/stable/perl-bin, you can simply change the use lib line in the preceding script so it reads as follows:

use lib qw(/www/mysite/perl-bin /www/mysite/stable/perl-bin);

2. Tell Apache to execute the startup script (called startup.pl in the previous step) when it starts. You can do that by adding the following directive in the httpd.conf file:

PerlRequire /www/mysite/perl-bin/startup.pl

3. If you know that you are using a set of Perl modules often, you can preload them by adding a use modulename () line in the startup.pl script before the 1; line.
For example, if you use the CGI.pm module (yes, it works with both CGI and mod_perl scripts) in many of your mod_perl scripts, you can simply preload it in the startup.pl script, as follows:

use CGI ();

Here’s an example of my startup.pl script.

#!/usr/bin/perl
# CVS ID: $Id$
use strict;
# extend @INC if needed
use lib qw(/www/release/perl-bin
           /www/beta/perl-bin
           /www/alpha/perl-bin);
use CGI ();
CGI->compile(':all');
use Apache ();
use Apache::DBI ();
1;

I have added the CGI->compile(':all'); line after the use CGI (); line because CGI.pm doesn’t automatically load all its methods by default; instead, it provides the compile() function to force loading of all methods.

4. Determine how you want to make your mod_perl scripts available in your Web site. I prefer specifying a <Location> directive for each script, as in the following example:

<Location /cart>
SetHandler perl-script
PerlHandler ShoppingCart
</Location>

Here a mod_perl script called ShoppingCart.pm is set up as the request handler for the /cart URL segment. For example, if a Web site called www.domain.com uses the preceding configuration, all requests for www.domain.com/cart are serviced by the ShoppingCart.pm script. This script must reside in a standard Perl path (that is, be part of @INC) or it must be in the path specified in the startup.pl using the use lib line. For example, suppose your startup.pl script has the following line:

use lib qw(/www/mysite/perl-bin);

Then the ShoppingCart.pm script can reside in the /www/mysite/perl-bin directory. As mentioned before, all requests to /cart are serviced by this script. For example, /cart/abc or /cart/whatever are serviced by this script. If you want to run a different script, say Calc.pm, for a sublocation of this URL such as /cart/calc, then you must specify another <Location> directive as follows:

<Location /cart/calc>
SetHandler perl-script
PerlHandler Calc
</Location>

Now all requests such as www.domain.com/cart/calc or www.domain.com/cart/calc/whatever, and so on, are serviced by the Calc.pm script.

Use of the <Location> directive to associate a mod_perl script with a URL has the added side effect of enabling you to hide the actual script name so it never appears in the URL. For example, when someone accesses www.domain.com/cart in the current example, s/he has no idea that the Apache server is actually executing a script called ShoppingCart.pm in the /www/mysite/perl-bin directory. This is nice in the sense that it enables you to hide details of your system from prying eyes.

Also, if you wanted to keep a sublocation called /cart/static to be serviced by the default Apache handler, you can simply use the following configuration:

<Location /cart/static>
SetHandler default-handler
</Location>

This setting makes sure that any request to www.domain.com/cart/static (or to a sublocation) is serviced by the default Apache handler.

Now all you need is mod_perl scripts to try out your new mod_perl-enabled Apache server.
Because mod_perl script development is largely beyond the scope of this book, I provide a basic test script called HelloWorld.pm (shown in Listing 5-1).

Listing 5-1: HelloWorld.pm

#!/usr/bin/perl -w
# CVS ID: $Id$
package HelloWorld;
# A simple mod_perl script that says "Hello World"
# and displays the process ID of the Apache child
# process and a count of (similar) requests served by it.
#
use strict;
use Apache::Constants qw(:common :response);
use CGI;
my $counter = 0;
sub handler {
    my $r = shift;
    my $query = new CGI;
    print $query->header(-type => 'text/html');
    print "Hello World<BR>";
    print "Apache child server PID : $$<BR>";
    print "Similar requests processed by this server is: ",
        $counter++, "<BR>";
    return DONE;
}
1;

You can put HelloWorld.pm in a location specified by the use lib line in your startup.pl script and create a configuration such as the following in httpd.conf:

<Location /test>
SetHandler perl-script
PerlHandler HelloWorld
</Location>

After you have the preceding configuration, start or restart the Apache server and access the HelloWorld.pm script using http://your.server.com/test. You should see the “Hello World” message, the PID of the Apache child server, and a count of how many similar requests this child server has served so far.

If you run this test (that is, access the /test URL) with the default values for the MinSpareServers, MaxSpareServers, StartServers, MaxRequestsPerChild, and MaxClients directives, you may get confused. Because your default settings are likely to cause Apache to run many child servers, and because Apache chooses a child server per /test request, you may find the count going up and down as your subsequent /test requests are serviced by any of the many child servers. If you keep making requests for the /test URL, eventually you see all child servers reporting increasing counts until each dies because of the MaxRequestsPerChild setting. This is why it’s a good idea to set these directives as follows for testing purposes:

MinSpareServers 1
MaxSpareServers 1
StartServers 1
MaxRequestsPerChild 10
MaxClients 1

Restart the Apache server and access /test, and you see that Apache services each batch of 10 requests using a single child server whose count only increases.

Use of mod_perl scripts within your Apache server ensures that your response time is much better than the CGI equivalent. However, heavy use of mod_perl scripts also creates some side effects that can be viewed as performance problems, which I cover in the next section.

SOLVING PERFORMANCE PROBLEMS RELATED TO A HEAVY MOD_PERL ENVIRONMENT

When you start using many mod_perl scripts, you see that your Apache child server processes become larger in size. You can view this using the top command. As long as you have plenty of RAM you should be fine. However, no one ever has too much RAM. So it’s a good idea to avoid relying on having lots of memory as the solution. Instead, here’s how you can address this problem more effectively.

If you find that Apache child processes are getting larger because of the many mod_perl scripts loaded in them, consider having a dedicated script server that serves only dynamic contents. Figure 5-1 shows how this can work.

When a user requests the home page of a site called www.domain.com, the Apache server responsible for static pages returns the index.html page to the client. The page contains embedded links for both static and dynamic contents. The figure shows two such links: login and privacy. When the end user clicks the login link, it requests http://myapps.domain.com/login, which is a different Apache server than the www.domain.com server. In fact, these two should be two different Linux systems in the ideal world.
However, heavy use of mod_perl scripts also creates some side effects that can be viewed as performance problems, which I cover in the next section.

SOLVING PERFORMANCE PROBLEMS RELATED TO A HEAVY MOD_PERL ENVIRONMENT

When you start using many mod_perl scripts, you see that your Apache child server processes become larger in size. You can view this using the top command. As long as you have plenty of RAM you should be fine. However, no one ever has too much RAM, so it's a good idea to avoid relying on plentiful memory as the solution. Here's how you can address this problem more effectively.

If you find that Apache child processes are large because many mod_perl scripts are loaded in them, consider having a dedicated script server that serves only dynamic contents. Figure 5-1 shows how this can work.

When a user requests the home page of a site called www.domain.com, the Apache server responsible for static pages returns the index.html page to the client. The page contains embedded links for both static and dynamic contents; the figure shows two such links: login and privacy. When the end user clicks the login link, the client requests http://myapps.domain.com/login, which is a different Apache server than the www.domain.com server. In fact, in the ideal world these two should be two different Linux systems. However, not everyone can afford to split the dynamic and static contents like this, so it isn't appropriate for everyone.

Figure 5-1: Separating static and dynamic (mod_perl script-generated) contents

If you must keep the mod_perl and static contents on the same Linux system running Apache, you still can ensure that fat Apache child processes aren't serving static pages. Here's a solution that I like:

1. Compile and install the mod_proxy module for your Apache Web server.

2. Copy your existing httpd.conf file to httpd-8080.conf and modify the Port directive to be Port 8080 instead of Port 80. Remove all mod_perl-specific configurations from httpd.conf so that all your mod_perl configurations are in the httpd-8080.conf file.

3. Modify the httpd.conf file to have the following proxy directive:

ProxyPass /myapps http://127.0.0.1:8080/myapps

You can change myapps to whatever you like. If you do change it, make sure you change it in every other location that mentions it. Here we are telling the Apache server serving static pages that all requests for the /myapps URL are to be serviced via the proxy module, which should get the response from the Apache server running on the same Linux system (127.0.0.1 is the local host) but on port 8080.

4. Add the following configuration in httpd-8080.conf to create a mod_perl script location:

<Location /myapps>
SetHandler perl-script
PerlHandler MyApp1
</Location>

Don't forget to change MyApp1 to whatever your script name is.

Now start (or restart) the Apache server (listening on port 80) as usual, using the apachectl command. However, you must start the Apache server on port 8080 using the /usr/local/apache/bin/httpd -f /usr/local/apache/conf/httpd-8080.conf command. This assumes that you have installed Apache in the /usr/local/apache directory; if that isn't so, make sure you change the path. Now you have two Apache parent daemons (which run as root) running two sets of children: one set services the static pages and uses the proxy module to fetch the dynamic, mod_perl script pages via the ProxyPass directive. This enables you to service the static pages using a set of child servers that aren't running any Perl code whatsoever. On the other hand, the server on port 8080 services only dynamic requests, so you effectively have a configuration that is very performance-friendly. A consolidated sketch of both configurations appears below.
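To make the two-server arrangement concrete, here is a minimal sketch of the relevant pieces of both configuration files. It assumes the Apache 1.3-style syntax used elsewhere in this chapter; the module name (MyApp1) and the /usr/local/apache prefix follow the preceding example and should be adjusted for your system.

# httpd.conf (static-page server, port 80)
Port 80
ProxyPass /myapps http://127.0.0.1:8080/myapps
ProxyPassReverse /myapps http://127.0.0.1:8080/myapps

# httpd-8080.conf (mod_perl script server, port 8080)
Port 8080
PerlRequire conf/startup.pl
<Location /myapps>
SetHandler perl-script
PerlHandler MyApp1
</Location>

The ProxyPassReverse directive isn't strictly required for this setup to work, but it rewrites the Location headers in redirects issued by the back-end server so that clients never see the internal port 8080 address.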
Scripts running under mod_perl run fast because they are loaded within each child server's code space. Unlike its CGI counterpart, a mod_perl script can keep a persistent connection to an external database server, thus speeding up the generation of database-driven dynamic content. However, a new problem introduces itself if you run a very large Web server. When you run 50, 100, or more Apache server processes to service many simultaneous requests, it's possible for Apache to eventually open that many database connections and keep each connection persistent for the duration of each child.

Say that you run a Web server system with 50 Apache child processes so that you can service about 50 requests per second, and you happen to have a mod_perl-based script that opens a database connection in its initialization stage. As requests come to your database script, Apache eventually services such requests using each of its child processes and thus opens up 50 database connections. Because many database servers allocate expensive resources on a per-connection basis, this can be a major problem on the database side. For example, when making such connections to an IBM Universal Database Server (UDB) Enterprise Edition running on a remote Linux system, each Apache child has a counterpart connection-related process on the database server. If such an environment uses load-balancing hardware to balance incoming requests among a set of mod_perl-enabled Apache Web servers, a scenario is likely in which each Web server system running 50 Apache child processes has opened 50 connections to the database server. If such an environment consists of 10 Web servers behind the load-balancing hardware, the total possible number of connections to the database server is 10 x 50, or 500, which may create an extensive resource load on the database server.

One possible solution for such a scenario is to have the database time out any idle connections, make the mod_perl script code detect a stale connection, and have it reinitiate the connection. Another solution is to create a persistent database proxy daemon that each Web server uses to fetch data from the database.
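On the Web-server side, the widely used Apache::DBI module handles the reconnect half of the first solution: it caches one DBI connection per child process and transparently replaces connections that have gone stale. The sketch below is illustrative rather than part of the original text, and the DSN, username, and password are placeholders.

# In startup.pl, loaded via PerlRequire before any script uses DBI
use Apache::DBI;   # must be loaded before DBI so it can intercept connect()
use DBI;

# Optionally pre-open one connection per child as soon as it is forked:
Apache::DBI->connect_on_init(
    "dbi:mysql:database=mydb;host=db.domain.com",   # hypothetical DSN
    "dbuser", "dbpass",
    { RaiseError => 1, AutoCommit => 1 }
);

Note that Apache::DBI reduces per-request connection overhead but does not reduce the total connection count; each child still holds its own connection, so the 10 x 50 arithmetic above is unchanged.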
Fortunately, FastCGI and Java Servlets offer more native solutions for such problems and should be considered for heavily used database-driven applications. Here's another performance-boosting Web technology: FastCGI.

Using FastCGI

Like mod_perl scripts, FastCGI applications run all the time (after the initial loading) and therefore provide a significant performance advantage over CGI scripts. Table 5-2 summarizes the differences between a FastCGI application and a mod_perl script.

TABLE 5-2: DIFFERENCES BETWEEN FASTCGI APPLICATIONS AND MOD_PERL SCRIPTS

Apache platform dependent?
  FastCGI: No. FastCGI applications can run on non-Apache Web servers, such as IIS and Netscape Web Server.
  mod_perl: Yes. Only Apache supports the mod_perl module.

Perl-only solution?
  FastCGI: No. FastCGI applications can be developed in many languages, such as C, C++, and Perl.
  mod_perl: Yes.

Runs as external process?
  FastCGI: Yes.
  mod_perl: No.

Can run on remote machine?
  FastCGI: Yes.
  mod_perl: No.

Multiple instances of the application/script are run?
  FastCGI: Typically a single FastCGI application is run to respond to many requests that are queued. However, if the load is high, multiple instances of the same application are run.
  mod_perl: The number of instances of a mod_perl script equals the number of child Apache server processes.

Wide support available?
  FastCGI: Yes. However, at times I get the impression that FastCGI development is slowing down, but I can't verify this or back it up.
  mod_perl: Yes. There are many mod_perl sites on the Internet, and support via Usenet or the Web is available.

Database connectivity
  FastCGI: Because all requests are sent to a single FastCGI application, you only need to maintain a single database connection with the back-end database server. However, this can change when the Apache FastCGI process manager spawns additional FastCGI application instances due to heavy load. Still, the number of FastCGI instances of an application is likely to be less than the number of Apache child processes.
  mod_perl: Because each Apache child process runs the mod_perl script, each child can potentially have a database connection to the back-end database. This means you can potentially end up with hundreds of database connections from even a single Apache server system.

Like mod_perl, the Apache module for FastCGI, mod_fastcgi, doesn't come with the standard Apache distribution. You can download it from www.fastcgi.com. Here's how you can install it.

Installing and configuring FastCGI module for Apache

I assume that you have installed the Apache source distribution in /usr/src/redhat/SOURCES/apache_x.y.z (where x.y.z is the latest version of Apache). To install the mod_fastcgi module for Apache, do the following:

1. Su to root.

2. Extract the mod_fastcgi source distribution using the tar xvzf mod_fastcgi.x.y.z.tar.gz command. Then copy the mod_fastcgi source directory to the /usr/src/redhat/SOURCES/apache_x.y.z/src/modules/fastcgi directory.

3. Configure Apache using the configuration script (configure) with the following option:

--activate-module=src/modules/fastcgi/libfastcgi.a

If you already compiled Apache with many other options and would like to retain them, simply run the following command from the /usr/src/redhat/SOURCES/apache_x.y.z directory:

./config.status --activate-module=src/modules/fastcgi/libfastcgi.a

4. Run the make; make install commands from the same directory to compile and install the new Apache with mod_fastcgi support.

5. You are ready to configure Apache to run FastCGI applications. First determine where you want to keep the FastCGI applications and scripts. Ideally, you want to keep this directory outside the directory specified in the DocumentRoot directive. For example, if you set DocumentRoot to /www/mysite/htdocs, consider using /www/mysite/fast-bin as the FastCGI application/script directory. I assume that you will take my advice and do so. To tell Apache that you have created a new FastCGI application/script directory, simply use the following configuration:

Alias /apps/ "/www/mysite/fast-bin/"
<Directory "/www/mysite/fast-bin">
Options ExecCGI
SetHandler fastcgi-script
</Directory>

This tells Apache that the alias /apps/ points to the /www/mysite/fast-bin directory, and that this directory contains applications (or scripts) that must run via the fastcgi-script handler.
6. Restart the Apache server. You can now access your FastCGI applications/scripts using the http://www.yourdomain.com/apps/appname URL, where www.yourdomain.com should be replaced with your own Web server hostname and appname should be replaced with the FastCGI application that you have placed in the /www/mysite/fast-bin directory (note that the URL uses the /apps/ alias defined in the preceding step, not the directory name). To test your FastCGI setup, you can simply place the test script shown in Listing 5-2 in your fast-bin directory and then access it.

Listing 5-2: testfcgi.pl

#!/usr/bin/perl -w
#
# CVS ID: $Id$
use strict;
use CGI::Fast qw(:standard);

# Do any startup/initialization steps here.
my $counter = 0;

#
# Start the FastCGI request loop
#
while (new CGI::Fast) {
    print header;
    print "This is a FastCGI test script" . br;
    print "The request is serviced by script PID: $$" . br;
    print "Your request number is : ", $counter++, br;
}
exit 0;

When you run the script in Listing 5-2, using a URL request such as http://www.yourserver.com/apps/testfcgi.pl, you see that the PID doesn't change and that the counter increases as you refresh the request again and again. If you run ps auxww | grep testfcgi on the Web server running this FastCGI script, you see that there is only a single instance of the script running, and it's serving all the client requests. Only if the load gets really high does Apache launch additional instances of the script.

FastCGI is a great solution for scaling your Web applications. It even enables you to run the FastCGI applications/scripts on a remote application server, which means you can separate your Web server from your applications and thus gain better management and performance potential. Also, unlike with mod_perl, you aren't limited to Perl-based scripts; with FastCGI, you can write your application in a variety of programming languages, such as C, C++, and Perl.

Quite interestingly, Java has begun to take the lead in high-performance Web application development. Even only a few years ago, Java was considered slow and too formal for writing Web applications. As Java has matured, it has become a very powerful Web development platform. With Java you have Java Servlets, Java Server Pages, and many other up-and-coming Java technologies that can be utilized to gain high scalability and robustness. Java also gives you the power to create distributed Web applications easily.

Using Java servlets

For some unknown reason, Java on the Linux platform did not get off to a great start. It's slowly coming around, and the Java run-time environment and development tools are becoming more stable. Even so, complex multithreaded Java servlets still don't always work well under Linux when the same code works just fine under other Java-friendly operating systems (such as Solaris or Windows 2000).

Using Java servlets with back-end database applications is really ideal. You can implement a master Java servlet that acts as a database connection pool and keeps a given set of connections to the back-end database server. When another servlet needs a connection to the database, it can get one from the connection-pool servlet and return it after it has finished using the connection. This provides much more managed database pooling than either the mod_perl or the mod_fastcgi approach discussed earlier.
If you are wondering why I keep referring to database connectivity, you probably have not developed major Web software yet. Just about every major piece of Web software requires back-end database connectivity, so I often judge a platform good or bad according to how well (and how easily) it allows management of such resources. Java servlets definitely win this one over mod_perl or mod_fastcgi.

To find more information on Java servlets on Apache, check the http://java.apache.org/ Web site.

Now that you know many ways to speed up your static and dynamic Web contents, consider speeding up your access to someone else's contents. This is typically done by setting up a proxy server with caching capability. In the following section I cover Squid, which is just that.

Using Squid proxy-caching server

Squid is an open-source, HTTP 1.1-compliant, proxy-caching server that can enhance your users' Web-browsing experience. You can download the latest stable Squid source distribution from www.squid-cache.org.

Ideally, you want to run the proxy-caching server with two network interfaces:

- One interface connects it to the Internet gateway or the router.
- One interface connects it to the internal network.

Disabling IP forwarding on the proxy-caching system ensures that no one can bypass the proxy server and access the Internet directly.

Here's how you can install and configure Squid for your system.

COMPILING AND INSTALLING SQUID PROXY-CACHING SERVER

1. Su to root and extract the source distribution using the tar xvzf squid-version.tar.gz command (where version is the latest version number of the Squid software).

2. Run the ./configure --prefix=/usr/local/squid command to configure the Squid source code for your system.

3. Run make all; make install to install Squid in the /usr/local/squid directory.

CONFIGURING SQUID PROXY-CACHING SERVER

After you have installed Squid, you have to configure it. Here's how:

1. Create a group called nogroup, using the groupadd nogroup command. This group is used by Squid.

2. Run the chown -R nobody:nogroup /usr/local/squid command to give ownership of the /usr/local/squid directory and all its subdirectories to the nobody user and the nogroup group. This enables Squid (running as the nobody user) to create cache directories and files and to write logs.

3. Decide which port you want to run the proxy-cache on. Most sites run the proxy-cache on 8080, so I use that value here.

4. Add the following line in the squid.conf file:

http_port 8080

This tells Squid to listen on port 8080 for proxy requests. If you prefer a different port, use it here. Don't use a port that is already in use by another server. Ideally, you want to use port numbers above 1024 to avoid collision with standard services, but if you know you aren't running a Web server on port 80 and want to run your proxy-cache on that port, you can do so. A quick way to check whether a port is available is to run the telnet localhost portnumber command (where portnumber is the port number you want to use for the proxy-cache). If you get a connection-failure message, the port is currently not in use.

5. Define where you want to keep the cache data by adding the following line in squid.conf:

cache_dir ufs /usr/local/squid/cache 100 16 256

This tells Squid to store the cache data in /usr/local/squid/cache. If you have a very large user base using this proxy-cache, it's a very good idea to have multiple cache directories spanning different disks. This reduces disk I/O-related wait, because multiple independent disks are always faster than a single disk.

6. Create an access control list (ACL) that selectively enables your network to access the proxy-cache. By default, Squid doesn't allow any connection from anywhere; this security feature uses a simple approach: deny everyone, allow only those who should have access. For example, if your network address is 192.168.1.0 with subnet 255.255.255.0, you can define the following line in squid.conf to create an ACL for your network:

acl local_net src 192.168.1.0/255.255.255.0

7. To give the machines in the local_net ACL access to the proxy-cache, add the following line just before the http_access deny all line:

http_access allow local_net

8. Tell Squid the username of the cache-manager user. If you want to use webmaster@yourdomain.com as the cache-manager user account, define the following line in the squid.conf file:

cache_mgr webmaster

9. Tell Squid which user and group it should run as by adding the following lines in squid.conf:

cache_effective_user nobody
cache_effective_group nogroup

Here, Squid is told to run as the nobody user with the permissions of the nogroup group.

Save the squid.conf file and run the following command to create the cache directories:

/usr/local/squid/bin/squid -z

Now you can run the /usr/local/squid/bin/squid & command to start Squid for the first time. You can verify it's working in a number of ways:

- Squid shows up in a ps -x listing.
- Running client www.nitec.com dumps Web-page text to your terminal.
- The files cache.log and store.log in the directory /usr/local/squid/logs show Squid to be working.
- Running squid -k check && echo "Squid is running" tells you when Squid is active.

Now for the real test: If you configure the Web browser on a client machine to use the Squid proxy, you should see results. In Netscape Navigator, select Edit -> Preferences and then select Proxies from within the Advanced category. By selecting Manual Proxy Configuration and then clicking View, you can specify the IP address of the Squid server as the HTTP, FTP, and Gopher proxy server. The default proxy port is 3128; unless you have changed it in the squid.conf file, place that number in the port field.

You should be able to browse any Web site as if you weren't using a proxy. You can double-check that Squid is working correctly by checking the log file /usr/local/squid/logs/access.log on the proxy server and making sure the Web sites you were viewing appear there.
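Pulling the preceding steps together, the relevant portion of a minimal squid.conf might look like the following sketch; the network address and cache-manager name are the example values used above, so adjust them for your site:

# Port Squid listens on for proxy requests
http_port 8080

# Cache storage: 100MB in /usr/local/squid/cache, with 16 first-level
# and 256 second-level subdirectories
cache_dir ufs /usr/local/squid/cache 100 16 256

# Run as an unprivileged user and group
cache_effective_user nobody
cache_effective_group nogroup

# Cache-manager contact
cache_mgr webmaster

# Allow only the internal network; deny everyone else
acl local_net src 192.168.1.0/255.255.255.0
http_access allow local_net
http_access deny all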
TWEAKING SQUID TO FIT YOUR NEEDS

Now that you have Squid up and running, you can customize it to fit your needs. At this point it isn't restricting your users from accessing any sites. You can define rules in your squid.conf file to set up access control lists and allow or deny visitors according to those lists:

acl BadWords url_regex foo bar

By adding the preceding line, you have defined an ACL rule called BadWords that matches any URL containing the word foo or bar.
This applies to both http://foo.deepwell.com/pictures and http://www.thekennedycompound.com/ourbar.jpg, because both URLs contain a word that is a member of BadWords. You can block your users from accessing any URL that matches this rule by adding the following line to the squid.conf file:

http_access deny BadWords

Almost every administrator using word-based ACLs has a story about not examining all the ways a word can be used. Realize that if you ban your users from accessing sites containing the word "sex," you are also banning them from accessing www.buildersexchange.com and any other site that happens to contain that combination of letters.

Because all aspects of how Squid functions are controlled within the squid.conf file, you can tune it to fit your needs. For example, you can enable Squid to use 16MB of RAM to hold Web pages in memory by adding the following line:

cache_mem 16 MB

By trial and error, you may find you need a different amount.

The cache_mem setting isn't the total amount of memory Squid consumes; it only sets the maximum amount of memory Squid uses for holding Web pages, pictures, and so forth. The Squid documentation says you can expect Squid to consume up to three times this amount.

By using the line:

emulate_httpd_log on

you arrange for the files in /var/log/squid to be written in a form like the Web server log files. This enables you to use a Web statistics program such as Analog or WebTrends to analyze your logs and examine the sites your users are viewing.

Some FTP servers require that an e-mail address be used when logging in anonymously. By setting ftp_user to a valid e-mail address, as shown here, you give the server at the other end of an FTP session the data it wants to see:

ftp_user squid@deepwell.com

You may want to use the address of your proxy/firewall administrator, which would give the foreign FTP administrator someone to contact in case of a problem.

If you type in a URL and find that the page doesn't exist, that page probably won't exist anytime in the near future. By setting negative_ttl to a desired number of minutes, as shown in the next example, you can control how long Squid remembers that a page was not found in an earlier attempt. This is called negative caching:

negative_ttl 2 minutes

This isn't always a good thing. The default is five minutes, but I suggest reducing it to two minutes or possibly one minute, if not disabling it altogether. Why would you do such a thing? You want your proxy to be as transparent as possible. If a user is looking for a page she knows exists, you don't want a lag between the URL coming into the world and your user's ability to access it.

Ultimately, a tool like Squid should be completely transparent to your users. This "invisibility" shields them from the complexity of administration and enables them to browse the Web as if there were no Web proxy server. Although I don't detail that here, you may refer to the Squid Frequently Asked Questions at http://squid.nlanr.net/Squid/FAQ/FAQ.html; Section 17 of this site details using Squid as a transparent proxy.

Also, if you find yourself managing a large list of "blacklisted" sites in the squid.conf file, think about using a program called a redirector. Large lists of ACL rules can begin to slow a heavily used Squid proxy.
By using a redirector to do this same job, you can improve Squid's efficiency at allowing or denying URLs according to filter rules. You can get more information on Squirm, a full-featured redirector made to work with Squid, from http://www.senet.com.au/squirm/.

The cachemgr.cgi file comes with the Squid distribution. It's a CGI program that displays statistics for your proxy and can stop and restart Squid. It requires only a few minutes of your time to install, but it gives you explicit details about how your proxy is performing. If you'd like to tune your Web cache, this tool can help.

If you are interested in making Squid function beyond the basics shown in this chapter, check the Squid Web page at http://squid.nlanr.net/.

Summary

In this chapter, you explored tuning Apache for performance. You examined the configuration directives that enable you to control Apache's resource usage so it works just right for your needs. You also encountered the new HTTP kernel module called khttpd, along with techniques for speeding up both dynamic and static Web-site contents. Finally, the chapter profiled the Squid proxy-cache server and the ways it can help you enhance the Web-browsing experience of your network users.

Chapter 6

E-Mail Server Performance

IN THIS CHAPTER

- Tuning Sendmail
- Using Postfix
- Using PowerMTA for high performance

SENDMAIL IS THE DEFAULT Mail Transport Agent (MTA) not only for Red Hat Linux but also for many other Unix-like operating systems; Sendmail is therefore the most widely deployed mail-server solution in the world. In recent years, e-mail has taken center stage in modern business and personal communication, which has increased the demand for reliable, scalable e-mail server solutions. This demand helped make the MTA market attractive to both open-source and commercial software makers; Sendmail now has many competitors. In this chapter, I show you how to tune Sendmail and a few worthy competing MTA solutions for higher performance.

Choosing Your MTA

A default open-source Sendmail installation works for most small-to-midsize organizations. Unless you plan to deal with a very high volume of e-mail per day, you are most likely fine with the open-source version of Sendmail.

Choosing the right MTA may also depend on another factor: administration. Although Sendmail has been around for decades, it's still not well understood by many system administrators. The configuration files, the M4 macros, and the rule sets are a lot for a beginning or even an intermediate-level system administrator. There is no great Web-based management tool for the open-source version; there are no Apache-like, directive-oriented configuration options. The complexity of managing Sendmail often forces system administrators to leave it in its out-of-the-box state. As a result, many Sendmail sites simply run the default options, which are often minimal and not well suited to any specific organization's needs. The complexity of Sendmail has also made it an ideal target for many security attacks over the years.

Left to itself, Sendmail also has performance problems. If it's running as root, a master Sendmail process forks child processes to service incoming or outgoing mail requests individually.
Creating a new process for each request is an expensive (and old) methodology, though it's only a big problem for sites with a heavy e-mail load.

So consider the administrative complexity, potential security risks, and performance problems associated with Sendmail before you select it as your MTA. Even so, system administrators who have taken the time to learn to work with Sendmail should stick with it, because Sendmail is about as flexible as it is complex. If you can beat the learning curve, go for it.

These days, open-source Sendmail has major competitors: commercial Sendmail, qmail, and Postfix. Commercial Sendmail is ideal for people who love Sendmail and want to pay for added benefits such as commercial-grade technical support, other derivative products, and services. Postfix and qmail are both open-source products.

A LOOK AT QMAIL

The qmail solution has momentum. Its security and performance are very good. However, it also suffers from administration-complexity problems; it isn't an easy solution to manage. I am also not fond of the qmail license, which seems to be a bit more restrictive than those of most well-known open-source projects. I feel that the qmail author wants to control the core development a bit more tightly than he probably should. However, I do respect his decisions, especially because he has placed a reward on finding genuine bugs in the core code. I have played with qmail for a short time and found the performance to be not all that exciting, especially because a separate process is needed to handle each connection. My requirements for high performance were very demanding: I wanted to be able to send about half a million e-mails per hour, and my experiments with qmail did not produce such a high number. Because most sites aren't likely to need such high performance, I think qmail is suitable for many sites, but it met neither my performance nor my administration-simplicity requirements. So I have taken a wait-and-see approach with qmail.

A LOOK AT POSTFIX

Postfix is a newcomer MTA. The Postfix author had the luxury of knowing all the problems related to Sendmail and qmail, so he was able to solve the administration problem well. Postfix administration is much easier than that of both Sendmail and qmail, which is a big deal for me, because I believe software that can be managed well can be run well to increase productivity.

Some commercial MTA solutions have great strength in administration, and even in performance. My favorite commercial outbound MTA is PowerMTA from Port25.

In this chapter, I tune Sendmail, Postfix, and PowerMTA for performance.

Tuning Sendmail

The primary configuration file for Sendmail is /etc/mail/sendmail.cf, which appears very cryptic to beginners. This file is generated by running a command such as m4 < /path/to/chosen.mc > /etc/mail/sendmail.cf, where /path/to/chosen.mc is your chosen M4 macro file for the system. For example, I run the following command from the /usr/src/redhat/SOURCES/sendmail-8.11.0/cf/cf directory to generate the /etc/mail/sendmail.cf file for my system:

m4 < linux-dnsbl.mc > /etc/mail/sendmail.cf

The linux-dnsbl.mc macro file instructs m4 to load other macro files, such as cf.m4, cfhead.m4, proto.m4, and version.m4, from the /usr/src/redhat/SOURCES/sendmail-8.11.0/cf/m4 subdirectory. Many of the options discussed here are loaded from these macro files.
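For readers without a working mc file at hand, here is a minimal, hypothetical skeleton showing where the performance-related define() lines discussed in the following sections belong. The include path follows the source-tree layout used above and will differ on your system:

divert(-1)
# my-linux.mc -- hypothetical example macro file
divert(0)dnl
include(`../m4/cf.m4')dnl
VERSIONID(`my tuned Sendmail configuration')dnl
OSTYPE(`linux')dnl
dnl Performance-related defines (see the following sections) go here:
define(`confMAX_MESSAGE_SIZE', `1000000')dnl
MAILER(`local')dnl
MAILER(`smtp')dnl

Note that m4 quotes strings with a backquote-quote pair (`...'); the curly quotes that sometimes appear in printed listings must be typed as ` and '.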
If you want your changes to survive future regeneration of the /etc/mail/sendmail.cf file, you must make them in the macro files in the cf/m4 subdirectory of your Sendmail source installation. If you don't have these macro files because you installed a binary RPM distribution of Sendmail, you must modify the /etc/mail/sendmail.cf file directly. In any case, always back up your working version of /etc/mail/sendmail.cf before replacing it completely with the m4 command, as shown in the preceding example, or modifying it directly with a text editor.

Now, here's what you can tune to increase Sendmail performance.

Controlling the maximum size of messages

To control the size of e-mails that you can send or receive via Sendmail, use the MaxMessageSize option in your mc file as follows:

# maximum message size
define(`confMAX_MESSAGE_SIZE', `1000000')dnl

After regenerating the /etc/mail/sendmail.cf file using the m4 command, you will have the following line in the /etc/mail/sendmail.cf file:

O MaxMessageSize=1000000

This tells Sendmail to set the maximum message size to 1,000,000 bytes (approximately 1MB). Of course, you can choose a different number to suit your needs. Any message larger than the value of the MaxMessageSize option is rejected.

Caching Connections

Sendmail controls connection caches for IPC connections when processing the queue using the ConnectionCacheSize and ConnectionCacheTimeout options. It searches the cache for a pre-existing, active connection first. ConnectionCacheSize defines the number of simultaneous open connections that are permitted. The default is two, which is set in /etc/mail/sendmail.cf as follows:

O ConnectionCacheSize=2

You can set it in your mc file using the following:

define(`confMCI_CACHE_SIZE', `4')dnl

Here, the maximum number of simultaneous connections is four. Note that setting this too high will create resource problems on your system, so don't abuse it. Setting the cache size to 0 disables the connection cache.

The ConnectionCacheTimeout option specifies the maximum time that any cached connection is permitted to remain idle. The default is:

O ConnectionCacheTimeout=5m

which means that the maximum idle time is five minutes. I don't recommend changing this option.

CONTROLLING FREQUENCY OF THE MESSAGE QUEUE

Typically, when Sendmail is run as a standalone service (that is, not as an xinetd-run service), the -q option is used to specify the frequency at which the queue is processed. For example, the /etc/sysconfig/sendmail file has a line such as the following:

QUEUE=1h

This line is used by the /etc/rc.d/init.d/sendmail script to supply the value for the -q command-line option of the Sendmail binary (/usr/sbin/sendmail). The default value of 1h (one hour) is suitable for most sites, but if you frequently find that the mailq | wc -l command shows hundreds of mails in the queue, you may want to adjust the value to a smaller number, such as 30m (30 minutes).

CONTROLLING MESSAGE BOUNCE INTERVALS

When a message can't be delivered to the recipient because of a remote failure, such as "recipient's disk quota is full" or "server is temporarily unavailable," the message is queued, retried, and finally bounced after a timeout period.
The bounce timeout can be adjusted by defining the following options in your mc file:

define(`confTO_QUEUERETURN', `5d')dnl
define(`confTO_QUEUERETURN_NORMAL', `5d')dnl
define(`confTO_QUEUERETURN_URGENT', `2d')dnl
define(`confTO_QUEUERETURN_NONURGENT', `7d')dnl

These options result in the following configuration lines in /etc/mail/sendmail.cf:

O Timeout.queuereturn=5d
O Timeout.queuereturn.normal=5d
O Timeout.queuereturn.urgent=2d
O Timeout.queuereturn.non-urgent=7d

Here, the default bounce message is sent to the sender after five days, which is set by the Timeout.queuereturn option (that is, the confTO_QUEUERETURN option line in your mc file). If the message was sent with normal priority, the sender receives the bounce message within five days, which is set by the Timeout.queuereturn.normal option (confTO_QUEUERETURN_NORMAL in your mc file). If the message was sent as urgent, the bounce message is sent in two days, which is set by Timeout.queuereturn.urgent (confTO_QUEUERETURN_URGENT in the mc file). If the message was sent with a low priority level, it's bounced after seven days, which is set by the Timeout.queuereturn.non-urgent option (confTO_QUEUERETURN_NONURGENT in the mc file).

If you would like the sender to be warned prior to the actual bounce, you can use the following settings in your mc file:

define(`confTO_QUEUEWARN', `4h')dnl
define(`confTO_QUEUEWARN_NORMAL', `4h')dnl
define(`confTO_QUEUEWARN_URGENT', `1h')dnl
define(`confTO_QUEUEWARN_NONURGENT', `12h')dnl

When you regenerate your /etc/mail/sendmail.cf file with the preceding options in your mc file, you get lines such as the following:

O Timeout.queuewarn=4h
O Timeout.queuewarn.normal=4h
O Timeout.queuewarn.urgent=1h
O Timeout.queuewarn.non-urgent=12h

Here, the default warning message (stating that a message could not yet be delivered) is sent to the sender after four hours. Similarly, senders who use priority settings when sending mail get a warning after four hours, one hour, or 12 hours for normal-, urgent-, and low-priority messages respectively.

CONTROLLING THE RESOURCES USED FOR BOUNCED MESSAGES

As mentioned before, a message is retried again and again for days before it is removed from the queue. Retrying a failed message takes resources away from the new messages that the system needs to process. A failed message will probably keep failing for a while, so trying to resend it too quickly is really a waste of resources.

You can control the minimum time a failed message must stay in the queue before it's retried by using the following line in your mc file:

define(`confMIN_QUEUE_AGE', `30m')dnl

This results in the following line in your /etc/mail/sendmail.cf file after it's regenerated:

O MinQueueAge=30m

This option states that a failed message must sit in the queue for 30 minutes before it's retried.

Also, you may want to reduce the priority of a failed message by setting the following option in your mc file:

define(`confWORK_TIME_FACTOR', `90000')dnl

This results in the following option in your /etc/mail/sendmail.cf file after it's regenerated:

O RetryFactor=90000

This option sets a retry factor that is used in the calculation of a message's priority in the queue.
The larger the retry factor, the lower the priority of the failed message becomes.

Controlling simultaneous connections

By default, Sendmail accepts an unlimited number of connections per second; it accepts as many connections as possible under Linux. If you run Sendmail on a system that isn't just a mail server, this unlimited connection capability may not be suitable, because it takes system resources away from your other services. For example, if you run a Web server on the same machine you run Sendmail on, you may want to limit the SMTP connections to an appropriate value using the following option line in your mc file:

define(`confCONNECTION_RATE_THROTTLE', `5')dnl

This creates the following configuration option in the /etc/mail/sendmail.cf file after you regenerate it:

O ConnectionRateThrottle=5

Now Sendmail will accept only five connections per second. Because Sendmail doesn't pre-fork child processes, it starts five child processes per second at peak load. This can be dangerous if you don't put a cap on the maximum number of children that Sendmail can start. Luckily, you can use the following configuration option in your mc file to limit that:

define(`confMAX_DAEMON_CHILDREN', `15')dnl

This creates the following configuration option in the /etc/mail/sendmail.cf file after you regenerate it:

O MaxDaemonChildren=15

This limits the maximum number of child processes to 15. It also throttles your server back to a degree that makes it unattractive to spammers, because it really can't relay that much mail (if you've left relaying on).

Limiting the load placed by Sendmail

You can instruct Sendmail to stop delivering mail and simply queue it if the system load average gets too high. You can define the following option in your mc file:

define(`confQUEUE_LA', `5')dnl

This creates the following configuration option in the /etc/mail/sendmail.cf file after you regenerate it:

O QueueLA=5

Here, Sendmail stops delivery attempts and simply queues mail when the system load average is above five. You can also refuse connections if the load average goes above a certain threshold by defining the following option in your mc file:

define(`confREFUSE_LA', `8')dnl

This creates the following configuration option in the /etc/mail/sendmail.cf file after you regenerate it:

O RefuseLA=8

Here, Sendmail refuses connections after the load average reaches eight or above. Note that locally produced mail is still accepted for delivery.

Saving memory when processing the mail queue

When Sendmail processes the mail queue, the program's internal data structures demand more RAM, which can be a problem for a system with little memory to spare.
In such a case, you can define the following option in your mc file:

define(`confSEPARATE_PROC', `True')dnl

This creates the following configuration option in the /etc/mail/sendmail.cf file after you regenerate it:

O ForkEachJob=True

This forces Sendmail to fork a child process to handle each message in the queue, which reduces the amount of memory consumed because queued messages won't have a chance to pile up data in memory. However, all those individual child processes impose a significant performance penalty, so this option isn't recommended for sites with a high mail volume. Also note that if the ForkEachJob option is set, Sendmail can't use connection caching.

Controlling number of messages in a queue run

If you want to limit the number of messages that Sendmail reads from the mail queue, define the following option in your mc file:

define(`confMAX_QUEUE_RUN_SIZE', `10000')dnl

This creates the following configuration option in the /etc/mail/sendmail.cf file after you regenerate it:

O MaxQueueRunSize=10000

Here, Sendmail stops reading mail from the queue after reading 10,000 messages. Note that when you use this option, message prioritization is disabled.

Handling the full queue situation

The Sendmail queue directory (specified by the QueueDirectory option in the /etc/mail/sendmail.cf file or the QUEUE_DIR option in your mc file) is at its best if you keep it in a disk partition of its own. This is especially true for a large mail site. The default path for the queue is /var/spool/mqueue. A dedicated queue disk partition (or even a full disk) enhances performance by itself.

To avoid running out of queue space on a high-volume site, set a limit so that Sendmail refuses mail until room is available in the queue. You can define the following option in your mc file for this purpose:

define(`confMIN_FREE_BLOCKS', `100')dnl

This creates the following configuration option in the /etc/mail/sendmail.cf file after you regenerate it:

O MinFreeBlocks=100

This setting tells Sendmail to refuse e-mail when fewer than 100 1K blocks of space are available in the queue directory.
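To summarize the Sendmail tuning discussion, the following sketch collects the defines covered in the preceding sections in one place, using the example values from the text. It is illustrative rather than a recommendation, since the right values depend on your hardware and mail volume:

define(`confMAX_MESSAGE_SIZE', `1000000')dnl
define(`confMCI_CACHE_SIZE', `4')dnl
define(`confTO_QUEUERETURN', `5d')dnl
define(`confTO_QUEUEWARN', `4h')dnl
define(`confMIN_QUEUE_AGE', `30m')dnl
define(`confWORK_TIME_FACTOR', `90000')dnl
define(`confCONNECTION_RATE_THROTTLE', `5')dnl
define(`confMAX_DAEMON_CHILDREN', `15')dnl
define(`confQUEUE_LA', `5')dnl
define(`confREFUSE_LA', `8')dnl
define(`confMIN_FREE_BLOCKS', `100')dnl

Add these lines to your mc file, regenerate /etc/mail/sendmail.cf with m4, and restart Sendmail for the changes to take effect.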
Tuning Postfix

Postfix is the new MTA on the block. There is no RPM version of the Postfix distribution yet, but installing it is simple. I show the installation procedure in the following section.

Installing Postfix

Download the source distribution from the www.postfix.org site. As of this writing, the source distribution was postfix-19991231-pl13.tar.gz. When you get the source, the version number may be different; always use the current version number when following the instructions given in this book.

1. Su to root.

2. Extract the source distribution in the /usr/src/redhat/SOURCES directory using the tar xvzf postfix-19991231-pl13.tar.gz command. This creates a subdirectory called postfix-19991231-pl13; change to that directory.

If you don't have the latest Berkeley DB installed, install it before continuing. You can download the latest Berkeley DB source from www.sleepycat.com.

3. Run the make command to compile the source.

4. Create a user called postfix using the useradd postfix -s /bin/true -d /dev/null command.

5. Create a file called /etc/aliases with the following line:

postfix: root

6. Run the sh INSTALL.sh command to configure and install the Postfix binaries. Simply accept the default values.

7. Browse the /etc/postfix/main.cf file and modify any configuration option that needs to be changed.

You can skip Step 8 to get started quickly.

8. Decide how the Postfix spool directory (/var/spool/postfix/maildrop) should be configured. It can be one of the following:

a. World-writeable (this is the default)
b. Sticky (mode 1733)
c. More restricted (mode 1730)

Because the maildrop directory is world-writeable, there is no need to run any program with special privileges (set-UID or set-GID), and the spool files themselves aren't world-writeable or otherwise accessible to other users. I recommend that you keep the defaults.

Now you can start Postfix as follows:

postfix start

The first time you start the application, you see warning messages as it creates its various directories. If you make any changes to the configuration files, reload Postfix:

postfix reload

Limiting number of processes used

You can control the total number of concurrent processes used by Postfix with the following parameter in the /etc/postfix/main.cf file:

default_process_limit = 50

Here, Postfix is allowed to run a total of 50 concurrent processes (such as SMTP clients, SMTP servers, and local delivery agents). You can override this setting in the /etc/postfix/master.cf file by changing the maxproc column for a service. For example, to receive 100 messages at a time, you can modify the /etc/postfix/master.cf file to set the maxproc column to 100 for the smtp service, as shown below:

# ==========================================================================
# service type  private unpriv  chroot  wakeup  maxproc command + args
#               (yes)   (yes)   (yes)   (never) (50)
# ==========================================================================
smtp      inet  n       -       n       -       100     smtpd

Limiting maximum message size

You can set the maximum message size using the following parameter in the /etc/postfix/main.cf file:

message_size_limit = 1048576

Here, the maximum message size is set to 1048576 bytes (1MB).

Limiting number of messages in queue

To control the number of active messages in the queue, use the following parameter in the /etc/postfix/main.cf file:

qmgr_message_active_limit = 1000

This sets the active message limit to 1,000.

Limiting number of simultaneous deliveries to a single site

It is impolite, and possibly illegal, to flood a remote server with too many concurrent SMTP connections. Some sites, such as AOL, Yahoo!, and Hotmail, may require you to sign an agreement before you can open a high number of connections to them.
Postfix enables you to limit the number of concurrent connections that it makes to a single destination using the following parameter:

default_destination_concurrency_limit = 10

This tells Postfix to open no more than 10 concurrent connections to a single site.

Controlling the queue-full situation

If your server handles lots of mail and you often find that the queue space is nearly full, consider adding the following parameter to the /etc/postfix/main.cf file:

queue_minfree = 1048576

Here, Postfix refuses mail when less than 1048576 bytes (1MB) of free space remains in the disk partition that holds the queue directory.

Controlling the length of time a message stays in the queue

A message should bounce after repeated delivery attempts fail. The length of time a failed message remains in the queue can be set in the /etc/postfix/main.cf file using the following parameter:

maximal_queue_lifetime = 5

Here, Postfix returns the undelivered message to the sender after five days of retries. If you would like to limit the size of the undelivered (bounce) message sent to the sender, use the following parameter:

bounce_size_limit = 10240

Here, Postfix returns 10240 bytes (10K) of the original message to the sender.

Controlling the frequency of the queue

To control the frequency of queue runs, use the following parameter in the /etc/postfix/main.cf file:

queue_run_delay = 600

This parameter specifies that the queue may run every 600 seconds (10 minutes).
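As with the Sendmail summary earlier, here is an illustrative /etc/postfix/main.cf fragment collecting the parameters discussed above with the example values from the text; tune each number to your own load:

# Process and message limits
default_process_limit = 50
message_size_limit = 1048576
qmgr_message_active_limit = 1000
default_destination_concurrency_limit = 10

# Queue behavior
queue_minfree = 1048576
maximal_queue_lifetime = 5
bounce_size_limit = 10240
queue_run_delay = 600

Run postfix reload after editing main.cf so that the running daemons pick up the new values.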
Using PowerMTA for High-Volume Outbound Mail

PowerMTA from Port25 is a multithreaded, highly scalable commercial MTA designed for high-volume, outbound mail delivery. You can download an RPM binary package from the Port25 Web site at www.port25.com. However, you do need to fill out an evaluation request form to get the license key needed to start the evaluation process. Port25 sends the evaluation license key via e-mail within a reasonable timeframe (usually the same day).

After you have the binary RPM package and the license key, you can install the package using the rpm -ivh pmta-package.rpm command, replacing pmta-package.rpm with the name of the RPM file you downloaded from the Port25 Web site. The RPM package that I downloaded, for example, was called PowerMTA-1.0rel-200010112024.rpm.

After the RPM is installed, configure it by following these steps:

1. Edit the /etc/pmta/license file and insert the evaluation license data you received from Port25 via e-mail.

2. Edit the /etc/pmta/config file and set the postmaster directive to an appropriate e-mail address. For example, replace #postmaster you@your.domain with something like postmaster root@yourdomain.com.

3. If you use Port25's Perl submission API to submit mail to the PowerMTA (pmta) daemon, change directory to /opt/pmta/api and extract Submitter-1.02.tar.gz (or a later version) using the tar xvzf Submitter-1.02.tar.gz command.

4. Change to the new subdirectory called Submitter-1.02 and run the following Perl commands, in exactly this sequence: perl Makefile.PL; make; make test; make install. Doing so installs the Perl submitter API module.

To start the PowerMTA (pmta) server, run the /etc/rc.d/init.d/pmta start command. Thereafter, whenever you reconfigure the server by modifying the /etc/pmta/config file, make sure you run the /usr/sbin/pmta reload command.

Because PowerMTA is a multithreaded application, many threads are listed as processes if you run commands such as ps auxww | grep pmta. Don't be alarmed if you see a lot of threads; PowerMTA can launch up to 800 threads under the Linux platform.

Using multiple spool directories for speed

PowerMTA can take advantage of multiple spool directories via the spool directive in the /etc/pmta/config file. For example:

spool /spooldisk1
spool /spooldisk2
spool /spooldisk3

Here, PowerMTA is told to manage spooling among three directories. Three different fast (ultra-wide SCSI) disks are recommended for spooling. Because spooling on different disks reduces the I/O-related wait for each disk, it yields higher performance in the long run.

Setting the maximum number of file descriptors

PowerMTA uses many file descriptors to open many files in the spool directories; to accommodate it, you need a higher descriptor limit than the Linux default. You can view the current system-wide file-descriptor limit by using the cat /proc/sys/fs/file-max command. Use the ulimit -Hn 4096 command to set the file-descriptor limit to 4096 when you start PowerMTA from the /etc/rc.d/init.d/pmta script.

Setting a maximum number of user processes

PowerMTA also launches many threads, so you must increase the maximum number of processes that can run under a single user account. You can set that limit in the /etc/rc.d/init.d/pmta script by adding a line such as the following:

ulimit -Hu 1024

Here, PowerMTA is enabled to launch 1,024 threads.

Setting maximum concurrent SMTP connections

PowerMTA enables you to limit how many concurrent SMTP connections can access a specific domain; you do so in the /etc/pmta/config file. The default maximum is set by a wildcard domain-container directive that looks like this:

<domain *>
max-smtp-out 20    # max. connections *per domain*
bounce-after 4d12h # 4 days, 12 hours
retry-after 60m    # 60 minutes
log-resolution no
log-connections no
log-commands no
log-data no
</domain>

Here the max-smtp-out directive is set to 20 for all (*) domains. At this setting, PowerMTA opens no more than 20 connections to any one domain. If, however, you have an agreement with a particular domain that allows you to make more connections, you can create a domain-specific configuration to handle that exception. For example, to connect 100 simultaneous PowerMTA threads to your friend's domain (myfriendsdomain.com), you can add the following lines to the /etc/pmta/config file:

<domain myfriendsdomain.com>
max-smtp-out 100
</domain>

Don't create such a configuration without getting permission from the other side. If the other domain is unprepared for the swarm of connections, you may find your mail servers blacklisted. You may even get into legal problems with the remote party if you abuse this feature.
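The two ulimit settings above belong in the pmta init script before the daemon is launched. The exact contents of /etc/rc.d/init.d/pmta vary by PowerMTA release, so the following is only an illustrative excerpt showing where the lines would go:

# /etc/rc.d/init.d/pmta (excerpt)
# Raise per-process limits before starting the daemon:
ulimit -Hn 4096    # hard limit on open file descriptors
ulimit -Hu 1024    # hard limit on processes/threads for this user
# ... existing code that starts the pmta daemon follows ...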
Monitoring performance

Because PowerMTA is a high-performance delivery engine, checking on how it's working is a good idea. You can run the /usr/sbin/pmta show status command to view the currently available status information. Listing 6-1 shows a sample status output.

Listing 6-1: Sample output of /usr/sbin/pmta show status

PowerMTA v1.0rel status on andre.intevo.com on 2001-01-07 00:30:30

Traffic      ------------inbound------------  ------------outbound-----------
             rcpts    msgs     kbytes         rcpts    msgs     kbytes
Total        221594   221594   5230009.5      221174   221174   4884289.7
Last Hour    0        0        0.0            0        0        0.0
Top/Hour     138252   138252   3278106.9      131527   131527   3339707.1
Last Min.    0        0        0.0            0        0        0.0
Top/Min.     7133     7133     69948.1        3002     3002     62914.8

Connections  active  top  maximum             Domain  cached   pending
Inbound      0       3    30                  Names   4844     0
Outbound     1       698  800
                                              Spool   in use   recycled
SMTP queue   rcpts   domains  kbytes          Files   659      1000
             340     11       9629.0          Init. complete

Status running   Started 2001-01-05 13:48:50   Uptime 1 10:41:40

Here, in the Top/Hour row, PowerMTA reports that it has sent 131,527 messages in an hour. Not bad. But PowerMTA can do even better: after a few experiments, I have found it can easily achieve 300-500K messages per hour on a single PIII Red Hat Linux system with 1GB of RAM.

PowerMTA is designed for high performance and high volume. Its multithreaded architecture efficiently delivers a large number of individual e-mail messages to many destinations.

Summary

Sendmail, Postfix, and PowerMTA are common Mail Transport Agents (MTAs). They can be fine-tuned for better resource management and higher performance.

Chapter 7

NFS and Samba Server Performance

IN THIS CHAPTER

- Tuning Samba
- Tuning NFS server

A HIGHLY TUNED SAMBA or NFS server has the following characteristics:

- Its hardware is optimal. A typical client/server system falls short of optimal because of three hardware bottlenecks:

  - Disk drives. Any component with moving parts always moves too slowly compared to information, which moves at the speed of electric impulses. Fortunately, fast, modern hardware is relatively cheap; you can buy 10,000-RPM ultra-wide SCSI disk drives without paying an arm and a leg.

  - CPU. As with single-user systems, the basic principle that governs CPU selection is the faster the better, and thanks to Intel and friends, 1GHz CPUs are available in the PC market.

  - Network cabling. Unfortunately, now-obsolescent 10Mbps Ethernet is still the norm in most organizations; 100Mbps Ethernet is still not deployed everywhere. I have used a PIII 500MHz Samba system with 10 local, ultra-wide, 10K-RPM drives on three disk controllers on a 100Mbps Ethernet to service over 100 users, including office administrators (small-file and infrequent-access users), engineers (frequent file-access users), and graphics artists (large-file users). My biggest worry was controlling the temperature of the server, because the 10K-RPM drives heated up fast; I had to use many small fans as disk-bay covers to cool the server.

- Its server configuration is optimal. And that means a lot of careful attention to settings, usually on the part of the system administrator. Unfortunately, there is no easy way to formulate the ideal configuration for your Samba or NFS server. Each implementation has its own needs; the best method is trial and error.
This chapter shows many configuration options that can help make your trial-and-error experiments effective.

Tuning Samba Server

This section shows how you can tune the Samba server for best performance.

Controlling TCP socket options

The Samba server uses TCP packets to communicate with its clients. You can enhance performance by adding the following parameter in the /etc/samba/smb.conf file:

socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192

The TCP_NODELAY option tells the Samba server to send as many packets as necessary to keep the delay low. The SO_RCVBUF and SO_SNDBUF options set the receive and send window (buffer) sizes to 8K (8192 bytes), which should result in good performance. Here we are instructing the Samba server to read/write 8K of data before requesting an acknowledgement (ACK) from the client side.

USING OPPORTUNISTIC LOCKS (OPLOCKS)

When a client accesses a file on a Samba server, it doesn't know whether the file is being accessed by others who may change the file's contents. However, if the Samba server can somehow tell the client that it has exclusive access to a file, the client can cache the file contents and thus increase performance. To enable a client to locally cache a file, the server uses opportunistic locks (oplocks). If you have the following parameter set in the /etc/samba/smb.conf file for a share, the server can grant an oplock to clients, which should result in a performance gain of about 30 percent or more:

oplocks = true

Newer versions of Samba (2.0.5 or later) support a new type of opportunistic-lock parameter called level2 oplocks. This type of oplock is used for read-only access. When this parameter is set to true, you should see a major performance gain in concurrent access to files that are usually just read. For example, executable applications that are read from a Samba share can be accessed faster because of this option.

Samba also has a fake oplocks parameter that can be set to true to grant oplocks to any client that asks for one. However, fake oplocks are deprecated and should never be used on shares that allow writes. If you enable fake oplocks for shares that clients can write to, you risk data corruption.

Note that when you enable oplocks for a share such as the following:

[pcshare]
comment = PC Share
path = /pcshare
public = yes
writable = yes
printable = no
write list = @pcusers
oplocks = true

you may want to tell Samba to ignore oplock requests from clients for files that are writeable. You can use the veto oplock files parameter to exclude such files. For example, to exclude all files with the .doc extension from being oplocked, you can use:

veto oplock files = /*.doc/

CONTROLLING THE WRITE-AHEAD BEHAVIOR

If you run the Samba server on a system where disk-access speed is comparable to network-access speed, you can use the read size parameter. For example:

read size = 16384

When the amount of data transferred is larger than the specified read size, the server begins to write data to disk before it has received the whole packet from the network, or to write to the network before all data has been read from the disks.

CONTROLLING THE WRITE SIZE

The maximum size of a single packet is controlled by a network option called the Maximum Transmission Unit (MTU), which is set in the network configuration.
CONTROLLING THE WRITE SIZE

The maximum size of a single packet is controlled by a network option called the Maximum Transmission Unit (MTU), which is set in the network configuration. The default value is 1500; you can check the MTU value set for a network interface by running the ifconfig command. If Samba transmits data in a size smaller than the MTU, throughput is reduced. The max xmit parameter controls the write size that Samba uses when writing data to the network. The default value of this parameter is 65,536, which is also the maximum; you can set it to anything between 2,048 and 65,536. On high-speed networks, leave the default as is; on slow networks the default may not be optimal, so use a small value such as 2,048 for better performance.

CONTROLLING RAW READS

When the read raw parameter is set, the Samba server reads a maximum of 65,536 bytes in a single packet. However, in some instances setting this parameter may actually reduce performance. The only sure way to tell whether read raw = yes helps your server is to try running Samba with read raw = no. If you see a performance drop, enable it again.

CONTROLLING RAW WRITES

When the write raw parameter is set, the Samba server writes a maximum of 65,536 bytes in a single packet. Again, in some instances setting this parameter may actually reduce performance. The only sure way to tell whether write raw = yes helps your server is to try running Samba with write raw = no. If you see a performance drop, enable it again.

SETTING THE LOG LEVEL APPROPRIATELY

Setting the log level parameter to anything above two reduces performance greatly, because each log entry is flushed to disk. I recommend that you set log level to one.

CACHE CURRENT DIRECTORY PATH

Setting the getwd cache = yes parameter enables caching of the current directory path, which saves the server from time-consuming directory tree traversals.

AVOIDING STRICT LOCKING AND SYNCHRONIZATION FOR FILES

A few strict parameters are best avoided:

- If you set the strict locking parameter to yes, the Samba server performs lock checks on each read/write operation, which severely decreases performance. Don't use this option, especially on Samba shares that are really remote NFS-mounted filesystems.

- If you set the strict sync parameter to yes, the Samba server writes each packet to disk and waits for the write to complete whenever the client sets the sync bit in a packet. This causes severe performance problems when working with Windows clients running MS Explorer or other programs like it, which set the sync bit for every packet.

AUTOMATICALLY CREATING USER ACCOUNTS

Although this isn't a performance option, it saves you a lot of administrative hassle, so it is included here. When you want Windows users or other Linux-based Samba clients to access your Samba server, you need a user account for each client. If the Samba resource you want to offer to these clients can be shared using a single account, you can simply create a user account called myguest using the useradd myguest command and set guest account = myguest in the global section of the /etc/samba/smb.conf file. Then you can use the guest ok = yes parameter in the appropriate resource section to enable guest access for that resource.
For example:

[printers]
comment = All Printers
path = /var/spool/samba
browseable = no
guest ok = yes
printable = yes

Here, all the printers managed by Samba are accessible via the guest account. However, guest access isn't always desirable; enabling guest access to a user's home directory, for example, is a bad idea for obvious reasons.

Unfortunately, maintaining Linux user accounts for all your Windows users can be a tough task, especially because you must manually synchronize the addition and removal of such users. Fortunately, if you use domain-level security you can automate this process using the following parameters in the global section:

add user script = /usr/sbin/useradd %u -g smbusers
delete user script = /usr/sbin/userdel %u

Whenever a Windows user (or a remote Samba client) attempts to access your Samba server using domain-level security, a new user account is created if the password server (typically the Primary Domain Controller) authenticates the user; likewise, the user account is removed if the password server fails to authenticate the user. This means that if you add a new user account in your Windows 2000/NT domain, and your Samba server uses a Windows 2000/NT server for domain-level security, the corresponding Linux account on the Samba server is managed automatically.
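Putting these pieces together, a hypothetical global section for a Samba server that delegates authentication to a Windows domain controller might look like the following sketch. The security and password server settings are standard Samba parameters, but the domain and server names here are placeholders you must replace with your own.

[global]
# delegate authentication to the NT/2000 domain (placeholder names)
security = domain
workgroup = MYDOMAIN
password server = PDC1
# keep Linux accounts in sync with the domain automatically
add user script = /usr/sbin/useradd %u -g smbusers
delete user script = /usr/sbin/userdel %u

This sketch assumes the smbusers group already exists; create it once with the groupadd smbusers command.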
Tuning Samba Client

If you use Windows clients such as Windows 9x and Windows 2000/NT to access the Samba server, consult your operating system guide to determine whether you can increase the performance of the TCP/IP stack they use.

Tuning NFS Server

The primary bottleneck in an NFS environment is the disk I/O speed of the NFS server, which depends on the kind of disk subsystem you use. For example, running an NFS server on IDE disks doesn't yield great performance compared with running it on ultra-wide SCSI disks with high RPM rates. The maximum number of I/O operations per second dictates how well your NFS server performs. I have used an Intel Xeon 500 system with 10 ultra-wide SCSI disks in RAID 5 as an NFS server for about 50 users with great success.

After you have decided on a good disk subsystem, such as RAID 5 using an array of 10K-RPM ultra-wide SCSI disks and a disk controller with a large built-in disk cache, your next hardware bottleneck is the network itself. Isolating high-bandwidth traffic on its own network is a good way to reduce performance loss, so I recommend connecting your NFS servers to your NFS clients using a dedicated 100Mb Ethernet. Create an NFS backbone that moves only NFS packets; this results in a high-performance NFS network.

Optimizing read/write block size

The default read and write block size for NFS is 4,096 bytes (4KB), which may not be optimal for all situations. You can perform a test to determine whether changing the block size will help. This test assumes that you have an NFS server running on a Linux system, that you also have a Linux-based NFS client system, and that the client mounts a filesystem called /mnt/nfs1 from the NFS server.

1. su to root on the NFS client machine.

2. Determine the total amount of memory your system has. If you don't remember it offhand, run the cat /proc/meminfo command to view the memory information for your system, which displays output similar to the following:

        total:     used:      free:    shared:    buffers:   cached:
Mem:  263720960  260456448  3264512   30531584   228245504  6463488
Swap: 271392768    6209536  265183232

MemTotal:   257540 kB
MemFree:      3188 kB
MemShared:   29816 kB
Buffers:    222896 kB
Cached:       6312 kB
BigTotal:        0 kB
BigFree:         0 kB
SwapTotal:  265032 kB
SwapFree:   258968 kB

3. The total amount of system memory is shown under the total: column; divide this number by 1,048,576 (1024 x 1024) to get the approximate total memory size in megabytes. In the preceding example, this number is 251MB. Interestingly, total memory is never reported quite accurately by most PC system BIOSes, so round the number based on what you know about the system. In my example, I know the system should have 256MB of RAM, so I use 256MB as the memory size in this test. If you have more than 1GB of RAM, I recommend pretending that you have 512MB and using that as the RAM size for this experiment.

4. Change directory to the currently mounted /mnt/nfs1 NFS directory. Run the df command to see whether you have at least 512MB (2 x total RAM) of free space available on the NFS-mounted filesystem. If you don't, you can't continue with this experiment; I assume that you do have such space available.

5. Measure the write performance of your current NFS setup by writing a 512MB (16KB/block x 32,768 blocks) file called 512MB.dat in the /mnt/nfs1 directory, using the following command:

time dd if=/dev/zero \
        of=/mnt/nfs1/512MB.dat \
        bs=16k count=32768

This runs the time command, which records the execution time of the program named as its first argument; in this case, the dd command is timed. The dd command is given an input file (using the if option) called /dev/zero. This file is a special device that returns a 0 (zero) character when read, and keeps returning it until the file is closed, which gives us an easy source with which to fill an output file (specified using the of option) called /mnt/nfs1/512MB.dat. The dd command is told to use a block size (the bs option) of 16KB and to write a total of 32,768 blocks (the count option). Because 16KB/block times 32,768 blocks equals 512MB, this creates the file we intended. After the command executes, it prints a few lines such as the following:

32768+0 records in
32768+0 records out
1.610u 71.800s 1:58.91 61.7% 0+0k 0+0io 202pf+0w

Here the dd command read 32,768 records from the /dev/zero device and wrote the same number of records to the /mnt/nfs1/512MB.dat file. The third line states that the operation took one minute and 58.91 seconds. Record this line in a text file as follows:

Write, 1, 1.610u, 71.800s, 1:58.91, 61.7%

Here, you are noting that this was the first write experiment.

6. Next, measure the read performance of your current NFS setup by reading back the 512MB file created earlier and timing how long that takes.
To read it back and time the read access, run the following command:

time dd if=/mnt/nfs1/512MB.dat \
        of=/dev/null \
        bs=16k count=32768

Here the dd command is timed again, this time reading the /mnt/nfs1/512MB.dat file as input and writing its contents to /dev/null, the official bottomless bit bucket for Linux. As before, record the time in the same file in which you recorded the write performance. For example, on my system the third line of the read test's output translated into the following record:

Read, 1, 1.970u, 38.970s, 2:10.44, 31.3%

Here, you are noting that this was the first read experiment.

7. Remove the 512MB.dat file from /mnt/nfs1 and unmount the partition using the umount /mnt/nfs1 command. Unmounting the NFS directory ensures that disk caching doesn't influence your next set of tests.

8. Repeat the write and read-back tests (Steps 5 through 7) at least five times. You should end up with a set of notes like the following:

Read,  1, 1.971u, 38.970s, 2:10.44, 31.3%
Read,  2, 1.973u, 38.970s, 2:10.49, 31.3%
Read,  3, 1.978u, 38.971s, 2:10.49, 31.3%
Read,  4, 1.978u, 38.971s, 2:10.49, 31.3%
Read,  5, 1.978u, 38.971s, 2:10.49, 31.3%
Write, 1, 1.610u, 71.800s, 1:58.91, 61.7%
Write, 2, 1.610u, 71.801s, 1:58.92, 61.7%
Write, 3, 1.610u, 71.801s, 1:58.92, 61.7%
Write, 4, 1.610u, 71.801s, 1:58.92, 61.7%
Write, 5, 1.611u, 71.809s, 1:58.92, 61.7%

9. Calculate the average read and write times from the fifth column (the elapsed time).

You have now completed the first phase of this test and discovered the average read and write access times for a 512MB file. The second phase of the test goes as follows:

1. Unmount the /mnt/nfs1 directory on the NFS client system using the umount /mnt/nfs1 command.

2. Modify the /etc/fstab file on the NFS client system so that the /mnt/nfs1 filesystem is mounted with the rsize=8192,wsize=8192 options, as shown below. Note that an fstab entry must be a single line, and the option list must contain no spaces:

nfs-server-host:/nfs1  /mnt/nfs1  nfs  rsize=8192,wsize=8192  0 0

3. Mount the /mnt/nfs1 directory again using the mount /mnt/nfs1 command.

4. Perform Steps 4 through 9 of the previous experiment.

5. Compare the read and write averages between phase 1 and phase 2 of the test. If the phase 2 results look better, changing the read and write block size has increased your NFS performance; if not, remove the rsize=8192,wsize=8192 options from the line in /etc/fstab. Most likely, the block size change will increase NFS performance. You can also experiment with other block sizes; it's advisable to use multiples of 1,024, because 1,024 is the actual filesystem block size, and you shouldn't use values larger than 8,192 bytes.

If the block size change works for you, keep rsize=8192,wsize=8192 (or whatever you find optimal through further experiments) in the /etc/fstab line for the /mnt/nfs1 definition.
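If you plan to rerun this benchmark for several block sizes, a small shell script saves typing. The following sketch simply automates the five write/read rounds described above; the mount point and file size are the ones used in this test, so adjust them for your own setup.

#!/bin/sh
# Hypothetical helper: repeat the NFS write/read timing test five times.
# Assumes /mnt/nfs1 is defined in /etc/fstab and is currently unmounted.
MNT=/mnt/nfs1
FILE=$MNT/512MB.dat

for i in 1 2 3 4 5
do
    mount $MNT
    echo "Write, $i:"
    time dd if=/dev/zero of=$FILE bs=16k count=32768
    echo "Read, $i:"
    time dd if=$FILE of=/dev/null bs=16k count=32768
    rm -f $FILE
    # unmounting between rounds keeps disk caching from skewing results
    umount $MNT
done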
Setting the appropriate Maximum Transmission Unit

The Maximum Transmission Unit (MTU) value determines how large a single packet transmission can be. If the MTU is set too small, NFS performance suffers greatly. To discover the appropriate MTU setting, do the following:

1. su to root on the NFS client system.

2. Run the tracepath nfsserver/2049 command, where nfsserver is your NFS server's hostname. The command reports the MTU for the path.

3. Check the current MTU of the network interface used to access the NFS server. You can simply run the ifconfig command to list information about all your up-and-running network interfaces.

4. If the MTU setting for the appropriate network interface is not the same as the one reported by the tracepath command, use ifconfig to set it with the mtu option. For example, the ifconfig eth0 mtu 512 command sets the MTU of network interface eth0 to 512 bytes.

Running the optimal number of NFS daemons

By default, you run eight NFS daemons. To see how heavily each nfsd thread is used, run the cat /proc/net/rpc/nfsd command. The last ten numbers on the line in that file indicate the number of seconds that nfsd thread usage was at that percentage of the maximum allowable. If you have a large number in the top three deciles, you may want to increase the number of nfsd instances. To change the number of NFS daemons started when your server boots, do the following:

1. su to root.

2. Stop nfsd using the /etc/rc.d/init.d/nfs stop command if it is currently running.

3. Modify the /etc/rc.d/init.d/nfs script so that RPCNFSDCOUNT=8 is set to an appropriate number of NFS daemons.

4. Start nfsd using the /etc/rc.d/init.d/nfs start command.

CONTROLLING SOCKET INPUT QUEUE SIZE

By default, Linux uses a socket input queue of 65,535 bytes (64KB). If you run eight NFS daemons (nfsd) on your system, each daemon gets only an 8K buffer in the input queue. Increase the queue size to at least 256KB as follows:

1. su to root.

2. Stop nfsd using the /etc/rc.d/init.d/nfs stop command if it is currently running.

3. Modify the /etc/rc.d/init.d/nfs script so that just before the NFS daemon (nfsd) is started using the daemon rpc.nfsd $RPCNFSDCOUNT line, the following lines are added:

echo 262144 > /proc/sys/net/core/rmem_default
echo 262144 > /proc/sys/net/core/rmem_max

4. Right after the daemon rpc.nfsd $RPCNFSDCOUNT line, add the following lines:

echo 65536 > /proc/sys/net/core/rmem_default
echo 65536 > /proc/sys/net/core/rmem_max

5. Restart the NFS daemon using the /etc/rc.d/init.d/nfs start command.

Now each NFS daemon started by the /etc/rc.d/init.d/nfs script uses a 32K buffer in the socket input queue.
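Taken together, the edited region of the /etc/rc.d/init.d/nfs script would look roughly like this sketch. Only the relevant lines are shown, and the daemon count of 16 is merely an example value:

# excerpt from /etc/rc.d/init.d/nfs (sketch, not the complete script)
RPCNFSDCOUNT=16    # example: raised from the default of 8

# enlarge the socket input queue before the daemons start
echo 262144 > /proc/sys/net/core/rmem_default
echo 262144 > /proc/sys/net/core/rmem_max

daemon rpc.nfsd $RPCNFSDCOUNT

# restore the default queue size for all other processes
echo 65536 > /proc/sys/net/core/rmem_default
echo 65536 > /proc/sys/net/core/rmem_max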
\nN You can view the current value of your high threshold size; run\ncat /proc/sys/net/ipv4/ipfrag_high_thresh \nN You can change the high values by running\necho high-number > /proc/sys/net/ipv4/ipfrag_high_thresh\nN You can view the low threshold value by running\ncat /proc/sys/net/ipv4/ipfrag_low_thresh \nN To change the low number, run\necho low-number > /proc/sys/net/ipv4/ipfrag_low_thresh \nSummary\nIn this chapter you learned to tune the Samba and NFS servers.\nChapter 7: NFS and Samba Server Performance\n151\n" }, { "page_number": 175, "text": "" }, { "page_number": 176, "text": "System Security\nCHAPTER 8\nKernel Security\nCHAPTER 9\nSecuring Files and Filesystems\nCHAPTER 10\nPAM\nCHAPTER 11\nOpenSSL\nCHAPTER 12\nShadow Passwords and OpenSSH\nCHAPTER 13\nSecure Remote Passwords\nCHAPTER 14\nXinetd\nPart III\n" }, { "page_number": 177, "text": "" }, { "page_number": 178, "text": "Chapter 8\nKernel Security\nIN THIS CHAPTER\nN Using Linux Intrusion Detection System (LIDS)\nN Libsafe\nN Protecting stack elements\nTHIS CHAPTER PRESENTS kernel- or system-level techniques that enhance your over-\nall system security. I cover the Linux Intrusion Detection System (LIDS) and Libsafe,\nwhich transparently protect your Linux programs against common stack attacks.\nUsing Linux Intrusion Detection\nSystem (LIDS)\nThe root is the source of all evil. Probably this statement only makes sense to\nUnix/Linux system administrators. After an unauthorized root access is confirmed,\ndamage control seems very hopeless, or at least is at the intruder’s mercy.\nIn a default Red Hat Linux system, several subsystems are typically unprotected.\nN Filesystem. The system has many important files, such as /bin/login,\nthat hackers exploit frequently because they aren’t protected. If a hacker\nbreaks in, he can access the system in the future by uploading a modified\nlogin program such as /bin/login. In fact, files (that is, programs) such\nas /bin/login shouldn’t change frequently (if at all) — therefore, they\nmust not be left unprotected.\nN Running processes. Many processes run with the root privileges, which\nmeans that when they are exploited using tricks such as buffer overflow\n(explained in the “Using libsafe to Protect Program Stacks” section), the\nintruder gains full root access to the system.\n155\n" }, { "page_number": 179, "text": "LIDS enhances system security by reducing the root user’s power. LIDS also\nimplements a low-level security model — in the kernel — for the following purposes:\nN Security protection\nN Incident detection\nN Incident-response capabilities\nFor example, LIDS can provide the following protection:\nN Protect important files and directories from unauthorized access on your\nhard disk, no matter what local filesystem they reside on.\nN Protect chosen files and directories from modifications by the root user, so\nan unauthorized root access doesn’t turn an intruder into a supervillain.\nN Protect important processes from being terminated by anyone, including\nthe root user. (Again, this reduces root user capabilities.)\nN Prevent raw I/O operations from access by unauthorized programs.\nN Protect a hard disk’s master boot record (MBR).\nLIDS can detect when someone scans your system using port scanners — and\ninform the system administrator via e-mail. LIDS can also notify the system admin-\nistrator whenever it notices any violation of imposed rules — and log detailed mes-\nsages about the violations (in LIDS-protected, tamper-proof log files). 
LIDS can not only log and send e-mail about detected violations, it can even shut down an intruder's interactive session.

Building a LIDS-based Linux system

The Linux Intrusion Detection System (LIDS) is a kernel patch and a suite of administrative tools that enhances security from within the Linux operating system's kernel. LIDS uses a reference monitor security model, putting everything it refers to (the subject, the object, and the access type) in the kernel. If you want more information about this approach, the LIDS project Web site is www.lids.org/about.html.

A LIDS-enabled Linux system runs a customized kernel, so you must have the latest kernel source from a reliable kernel site, such as www.kernel.org. After you have downloaded and extracted the kernel into /usr/src/linux, download the LIDS patch for the specific kernel you need. For example, if you use kernel 2.4.1, make sure you download the LIDS patch built for that kernel from the LIDS project Web site. Typically, the LIDS patch and administrative tool package is called lids-x.x.x-y.y.y.tar.gz, where x.x.x represents the LIDS version number and y.y.y represents the kernel version (for example, lids-1.0.5-2.4.1).

I use LIDS 1.0.5 for kernel 2.4.1 in the instructions that follow; make sure you change the version numbers as needed. Extract the LIDS source distribution in the /usr/local/src directory using the tar xvzf lids-1.0.5-2.4.1.tar.gz command from the /usr/local/src directory. Now you can patch the kernel.

Make sure that /usr/src/linux points to the latest kernel source distribution that you downloaded. You can simply run ls -l /usr/src/linux to see which directory the symbolic link points to. If it points to an older kernel source, remove the link using rm -f /usr/src/linux and re-link it using ln -s /usr/src/linux-version /usr/src/linux, where version is the kernel version you downloaded. For example, ln -s /usr/src/linux-2.4.1 /usr/src/linux links the kernel 2.4.1 source to /usr/src/linux.

Patching, compiling, and installing the kernel with LIDS

Before you can use LIDS on your system, you must patch the kernel source, then compile and install the updated kernel. Here is how you can do that:

1. As root, extract the LIDS patch package in a suitable directory of your choice. I usually keep source code for locally compiled software in the /usr/local/src directory, and I assume that you will do the same; so from the /usr/local/src directory, run the tar xvzf lids-1.0.5-2.4.1.tar.gz command. Doing so creates a new subdirectory called lids-1.0.5-2.4.1.

2. Change directory to /usr/src/linux and run the patch -p1 < /usr/local/src/lids-1.0.5-2.4.1.patch command to patch the kernel source distribution.

3. Run the make menuconfig command from the /usr/src/linux directory to start the menu-based kernel configuration program. (Instead of make menuconfig, you can also use the make config command to configure the kernel.)

4. From the main menu, select the Code maturity level options submenu and choose the Prompt for development and/or incomplete code/drivers option by pressing the spacebar; then exit this submenu.

5. Select Sysctl support from the General setup submenu; then exit the submenu.
6. From the main menu, select the Linux Intrusion Detection System submenu. This submenu appears at the bottom of the main menu, and only if you have completed Steps 4 and 5; you may have to scroll down a bit.

7. From the LIDS submenu, select the Linux Intrusion Detection System support (EXPERIMENTAL) (NEW) option. You see a list of options:

(1024) Maximum protected objects to manage (NEW)
(1024) Maximum ACL subjects to manage (NEW)
(1024) Maximum ACL objects to manage (NEW)
(1024) Maximum protected processes (NEW)
[ ] Hang up console when raising a security alert (NEW)
[ ] Security alert when executing unprotected programs before sealing LIDS (NEW)
[ ] Try not to flood logs (NEW)
[ ] Allow switching LIDS protections (NEW)
[ ] Port Scanner Detector in kernel (NEW)
[ ] Send security alerts through network (NEW)
[ ] LIDS Debug (NEW)

The default limits for managed protected objects, ACL subjects/objects, and protected processes should be fine for most systems; you can leave them as is.

- If you want LIDS to disconnect the console when a user violates a security rule, select the Hang up console when raising a security alert option.

- If you want to issue a security alert when a program is executed before LIDS protection is enabled, select the Security alert when executing unprotected programs before sealing LIDS option. LIDS is enabled during bootup (as described later in the chapter), so it's likely that you will run other programs before sealing LIDS. When you select this option, you can also disable execution of unprotected programs altogether using the Do not execute unprotected programs before sealing LIDS option. I don't recommend disabling unprotected programs completely during bootup unless you are absolutely sure that everything you want to run during boot (such as the utilities and daemons) is protected and doesn't stop the normal boot process.

8. Enable the Try not to flood logs (NEW) option, and leave the default 60-second delay between the logging of two identical entries. The delay ensures that identical log entries aren't written in rapid succession, which limits the size of the log file and helps preserve your sanity.

9. Select the Allow switching LIDS protections option if you want to be able to switch LIDS protection on and off. If you do, you can customize this behavior further by setting values for the following options:

- Number of attempts to submit password
- Time to wait after a fail (seconds)
- Allow remote users to switch LIDS protections
- Allow any program to switch LIDS protections
- Allow reloading config. file

These are my preferences:

[*] Allow switching LIDS protections (NEW)
(3) Number of attempts to submit password (NEW)
(3) Time to wait after a fail (seconds) (NEW)
[*] Allow remote users to switch LIDS protections (NEW)
[ ] Allow any program to switch LIDS protections (NEW)
[*] Allow reloading config. file (NEW)

10. Select the Port Scanner Detector in kernel option and the Send security alerts through network option; don't change the default values for the second option.
11. Save your kernel configuration and run the following commands to compile the new kernel and its modules (if any):

make depend
make bzImage
make modules
make modules_install

If you aren't compiling a newer kernel version than the one already running on the system, back up the /lib/modules/current-version directory (where current-version is the current kernel version). For example, if you are compiling 2.4.1 and you already have 2.4.1 running, run the cp -r /lib/modules/2.4.1 /lib/modules/2.4.1.bak command to back up the current modules. In case of a problem with the new kernel, you can delete the broken kernel's modules and rename this directory to its original name.

12. Copy the newly created /usr/src/linux/arch/i386/boot/bzImage kernel image to /boot/vmlinuz-lids-1.0.5-2.4.1 using the cp /usr/src/linux/arch/i386/boot/bzImage /boot/vmlinuz-lids-1.0.5-2.4.1 command.

13. In the /etc/lilo.conf file, add the following:

image=/boot/vmlinuz-lids-1.0.5-2.4.1
    label=lids
    read-only
    root=/dev/hda1

If /dev/hda1 isn't the root device, make sure you change it as appropriate.

14. Run /sbin/lilo to reconfigure LILO. When LILO is reconfigured, the kernel configuration is complete.
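For reference, the entire patch-and-build sequence condenses to the following sketch. It assumes the version numbers used above; depending on how the patch was generated, you may need -p0 instead of -p1, so check the README that ships with your LIDS release.

cd /usr/local/src
tar xvzf lids-1.0.5-2.4.1.tar.gz
cd /usr/src/linux
patch -p1 < /usr/local/src/lids-1.0.5-2.4.1.patch
make menuconfig        # enable the LIDS options described in Steps 4-10
make depend
make bzImage
make modules
make modules_install
cp arch/i386/boot/bzImage /boot/vmlinuz-lids-1.0.5-2.4.1
/sbin/lilo             # after adding the new image to /etc/lilo.conf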
COMPILING, INSTALLING, AND CONFIGURING LIDS

After configuring the kernel, you can proceed with the rest of your LIDS configuration. Here's how to compile and install the LIDS administrative program, lidsadm:

1. Assuming that you have installed the LIDS source in the /usr/local/src directory, change to /usr/local/src/lids-1.0.5-2.4.1/lidsadm-1.0.5.

2. Run the make command, followed by the make install command. These commands install the lidsadm program in /sbin and create the necessary configuration files (lids.cap, lids.conf, lids.net, lids.pw) in /etc/lids.

3. Run the /sbin/lidsadm -P command and enter a password for the LIDS system. This password is stored in the /etc/lids/lids.pw file, in RipeMD-160 encrypted format.

4. Run the /sbin/lidsadm -U command to update the inode/dev numbers.

5. Configure the /etc/lids/lids.net file. A simplified default /etc/lids/lids.net file is shown in Listing 8-1.

Listing 8-1: /etc/lids/lids.net

MAIL_SWITCH= 1
MAIL_RELAY=127.0.0.1:25
MAIL_SOURCE=lids.sinocluster.com
MAIL_FROM= LIDS_ALERT@lids.sinocluster.com
MAIL_TO= root@localhost
MAIL_SUBJECT= LIDS Alert

- The MAIL_SWITCH option can be 1 or 0 (1 turns on the e-mail alert function, 0 turns it off). Leave the default (1) as is.

- Set the MAIL_RELAY option to the IP address of the mail server that LIDS should use to send the alert message. If you run the mail server on the same machine you are configuring LIDS for, leave the default as is. The port number, 25, is the default SMTP port and should be left alone unless you run your mail server on a different port.

- Set the MAIL_SOURCE option to the hostname of the machine being configured. Change the default to the appropriate hostname of your system.

- Set the MAIL_FROM option to an address that tells you which system the alert is coming from. Change the default to reflect the hostname of your system. You don't need a real mail account for the from address.

- The MAIL_TO option should be set to the e-mail address of the administrator of the system being configured. Because the root address, root@localhost, is the default administrative account, you can leave it as is.

- The MAIL_SUBJECT option is obvious and should be changed as needed.

6. Run the /sbin/lidsadm -L command, which should show output like the following:

LIST
Subject                     ACCESS TYPE         Object
------------------------------------------------------------------
Any File                    READ                /sbin
Any File                    READ                /bin
Any File                    READ                /boot
Any File                    READ                /lib
Any File                    READ                /usr
Any File                    DENY                /etc/shadow
/bin/login                  READ                /etc/shadow
/bin/su                     READ                /etc/shadow
Any File                    APPEND              /var/log
Any File                    WRITE               /var/log/wtmp
/sbin/fsck.ext2             WRITE               /etc/mtab
Any File                    WRITE               /etc/mtab
Any File                    WRITE               /etc
/usr/sbin/sendmail          WRITE               /var/log/sendmail.st
/bin/login                  WRITE               /var/log/lastlog
/bin/cat                    READ                /home/xhg
Any File                    DENY                /home/httpd
/usr/sbin/httpd             READ                /home/httpd
Any File                    DENY                /etc/httpd/conf
/usr/sbin/httpd             READ                /etc/httpd/conf
/usr/sbin/sendmail          WRITE               /var/log/sendmail.st
/usr/X11R6/bin/XF86_SVGA    NO_INHERIT RAWIO
/usr/sbin/in.ftpd           READ                /etc/shadow
/usr/sbin/httpd             NO_INHERIT HIDDEN

This step reveals what's protected by default. Because you aren't likely to have /home/xhg (the home directory of the author of LIDS), you can remove its configuration using the /sbin/lidsadm -D -s /bin/cat -o /home/xhg command. You can leave everything else as is, making changes later as needed.

7. Add the following line to the /etc/rc.d/rc.local file to seal the kernel at the end of the boot cycle:

/sbin/lidsadm -I

8. Reboot the system and enter lids at the LILO prompt to boot the LIDS-enabled kernel.

When the system boots and runs the /sbin/lidsadm -I command from the /etc/rc.d/rc.local script, it seals the kernel, and the system is protected by LIDS.
Administering LIDS

After you have your LIDS-enabled Linux system in place, you can modify your initial settings as the needs of your organization change. Except for the /etc/lids/lids.net file, you must use the /sbin/lidsadm program to modify the LIDS configuration files /etc/lids/lids.conf, /etc/lids/lids.pw, and /etc/lids/lids.cap.

- The /etc/lids/lids.conf file stores the Access Control List (ACL) information.

- The /etc/lids/lids.cap file contains all the capability rules for the system. You can enable or disable a specific capability on the system by editing this file: put a plus sign (+) in front of a capability's name to enable it, or a minus sign (-) to disable it.

- The /etc/lids/lids.net file configures the mail setup needed for e-mailing security alerts. You can use a regular text editor such as vi, emacs, or pico to edit this file.

When LIDS must stop for system-administration tasks, use the /sbin/lidsadm -S -- -LIDS or the /sbin/lidsadm -S -- -LIDS_GLOBAL command, and provide the LIDS password to switch off LIDS.

After you make changes in a LIDS configuration file (using the lidsadm command), reload the updated configuration into the kernel by running the /sbin/lidsadm -S -- +RELOAD_CONF command.

To add a new ACL in the /etc/lids/lids.conf file, use the /sbin/lidsadm command like this:

/sbin/lidsadm -A [-s subject] [-t | -d | -i] -o object -j TARGET

In the preceding line of code:

- The -A option tells the /sbin/lidsadm program to add a new ACL.

- The -s subject option specifies the subject of the ACL. A subject can be any program (for example, /bin/cat). When you don't specify a subject, the ACL applies to everything.

- The -t, -d, and -i options aren't typically needed.

- The -o object option specifies the name of the object, which can be a file, a directory, or a capability. Each ACL requires a named object.

- The -j TARGET option specifies the target of the ACL. When the new ACL has a file or directory as the object, the target can be READ, WRITE, APPEND, DENY, or IGNORE. If the object is a Linux capability, the target must be either INHERIT or NO_INHERIT, which defines whether the object's children can have the same capability.
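As a concrete example of the full administration cycle, the following hypothetical session makes /etc/lilo.conf read-only for every program and then reseals the system. It assumes you enabled the Allow switching LIDS protections and Allow reloading config. file options during kernel configuration; the usual +/- switch syntax is shown, but confirm it against the lidsadm help output for your version.

# switch LIDS off for this session (prompts for the LIDS password)
/sbin/lidsadm -S -- -LIDS_GLOBAL

# add the ACL: no subject given, so it applies to all programs
/sbin/lidsadm -A -o /etc/lilo.conf -j READ

# push the updated ACLs into the kernel and switch LIDS back on
/sbin/lidsadm -S -- +RELOAD_CONF
/sbin/lidsadm -S -- +LIDS_GLOBAL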
PROTECTING FILES AND DIRECTORIES

You can use lidsadm to protect important files and directories. LIDS provides the following types of protection for files and directories:

- READ: Makes a file or directory read-only
- WRITE: Allows modifications of the file or directory
- IGNORE: Ignores all other protections that may be set for a file or directory
- APPEND: Allows adding to the file
- DENY: Denies all access to the file or directory

MAKING FILES OR DIRECTORIES READ-ONLY

To make a file called /path/filename read-only so that no one can change it, run the following command:

/sbin/lidsadm -A -o /path/filename -j READ

To make a directory called /mypath read-only, run the following command:

/sbin/lidsadm -A -o /mypath -j READ

No program can write to the file or directory. Because you don't specify a subject in either of the preceding commands, the ACL applies to all programs.

DENYING ACCESS TO A FILE OR DIRECTORY

To deny access to a file called /etc/shadow, run the following command:

/sbin/lidsadm -A -o /etc/shadow -j DENY

After you run the preceding command and the LIDS configuration is reloaded, you can run commands such as ls -l /etc/shadow and cat /etc/shadow to check whether you can access the file. None of these programs can see the file, because we implicitly specified the subject as all the programs in the system. However, if a program such as /bin/login should access the /etc/shadow file, you can give it read access by creating a new ACL, as in the following command:

/sbin/lidsadm -A -s /bin/login -o /etc/shadow -j READ

ENABLING APPEND-ONLY ACCESS

Typically, programs need append-only access only to critical system logs such as /var/log/messages or /var/log/secure. You can enable append-only mode for these two files using the following commands:

/sbin/lidsadm -A -o /var/log/messages -j APPEND
/sbin/lidsadm -A -o /var/log/secure -j APPEND

ALLOWING WRITE-ONLY ACCESS

To allow a program called /usr/local/apache/bin/httpd to write to a protected directory called /home/httpd, run the following commands:

/sbin/lidsadm -A -o /home/httpd -j DENY
/sbin/lidsadm -A -s /usr/local/apache/bin/httpd -o /home/httpd -j WRITE

DELETING AN ACL

To delete all the ACL rules, run the /sbin/lidsadm -Z command. To delete an individual ACL rule, simply specify the subject (if any) and/or the object of the ACL. For example, if you run the /sbin/lidsadm -D -o /bin command, all the ACL rules with /bin as the object are deleted. However, if you run /sbin/lidsadm -D -s /bin/login -o /bin, then only the ACL that specifies /bin/login as the subject and /bin as the object is deleted. Note that specifying the -Z option, or the -D option without any argument, deletes all your ACL rules.

USING MY PREFERRED FILE AND DIRECTORY PROTECTION SCHEME

Here's my preferred file and directory protection scheme:

# Make the /boot directory or partition read-only
/sbin/lidsadm -A -o /boot -j READ
# Make the system library directory read-only
# This protects lib/modules as well
/sbin/lidsadm -A -o /lib -j READ
# Make the root user's home directory read-only
/sbin/lidsadm -A -o /root -j READ
# Make the system configuration directory read-only
/sbin/lidsadm -A -o /etc -j READ
# Make the daemon binary directory read-only
/sbin/lidsadm -A -o /sbin -j READ
# Make the other daemon binary directory read-only
/sbin/lidsadm -A -o /usr/sbin -j READ
# Make the general binary directory read-only
/sbin/lidsadm -A -o /bin -j READ
# Make the other general binary directory read-only
/sbin/lidsadm -A -o /usr/bin -j READ
# Make the general library directory read-only
/sbin/lidsadm -A -o /usr/lib -j READ
# Make the system log directory append-only
/sbin/lidsadm -A -o /var/log -j APPEND
# Make the X Window System binary directory read-only
/sbin/lidsadm -A -o /usr/X11R6/bin -j READ

Apart from protecting your files and directories using the preceding technique, LIDS can use Linux Capabilities to limit the capabilities of a running program (that is, a process). In a traditional Linux system, the root user (that is, a user with UID and GID set to 0) has all the capabilities, meaning the ability to perform any task by running any process. LIDS uses Linux Capabilities to break down all the power of root (or of processes run by the root user) into pieces, so that you can fine-tune the capabilities of a specific process. To find out more about the available Linux Capabilities, see the /usr/include/linux/capability.h header file.
Table 8-1 lists all the Linux Capabilities and their status (on or off) in the default LIDS Capabilities configuration file, /etc/lids/lids.cap.

TABLE 8-1: LIST OF LINUX CAPABILITIES
(Status column = status in /etc/lids/lids.cap)

#   Capability Name         Meaning                                              Status
0   CAP_CHOWN               Allow/disallow changing of file ownership            Allow
1   CAP_DAC_OVERRIDE        Allow/disallow override of all DAC access            Allow
                            restrictions
2   CAP_DAC_READ_SEARCH     Allow/disallow override of all DAC restrictions      Allow
                            regarding read and search
3   CAP_FOWNER              Allow/disallow these restrictions: (1) that the      Allow
                            effective user ID must match the file owner ID
                            when setting the S_ISUID and S_ISGID bits on a
                            file; (2) that the effective group ID must match
                            the file owner ID when setting that bit on a file
4   CAP_FSETID              Allow/disallow access when the effective user ID    Allow
                            does not equal the owner ID
5   CAP_KILL                Allow/disallow sending signals to processes          Allow
                            belonging to others
6   CAP_SETGID              Allow/disallow changing of the GID                   Allow
7   CAP_SETUID              Allow/disallow changing of the UID                   Allow
8   CAP_SETPCAP             Allow/disallow transferring and removing the         Allow
                            current capability set to/from any PID
9   CAP_LINUX_IMMUTABLE     Allow/disallow modification of immutable and         Disallow
                            append-only files
10  CAP_NET_BIND_SERVICE    Allow/disallow binding to ports below 1024           Disallow
11  CAP_NET_BROADCAST       Allow/disallow broadcasting/listening to multicast   Allow
12  CAP_NET_ADMIN           Allow/disallow network administration tasks:         Disallow
                            (1) interface configuration; (2) administration
                            of IP firewall; (3) masquerading and accounting;
                            (4) setting debug options on sockets;
                            (5) modification of routing tables; (6) setting
                            arbitrary process/process group ownership on
                            sockets; (7) binding to any address for
                            transparent proxying; (8) setting Type Of Service
                            (TOS); (9) setting promiscuous mode; (10) clearing
                            driver statistics; (11) multicasting;
                            (12) read/write of device-specific registers
13  CAP_NET_RAW             Allow/disallow use of raw sockets                    Disallow
14  CAP_IPC_LOCK            Allow/disallow locking of shared memory segments     Allow
15  CAP_IPC_OWNER           Allow/disallow IPC ownership checks                  Allow
16  CAP_SYS_MODULE          Allow/disallow insertion and removal of kernel       Disallow
                            modules
17  CAP_SYS_RAWIO           Allow/disallow raw I/O access via                    Disallow
                            ioperm(2)/iopl(2)
18  CAP_SYS_CHROOT          Allow/disallow the chroot(2) system call             Disallow
19  CAP_SYS_PTRACE          Allow/disallow ptrace                                Allow
20  CAP_SYS_PACCT           Allow/disallow configuration of process accounting   Allow
21  CAP_SYS_ADMIN           Allow/disallow various system administration tasks   Disallow
22  CAP_SYS_BOOT            Allow/disallow reboot                                Allow
23  CAP_SYS_NICE            Allow/disallow changing of process priority using    Allow
                            the nice command
24  CAP_SYS_RESOURCE        Allow/disallow setting of system resource limits     Allow
25  CAP_SYS_TIME            Allow/disallow setting of the system time            Allow
26  CAP_SYS_TTY_CONFIG      Allow/disallow pseudo-terminal (TTY) configuration   Allow
27  CAP_MKNOD               Allow/disallow the privileged aspects of the         Allow
                            mknod() system call
28  CAP_LEASE               Allow/disallow taking of leases on files             Allow
29  CAP_HIDDEN              Allow/disallow hiding of a process from the rest     Allow
                            of the system
30  CAP_INIT_KILL           Allow/disallow killing children of the init          Allow
                            process (PID = 1)
The default settings for the Linux Capabilities that appear in Table 8-1 are stored in the /etc/lids/lids.cap file, shown in Listing 8-2.

Listing 8-2: /etc/lids/lids.cap

+0:CAP_CHOWN
+1:CAP_DAC_OVERRIDE
+2:CAP_DAC_READ_SEARCH
+3:CAP_FOWNER
+4:CAP_FSETID
+5:CAP_KILL
+6:CAP_SETGID
+7:CAP_SETUID
+8:CAP_SETPCAP
-9:CAP_LINUX_IMMUTABLE
-10:CAP_NET_BIND_SERVICE
+11:CAP_NET_BROADCAST
-12:CAP_NET_ADMIN
-13:CAP_NET_RAW
+14:CAP_IPC_LOCK
+15:CAP_IPC_OWNER
-16:CAP_SYS_MODULE
-17:CAP_SYS_RAWIO
-18:CAP_SYS_CHROOT
+19:CAP_SYS_PTRACE
+20:CAP_SYS_PACCT
-21:CAP_SYS_ADMIN
+22:CAP_SYS_BOOT
+23:CAP_SYS_NICE
+24:CAP_SYS_RESOURCE
+25:CAP_SYS_TIME
+26:CAP_SYS_TTY_CONFIG
+27:CAP_MKNOD
+28:CAP_LEASE
+29:CAP_HIDDEN
+30:CAP_INIT_KILL

The + sign enables a capability; the - sign disables it. For example, in the preceding listing the last Linux Capability, CAP_INIT_KILL, is enabled, which means that a root-owned process can kill any child process (typically daemons) created by the init process. Using a text editor, enable or disable the Linux Capabilities you want.
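If you would rather script such changes than open an editor each time, a sketch like the following flips one capability entry. The script name and usage are hypothetical, it assumes the exact entry format of Listing 8-2, and it must run while LIDS protection is switched off if /etc is protected.

#!/bin/sh
# toggle-cap.sh (hypothetical): enable or disable one Linux Capability
# usage: ./toggle-cap.sh CAP_INIT_KILL off
CAP=$1
STATE=$2
case "$STATE" in
    on)  SIGN=+ ;;
    off) SIGN=- ;;
    *)   echo "usage: $0 CAP_NAME on|off"; exit 1 ;;
esac
# rewrite the matching line, e.g. +30:CAP_INIT_KILL -> -30:CAP_INIT_KILL
sed "s/^[+-]\([0-9]*:$CAP\)\$/$SIGN\1/" /etc/lids/lids.cap > /tmp/lids.cap.$$ \
    && mv /tmp/lids.cap.$$ /etc/lids/lids.cap
# load the new settings into the kernel
/sbin/lidsadm -S -- +RELOAD_CONF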
PROTECTING YOUR SYSTEM USING LIDS-MANAGED LINUX CAPABILITIES

You can use LIDS-managed Linux Capabilities to protect your system. This section shows how.

PROTECTING DAEMONS FROM BEING KILLED BY ROOT

Typically, the init process starts daemon processes such as the Sendmail mail transport agent and the Apache Web server. If you want to protect them from being killed by the root user, modify the CAP_INIT_KILL setting in /etc/lids/lids.cap to the following:

-30:CAP_INIT_KILL

After you have reloaded the LIDS configuration (using the /sbin/lidsadm -S -- +RELOAD_CONF command), or rebooted the system and sealed the kernel (using the /sbin/lidsadm -I command in the /etc/rc.d/rc.local script), you (as root) can't kill the init children. This ensures that even if your system is compromised and an intruder gains root privileges, he can't kill the daemons and replace them with his Trojan versions.

HIDING PROCESSES FROM EVERYONE

By default, the CAP_HIDDEN capability is turned on in the /etc/lids/lids.cap configuration file. You can hide a process from everyone using the following command, where /path/to/binary is the fully qualified path to the executable you want to hide when running:

lidsadm -A -s /path/to/binary -t -o CAP_HIDDEN -j INHERIT

For example, to hide the Apache server process /usr/local/apache/bin/httpd when running, simply run the following command:

lidsadm -A -s /usr/local/apache/bin/httpd -t -o CAP_HIDDEN -j INHERIT

This labels the process as hidden in the kernel, so it can't be found using any user-land tools such as ps or top, or even by exploring files in the /proc filesystem.

DISABLING RAW DEVICE ACCESS BY PROCESSES

Normally, only special processes need access to raw devices, so it's a good idea to disable raw device access and enable it only as needed. This conforms to the overall security principle of "close all, open only what you need."

Raw device access is controlled by the CAP_SYS_RAWIO capability, which is disabled by default in the /etc/lids/lids.cap configuration file. If this capability were enabled, processes could access raw devices and interfaces such as:

- ioperm/iopl
- /dev/port
- /dev/mem
- /dev/kmem

For example, when this capability is off (the default), the /sbin/lilo program can't function properly, because it needs raw device-level access to the hard disk. But some special programs may need this capability to run properly, such as XF86_SVGA. In this case, you can add the program to the exception list like this:

lidsadm -A -s /usr/X11R6/bin/XF86_SVGA -t -o CAP_SYS_RAWIO -j INHERIT

This gives XF86_SVGA the CAP_SYS_RAWIO capability while other programs remain unable to obtain it.

DISABLING NETWORK-ADMINISTRATION TASKS

By default, the CAP_NET_ADMIN capability is turned off, which means that a network administrator (typically the root user) can no longer do the following network administration tasks:

- Configuring the Ethernet interface
- Administering IP firewall, masquerading, and accounting
- Setting debug options on sockets
- Modifying routing tables
- Setting arbitrary process or process group ownership on sockets
- Binding to any address for transparent proxying
- Setting Type Of Service (TOS)
- Setting promiscuous mode
- Clearing driver statistics
- Multicasting
- Reading/writing device-specific registers

The default setting (capability turned off) is highly recommended. To perform one of the preceding tasks, simply take down LIDS temporarily using the /sbin/lidsadm -S -- -LIDS command.

PROTECTING THE LINUX IMMUTABLE FLAG FOR FILES

The ext2 filesystem has an extended feature that can flag a file as immutable. This is done using the chattr command. For example, chattr +i /path/to/myfile turns /path/to/myfile into an immutable file. A file with the immutable attribute can't be modified, deleted, or renamed, nor can it be symbolically linked. However, the root user can remove the flag using the chattr -i /path/to/myfile command. By disabling the CAP_LINUX_IMMUTABLE capability, you can protect immutable files even from the superuser (root). Note that the CAP_LINUX_IMMUTABLE capability is disabled by default in /etc/lids/lids.cap.
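Here is a quick, hypothetical demonstration of the immutable flag itself, run as root on an ext2 filesystem; the file name is a throwaway example. With LIDS sealing CAP_LINUX_IMMUTABLE, the chattr -i step would fail even for root.

touch /tmp/demo.txt
chattr +i /tmp/demo.txt
lsattr /tmp/demo.txt        # the 'i' attribute appears in the listing
rm -f /tmp/demo.txt         # fails with "Operation not permitted"
chattr -i /tmp/demo.txt     # possible only while CAP_LINUX_IMMUTABLE is available
rm -f /tmp/demo.txt         # now succeeds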
DETECTING SCANNERS

If you enabled the built-in port scanner detector during kernel compilation, as recommended in the "Patching, compiling, and installing the kernel with LIDS" section, you can detect port scanners. This detector can spot half-open scans, SYN stealth port scans, Stealth FIN, Xmas, and Null scans, and so on, including tools such as Nmap and Satan, and it's useful when the raw socket capability (CAP_NET_RAW) is disabled. When CAP_NET_RAW is turned off, some common scanners available to users (most of which are based on sniffing) don't work properly. The kernel-based scanner provided by LIDS is more secure to begin with, because it doesn't use any socket; you may want to consider using the LIDS-supplied scanner in tandem with (or instead of) turning off the raw socket.

RESPONDING TO AN INTRUDER

When LIDS detects a violation of any ACL rule, it can respond to the action by the following methods:

- Logging the message. When someone violates an ACL rule, LIDS logs a message using the kernel log daemon (klogd).

- Sending e-mail to the appropriate authority. LIDS can send e-mail when a violation occurs. This feature is controlled by the /etc/lids/lids.net file.

- Hanging up the console. If you enabled this option during kernel patching for LIDS (as discussed in Step 7 of the section called "Patching, compiling, and installing the kernel with LIDS"), the console is dropped when a user violates an ACL rule.

Another system similar to LIDS is the OpenWall project (www.openwall.com/linux/). The OpenWall project has some security features that differ from those of LIDS; one of the OpenWall patches, for example, makes the stack area of a process nonexecutable. Take a look at this work-in-progress project.

Using libsafe to Protect Program Stacks

Process stacks are vulnerable to buffer overflow, and you can bet that hackers know it. Exploitation of that vulnerability has made up a significant portion of security attacks in recent years.

You can address this problem by preloading a dynamically loadable library called libsafe. The libsafe approach has distinctive advantages:

- libsafe works with existing binary programs.

- libsafe doesn't require special measures such as operating-system modifications, access to the source code of defective programs, or recompilation or offline processing of binaries.

- libsafe can be implemented system-wide and remain transparent to users.

The libsafe solution is based on a middleware software layer that intercepts all function calls made to library functions that are known to be vulnerable. In response to such calls, libsafe substitutes a version of the corresponding function that carries out the original task, but in a manner that contains any buffer overflow within the current stack frame. This strategy prevents attackers from "smashing" (overwriting) the return address and hijacking the control flow of a running program.

libsafe can detect and prevent several known attacks, but its real benefit is that it can prevent as-yet-unknown attacks, and do it all with negligible performance overhead. That said, most network-security professionals accept that fixing defective (vulnerable) programs is the best solution to buffer-overflow attacks, if you know that a particular program is defective. The true benefit of libsafe and similar security measures is protection against future buffer-overflow attacks on programs that aren't yet known to be vulnerable.

Note that libsafe doesn't support programs linked with libc5. If a process protected by libsafe experiences a segmentation fault, use the ldd utility to determine whether the process is linked with libc5. If that is the case, either recompile/re-link the application with libc6 (that is, glibc) or download a newer version that has been linked with libc6. Most applications are offered in a libc6 version.
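For example, a quick way to inspect a binary's C library linkage looks like this sketch. The program path is a placeholder; on a glibc system you would expect to see libc.so.6 rather than libc.so.5 in the output.

# list the shared libraries a binary is linked against
ldd /usr/local/bin/someapp | grep libc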
One known source of vulnerability in some programs is their use of easily exploited functions in the C programming language. libsafe currently monitors the unsafe C functions listed in Table 8-2.

TABLE 8-2: LIST OF UNSAFE C FUNCTIONS MONITORED BY LIBSAFE

C Function                                       Potential Damage
strcpy(char *dest, const char *src)              May overflow the dest buffer
strcat(char *dest, const char *src)              May overflow the dest buffer
getwd(char *buf)                                 May overflow the buf buffer
gets(char *s)                                    May overflow the s buffer
[vf]scanf(const char *format, ...)               May overflow its arguments
realpath(char *path, char resolved_path[])       May overflow the path buffer
[v]sprintf(char *str, const char *format, ...)   May overflow the str buffer

Compiling and installing libsafe

The source code for libsafe is available for download at the following Web address:

www.research.avayalabs.com/project/libsafe

To use libsafe, download the latest version (presently 2.0) and extract it into the /usr/local/src directory. Then follow these steps:

1. As root, change directory to /usr/local/src/libsafe and run make to compile libsafe. If you get error messages, consult the INSTALL file for help.

2. After you have compiled libsafe, install the library using the make install command.

3. Before you can use libsafe, you must set the LD_PRELOAD environment variable for each of the processes you want libsafe to protect. Simply add the following lines to your /etc/bashrc script:

LD_PRELOAD=/lib/libsafe.so.1
export LD_PRELOAD

4. Modify the /etc/csh.cshrc script to include the following line:

setenv LD_PRELOAD /lib/libsafe.so.1

After adding libsafe protection for your processes, use your programs as you would normally. libsafe transparently checks the parameters passed to the supported unsafe functions. If a violation is detected, libsafe takes the following measures:

- The entire process group receives a SIGKILL signal.

- An entry is added to /var/log/secure. The following is an example of such an entry:

Feb 26 13:57:40 k2 libsafe[15704]: Detected an attempt to write across stack boundary.
Feb 26 13:57:40 k2 libsafe[15704]: Terminating /users/ttsai/work/security.D0_2/test/t91
Feb 26 13:57:40 k2 libsafe[15704]: scanf()
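Before wiring libsafe into every login shell, you can try it on a single command. This is a hedged sketch using a stand-in program; because the dynamic loader honors LD_PRELOAD, running ldd with the variable set should list the preloaded library as well.

# run one command with libsafe preloaded, without touching /etc/bashrc
LD_PRELOAD=/lib/libsafe.so.1 /bin/ls -l /tmp

# confirm that the library would be mapped into the process
LD_PRELOAD=/lib/libsafe.so.1 ldd /bin/ls | grep libsafe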
For greater security, the dynamic loader disregards environment variables such as LD_PRELOAD when it executes set-UID programs. However, you can still use libsafe with set-UID programs if you use one of the following two methods:

- Append the path to libsafe.so.1 to /etc/ld.so.preload instead of using the LD_PRELOAD environment variable. If you use /etc/ld.so.preload, install libsafe.so.1 on your root filesystem, for instance in /lib, as the default installation does; using a directory that isn't available at boot time, such as /usr/local/lib, causes trouble at the next reboot. You should also remove libsafe from /etc/ld.so.preload when installing a new version: first test the new version using LD_PRELOAD, and only if everything is okay put it back into /etc/ld.so.preload.

- If you have a version of ld.so more recent than 1.9.0, you can set LD_PRELOAD to contain only the base name libsafe.so.1, without the directory. The file is then found if it's in the shared library path (which usually contains /lib and /usr/lib), and because the search is restricted to the library search path, this also works for set-UID programs. To use this approach, add the following lines to the /etc/bashrc script:

LD_PRELOAD=libsafe.so.1
export LD_PRELOAD

And add the following line to the /etc/csh.cshrc script:

setenv LD_PRELOAD libsafe.so.1

Using the base name also makes libsafe easier to turn off if something goes wrong.

After you have installed libsafe and appropriately configured either LD_PRELOAD or /etc/ld.so.preload, libsafe is ready to run, and processes are monitored with no further changes. If a process attempts to use one of the monitored functions to overflow a buffer on the stack, the following actions happen immediately:

- A violation is declared.
- A message is output to the standard error stream.
- An entry is made in /var/log/secure.
- A core dump and a stack dump are produced, provided the corresponding options were enabled during compilation (see the libsafe/INSTALL file).

Programs written in C have always been plagued by buffer overflows, for two reasons:

- Many functions provided by the standard C library (such as those listed in Table 8-2) are unsafe.
- The C programming language doesn't automatically check the bounds of references to arrays and pointers.

Many programs experience buffer overflows, which makes them vulnerable to security attacks. Programmers should check explicitly to ensure that these functions can't overflow any buffers, but too often they omit such checks.

libsafe in action

libsafe uses a novel method to detect and handle buffer-overflow attacks. Without requiring source code, it can transparently protect processes against stack-smashing attacks, even on a system-wide basis, by intercepting calls to vulnerable library functions, substituting overflow-resistant versions of those functions, and restricting any buffer overflow to the current stack frame.

The key to libsafe's effectiveness is a safe upper limit on the size of stack buffers. This limit can't be determined at compile time, because the size of a buffer may not be known then; instead, it is estimated at run time, after the start of the function that uses the buffer, by assuming that no local buffer legitimately extends beyond the end of the current stack frame. The substitute version of each monitored function then enforces this limit, so a process can't write past the estimated buffer size. As long as the function's return address (which is located on the stack) can't be overwritten, control of the process can't be commandeered.

Summary

LIDS is a great tool for protecting your Linux system from intruders. Because LIDS is a kernel-level intrusion protection scheme, it is hard to defeat using traditional hacking tricks; in fact, a sealed LIDS system is very difficult to hack.
Similarly, a system with Libsafe support can protect your programs against buffer overflow attacks, which are the most common exploitations of weak server software. By implementing LIDS and Libsafe on your system, you are taking significant preventive measures against attacks. These two tools significantly enhance overall system security.

Chapter 9
Securing Files and Filesystems

IN THIS CHAPTER
N Managing files, directories, and permissions
N Using groups for file security
N Using ext2 security features
N Checking file integrity

FILES are at the heart of modern computing. Virtually everything you do with a computer these days creates, accesses, updates, or deletes files in your computer or on a remote server. When you access the Web via your PC, you access files. It doesn't matter if you access a static HTML page over the Web or run a Java Servlet on the server, everything you do is about files. A file is the most valuable object in computing.

Unfortunately, most computer users don't know how to take care of their files. For example, hardly anyone takes a systematic, process-oriented approach to storing files by creating a manageable directory hierarchy. Often over the past decade I have felt that high schools and colleges should offer courses to teach everyone to manage computer files.

Although lack of organization in file management impedes productivity, it isn't the only problem with files. Thanks to many popular personal operating systems from one vendor, hardly anyone with a PC knows anything about file security. When users migrate from operating systems such as MS-DOS and Windows 9x, they are 100 percent unprepared to understand how files work on Linux or other Unix/Unix-like operating systems. This lack of understanding can become a serious security liability, so this chapter introduces file and directory permissions in terms of their security implications. I also examine technology that helps reduce the security risks associated with files and filesystems.

Managing Files, Directories, and User Group Permissions

If a user creates, modifies, or deletes a file that doesn't have appropriate file permissions and a malicious user from inside (or a hacker from outside) can get hold of the file, the result is a probable security problem for the user or the system. It's very important that everyone — both user and system administrator — understand file permissions in depth. (If you already do, you may want to skim or skip the next few sections.)

Understanding file ownership & permissions

Every file on a Linux system is associated with a user and a group. Consider an example:

-rw-rw-r--   1 sheila   intranet   512 Feb 6 21:11 milkyweb.txt

The preceding line is produced by the ls -l milkyweb.txt command on my Red Hat Linux system. (You may already know that the ls program lists files and directories.) The -l option shows the complete listing for the milkyweb.txt file. Now consider the same information in a tabular format in Table 9-1.

TABLE 9-1: OUTPUT OF AN EXAMPLE ls -l COMMAND

Information Type             ls Output
File access permission       -rw-rw-r--
Number of links              1
User (file owner)            sheila
Group                        intranet
File size (in bytes)         512
Last modification date       Feb 6
Last modification time       21:11
Filename                     milkyweb.txt

Here the milkyweb.txt file is owned by a user called sheila.
She is the only\nregular user who can change the access permissions of this file. The only other user\nwho can change the permissions is the superuser (that is, the root account). The\ngroup for this file is intranet. Any user who belongs to the intranet group can\naccess (read, write, or execute) the file under current group permission settings\n(established by the owner).\n180\nPart III: System Security\n" }, { "page_number": 204, "text": "To become a file owner, a user must create the file. Under Red Hat Linux, when\na user creates a file or directory, its group is also set to the default group of the user\n(which is the private group with the same name as the user). For example, say that\nI log in to my Red Hat Linux system as kabir and (using a text editor such as vi)\ncreate a file called todo.txt. If I do an ls –l todo.txt command, the following\noutput appears:\n-rw-rw-r — 1 kabir kabir 4848 Feb 6 21:37 todo.txt\nAs you can see, the file owner and the group name are the same; under Red Hat\nLinux, user kabir’s default (private) group is also called kabir. This may be con-\nfusing, but it’s done to save you some worries, and of course you can change this\nbehavior quite easily. Under Red Hat Linux, when a user creates a new file, the fol-\nlowing attributes apply:\nN The file owner is the file creator.\nN The group is the owner’s default group.\nAs a regular user, you can’t reassign a file or directory’s ownership to someone\nelse. For example, I can’t create a file as user Kabir and reassign its ownership to a\nuser called Sheila. Wonder why this is so? Security, of course. If a regular user can\nreassign file ownership to others, someone could create a nasty program that\ndeletes files, changes the program’s ownership to the superuser, and wipes out the\nentire filesystem. Only the superuser can reassign file or directory ownership.\nChanging ownership of files and\ndirectories using chown\nAs a superuser, you can change the ownership of a file or directory using the chown\ncommand:\nchown newuser file or directory\nFor example:\nchown sheila kabirs_plans.txt\nThis command makes user sheila the new owner of the file kabirs_plans.txt.\nIf the superuser would also like to change the group for a file or directory, she\ncan use the chown command like this:\nchown newuser.newgroup file or directory\nChapter 9: Securing Files and Filesystems\n181\n" }, { "page_number": 205, "text": "For example:\nchown sheila.admin kabirs_plans.txt\nThe preceding command not only makes sheila the new owner, but also resets\nthe group of the file to admin.\nIf the superuser wants to change the user and/or the group ownership of all the\nfiles or directories under a given directory, she can use the –R option to run the\nchown command in recursive mode. For example:\nchown -R sheila.admin /home/kabir/plans/\nThe preceding command changes the user and group ownership of the\n/home/kabir/plans/ directory — and all the files and subdirectories within it.\nAlthough you must be the superuser to change the ownership of a file, you can\nstill change a file or directory’s group as a regular user using the chgrp command.\nChanging group ownership of files\nand directories with chgrp\nThe chgrp command enables you to change the group ownership of a file or direc-\ntory if you are also part of the new group. 
This means you can change groups only\nif you belong to both the old and new groups, as in this example:\nchgrp httpd *.html\nIf I run the preceding command to change the group for all the HTML files in a\ndirectory, I must also be part of the httpd group. You can find what groups you are\nin using the groups command without any argument. Like the chown command,\nchgrp uses –R to recursively change group names of files or directories.\nUsing octal numbers to set file\nand directory permissions \nAlthough octal numbers are my favorite method for understanding file and direc-\ntory access permissions, I must warn you that this approach involves converting\noctal digits to binary bits. If you feel mathematically challenged, you can skip this\nsection; the next section explains the same permissions using a somewhat simpler\nconcept: the access string.\nSince octal numbers are useful in an accurate explanation of access permissions,\na small memory refresher is in order. The octal number system uses eight digits in\nthe same way the decimal system uses ten; the familiar decimal digits are 0-9, the\ncorresponding octal digits are 0–7. This difference has a practical use: In the binary\nsystem of ones and zeros that underlies computer code, each octal digit can repre-\nsent three binary bits. Table 9-2 shows the binary equivalent for each octal digit.\n182\nPart III: System Security\n" }, { "page_number": 206, "text": "TABLE 9-2: OCTAL DIGITS AND THEIR BINARY EQUIVALENTS\nOctal\nBinary\n0\n000\n1\n001\n2\n010\n3\n011\n4\n100\n5\n101\n6\n110\n7\n111\nThis table demonstrates the relative efficiency of the octal system. An\nadministrator has to set permissions for many different files and directo-\nries; this compact numeric system puts a practical limit on the number of\nbits required in a representation of any one file/directory permission. \nWhen any of these digits is omitted,the space next to the leftmost digit is\nconsidered a zero.\nTable 9-3 shows a few example permission values that use octal digits. \nTABLE 9-3: EXAMPLE PERMISSION VALUES USING OCTAL DIGITS\nPermission Value\nExplanation\n0400\nOnly read (r) permission for the file owner. This is equivalent to\n400, where the missing octal digit is treated as a leading zero.\n0440\nRead (r) permission for both the file owner and the users in the\ngroup. This is equivalent to 440.\nContinued\nChapter 9: Securing Files and Filesystems\n183\n" }, { "page_number": 207, "text": "TABLE 9-3: EXAMPLE PERMISSION VALUES USING OCTAL DIGITS (Continued)\nPermission Value\nExplanation\n0444\nRead (r) permission for everyone. This is equivalent to 444.\n0644\nRead (r) and write (w) permissions for the file owner. Everyone\nelse has read-only access to the file. This is equivalent to 644; the\nnumber 6 is derived by adding 4 (r) and 2 (w). \n0755\nRead (r), write (w), and execute (x) permissions for the file owner\nand read (r) and execute (x) permissions to the file for everyone\nelse. This is equivalent to 755; the number 7 is derived by adding\n4 (r) + 2 (w) + 1 (x).\n4755\nSame as 755 in the previous example, except this file is set-UID.\nWhen an executable file with set-UID permission is run, the\nprocess runs as the owner of the file. In other words, if a file is\nset-UID and owned by the user gunchy, any time it’s run, the\nrunning program enjoys the privileges of the user gunchy. 
So if a file is owned by root and the file is also set to set-UID, anyone who can run the file essentially has the privileges of the superuser. If anyone but root can alter a set-UID root file, it's a major security hole. Be very careful when setting the set-UID bit.

2755    Like 755 but also sets the set-GID bit. When such a file is executed, it essentially has all the privileges of the group to which the file belongs.

1755    Like 755 but also sets the sticky bit. The sticky bit is formally known as the save text mode. This infrequently used feature tells the operating system to keep an executable program's image in memory even after it exits. This should reduce the startup time of a large program. Instead of setting the sticky bit, recode the application for better performance when possible.

To come up with a suitable permission setting, first determine what access the user, the group, and everyone else should have and consider if the set-UID, set-GID, or sticky bit is necessary. After you have determined the need, you can construct each octal digit using 4 (read), 2 (write), and 1 (execute), or construct a custom value by adding any of these three values. Although using octal numbers to set permissions may seem awkward at the beginning, with practice their use can become second nature.

Using permission strings to set access permissions

One alternative to using octal digits for setting permissions is a method (supposedly simpler) that uses a special version of an access string called a permission string. To create a permission string, specify each permission type with one character (shown in parentheses), as in the following example:

N Whom does the permission affect? You have the following choices:
I u (user)
I g (group)
I o (others)
I a (all)

N What permission type should you set? You have the following choices:
I r (read)
I w (write)
I x (execute)
I s (set-UID or set-GID)
I t (sticky bit)

N What is the action type? Are you setting the permission or removing it? When setting the permissions, + specifies an addition and - specifies a removal.

For example, a permission string such as u+r allows the file owner read access to the file. A permission string such as a+rx allows everyone to read and execute a file. Similarly, u+s makes a file set-UID; g+s makes it set-GID.

Changing access privileges of files and directories using chmod

The chmod (change mode) utility can change permission modes. Both the octal and the string method work with this nifty utility, as in this example:

chmod 755 *.pl

The preceding command changes permissions for files ending with the extension .pl. It sets read, write, and execute permissions for each .pl file (7 = 4 [read] + 2 [write] + 1 [execute]) and grants them to the file's owner. The command also sets the files as readable and executable (5 = 4 [read] + 1 [execute]) by the group and others.

You can accomplish the same using the string method, like this:

chmod a+rx,u+w *.pl

Here a+rx allows read (r) and execute (x) permissions for all (a), and u+w allows the file owner (u) to write (w) to the file. Remember these rules for multiple access strings:

N Separate each pair of values by a comma.
N No space is allowed between the permission strings.

If you want to change permissions for all the files and subdirectories within a directory, you can use the -R option to perform a recursive permission operation. For example:

chmod -R 750 /www/mysite

Here the 750 octal permission is applied to all the files and subdirectories of the /www/mysite directory.

The permission settings for a directory are like those for regular files, but not identical. Here are some special notes on directory permissions:

N Read-only access to a directory doesn't allow you to cd into that directory; to do that, you need execute permission.
N Execute-only permission lets you access the files inside a directory only if the following conditions exist:
I You know their names.
I You can read the files.
N To list the contents of a directory (using a program such as ls) and also cd into a directory, you need both read and execute permissions.
N If you have write permission for a directory, you can create, delete, or modify any files or subdirectories within that directory — even if someone else owns the file or subdirectory.

Managing symbolic links

Apart from the regular files and directories, you encounter another type of file quite frequently — links (files that point to other files). A link allows one file or directory to have multiple names. Two types of links exist:

N Hard
N Soft (symbolic)

Here I discuss the special permission issues that arise from links.

CHANGING PERMISSIONS OR OWNERSHIP OF A HARD LINK

If you change the permission or the ownership of a hard link, it also changes the original file's permission. For example, take a look at the following ls -l output:

-rw-r--r--   1 root   21 Feb 7 11:41 todo.txt

Now, if the root user creates a hard link called plan for todo.txt (using the ln todo.txt plan command), the ls -l output looks like this:

-rw-r--r--   2 root   21 Feb 7 11:41 plan
-rw-r--r--   2 root   21 Feb 7 11:41 todo.txt

As you can see, the hard link, plan, and the original file (todo.txt) have the same file size (as shown in the fourth column) and also share the same permission and ownership settings. Now, if the root user runs the following command:

chown sheila plan

It gives the ownership of the hard link to a user called sheila; will it work as usual? Take a look at the ls -l output after the preceding command:

-rw-r--r--   2 sheila root   21 Feb 7 11:41 plan
-rw-r--r--   2 sheila root   21 Feb 7 11:41 todo.txt

As you can see, the chown command changed the ownership of plan, but the ownership of todo.txt (the original file) has also changed.
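This behavior makes sense when you remember that a hard link and its target are simply two directory entries pointing at the same inode, so there is only one set of ownership and permission data to change. You can confirm this with the -i option of ls, which prints inode numbers (the numbers shown here are hypothetical):

ls -i plan todo.txt
181214 plan
181214 todo.txt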
So when you change the ownership or permissions of a hard link, the effect also applies to the original file.

CHANGING PERMISSIONS OR OWNERSHIP OF A SOFT LINK

Changing the ownership of a symbolic link or soft link doesn't work the same way. For example, take a look at the following ls -l output:

lrwxrwxrwx   1 kabir  root   8 Feb 7 11:49 plan -> todo.txt
-rw-r--r--   1 sheila root  21 Feb 7 11:41 todo.txt

Here you can see that the plan file is a symbolic (soft) link for todo.txt. Now, suppose the root user changes the symbolic link's ownership, like this:

chown kabir plan

The ls -l output shows the following:

lrwxrwxrwx   1 kabir  root   8 Feb 7 11:49 plan -> todo.txt
-rw-r--r--   1 sheila root  21 Feb 7 11:41 todo.txt

The question is, can user kabir write to todo.txt using the symbolic link (plan)? The answer is no, unless the directory in which these files are stored is owned by kabir. So changing a soft link's ownership doesn't work in the same way as with hard links. If you change the permission settings of a soft link, however, the file it points to gets the new settings, as in this example:

chmod 666 plan

This changes the permission of todo.txt (the file the link points to), while the link itself is unaffected, as shown here in the ls -l listing:

lrwxrwxrwx   1 kabir  root   8 Feb 7 11:49 plan -> todo.txt
-rw-rw-rw-   1 sheila root  21 Feb 7 11:41 todo.txt

So be cautious with links; the permission and ownership settings on these special files are not intuitive.

Managing user group permission

Linux user groups are defined in the /etc/group file. A user group is a named, comma-separated list of users that has a unique group ID (GID) number. For example:

lazyppl:x:666:netrat,mkabir,mrfrog

Here the user group called lazyppl has three users (netrat, mkabir, mrfrog) as members.

By default, Red Hat Linux supplies a number of user groups, many of which don't even have a user as a member. These default groups are there for backward compatibility with some programs that you may or may not install. For example, the Unix-to-Unix Copy (uucp) program can use the uucp group in /etc/group, but probably you aren't going to use uucp to copy files over the Internet. You are more likely to use the FTP program instead.

Don't delete these unused groups. The likelihood of breaking a program is high if you do.

USING RED HAT'S PRIVATE USER GROUPS

When you create a new user using the useradd command, Red Hat Linux automatically creates a group in the /etc/group file. This group has the exact same name as the user and the only member in that group is the user herself. For example, if you create a user called mrfrog using the useradd mrfrog command, you see this entry in the /etc/group file:

mrfrog:x:505:

This group is used whenever mrfrog creates files or directories. But you may wonder why mrfrog needs a private user group like that when he already owns everything he creates. The answer, again, has security ramifications: The group prevents anyone else from reading mrfrog's files. Because all files and directories created by the mrfrog user allow access only to their owner (mrfrog) and the group (again mrfrog), no one else can access his files.

CREATING USER GROUPS TO DEPARTMENTALIZE USERS

If several people need access to a set of files or directories, a user group can control access.
For example, say that you have three users: mrfrog, kabir, sheila who need\nread, write, and execute access to a directory called /www/public/htdocs directory.\nYou can create a user group called webmaster using groupadd webmaster, which\ncreates the following entry in the /etc/group file:\nwebmaster:x:508:\nYou can modify this line so it looks like this:\nwebmaster:x:508:mrfrog,kabir,sheila\nNow you can change the /www/public/htdocs directory permission, using the\nchown :webmaster /www/public/htdocs.\nIf you want to change the group ownership for all subdirectories under the\nnamed directory,use the -R option with the chown command.\nNow the three users can access files in that directory only if the file-and-directory\npermissions allow the group users to view, edit, and delete files and directories. To\nmake sure they can, run the chmod 770 /www/public/htdocs command. Doing so\nallows them read, write, and execute permission for this directory. However, when\nany one of them creates a new file in this directory, it is accessible only by that per-\nson; Red Hat Linux automatically sets the file’s ownership to the user and group\nChapter 9: Securing Files and Filesystems\n189\n" }, { "page_number": 213, "text": "ownership to the user’s private group. For example, if the user kabir runs the touch\nmyfile.txt command to create an empty file, the permission setting for this file is\nas shown in the following line:\n-rw-rw-r-- 1 kabir kabir 0 Dec 17 17:41 myfile.txt\nThis means that the other two users in the webmaster group can read this file\nbecause of the world-readable settings of the file, but they can’t modify it or remove\nit. Because kabir wants to allow them to modify or delete this file, he can run the\nchgrp webmaster myfile.txt command to change the file’s group permission as\nshown in the following line:\n-rw-rw-r-- 1 kabir webmaster 0 Dec 17 17:42 myfile.txt\nNow everyone in the webmaster group can do anything with this file. Because\nthe chgrp command is cumbersome to run every time someone creates a new file,\nyou can simply set the SGID bit for the directory by using the chmod 2770\n/www/public/htdocs command. This setting appears as the following when the ls\n-l command is run from the /www/public directory.\ndrwxrws--- 2 bin webmaster 4096 Dec 17 17:47 htdocs\nIf any of the webmaster members creates a file in the htdocs directory, the com-\nmand gives the group read and write permissions to the file by default.\nWhen you work with users and groups, back up original files before making\nany changes.This saves you a lot of time if something goes wrong with the\nnew configuration. You can simply return to the old configurations by\nreplacing the new files with old ones. If you modify the /etc/group file\nmanually,make sure you have a way to check for the consistency of informa-\ntion between the /etc/group and /etc/passwd files.\nChecking Consistency \nof Users and Groups\nMany busy and daring system administrators manage the /etc/group and\n/etc/passwd file virtually using an editor such as vi or emacs. This practice is\nvery common and quite dangerous. 
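If you must edit these files by hand, at least consider the vipw and vigr commands, which lock /etc/passwd and /etc/group, respectively, while you edit them; they are part of the standard shadow password tools on most Linux systems, though you should verify that your installation includes them:

vipw    # edit /etc/passwd with file locking
vigr    # edit /etc/group with file locking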
I recommend that you use useradd, usermod,\nand userdel commands to create, modify, and delete users and groupadd, groupmod,\nand groupdel to create, modify, and delete user groups.\n190\nPart III: System Security\n" }, { "page_number": 214, "text": "When you use these tools to manage your user and group files, you should end up\nwith a consistent environment where all user groups and users are accounted for.\nHowever, if you ever end up modifying these files by hand, watch for inconsistencies\nthat can become security risks or at least create a lot of confusion. Also, many system\nadministrators get in the habit of pruning these files every so often to ensure that no\nunaccounted user group or user is in the system. Doing this manually every time is\nvery unreliable. Unfortunately, no Red Hat-supplied tool exists that can ensure that\nyou don’t break something when you try to enhance your system security. This\nbugged me enough times that I wrote a Perl script called chk_pwd_grp.pl, shown in\nListing 9-1, that performs the following consistency checks:\nN Check for duplicate username and UID in the /etc/passwd file.\nN Check for invalid GID in the /etc/passwd file.\nN Check for duplicate group and GID in the /etc/group file.\nN Check for unused, non-standard groups in the /etc/group file.\nN Check for nonexistent users in /etc/group who don’t exist in\n/etc/passwd.\nListing 9-1: The chk_pwd_grp.pl script\n#!/usr/bin/perl\n# Purpose: checks /etc/passwd and /etc/group for inconsistencies\n# and produces report\n# Features\n# - checks for duplicate username and UID in /etc/passwd file.\n# - checks for invalid GID in /etc/passwd file.\n# - checks for duplicate group and GID in /etc/group file.\n# - checks for unused, non-standard groups in /etc/group file.\n# - checks for non-existent users in /etc/group who don’t exist\n# in /etc/passwd\n#\n# Written by: Mohammed J. 
Kabir (kabir@nitec.com)\n# CVS Id: $id$\n#\nuse strict;\nuse constant DEBUG => 0;\nmy $PASSWD_FILE = ‘/etc/passwd’;\nmy $GROUP_FILE = ‘/etc/group’;\n# Groups that are supplied by Red Hat by default are considered\n# okay even if they don’t get used in /etc/passwd\nmy %DEFAULT_GROUP = (\nroot => 1, bin => 1,\nContinued\nChapter 9: Securing Files and Filesystems\n191\n" }, { "page_number": 215, "text": "Listing 9-1 (Continued)\nadm => 1, tty => 1,\nkmem => 1, wheel => 1,\nman => 1, games => 1,\nnobody => 1, users => 1,\npppusers => 1, popusers => 1,\ndaemon => 1, sys => 1, rpc => 1,\ndisk => 1, lp => 1, mem => 1,\nmail => 1, news => 1, uucp => 1,\ngopher => 1, dip => 1, ftp => 1,\nutmp => 1, xfs => 1, floppy => 1,\nslipusers => 1, rpcuser => 1,\nslocate => 1\n);\n# Get information from the passwd file\nmy ( $userByUIDRef,\n$uidByGIDRef,\n$uidByUsernameRef) = get_user_info($PASSWD_FILE);\n# Get information from the group file\nmy ( $groupByGIDRef,\n$groupByUsernameRef,\n$groupByUserListRef,\n$groupBySizeRef) = get_group_info($GROUP_FILE,\n$userByUIDRef,\n$uidByGIDRef,\n$uidByUsernameRef);\n# Make report using information from both passwd and group files\nmy $report = make_group_report(\n$userByUIDRef,\n$uidByGIDRef,\n$groupByGIDRef,\n$groupByUsernameRef,\n$groupByUserListRef,\n$groupBySizeRef);\n# Print report\nprint $report;\n# Exit program\nexit 0;\n# subroutine blocks\nsub get_user_info {\n#\n# Read the passwd file and create multiple hashes needed\n# for analysis\n#\nmy $passwdFile = shift;\n192\nPart III: System Security\n" }, { "page_number": 216, "text": "# Open file\nopen(PWD, $passwdFile) || die “Can’t read $passwdFile $!\\n”;\n# Declare variables\nmy (%userByUIDHash, %uidByGIDHash,\n%uidByUsernameHash, $user,$uid,$gid);\n# Set line count\nmy $lineCnt = 0;\n# Parse the file and stuff hashes\nwhile(){\nchomp;\n$lineCnt++;\n# Parse the current line\n($user,undef,$uid,$gid) = split(/:/);\n# Detect duplicate usernames\nif (defined $userByUIDHash{$uid} &&\n$user eq $userByUIDHash{$uid}) {\nwarn(“Warning! $passwdFile [Line: $lineCnt] : “ .\n“multiple occurance of username $user detected\\n”);\n# Detect\n} elsif (defined $userByUIDHash{$uid}) {\nwarn(“Warning! $passwdFile [Line: $lineCnt] : “ .\n“UID ($uid) has been used for user $user “ .\n“and $userByUIDHash{$uid}\\n”);\n}\n$userByUIDHash{$uid} = $user;\n$uidByGIDHash{$gid} = $uid;\n$uidByUsernameHash{$user} = $uid;\n}\nclose(PWD);\nreturn(\\%userByUIDHash, \\%uidByGIDHash, \\%uidByUsernameHash);\n}\nsub get_group_info {\nmy ($groupFile, $userByUIDRef, $uidByGIDRef, $uidByUsernameRef) = @_;\nopen(GROUP, $groupFile) || die “Can’t read $groupFile $!\\n”;\nmy (%groupByGIDHash,\n%groupByUsernameHash,\n%groupByUserListHash,\n%groupBySizeHash,\n%gidByGroupHash,\n$group,$gid,\n$userList);\nmy $lineCnt = 0;\nwhile(){\nchomp;\n$lineCnt++;\nContinued\nChapter 9: Securing Files and Filesystems\n193\n" }, { "page_number": 217, "text": "Listing 9-1 (Continued)\n# Parse the current line\n($group,undef,$gid,$userList) = split(/:/);\n# Detect duplicate GID\nif (defined $groupByGIDHash{$gid}) {\nwarn(“Warning! $GROUP_FILE [Line: $lineCnt] : “ .\n“duplicate GID ($gid) found! Group: $group\\n”);\n} elsif (defined $gidByGroupHash{$group}){\nwarn(“Warning! $GROUP_FILE [Line: $lineCnt] : “ .\n“duplicate group name ($group) detected.\\n”);\n}\n$groupByGIDHash{$gid} = $group;\n$gidByGroupHash{$group} = $gid;\nforeach my $user (split(/,/,$userList)) {\n# If user doesn’t exist in /etc/passwd file\nif (! defined $uidByUsernameRef->{$user}) {\nwarn(“Warning! 
$GROUP_FILE [Line: $lineCnt] : user $user “ .\n“does not exist in $PASSWD_FILE\\n”);\n}\n$groupByUsernameHash{$user} = $gid;\n$groupByUserListHash{$gid} = $userList;\nDEBUG and print “Total members for $group = “,\nscalar (split(/,/,$userList)), “\\n”;\n$groupBySizeHash{$group} =\nscalar (split(/,/,$userList))\n}\n}\nclose(PWD);\nreturn(\\%groupByGIDHash,\n\\%groupByUsernameHash,\n\\%groupByUserListHash,\n\\%groupBySizeHash);\n}\nsub make_group_report {\nmy ($userByUIDRef,\n$uidByGIDRef,\n$groupByGIDRef,\n$groupByUsernameRef,\n$groupByUserListRef,\n$groupBySizeRef) = @_;\nmy $report = ‘’;\nmy ($totalGroups,\n$groupName,\n$totalPrivateGroups,\n$totalPublicGroups);\n194\nPart III: System Security\n" }, { "page_number": 218, "text": "# Get total user count in /etc/passwd\nmy $totalUsers = scalar keys %$userByUIDRef;\nforeach my $gid (sort keys %$groupByGIDRef) {\n$totalGroups++;\n$groupName = $groupByGIDRef->{$gid};\nDEBUG and print “Group: $groupName\\n”;\n# If group has members listed in the /etc/group file\n# then list them\nif ($groupByUserListRef->{$gid} ne ‘’) {\n$totalPublicGroups++;\n# Maybe this is a private user group?\n} elsif (defined $uidByGIDRef->{$gid}) {\n$totalPrivateGroups++;\n# This is a default user group or an empty group\n} elsif (! defined $DEFAULT_GROUP{$groupName}) {\nwarn(“Warning! $GROUP_FILE : Non-standard user group “ .\n“$groupByGIDRef->{$gid} does not have “ .\n“any member.\\n”);\n}\n}\n# Now check to see if /etc/passwd has any user with\n# invalid group\nforeach my $gid (keys %$uidByGIDRef){\nif (! defined $groupByGIDRef->{$gid}) {\nwarn(“Warning! $PASSWD_FILE : user “ .\n“$userByUIDRef->{$uidByGIDRef->{$gid}} “.\n“belongs to an invalid group (GID=$gid)\\n” );\n}\n}\n# Create report\n$report .=<{$a} <=> $groupBySizeRef->{$b}}\nkeys %$groupBySizeRef) {\n$report .= sprintf(“%s\\t\\t%2d\\n”,\n$group,\n$groupBySizeRef->{$group});\nContinued\nChapter 9: Securing Files and Filesystems\n195\n" }, { "page_number": 219, "text": "Listing 9-1 (Continued)\n}\n$report .= “\\n”;\nreturn $report;\n}\n# End of chk_pwd_grp.pl script\nBefore I modify /etc/passwd or /etc/group using the Red Hat-supplied utili-\nties or manually (yes, I am guilty of this habit myself), I simply run the preceding\nscript to check for warning messages. Here is a sample output of the perl\nchk_pwd_grp.pl command:\nWarning! /etc/passwd [Line: 2] : UID (0) has been used for user hacker and root\nWarning! /etc/group [Line: 3] : user xyz does not exist in /etc/passwd\nWarning! /etc/group : Non-standard user group testuser does not have any member.\nWarning! /etc/passwd : user hacker belongs to an invalid group (GID=666)\nTotal users : 27\nTotal groups : 40\nPrivate user groups : 14\nPublic user groups : 11\nGROUP TOTAL\n===== =====\ndaemon 4\nbin 3\nadm 3\nsys 3\nlp 2\ndisk 1\nwheel 1\nroot 1\nnews 1\nmail 1\nuucp 1\nI have many warnings as shown in the first few lines of the above output. Most\nof these warnings need immediate action:\nN The /etc/passwd file (line #2) has a user called hacker who uses the\nsame UID (0) as root.\nThis is definitely very suspicious, because UID (0) grants root privilege!\nThis should be checked immediately.\n196\nPart III: System Security\n" }, { "page_number": 220, "text": "N A user called xyz (found in /etc/group line #3) doesn’t even exist in the\n/etc/passwd file.\nThis means there is a group reference to a user who no longer exists. 
This\nis definitely something that has potential security implications so it also\nshould be checked immediately.\nN A non-standard user group called testuser exists that doesn’t have any\nmembers.\nA non-standard user group is a group that isn’t one of the following:\nI /etc/group by default \nI a private user group\nN User hacker belongs to an invalid group whose GID is 666.\nN The script also reports the current group and account information in a\nsimple text report, which can be very useful to watch periodically.\nI recommend that you create a small script called cron_chk_pwd_grp.sh, as\nshown in Listing 9-2, in the /etc/cron.weekly directory.\nListing 9-2: The cron_chk_pwd_grp.sh script\n#!/bin/sh\n# Standard binaries\nMAIL=/bin/mail\nRM=/bin/rm\nPERL=/usr/bin/perl\n# Change the path\nSCRIPT=/path/to/chk_pwd_grp.pl\nTMP_FILE=/tmp/$$\n# Change the username\nADMIN=root@localhost\n# Get the date and week number\nDATE=`/bin/date “+%m-%d-%Y [Week: %U]”`\n# Run the script and redirect output\n#(STDOUT and STDERR) to $TMP_FILE\n$PERL $SCRIPT > $TMP_FILE 2>&1;\n# Send the script report via email to ADMIN user\n$MAIL -s “User and Group Consistency Report $DATE “ \\\n$ADMIN < $TMP_FILE;\n# Delete the temporary file\n$RM -f $TMP_FILE;\n# Exit\nexit 0;\nChapter 9: Securing Files and Filesystems\n197\n" }, { "page_number": 221, "text": "N Change the SCRIPT=/path/to/chk_pwd_grp.pl line to point to the\nappropriate, fully qualified path of the chk_pwd_grp.pl script.\nN Change the ADMIN=root@localhost to the appropriate e-mail address.\nNow you receive an e-mail report from the user and group consistency checker\nscript, chk_pwd_grp.pl, on a weekly basis to the e-mail address used for ADMIN.\nSecuring Files and Directories\nA few steps can ensure the security of files and directories on your system. The\nvery first step is to define a system-wide permission setting; next step is to identify\nthe world-accessible files and dealing with them as appropriate; the third step is to\nlocate set-UID and set-GID and dealing with them as appropriate. All of these steps\nare discussed in the following sections.\nBefore you can enhance file and directory security,establish the directory\nscheme Red Hat Linux follows.This helps you plan and manage files and\ndirectories.\nUnderstanding filesystem hierarchy structure\nRed Hat follows the Filesystem Hierarchy Standard (FHS) maintained at the\nwww.pathname.com/fhs/ Web site. According to the FHS Web site, the FHS defines\na common arrangement of the many files and directories in Unix-like systems that\nmany different developers and groups such as Red Hat have agreed to use. 
Listing\n9-3 shows the FHS that Red Hat Linux uses.\nListing 9-3: FHS used in Red Hat Linux\n/ (root partition)\n|\n|---dev (device files)\n|---etc (system configuration files)\n| |---X11 (X Window specific)\n| +---skel (Template files for user shells)\n|\n|---lib (library files)\n|---proc (kernel proc Filesystem)\n|---sbin (system binaries)\n|---usr (userland programs)\n| |---X11R6\n| |---bin (user executables)\n198\nPart III: System Security\n" }, { "page_number": 222, "text": "| |---dict (dictionary data files)\n| |---doc (documentation for binaries)\n| |---etc (configuration for binaries)\n| |---games (useless, boring games)\n| |---include (c header files)\n| |---info (documentation for binaries)\n| |---lib (library files for binaries)\n| |---libexec (library files)\n| |---local (locally installed software directory)\n| | |---bin\n| | |---doc\n| | |---etc\n| | |---games\n| | |---info\n| | |---lib\n| | |---man\n| | |---sbin\n| | +---src\n| |\n| |---man (manual pages)\n| |---share (shared files such as documentation)\n| +- src (linux source code)\n|\n|---var\n| |---catman\n| |---lib\n| |---local\n| |---lock (lock directory)\n| |---log (log directory)\n| |---named\n| |---nis\n| |---preserve\n| |---run\n| +--spool (spool directory)\n| | |---anacron\n| | |---at\n| | |---cron\n| | |---fax\n| | |---lpd\n| | |---mail (received mail directory)\n| | |---mqueue (mail queue)\n| | |- news\n| | |---rwho\n| | |---samba\nContinued\nChapter 9: Securing Files and Filesystems\n199\n" }, { "page_number": 223, "text": "Listing 9-3 (Continued)\n| | |---slrnpull\n| | |---squid\n| | |---up2date\n| | |---uucp\n| | |---uucppublic\n| | |---vbox\n| | +---voice\n| +---tmp\n|\n+---tmp (temporary files)\nThe FHS-based directory structure is reasonably simple. I have provided a brief\nexplanation of what the important directories are all about in the preceding listing.\nFHS requires that the /usr directory (usually a disk partition by itself) be mounted\nas read-only, which isn’t the case with Red Hat Linux. If /usr is read-only, it\nenhances system security greatly because no one can modify any binaries in\n/usr/bin or /usr/local/bin directories (if /usr/local is a subdirectory of /usr\nand not a separate partition itself).\nHowever, mounting /usr as read-only has one major inconvenience: If you plan\nto add new software to your system, you probably will write to /usr or one of its\nsubdirectories to install most software. This is probably why the Red Hat-supplied\ndefault /etc/fstab file doesn’t mount /usr as read-only. Here’s what I recommend:\nN If you make your system available on the Internet, seriously consider\nmaking the /usr partition read-only.\nN Because it’s an industry standard not to modify production systems, you\ncan enforce the read-only /usr rule for yourself and others. Fully config-\nure your system with all necessary software and test it for a suitable\nperiod of time; reconfigure if necessary. Then run the system for another\ntest period with /usr set to read-only. If you don’t see any problem with\nany of your software or services, you should be able to enforce a read-\nonly /usr in your production system.\nN To make the /usr read-only, modify the /etc/fstab file. Edit the file and\ncomment out the line that mounts /usr using default mount options. 
This\nline in my /etc/fstab looks like this:\nLABEL=/usr /usr ext2 defaults 1 2\nN After you have commented this line out by placing a # character in front\nof the line, you can create a new line like this:\nLABEL=/usr /usr ext2 ro,suid,dev,auto,nouser,async 1 2\n200\nPart III: System Security\n" }, { "page_number": 224, "text": "N The new fstab line for /usr simply tells mount to load the filesystem\nusing ro,suid,dev,auto,nouser, and async mount options. The defaults\noption in the commented-out version expanded to rw,suid,dev,auto,\nnouser, and async. Here you are simply replacing rw (read-write) with ro\n(read-only).\nN Reboot your system from the console and log in as root.\nN Change directory to /usr and try to create a new file using a command\nsuch as touch mynewfile.txt in this directory. You should get an error\nmessage such as the following:\ntouch: mynewfile.txt: Read-only filesystem\nN As you can see, you can no longer write to the /usr partition even with a\nroot account, which means it isn’t possible for a hacker to write there\neither.\nWhenever you need to install some software in a directory within /usr, you\ncan comment out the new /usr line and uncomment the old one and reboot the\nsystem. Then you can install the new software and simply go back to read-only\nconfiguration.\nIf you don’t like to modify /etc/fstab every time you write to /usr, you\ncansimplymaketwoversionsof/etc/fstab called/etc/fstab.usr-ro\n(thisonehastheread-only,ro,flagfor/usr line)and/etc/fstab/usr-rw\n(this one has the read-write, rw, flag for the /usr line) and use a symbolic\nlink (using the ln command) to link one of them to /etc/fstab as\ndesired.\nSetting system-wide default permission\nmodel using umask\nWhen a user creates a new file or directory, Linux uses a mask value to determine\nthe permission setting for the new file or directory. The mask value is set using a\ncommand called umask. If you run the umask command by itself, you see the current\ncreation mask value. The mask value is stored as an octal number; it’s the comple-\nment of the desired permission mode. For example, a mask value of 002 makes\nLinux create files and directories with permission settings of 775. Similarly, a mask\nvalue of 777 would result in a permission setting of 000, which means no access.\nYou can set the default umask value for all the users in /etc/profile. For\nexample, the default /etc/profile includes the following line, which determines\numask settings for users.\nChapter 9: Securing Files and Filesystems\n201\n" }, { "page_number": 225, "text": "if [ `id -gn` = `id -un` -a `id -u` -gt 14 ]; then\numask 002\nelse\numask 022\nfi\nThis script segment ensures that all users with UID > 14 get a umask setting of\n002 and users with UID < 14, which includes root and the default system accounts\nsuch as ftp and operator, get a umask setting of 022. Because ordinary user UID\nstarts at 500 (set in /etc/login.defs; see UID_MIN) they all get 002, which trans-\nlates into 775 permission setting. This means that when an ordinary user creates a\nfile or directory, she has read, write, and execute for herself and her user group\n(which typically is herself, too, if Red Hat private user groups are used) and the rest\nof the world can read and execute her new file or change to her new directory. This\nisn’t a good idea because files should never be world-readable by default. 
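A quick, hypothetical shell session shows the effect of the default mask:

umask 002
mkdir testdir
touch testfile.txt
ls -ld testdir testfile.txt
drwxrwxr-x   2 kabir kabir 4096 Feb 7 12:01 testdir
-rw-rw-r--   1 kabir kabir    0 Feb 7 12:01 testfile.txt

The new directory gets 775 and the new file gets 664 (new files don't receive execute bits), so both are readable by every user on the system.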
So I recommend that you do the following:

N Modify /etc/profile and change the umask 002 line to umask 007, so that ordinary user files and directories have 770 as the default permission settings. This file gets processed by the default shell, /bin/bash. The default umask for root and the other accounts whose UID is less than 14 is 022, which translates into a 755 permission mode; that's a really bad default, so change it to 077, which translates into a restrictive (that is, only file owner access) 700 permission mode. The modified code segment in /etc/profile looks like this:

if [ `id -gn` = `id -un` -a `id -u` -gt 14 ]; then
umask 007
else
umask 077
fi

N Modify the /etc/csh.login file and perform the preceding change. This file is processed by users who use /bin/csh or /bin/tcsh login shells.

If you use the su command to become root, make sure you use the su - command instead of su without any argument. The - ensures that the new shell acts like a login shell of the new user (that is, root). In other words, using the - option, you can instruct the target shell (by default it's /bin/bash unless you changed the shell using the chsh command) to load appropriate configuration files such as /etc/profile or /etc/csh.login.

Dealing with world-accessible files

After you have made sure that the default permission mode for new files and directories is properly configured as discussed in the preceding text, you can remove problematic files and directories. Any user on the system can access a world-accessible file or directory. The best way to handle world-readable, world-writeable, and world-executable files or directories is to not have any of them.

Unfortunately, you may need some world-readable files and world-executable directories when creating public directories for user Web sites or other shared disk concepts. However, world-writeable files and directories and world-executable files should be avoided completely. You can regularly find these files and directories by using a script, as shown in Listing 9-4.

Listing 9-4: The find_worldwx.sh script

#!/bin/sh
# Purpose: to locate world-writable files/dir and
# world-executable files
# Written by Mohammed J. Kabir
# Standard binaries
FIND=/usr/bin/find
CAT=/bin/cat
RM=/bin/rm
MAIL=/bin/mail
# Get the date and week number
DATE=`/bin/date "+%m-%d-%Y [Week: %U]"`
# Starting path
ROOT_DIR=/
ADMIN=root@localhost
# Temp directory
TMP_DIR=/tmp
WORLD_WRITABLE=-2
WORLD_EXEC=-1
TYPE_FILE=f
TYPE_DIR=d
TYPE_LINK=l
RUN_CMD=-ls
OUT_FILE=$$.out
# Find all world-writable files/directories (that is, not
# symbolic links)
echo "List of all world-writable files or directories" > $OUT_FILE;
$FIND $ROOT_DIR -perm $WORLD_WRITABLE ! -type $TYPE_LINK \
$RUN_CMD >> $OUT_FILE;
echo >> $OUT_FILE;
echo "List of all world-executable files" >> $OUT_FILE;
$FIND $ROOT_DIR -perm $WORLD_EXEC -type $TYPE_FILE \
$RUN_CMD >> $OUT_FILE;
# Send the script report via email to ADMIN user
$MAIL -s "World-wx Report $DATE" $ADMIN < $OUT_FILE;
$RM -f $OUT_FILE;
exit 0;

When you run this script as a cron job from /etc/cron.weekly, it sends e-mail to ADMIN every week (so don't forget to change root@localhost to a suitable e-mail address), listing all world-writeable files and directories, as well as all world-executable files.
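Deploying the script is simply a matter of copying it into the weekly cron directory and making it executable; the path below assumes you saved it as find_worldwx.sh:

cp find_worldwx.sh /etc/cron.weekly/
chmod 700 /etc/cron.weekly/find_worldwx.sh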
An example of such an e-mail report (slightly modified to fit the\npage) is shown in the following listing:\nFrom root sun Dec 17 21:27:56 2000\nDate: sun, 17 Dec 2000 21:27:56 -0500\nFrom: root \nTo: kabir@k2.intevo.com\nsubject: World-wx Report 12-17-2000 [Week: 51]\nList of all world-writable files or directories\n14625 4 drwxrwxrwt 11 root root 4096 Dec 17 21:24 /tmp\n17422 0 -rw-rw-rw- 1 root root 0 Dec 17 20:53 /tmp/deadletter\n44648 4 drwxrwxrwx 2 root root 4096 Dec 17 20:53 /tmp/rootkit\nList of all world-executable files\n104581 8 -rwxr-xr-x 1 root root 7151 Oct 17 11:50 /tmp/hack.o\n4554 4 -rwxr-xr-x 1 root webmaste 1716 Dec 12 22:50 /tmp/x86.asm\nWhen you receive such e-mails, look closely; spot and investigate the files and\ndirectories that seem fishy (that is, out of the ordinary). In the preceding example,\nthe rootkit directory and the hack.o in /tmp would raise a red flag for me; I would\ninvestigate those files immediately. Unfortunately, there’s no surefire way to spot\nsuspects — you learn to suspect everything at the beginning and slowly get a work-\ning sense of where to look. (May the force be with you.)\nIn addition to world-writeables, two other risky types of files exist that you\nshould keep an eye open for: SUID and SGID files.\nDealing with set-UID and set-GID programs\nAn ordinary user can run a set-UID (SUID) program with the privileges of another\nuser. Typically, SUID programs run applications that should be run as root — which\nposes a great security risk. Listing 9-5 shows an example that illustrates this risk: a\nsimple Perl script called setuid.pl.\nListing 9-5: The setuid.pl script\n#!/usr/bin/perl\n# Purpose: demonstrate set-uid risk\n204\nPart III: System Security\n" }, { "page_number": 228, "text": "#\nuse strict;\n# Log file path\nmy $LOG_FILE = “/var/log/custom.log”;\n# Open log file\nopen(LOG,”>>$LOG_FILE”) || die “Can’t open $LOG_FILE $!\\n”;\n# Write an entry\nprint LOG “PID $$ $0 script was run by $ENV{USER}\\n”;\n# Close log file\nclose(LOG);\n# Exit program\nexit 0;\nThis script simply writes a log entry in /var/log/custom.log file and exits.\nWhen an ordinary user runs this script she gets the following error message:\nCan’t open /var/log/custom.log Permission denied\nThe final line of Listing 9-5 shows that the /var/log/custom.log cannot be opened,\nwhich is not surprising. Because the /var/log directory isn’t writeable by an ordi-\nnary user; only root can write in that directory. But suppose the powers-that-be\nrequire ordinary users to run this script. The system administrator has two dicey\nalternatives:\nN Opening the /var/log directory for ordinary users\nN Setting the UID of the script to root and allowing ordinary users to run it\nBecause opening the /var/log to ordinary users is the greater of the two evils,\nthe system administrator (forced to support setuid.pl) goes for the set-UID\napproach. She runs the chmod 5755 setuid.pl command to set the set-uid bit\nfor the script and allow everyone to run the script. When run by a user called\nkabir, the script writes the following entry in /var/log/custom.log:\nPID 2616 ./setuid.pl script was run by kabir\nAs shown, the script is now enabling the ordinary user to write to\n/var/log/custom.log file. 
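Note that the set-UID bit is visible in a directory listing as an s in the owner's execute position; the leading 5 in chmod 5755 also sets the sticky bit, which shows up as a t in the last position. A hypothetical listing of the script now looks like this:

-rwsr-xr-t   1 root root 253 Feb 6 21:37 setuid.pl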
A malicious user (typically not someone from inside your organization, but an outsider who managed to break into an ordinary user account) looks for set-UID programs and checks for a way to exploit them. Going back to the simple example, if the user account called kabir is hacked by one such bad guy, he can run a command such as find / -type f -perm -04000 -ls to locate set-UID programs such as setuid.pl. Upon finding such a program, the hacker can look for a way to gain root access.

You may be thinking (correctly) that because setuid.pl is a Perl script, the hacker could easily study the source code, find out why a set-UID script was required, and plan an attack. But don't trust your C programs either; Listing 9-6 shows the source code of a small C program called write2var.c.

Listing 9-6: The write2var.c source file

/*
Purpose: to demonstrate set-uid issue
Written by Mohammed J. Kabir
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(void)
{
FILE *out;
char *str;
// Try to allocate 128 bytes of memory to store fqpn of log
if ( (str = malloc(128)) == NULL)
{
fprintf(stderr,
"Cannot allocate memory to store filename.\n");
return 0;
}
// Assign filename to allocated memory (string)
strcpy(str, "/var/log/test.log");
// Try to open the log file for writing
if (( out = fopen(str, "a+")) == NULL )
{
fprintf(stderr, "Cannot open the log file.\n");
return 1;
}
// Write to log
fputs("Wrote this line\n",out);
fclose(out);
// Done
return 0;
}

When this C program is compiled (using the gcc -o go write2var.c command), it can run as ./go from the command-line. This program writes to /var/log/test.log if it's run as root, but must run as a set-UID program if an ordinary user is to run it. If this program is set-UID and its source code isn't available, the hacker can simply run the strings ./go command — or run the strace ./go command to investigate why a set-UID program was necessary — and try to exploit any weakness that shows up. For example, the strings ./go command shows the following output:

/lib/ld-linux.so.2
__gmon_start__
libc.so.6
strcpy
__cxa_finalize
malloc
fprintf
__deregister_frame_info
fclose
stderr
fopen
_IO_stdin_used
__libc_start_main
fputs
__register_frame_info
GLIBC_2.1.3
GLIBC_2.1
GLIBC_2.0
PTRh
Cannot allocate memory to store filename.
/var/log/test.log
Cannot open the log file.
Wrote this line

Notice the /var/log/test.log line; even a not-so-smart hacker can figure out that this program reads or writes to /var/log/test.log. Because this is a simple example, the hacker may not be able to do much with this program, but at the least he can corrupt entries in the /var/log/test.log file by manually editing it.

Similarly, a set-GID (SGID) program can run using its group privilege. The example ls -l output in the following listing shows a set-UID and a set-GID file.

-rwsr-x---   1 root root 0 Dec 18 00:58 /tmp/setuid
-rwxr-s---   1 root root 0 Dec 18 00:57 /tmp/setgid

Both the set-UID and the set-GID fields are represented using the s character. Listing 9-7 shows a script called find_suid_sgid.sh that you can run from /etc/cron.weekly; it e-mails you an SUID/SGID report every week.

Listing 9-7: The find_suid_sgid.sh script

#!/bin/sh
# Purpose: to locate set-UID and set-GID files
# Written by Mohammed J. Kabir
# Standard binaries
FIND=/usr/bin/find
CAT=/bin/cat
RM=/bin/rm
MAIL=/bin/mail
# Get the date and week number
DATE=`/bin/date "+%m-%d-%Y [Week: %U]"`
# Starting path
ROOT_DIR=/
ADMIN=root@localhost
# Temp directory
TMP_DIR=/tmp
# Permission tests: -4000 matches set-UID, -2000 matches set-GID
SETUID_MODE=-4000
SETGID_MODE=-2000
TYPE_FILE=f
RUN_CMD=-ls
OUT_FILE=$$.out
# Find all set-UID files
echo "List of all set-UID files" > $OUT_FILE;
$FIND $ROOT_DIR -perm $SETUID_MODE -type $TYPE_FILE \
$RUN_CMD >> $OUT_FILE;
echo >> $OUT_FILE;
# Find all set-GID files
echo "List of all set-GID files" >> $OUT_FILE;
$FIND $ROOT_DIR -perm $SETGID_MODE -type $TYPE_FILE \
$RUN_CMD >> $OUT_FILE;
# Send the script report via email to ADMIN user
$MAIL -s "SUID/SGID Report $DATE" $ADMIN < $OUT_FILE;
$RM -f $OUT_FILE;
exit 0;

Remember to change ADMIN=root@localhost to a suitable e-mail address for you.

Using ext2 Filesystem Security Features

So far I have been discussing various risky system features such as world-writeable files, set-UID and set-GID files, and some directories of the Linux filesystem that get in the way of system security. Fortunately, an ext2 filesystem also has some built-in security measures you can use to your advantage.

The Linux ext2 filesystem supports a set of extended attributes (listed in Table 9-4) that can help you tighten security.

TABLE 9-4: EXT2 FILESYSTEM EXTENDED ATTRIBUTES WITH SECURITY USES

Extended Attribute    Description

A    When the A attribute is set, file-access time isn't updated. This can benefit computers that have power-consumption problems because it avoids some unnecessary disk I/O.

S    When the S attribute is set, the file is synchronized with the physical storage, which in the long run provides a higher level of data integrity at the expense of performance.

a    File becomes append-only — data can be added to the end of the file, but existing contents can't be overwritten. On a directory, files can be created or modified but can't be removed.

i    File becomes immutable: it can't be modified, deleted, or renamed. On a directory, existing files can be modified, but files can't be created or deleted.

d    The dump program ignores the file.

c    Setting this attribute means that a write request coming to the file is compressed and a read request is automatically uncompressed. This attribute isn't yet available in the 2.2 or 2.4 kernel.

s    When a file with this attribute is deleted, the file data is overwritten with zeros. This attribute isn't yet available in the 2.2 or 2.4 kernel.

U    When a file with this attribute is deleted, the data is moved away so it can be undeleted. This attribute isn't yet available in the 2.2 or 2.4 kernel.

Using chattr

The ext2 filesystem used for Red Hat Linux provides some unique features. One of these features makes files immutable, even for the root user. For example:

chattr +i filename

This command sets the i attribute of a file in an ext2 filesystem. This attribute can be set or cleared only by the root user. So this attribute can protect against file accidents.
When this attribute is set, the following conditions apply:

- No one can modify, delete, or rename the file.
- No new links can point to the file.

When you need to clear the attribute, you can run the following command:

chattr -i filename

Using lsattr
If you start using the chattr command, sometimes you may notice that you can't modify or delete a file even though you have the necessary permissions to do so. This happens if you forget that you earlier set the immutable attribute of the file by using chattr; because this attribute doesn't show up in ls output, the sudden "freezing" of the file content can be confusing. To see which files have which ext2 attributes, use the lsattr program.

Unfortunately, what you know now about file and filesystem security may be old news to informed bad guys with lots of free time to search the Web. Use of tools such as chattr may make breaking in harder for the bad guy, but they don't make your files or filesystems impossible to damage. In fact, if the bad guy gets root-level privileges, ext2 attributes provide just a simple game of hide-and-seek.

Using a File Integrity Checker
Determining whether you can trust your files is a major problem after a break-in. You may wonder whether the bad guy has installed a Trojan application or embedded a virus to infect new files (and possibly provide access to other computers that you access). None of the methods examined so far in this chapter can handle this aspect of a security problem. The solution? Run a file integrity checker program; the upcoming section shows how.

A file integrity checker is a tool that computes checksum-like values for your files by using hashing functions. These values are stored in a safe place that is guaranteed unalterable (for example, read-only media such as CD-ROM). The file integrity checker can then check the current files against the checksum database and detect whether files have been altered.

Using a home-grown file integrity checker
Listing 9-8 shows a simple MD5 digest-based file integrity checker script that uses the Digest::MD5 module in Perl.

Listing 9-8: The md5_fic.pl script

#!/usr/bin/perl
# Purpose: creates and verifies MD5 checksums for files.
# 1st time:
#   md5_fic.pl /dir/filename creates and stores a MD5 checksum
# 2nd time:
#   md5_fic.pl /dir/filename verifies the integrity of the file
#   using the stored MD5 checksum
# If the /dir/filename has changed, the script reports '*FAILED*'
# else it reports 'PASSED'
# Limited wildcard supported. Example: md5_fic.pl /dir/*.conf
#
# Written by: Mohammed J. Kabir
# CVS ID: $Id$
use strict;
use File::Basename;
use Digest::MD5;
use constant DEBUG => 0;
use constant UMASK => 0777;

# Change this directory to an appropriate path on your system
my $SAFE_DIR = '/usr/local/md5';

# Cycle through each file given in the command line
foreach my $filename (@ARGV) {
    # If the given filename does not exist, show syntax msg
    syntax() if (! -R $filename);

    # Create path to the checksum file
    my $chksumFile = get_chksum_file($filename);

    # Create intermediate directory names for the checksum path
    my $dir2 = dirname($chksumFile);
    my $dir1 = dirname($dir2);

    # Create intermediate directories if they don't exist
    mkdir $dir1, UMASK if (! -e $dir1);
    mkdir $dir2, UMASK if (! -e $dir2);

    DEBUG and print "Checksum File $chksumFile\n";

    # Get data from the input file
    my $data = get_data_from_file($filename);

    # If no MD5 checksum exists yet for this file, create one
    if (! -e $chksumFile ) {
        DEBUG and print "Writing MD5 fingerprint for $filename to $chksumFile\n";
        # Create a MD5 digest for the data we read from the file
        my $newDigest = get_digest($data);
        # Write the digest to the checksum file for this input file
        write_data_to_file($chksumFile, $newDigest);
        # Show status message
        printf("%-40s ... MD5 finger-print created\n", $filename);
    } else {
        DEBUG and print "Verifying $filename with $chksumFile\n";
        # Read the old digest from the checksum file we created
        # earlier for this input file
        my $oldDigest = get_data_from_file($chksumFile);
        # Create a new digest for the data read from the current
        # version of the file
        my $newDigest = get_digest($data);
        # Compare the old and the current checksums to see whether
        # the data has been altered; report accordingly
        my $status = ($oldDigest eq $newDigest) ? 'PASSED' : '*FAILED*';
        # Show status message
        printf("%-40s ... %s\n", $filename, $status);
    }
}
exit 0;

sub write_data_to_file {
    # Write data to file
    my ($filename, $data) = @_;
    open(DATA, ">$filename") || die "Can't write $filename $!\n";
    print DATA $data;
    close(DATA);
}

sub get_data_from_file {
    # Load data from a given file
    my $filename = shift;
    local $/ = undef;
    open(FILE, $filename) || die "Can't read $filename $!\n";
    my $data = <FILE>;
    close(FILE);
    return $data;
}

sub get_digest {
    # Calculate a MD5 digest for the given data
    my $data = shift;
    my $ctx = Digest::MD5->new;
    $ctx->add($data);
    my $digest;
    $digest = $ctx->digest;
    #$digest = $ctx->hexdigest;
    #$digest = $ctx->b64digest;
    return $digest;
}

sub syntax {
    # Print syntax
    die "Syntax: $0 /dir/files\nLimited wild card supported.\n";
}

sub get_chksum_file {
    # Create the path (based on the given filename) for the checksum file
    my $filename = shift;
    my $chksumFile = sprintf("%s/%s/%s/%s.md5",
        $SAFE_DIR,
        lc substr(basename($filename),0,1),
        lc substr(basename($filename),1,1),
        basename($filename) );
    return $chksumFile;
}
# END OF SCRIPT

The md5_fic.pl script takes filenames as command-line arguments. For example, if you run the ./md5_fic.pl /etc/pam.d/* command, the script generates the following output:

/etc/pam.d/chfn ... MD5 finger-print created
/etc/pam.d/chsh ... MD5 finger-print created
/etc/pam.d/ftp ... MD5 finger-print created
/etc/pam.d/kbdrate ... MD5 finger-print created
/etc/pam.d/linuxconf ... MD5 finger-print created
/etc/pam.d/linuxconf-auth ... MD5 finger-print created
/etc/pam.d/linuxconf-pair ... MD5 finger-print created
/etc/pam.d/login ... MD5 finger-print created
/etc/pam.d/other ... MD5 finger-print created
/etc/pam.d/passwd ... MD5 finger-print created
/etc/pam.d/ppp ... MD5 finger-print created
/etc/pam.d/rexec ... MD5 finger-print created
/etc/pam.d/rlogin ... MD5 finger-print created
/etc/pam.d/rsh ... MD5 finger-print created
/etc/pam.d/samba ... MD5 finger-print created
/etc/pam.d/su ... MD5 finger-print created
/etc/pam.d/sudo ... MD5 finger-print created
/etc/pam.d/system-auth ... MD5 finger-print created
The script simply reads all the files in the /etc/pam.d directory and creates an MD5 checksum for each file. The checksum files are stored in the directory pointed to by the $SAFE_DIR variable in the script; by default, it stores all checksum files in /usr/local/md5. Make sure you change $SAFE_DIR from /usr/local/md5 to an appropriate path that you can later write-protect. For example, use /mnt/floppy to write the checksums to a floppy disk (which you can later write-protect).

After the checksum files are created, every time you run the script with the same arguments it compares the old checksum against one it creates from the current contents of the file. If the checksums match, your file is still authentic, because you created the checksum file for it last time. For example, running the ./md5_fic.pl /etc/pam.d/* command again generates the following output:

/etc/pam.d/chfn ... PASSED
/etc/pam.d/chsh ... PASSED
/etc/pam.d/ftp ... PASSED
/etc/pam.d/kbdrate ... PASSED
/etc/pam.d/linuxconf ... PASSED
/etc/pam.d/linuxconf-auth ... PASSED
/etc/pam.d/linuxconf-pair ... PASSED
/etc/pam.d/login ... PASSED
/etc/pam.d/other ... PASSED
/etc/pam.d/passwd ... PASSED
/etc/pam.d/ppp ... PASSED
/etc/pam.d/rexec ... PASSED
/etc/pam.d/rlogin ... PASSED
/etc/pam.d/rsh ... PASSED
/etc/pam.d/samba ... PASSED
/etc/pam.d/su ... PASSED
/etc/pam.d/sudo ... PASSED
/etc/pam.d/system-auth ... PASSED

Because the files have not changed between the times you executed these two commands, the checksums still match; therefore each of the files passed.

Now if you change a file in the /etc/pam.d directory and run the same command again, you see a *FAILED* message for that file, because the stored MD5 digest does not match the newly computed digest. Here's the output after I modified the /etc/pam.d/su file:

/etc/pam.d/chfn ... PASSED
/etc/pam.d/chsh ... PASSED
/etc/pam.d/ftp ... PASSED
/etc/pam.d/kbdrate ... PASSED
/etc/pam.d/linuxconf ... PASSED
/etc/pam.d/linuxconf-auth ... PASSED
/etc/pam.d/linuxconf-pair ... PASSED
/etc/pam.d/login ... PASSED
/etc/pam.d/other ... PASSED
/etc/pam.d/passwd ... PASSED
/etc/pam.d/ppp ... PASSED
/etc/pam.d/rexec ... PASSED
/etc/pam.d/rlogin ... PASSED
/etc/pam.d/rsh ... PASSED
/etc/pam.d/samba ... PASSED
/etc/pam.d/su ... *FAILED*
/etc/pam.d/sudo ... PASSED
/etc/pam.d/system-auth ... PASSED

You can also run the script for a single file. For example, the ./md5_fic.pl /etc/pam.d/su command produces the following output:

/etc/pam.d/su ... *FAILED*

A file integrity checker relies solely on the pristine checksum data, so the data mustn't be altered in any way. Therefore, it's extremely important that you don't keep the checksum data in a writable location. I recommend using a floppy disk (if you have only a few files to run the checksum against), a CD-ROM, or a read-only disk partition.

Write-protect the floppy, or mount the partition read-only, after you create the checksum files.

This little script is no match for a commercial-grade file integrity checker such as Tripwire.
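If you don't want to maintain a custom script, the stock md5sum utility that ships with Red Hat Linux can do a similar, if less automated, job. A minimal sketch, again assuming the checksum list lives on a floppy that you write-protect afterward:

# Record checksums for the files you care about
md5sum /etc/pam.d/* > /mnt/floppy/pam.md5

# Later, verify the files against the stored list;
# each line reads "filename: OK" or "filename: FAILED"
md5sum -c /mnt/floppy/pam.md5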
Using Tripwire Open Source, Linux Edition
In a great move towards open-source software, Tripwire released Tripwire Open Source, Linux Edition, under the General Public License (GPL). Simply speaking, Tripwire is a file-and-directory integrity checker; it creates a database of signatures for all files and directories and stores them in one file. When Tripwire is run again, it computes new signatures for current files and directories and compares them with the original signatures stored in the database. If it finds a discrepancy, it reports the file or directory name along with information about the discrepancy.

You can see why Tripwire can be a great tool for helping you determine which files were modified in a break-in. Of course, for that you must ensure the security of the database that the application uses. When creating a new server system, many experienced system administrators do the following things:

1. Ensure that the new system isn't attached to any network, to guarantee that no one has already installed a Trojan program, virus program, or other danger to your system security.

2. Run Tripwire to create a signature database of all the important system files, including all system binaries and configuration files.

3. Write the database to a recordable CD-ROM. This ensures that an advanced bad guy can't modify the Tripwire database to hide Trojans and modified files from being noticed by the application. Administrators who have a small number of files to monitor often use a floppy disk to store the database. After writing the database to the floppy disk, the administrator write-protects the disk and, if the BIOS permits, configures the disk drive as a read-only device.

4. Set up a cron job to run Tripwire periodically (daily, weekly, monthly) such that the application uses the CD-ROM database version.

GETTING TRIPWIRE
Red Hat Linux includes the binary Tripwire RPM file. However, you can download the free (GPL) version of Tripwire from an RPM mirror site such as http://fr.rpmfind.net. I downloaded the Tripwire source code and binaries from this site by using http://fr.rpmfind.net/linux/rpm2html/search.php?query=Tripwire.

The source RPM that I downloaded was missing some installation scripts, so I downloaded the source again from the Tripwire Open Source development site at http://sourceforge.net/projects/tripwire/. The source code I downloaded was called tripwire-2.3.0-src.tar.gz. You may find a later version there when you read this. In the spirit of compiling open-source software from the source code, I show compiling, configuring, and installing Tripwire from the tripwire-2.3.0-src.tar.gz file.

When following the instructions given in the following section, replace the version number with the version of Tripwire you have downloaded.

If you want to install Tripwire from the binary RPM package, simply run the rpm -ivh tripwire-version.rpm command. You still must configure Tripwire by running twinstall.sh. Run this script from the /etc/tripwire directory and skip to Step 7 in the following section.

COMPILING TRIPWIRE
To compile from the source distribution, do the following:

1. su to root.

2. Extract the tar ball, using the tar xvzf tripwire-2.3.0-src.tar.gz command. This creates a subdirectory called /usr/src/redhat/SOURCES/tripwire-2.3.0-src. Change your current directory to /usr/src/redhat/SOURCES/tripwire-2.3.0-src/src.

3. Run the make release command to compile all the necessary Tripwire binaries.
(This takes a little time, so do it just before a coffee break.) After it is compiled, install the binaries: change directory to /usr/src/redhat/SOURCES/tripwire-2.3.0-src/install, then copy the install.cfg and install.sh files to the parent directory using the cp install.* .. command.

4. Before you run the installation script, you may need to edit the install.cfg file, which is shown in Listing 9-9. For example, if you aren't a vi editor fan but rather camp in the emacs world, you can change the TWEDITOR field in this file to point to emacs instead of vi. I wouldn't recommend changing the values for the CLOBBER, TWBIN, TWPOLICY, TWMAN, TWDB, TWDOCS, TWSITEKEYDIR, and TWLOCALKEYDIR settings. However, you may want to change the values for TWLATEPROMPTING, TWLOOSEDIRCHK, TWMAILNOVIOLATIONS, TWEMAILREPORTLEVEL, TWREPORTLEVEL, TWSYSLOG, TWMAILMETHOD, TWMAILPROGRAM, and so on. The meaning of each setting is given in the comment lines above it in the install.cfg file.

Listing 9-9: The install.cfg file

# install.cfg
# default install.cfg for:
# Tripwire(R) 2.3 Open Source for Linux
# NOTE: This is a Bourne shell script that stores installation
# parameters for your installation. The installer will
# execute this file to generate your config file and also to
# locate any special configuration needs for your install.
# Protect this file, because it is possible for
# malicious code to be inserted here
# This version of Tripwire has been modified to conform to the FHS
# standard for Unix-like operating systems.
# To change the install directory for any tripwire files, modify
# the paths below as necessary.
#=======================================================
# If CLOBBER is true, then existing files are overwritten.
# If CLOBBER is false, existing files are not overwritten.
CLOBBER=false
# Tripwire binaries are stored in TWBIN.
TWBIN="/usr/sbin"
# Tripwire policy files are stored in TWPOLICY.
TWPOLICY="/etc/tripwire"
# Tripwire manual pages are stored in TWMAN.
TWMAN="/usr/man"
# Tripwire database files are stored in TWDB.
TWDB="/var/lib/tripwire"
# Tripwire documents directory
TWDOCS="/usr/doc/tripwire"
# The Tripwire site key files are stored in TWSITEKEYDIR.
TWSITEKEYDIR="${TWPOLICY}"
# The Tripwire local key files are stored in TWLOCALKEYDIR.
TWLOCALKEYDIR="${TWPOLICY}"
# Tripwire report files are stored in TWREPORT.
TWREPORT="${TWDB}/report"
# This sets the default text editor for Tripwire.
TWEDITOR="/bin/vi"
# TWLATEPROMPTING controls the point when tripwire asks for a password.
TWLATEPROMPTING=false
# TWLOOSEDIRCHK selects whether the directory should be monitored for
# properties that change when files in the directory are monitored.
TWLOOSEDIRCHK=false
# TWMAILNOVIOLATIONS determines whether Tripwire sends a no violation
# report when an integrity check is run with --email-report but no rule
# violations are found. This lets the admin know that the integrity
# check was run, as opposed to having failed for some reason.
TWMAILNOVIOLATIONS=true
# TWEMAILREPORTLEVEL determines the verbosity of e-mail reports.
TWEMAILREPORTLEVEL=3
# TWREPORTLEVEL determines the verbosity of report printouts.
TWREPORTLEVEL=3
# TWSYSLOG determines whether Tripwire will log events to the system log.
TWSYSLOG=false
#####################################
# Mail Options - Choose the appropriate
# method and comment out the other section
#####################################
#####################################
# SENDMAIL options - DEFAULT
# Either SENDMAIL or SMTP can be used to send reports via TWMAILMETHOD.
# Specifies which sendmail program to use.
#####################################
TWMAILMETHOD=SENDMAIL
TWMAILPROGRAM="/usr/lib/sendmail -oi -t"
#####################################
# SMTP options
# TWSMTPHOST selects the SMTP host to be used to send reports.
# TWSMTPPORT selects the SMTP port for the SMTP mail program to use.
#####################################
# TWMAILMETHOD=SMTP
# TWSMTPHOST="mail.domain.com"
# TWSMTPPORT=25
################################################################################
# Copyright (C) 1998-2000 Tripwire (R) Security Systems, Inc. Tripwire (R) is a
# registered trademark of the Purdue Research Foundation and is licensed
# exclusively to Tripwire (R) Security Systems, Inc.
################################################################################

5. Run the ./install.sh command. This walks you through the installation process. You are asked to press Enter, accept the GPL licensing agreement, and (finally) agree to the locations to which files are copied.

6. After the files are copied, you are asked for a site pass phrase. This pass phrase encrypts the Tripwire configuration and policy files. Enter a strong pass phrase (that is, one not easily guessed and at least eight characters long) to ensure that these files aren't modified by any unknown party.

7. Choose a local pass phrase. This pass phrase encrypts the Tripwire database and report files. Choose a strong pass phrase.

8. You are asked for the site pass phrase. The installation program signs the configuration file using your pass phrase. A clear-text version of the Tripwire configuration file is created in /etc/tripwire/twcfg.txt. The encrypted, binary version of the configuration file, which is what Tripwire uses, is stored in /etc/tripwire/tw.cfg. The clear-text version is created for your inspection. The installation program recommends that you delete this file manually after you have examined it.

9. You are asked for the site pass phrase again so the installation program can use it to sign the policy file. The installation program creates a clear-text policy file in /etc/tripwire/twpol.txt; the encrypted version is kept in /etc/tripwire/tw.pol. (You learn to modify the text version of the policy file later, and to create the binary, encrypted version that Tripwire uses.)

CONFIGURING TRIPWIRE POLICY
The policy file defines the rules that Tripwire uses to perform integrity checks.
Each rule defines which files and directories to check and what types of checks to perform. Additionally, each rule can include information such as a name and a severity level. The syntax of a typical rule is shown in the following example:

(attribute=value attribute=value ...)
{
/path/to/a/file/or/directory -> mask;
}

Table 9-5 lists the available attributes and their meanings.

TABLE 9-5: LIST OF AVAILABLE ATTRIBUTES

rulename=name: Associates a name with the rule. This attribute makes Tripwire reports more readable and easy to sort by named rules.

emailto=emailaddr: When the rule is violated, the e-mail address given as the value of this attribute receives a violation report.

severity=number: Associates a severity level (that is, an importance) with the rule. This makes Tripwire reports easier to manage.

recurse=true | false: Determines whether a directory is automatically recursed. If it's set to true (or -1), all subdirectories are recursed; if it's set to false (or 0), the subdirectories aren't traversed. Any numeric value in the range of -1 to 1000000 (excluding -1 and 0) dictates the depth to which the subdirectories are recursed. For example, recurse=3 means that subdirectories up to level-3 depth are recursed.

Look at the following example rule:

(Rulename= "OS Utilities", severity=100)
{
/bin/ls -> +pinugtsdrbamcCMSH-l;
}

Here the rule being defined is called the OS Utilities rule; it has a severity rating of 100, which means a violation of this rule is considered a major problem. The +pinugtsdrbamcCMSH-l properties of /bin/ls are checked. Table 9-6 describes each of these property/mask characters.

TABLE 9-6: PROPERTY/MASK CHARACTERS USED IN THE TRIPWIRE POLICY FILE

a: Access timestamp of the file or directory
b: Number of blocks allocated to the file
c: Inode timestamp
d: ID of the disk where the inode resides
g: Owner's group
i: Inode number
l: File is increasing in size
m: Modification timestamp
n: Inode reference count or number of links
p: Permission bits of the file or directory
r: ID of the device pointed to by an inode belonging to a device file
s: Size of the file
t: Type of the file
u: Owner's user ID
C: CRC-32 value
H: Haval value
M: MD5 value
S: SHA value
+: Record and check the properties that follow this character
-: Ignore the properties that follow this character

Another way to write the previous rule is shown in the following line:

/bin/ls -> +pinugtsdrbamcCMSH-l (Rulename= "OS Utilities", severity=100);

The first method is preferable because it can group many files and directories under one rule.
For example, all the utilities listed in the following code fall under the same policy:

SEC_CRIT = +pinugtsdrbamcCMSH-l;
(Rulename= "OS Utilities", severity=100)
{
/bin/ls -> $(SEC_CRIT);
/bin/login -> $(SEC_CRIT);
/bin/mail -> $(SEC_CRIT);
/bin/more -> $(SEC_CRIT);
/bin/mt -> $(SEC_CRIT);
/bin/mv -> $(SEC_CRIT);
/bin/netstat -> $(SEC_CRIT);
}

The preceding code uses the SEC_CRIT variable, which is defined before it's used in the rule. This variable is set to +pinugtsdrbamcCMSH-l and substituted in the rule statements using $(SEC_CRIT). This way you can define one variable with a set of properties that can be applied to a large group of files and/or directories. When you want to add or remove properties, you simply change the mask value of the variable; the change is reflected everywhere the variable is used. Some built-in variables are shown in Table 9-7.

TABLE 9-7: A SELECTION OF BUILT-IN VARIABLES FOR THE TRIPWIRE POLICY FILE

ReadOnly: +pinugtsdbmCM-rlacSH. Good for files that should remain read-only.
Dynamic: +pinugtd-srlbamcCMSH. Good for user directories and files that are dynamic and subject to change.
Growing: +pinugtdl-srbamcCMSH. Good for files that grow in size.
Device: +pugsdr-intlbamcCMSH. Good for device files.
IgnoreAll: -pinugtsdrlbamcCMSH. Checks whether the file exists, but doesn't check anything else.
IgnoreNone: +pinugtsdrbamcCMSH-l. The opposite of IgnoreAll; checks all properties.

When creating a rule, consider the following:

- Don't create multiple rules that apply to the same file or directory, as in this example:

/usr -> $(ReadOnly);
/usr -> $(Growing);

Tripwire complains about such a policy.

- More specific rules are honored, as in this example:

/usr -> $(ReadOnly);
/usr/local/home -> $(Dynamic);

When you check a file with the path /usr/local/home/filename, Tripwire checks the properties substituted by the variable $(Dynamic).

If you want to create or modify rules, run the following command:

/usr/sbin/twadmin --create-polfile /etc/tripwire/twpol.txt

The command generates the encrypted /etc/tripwire/tw.pol policy file. You are asked for the site pass phrase, which is needed to sign (that is, encrypt) the policy file.

CREATING THE TRIPWIRE DATABASE
Before you initialize the Tripwire database file, be absolutely certain that the bad guys have not already modified the files on your current system. This is why the best time to create this database is when your new system hasn't yet been connected to the Internet or any other network. After you are certain that your files are untouched, run the following command:

/usr/sbin/tripwire --init

This command applies the policies listed in the /etc/tripwire/tw.pol file and creates a database in /var/lib/tripwire/hostname.twd (for example, /var/lib/tripwire/k2.intevo.com.twd).

After you have created the database, move it to a read-only medium such as a CD-ROM or a floppy disk (write-protected after copying) if possible.
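One way to park the freshly created database on removable media is sketched below; it assumes an /etc/fstab entry for /mnt/floppy (standard on Red Hat Linux) and uses the hostname-based database file name that also appears later in Listing 9-10:

# Copy the new database to a floppy, then unmount it
mount /mnt/floppy
cp /var/lib/tripwire/`uname -n`.twd /mnt/floppy
umount /mnt/floppy

# Physically write-protect the disk, then remount it read-only
mount -o ro /mnt/floppy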
PROTECTING TRIPWIRE ITSELF
Bad guys can modify the Tripwire binary (/usr/sbin/tripwire) or the /etc/tripwire/tw.pol policy file to hide traces of their work. For this reason, you can run the /usr/sbin/siggen utility to create a set of signatures for these files. To generate a signature for the /usr/sbin/tripwire binary, you can run the /usr/sbin/siggen -a /usr/sbin/tripwire command. You see something like the following on-screen:

---------------------------------------------------------------------
Signatures for file: /usr/sbin/tripwire
CRC32           BmL3Ol
MD5             BrP2IBO3uAzdbRc67CI16i
SHA             F1IH/HvV3pb+tDhK5we0nKvFUxa
HAVAL           CBLgPptUYq2HurQ+sTa5tV
---------------------------------------------------------------------

You can keep the signatures in a file by redirecting the output to that file (print the signatures, too). Don't forget to generate a signature for the siggen utility itself as well. If you ever get suspicious about Tripwire not working right, run the siggen utility on each of these files and compare the signatures. If any of them don't match, you shouldn't trust those files; replace them with fresh copies and launch an investigation into how the discrepancy happened.
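For example, here is a minimal sketch of that bookkeeping, assuming a floppy mounted at /mnt/floppy that you write-protect afterward:

# Save signatures for the Tripwire binary and for siggen itself
/usr/sbin/siggen -a /usr/sbin/tripwire > /mnt/floppy/tripwire.sig
/usr/sbin/siggen -a /usr/sbin/siggen > /mnt/floppy/siggen.sig

# Later, recompute a signature set and compare it with the stored copy;
# diff prints nothing when the signatures still match
/usr/sbin/siggen -a /usr/sbin/tripwire | diff - /mnt/floppy/tripwire.sig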
RUNNING TRIPWIRE TO DETECT INTEGRITY IN INTERACTIVE MODE
You can run Tripwire in interactive mode using the /usr/sbin/tripwire --check --interactive command. In this mode, a report file is generated and loaded into the preferred editor. The summary part of an example Tripwire report generated by this command is shown in the following listing:

Tripwire(R) 2.3.0 Integrity Check Report

Report generated by: root
Report created on: Fri Dec 22 02:31:25 2000
Database last updated on: Fri Dec 22 02:13:44 2000
===============================================================================
Report summary:
===============================================================================
Host name: k2.intevo.com
Host IP address: 172.20.15.1
Host ID: None
Policy file used: /etc/tripwire/tw.pol
Configuration file used: /etc/tripwire/tw.cfg
Database file used: /var/lib/tripwire/k2.intevo.com.twd
Command line used: /usr/sbin/tripwire --check --interactive
===============================================================================
Rule summary:
-------------------------------------------------------------------------------
Section: Unix Filesystem
-------------------------------------------------------------------------------
Rule Name                                    Severity Level  Added Removed Modified
---------                                    --------------  ----- ------- --------
Invariant Directories                        66              0     0       0
Temporary directories                        33              0     0       0
* Tripwire Data Files                        100             0     0       1
Critical devices                             100             0     0       0
User binaries                                66              0     0       0
Tripwire Binaries                            100             0     0       0
* Critical configuration files               100             0     0       1
Libraries                                    66              0     0       0
Shell Binaries                               100             0     0       0
Filesystem and Disk Administration Programs  100             0     0       0
Kernel Administration Programs               100             0     0       0
Networking Programs                          100             0     0       0
System Administration Programs               100             0     0       0
Hardware and Device Control Programs         100             0     0       0
System Information Programs                  100             0     0       0
Application Information Programs             100             0     0       0
Shell Related Programs                       100             0     0       0
Critical Utility Sym-Links                   100             0     0       0
Critical system boot files                   100             0     0       0
System boot changes                          100             0     0       0
OS executables and libraries                 100             0     0       0
Security Control                             100             0     0       0
Login Scripts                                100             0     0       0
Operating System Utilities                   100             0     0       0
Root config files                            100             0     0       0

Total objects scanned: 14862
Total violations found: 2

Two rule violations exist; they are marked with the '*' sign at the very left of the lines: one for the "Tripwire Data Files" rule and another for the "Critical configuration files" rule. In both cases, a file that Tripwire watches has been modified. The Object summary section of the report shows the following lines:

===============================================================================
Object summary:
===============================================================================
-------------------------------------------------------------------------------
# Section: Unix Filesystem
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
Rule Name: Tripwire Data Files (/etc/tripwire/tw.pol)
Severity Level: 100
-------------------------------------------------------------------------------
Remove the "x" from the adjacent box to prevent updating the database
with the new values for this object.
Modified:
[x] "/etc/tripwire/tw.pol"
-------------------------------------------------------------------------------
Rule Name: Critical configuration files (/etc/cron.daily)
Severity Level: 100
-------------------------------------------------------------------------------
Remove the "x" from the adjacent box to prevent updating the database
with the new values for this object.
Modified:
[x] "/etc/cron.daily"

As shown, Tripwire reports exactly which files were modified and which rules those files fall under. If these modifications are okay, I can simply leave the 'x' marks in the appropriate sections of the report and exit the editor; Tripwire updates the database per my decision. For example, if I leave the 'x' marks on for both files, the next time the integrity checker runs it no longer finds these violations, because the modified files are taken into account in the Tripwire database. However, if one of the preceding modifications was not expected and looks suspicious, Tripwire has done its job!

If you want to view a report from the /var/lib/tripwire/report directory at any time, you can run the /usr/sbin/twprint -m r --twrfile reportfilename command.

RUNNING TRIPWIRE TO DETECT INTEGRITY AUTOMATICALLY
You can also run Tripwire as a cron job by creating a small script such as the one shown in Listing 9-10.

Listing 9-10: The /etc/cron.daily/tripwire-check file

#!/bin/sh
HOST_NAME=`uname -n`
if [ ! -e /var/lib/tripwire/${HOST_NAME}.twd ] ; then
    echo "*** Error: Tripwire database for ${HOST_NAME} not found. ***"
    echo "*** Run /etc/tripwire/twinstall.sh and/or tripwire --init. ***"
else
    test -f /etc/tripwire/tw.cfg && /usr/sbin/tripwire --check
fi

This script checks whether the Tripwire database file exists. If it does, the script then looks for the configuration file. When both files are found, it runs /usr/sbin/tripwire in non-interactive mode. The run produces a report file; if you have configured rules using the emailto attribute, e-mail is sent to the appropriate person(s).
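To review a report produced by such a cron run, print it with the twprint command mentioned earlier. A small sketch follows; the date-stamped report file name here is hypothetical, so list the report directory to find the real one on your system:

# List the saved reports, then print one in human-readable form
ls /var/lib/tripwire/report
/usr/sbin/twprint -m r \
--twrfile /var/lib/tripwire/report/k2.intevo.com-20001222-023125.twr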
UPDATING THE TRIPWIRE DATABASE
Update the Tripwire database whenever you have a change in the filesystem that generates a false warning. For example, if you modify a configuration file or remove a program that Tripwire is "watching" for you, it generates a violation report. Therefore, whenever you change something intentionally, you must update the database. You can do it two ways:

- Reinitialize the database using the /usr/sbin/tripwire --init command.
- Update the database using the /usr/sbin/tripwire --update command. The update method should save you a little time because it doesn't create the entire database again.

Similarly, when you change the Tripwire policy file, /etc/tripwire/twpol.txt, you must update the database. Instead of reinitializing the entire database with the --init option every time you change (or experiment with) your policy file, you can instruct the program to apply the policy changes and update the database using the /usr/sbin/tripwire --update-policy /etc/tripwire/twpol.txt command. This saves a significant amount of time.

GETTING THE TRIPWIRE REPORT BY E-MAIL
If you use the emailto attribute in rules, you can receive violation (or even non-violation) reports from Tripwire. This is especially useful if you are running Tripwire checks as a cron job. (See the preceding section, "Running Tripwire to detect integrity automatically.")

Before you can get e-mail from Tripwire, you must configure the e-mail settings in the /etc/tripwire/twcfg.txt file and rebuild the configuration file using the /usr/sbin/twadmin --create-cfgfile /etc/tripwire/twcfg.txt command. The settings that control e-mail are explained in Table 9-8.

TABLE 9-8: E-MAIL SETTINGS FOR THE TRIPWIRE CONFIGURATION FILE

MAILMETHOD = SMTP | SENDMAIL (default: MAILMETHOD = SENDMAIL)
This attribute sets the mail delivery method Tripwire uses. The default allows Tripwire to use the Sendmail daemon, which must be specified using the MAILPROGRAM attribute discussed later. Because most popular Sendmail-alternative mail daemons (such as qmail and postoffice) work very much like Sendmail, you can still set this to SENDMAIL and specify the path to your alternative daemon using MAILPROGRAM. However, if you don't run a Sendmail or Sendmail-like daemon on the machine on which you run Tripwire, you can set this attribute to SMTP and specify the SMTPHOST and SMTPPORT number attributes. Assuming the SMTPHOST allows your system to relay messages, Tripwire connects to the host via the SMTP port and delivers messages that are later delivered to the appropriate destination by the host.

SMTPHOST = hostname | IP Address (default: none)
This attribute can specify the hostname of a mail server. Use this only if you don't have mail capabilities in the same system where Tripwire runs. You can look up the mail server IP or hostname using the nslookup -q=mx yourdomain command.

SMTPPORT = port number (default: none)
This attribute specifies the TCP port number of the remote mail server. Typically, this should be set to 25. You need this only if you set MAILMETHOD to SMTP.

MAILPROGRAM = /path/to/mail/program (default: MAILPROGRAM = /usr/sbin/sendmail -oi -t)
This attribute specifies the mail daemon path and any arguments to run it. This attribute only makes sense if you use MAILMETHOD = SENDMAIL.

EMAILREPORTLEVEL = 0 - 4 (default: EMAILREPORTLEVEL = 3)
This attribute specifies the level of information reported via e-mail. Leave the default as is.

MAILNOVIOLATIONS = true | false (default: MAILNOVIOLATIONS = true)
If you don't want to receive e-mail when no violation is found, set this to false.

To test your e-mail settings, you can run Tripwire using the /usr/sbin/tripwire -m t -email your@emailaddr command. Remember to change your@emailaddr to your own e-mail address.

Setting up Integrity-Checkers
When you have many Linux systems to manage, it isn't always possible to go from one machine to another to perform security checks; in fact, it isn't recommended. When you manage a cluster of machines, it's a good idea to centralize security as much as possible. As mentioned before, Tripwire can be installed and set up as a cron job on each Linux node of a network, but that becomes a lot of work (especially on larger networks). Here I discuss a new integrity checker called Advanced Intrusion Detection Environment (AIDE), along with a Perl-based utility called Integrity Checking Utility (ICU) that can automate integrity checking on a Linux network.

Setting up AIDE
AIDE is really a Tripwire alternative. The author of AIDE liked Tripwire but wanted to create a free replacement with added functionality. Because Tripwire Open Source now exists, the "free" aspect of the AIDE goal no longer makes any difference, but the AIDE tool is easy to deploy in a network environment with the help of ICU.

You can ask the Tripwire company to sell you a shrink-wrapped, integrity-checking solution that works in a cluster of Linux hosts, so inquire about this with Tripwire.

Downloading and extracting the latest source distribution from ftp://ftp.linux.hr/pub/aide/ is the very first step in establishing AIDE. As of this writing, the latest version is 0.7 (aide-0.7.tar.gz). When following these instructions, make sure you replace the version number with the version you are currently installing. Here's how you can compile AIDE:

1. su to root.

2. Extract the source tar ball. For version 0.7, use the tar xvzf aide-0.7.tar.gz command in the /usr/src/redhat/SOURCES directory. You see a new subdirectory called aide-0.7.

3. Change your current directory to aide-0.7 and run the ./configure command.

4. Run make; make install to compile and install the software. The AIDE binary, aide, is installed in /usr/local/bin, and the man pages are installed in the /usr/local/man directory.
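Before wiring AIDE into ICU, you can try it stand-alone to see what it reports. The following is a minimal sketch, assuming the /usr/local/bin/aide binary built above; the database and selection syntax mirror the options you'll see ICU pass to AIDE later in this section, but consult the aide.conf man page for your version before relying on it:

# Create a tiny configuration that watches /etc
cat > /tmp/aide.conf <<'EOF'
database=file:/tmp/aide.db
database_out=file:/tmp/aide.db.new
# Check permissions, inode, user, group, and an MD5 checksum
/etc p+i+u+g+md5
EOF

# Initialize the integrity database, then move it into place
/usr/local/bin/aide -i -c /tmp/aide.conf
mv /tmp/aide.db.new /tmp/aide.db

# Run a check (the default action); changed files under /etc are reported
/usr/local/bin/aide -c /tmp/aide.conf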
Now you can set up ICU.

ICU requires that you have SSH1 support available on both the ICU server and the ICU client systems. You must install OpenSSH (which also requires OpenSSL) to make ICU work. See Chapter 12 and Chapter 11 for information on how to meet these prerequisites.

Setting up ICU
To use the Perl-based Integrity Checking Utility (ICU), you have to set up the ICU server and the client software.

ESTABLISHING AN ICU SERVER
Start the setup process by downloading the latest version of ICU from http://nitzer.dhs.org/ICU/ICU.html. I downloaded version 0.2 (ICU-0.2.tar.gz) for these instructions. As always, make sure you replace the version number mentioned here with your current version of ICU. Here's how you can compile ICU on the server that manages the ICU checks on other remote Linux systems:

1. su to root on the system where you want to run the ICU service. This is the server that launches ICU on remote Linux systems, performs remote integrity checking, and hosts the AIDE databases for each host.

2. Extract the source in /usr/src/redhat/SOURCES. A new subdirectory called ICU-0.2 is created.

3. Run the cp -r /usr/src/redhat/SOURCES/ICU-0.2 /usr/local/ICU command to copy the source to /usr/local/ICU, which makes setup quite easy because the author of the program uses this directory in the default configuration file.

4. Create a new user account called icu, using the adduser icu command. Change the ownership of the /usr/local/ICU directory to the new user by running the chown -R icu /usr/local/ICU command, and change the permission settings with the chmod -R 700 /usr/local/ICU command so that only the new user can access files in that directory.

5. Edit the ICU configuration file (ICU.conf) using your favorite text editor. Modify the icu_server_name setting to point to the ICU server that launches and runs ICU on remote Linux machines; this is the machine you are currently configuring. Change the admin_email setting to point to your e-mail address. If you don't use Sendmail as your mail server, change the sendmail setting to point to your Sendmail-equivalent mail daemon.

6. The default configuration file has settings that aren't compatible with the OpenSSH utilities. Change these settings as shown here:

OpenSSH-incompatible setting                    Change to
ssh = /usr/local/bin/ssh1                       ssh = /usr/local/bin/ssh
scp = /usr/local/bin/scp1                       scp = /usr/local/bin/scp
ssh_keygen = /usr/local/bin/ssh-keygen1         ssh_keygen = /usr/local/bin/ssh-keygen

7. Remove the -1 option from the following scp (secure copy) command settings in the ICU.conf file:

get_bin_cmd = %scp% -1 -P %port% -i %key_get_bin_priv% \
root@%hostname%:%host_basedir%/aide.bin %tmp_dir%/ 2>&1

get_conf_cmd = %scp% -1 -P %port% -i %key_get_conf_priv% \
root@%hostname%:%host_basedir%/aide.conf %tmp_dir%/ 2>&1

get_db_cmd = %scp% -1 -P %port% -i %key_get_db_priv% \
root@%hostname%:%host_basedir%/aide.db %tmp_dir%/ 2>&1

8. su to the icu user using the su icu command. Run the ./ICU.pl -G command to generate five pairs of keys in the keys directory.

9. Run ./ICU.pl -s to perform a sanity check, which ensures that everything is set up as needed. If you get error messages from this step, fix the problem according to the messages displayed.

10. Copy and rename the AIDE binary file from /usr/local/bin to /usr/local/ICU/binaries/aide.bin-i386-linux, using the following command:

cp /usr/local/bin/aide /usr/local/ICU/binaries/aide.bin-i386-linux

I recommend that you read the man page for the AIDE configuration (using the man aide.conf command) before you modify this file. For now, you can leave the configuration as is.

Now you can set up a remote Linux system as an ICU client.
ESTABLISHING AN ICU CLIENT
The ICU server runs AIDE on the remote host via the SSH1 protocol. Here's how you add a host that the ICU server manages:

1. Modify the /usr/local/ICU/ICU.hosts file to add a line using the following syntax:

hostname:email:OS:architecture:SSH1 port

An example is shown here:

k2.intevo.com:admin@id10t.intevo.com:linux:i386:22

2. Perform a sanity check for the host using the ./ICU.pl -s -r hostname command. For the preceding example, this command is ./ICU.pl -s -r k2.intevo.com. Remember to replace the hostname with the actual hostname of the Linux computer that you want to bring under ICU control.

3. Create a tar ball containing all the necessary files for the host, using the ./ICU.pl -n -r hostname command. This creates a .tar file called /usr/local/ICU/databases/hostname.icu-install.tar.

4. FTP the .tar file to the desired remote Linux system whose files you want to bring under integrity control. Log in to the remote Linux system and su to root.

5. Run the tar xvf hostname.icu-install.tar command to extract it in a temporary directory. This creates a new subdirectory within the extraction directory called hostname-icu-install.

6. From the new directory, run the ./icu-install.sh command to install a copy of aide.conf and aide.db in the /var/adm/.icu directory.

7. Append five public keys to ~/.ssh/authorized_keys:

- key_init_db.pub to initialize the database
- key_check.pub to run an AIDE check
- key_get_bin.pub to send aide.bin (the AIDE binary)
- key_get_conf.pub to send aide.conf (the configuration)
- key_get_db.pub to send aide.db (the integrity database)

The keys don't use any pass phrase because they are used via cron to run automatic checks.

Now you can start ICU checks from the ICU server.

INITIALIZING THE REMOTE HOST'S INTEGRITY DATABASE
Before you can perform the actual integrity checks on any of the remote systems, you have to create the initial integrity database:

1. Log in as the icu user and change directory to /usr/local/ICU.

2. Run the ./ICU.pl -i -r hostname command, where hostname should be replaced with the name of the remote Linux system (the name of your new ICU client).

3. Because this is the first time you are connecting to the remote system using the icu account, you are prompted as follows:

The authenticity of host 'k2.intevo.com' can't be established.
RSA key fingerprint is 1d:4e:b3:d1:c2:94:f5:44:e9:ae:02:65:68:4f:07:57.
Are you sure you want to continue connecting (yes/no)? yes

4. Enter yes to continue. You see a warning message, as shown here:

Warning: Permanently added 'k2.intevo.com,172.20.15.1' (RSA) to the list of known hosts.

5. Wait until the database is initialized. Ignore the traverse_tree() warning messages from AIDE.
Your screen displays output similar to the following example:

Verbose mode activated.
Initializing sanity check.
Sanity check passed.
Database initialization started
Checking if port 22 is open.
Executing init command: '/usr/local/bin/ssh -x -l root -p 22 -i
/usr/local/ICU/keys/key_init_db k2.intevo.com "/var/adm/.icu/aide.bin -i -c
/var/adm/.icu/aide.conf -V5 -A gzip_dbout=no -B gzip_dbout=no -B
database_out=file:/var/adm/.icu/aide.db.new -B
database=file:/var/adm/.icu/aide.db -B report_url=stdout; mv
/var/adm/.icu/aide.db.new /var/adm/.icu/aide.db" 2>&1'
This may take a while.
traverse_tree():No such file or directory: /root/.ssh2
traverse_tree():No such file or directory: /usr/heimdal
traverse_tree():No such file or directory: /usr/krb4
traverse_tree():No such file or directory: /usr/krb5
traverse_tree():No such file or directory: /usr/arla
mv: overwrite `/var/adm/.icu/aide.db'? y
aide.conf 100% |**********************************| 5787 00:00
aide.db 100% |**********************************| 3267 KB 00:03
All files successfully received.
Sending mail to kabir@k2.intevo.com with subject: [ICU - k2.intevo.com] Welcome to ICU!
Database initialization ended.

When initializing a new host, the first integrity database and configuration are saved as /usr/local/ICU/databases/hostname/archive/aide.db-first-TIMESTAMP.gz and /usr/local/ICU/databases/hostname/archive/aide.conf-first-TIMESTAMP. For example, /usr/local/ICU/databases/k2.intevo.com/archive/aide.db-first-Sat Dec 23 11:30:50 2000.gz and /usr/local/ICU/databases/k2.intevo.com/archive/aide.conf-first-Sat Dec 23 11:30:50 2000 are the initial database and configuration files created when the preceding steps were followed on a host called k2.intevo.com.

After the database of the remote host is initialized, you can run filesystem integrity checks on the host.

CHECKING THE REMOTE HOST'S INTEGRITY DATABASE
To perform a filesystem integrity check on a remote Linux system (in this case, your new ICU client), do the following:

1. Become the icu user on the ICU server.

2. Change directory to /usr/local/ICU.

3. Run the ./ICU.pl -v -c -r hostname command, where hostname is the name of the ICU client system. For example, the ./ICU.pl -v -c -r r2d2.intevo.com command performs filesystem integrity checks on the r2d2.intevo.com host from the ICU server. An example output of this command is shown in the following listing:

Verbose mode activated.
Initializing sanity check.
Sanity check passed.
Check started.
Checking if port 22 is open.
Getting files from r2d2.intevo.com: aide.bin aide.conf aide.db
All files successfully received.
Verifying MD5 fingerprint of the AIDE database...match.
Verifying MD5 fingerprint of the AIDE configuration...match.
Verifying MD5 fingerprint of the AIDE binary...match.
Executing AIDE check command on the remote host.
This may take a while.
Getting files from r2d2.intevo.com: aide.db
All files successfully received.
A change in the filesystem was found, updating
/usr/local/ICU/databases/r2d2.intevo.com/aide.db.current.
Saving copy as /usr/local/ICU/databases/r2d2.intevo.com/archive/aide.db-Sun Dec
24 09:56:26 2000.
Sending mail to kabir@r2d2.intevo.com with subject: [ICU - r2d2.intevo.com]
Warning: Filesystem has changed (Added=2,Removed=0,Changed=11)
Check ended.
You have new mail in /var/mail/kabir

If you don't use the -v option in the preceding command, ICU.pl is less verbose. The -v option is primarily useful when you run the command from an interactive shell. Also, you can add the -d option to view debugging information if something isn't working right.

If the filesystems on the remote machine have changed, the administrator is notified via e-mail. As shown in the preceding sample output, two new files have been added and eleven files were modified. The e-mail sent to the administrator (kabir@r2d2.intevo.com) looks like this:

From icu Sun Dec 24 09:56:30 2000
Date: Sun, 24 Dec 2000 09:56:30 -0500
From: The ICU server
To: kabir@r2d2.intevo.com
Subject: [ICU - r2d2.intevo.com] Warning: Filesystem has changed
(Added=2,Removed=0,Changed=11)
X-ICU-version: ICU v0.2 By Andreas Östling, andreaso@it.su.se.

*** Warning ***
The filesystem on r2d2.intevo.com has changed.
This could mean that authorized changes were made, but it could
also mean that the host has been compromised. The database has been
updated with these changes and will now be regarded as safe.
Consider updating your /var/adm/.icu/aide.conf if you get warnings
about these changes all the time but think that the changes are legal.
Below is the output from AIDE. Read it carefully.

AIDE found differences between database and filesystem!!
Start timestamp: 2000-12-24 09:50:34
Summary:
Total number of files=35619,added files=2,removed files=0,changed files=11

Added files:
added:/etc/rc.d/rc3.d/S65named
added:/var/lib/tripwire/report/r2d2.intevo.com-20001224-091546.twr

Changed files:
changed:/etc/rc.d/rc3.d
changed:/etc/mail/virtusertable.db
changed:/etc/mail/access.db
changed:/etc/mail/domaintable.db
changed:/etc/mail/mailertable.db
changed:/etc/aliases.db
changed:/etc/ioctl.save
changed:/etc/issue
changed:/etc/issue.net
changed:/boot
changed:/boot/System.map
[Information on change details is not shown here]

With the AIDE and ICU combo, you can detect filesystem changes quite easily. You can, in fact, automate this entire process by running the ICU checks on remote machines as a cron job on the ICU server. Here's how:

1. Become the icu user on the ICU server.

2. Run the crontab -e command to enter new cron entries for the icu user.

3. Enter the following line (remember to replace hostname with the appropriate remote host name):

15 1 * * * cd /usr/local/ICU; ./ICU.pl -c -r hostname

4. Save and exit the crontab file.

This runs filesystem integrity checks on the named host at 01:15 a.m. every morning.
After you create a cron job for a host, monitor the log file for that host (/usr/local/ICU/logs/hostname.log) on the ICU server the next morning to ensure that ICU.pl ran as intended.

If you have a lot of remote Linux systems to check, add a new entry in the /var/spool/cron/icu file (using the crontab -e command), as shown in the preceding example. However, don't schedule the jobs too close to each other. If you check five machines, don't start all the ICU.pl processes at the same time. Spread out the load on the ICU server by scheduling the checks at 15- to 30-minute intervals. This ensures the health of your ICU server.

When ICU.pl finds integrity mismatches, it reports them via e-mail to the administrator. It's very important that the administrator reads her e-mail; otherwise she won't know about a potential break-in.

Doing Routine Backups
Protecting your system involves more than keeping the bad guys out. Other disasters threaten your data, and good, periodic backups give you protection against them.

The most important security advice anyone can give you is: back up regularly. Create a maintainable backup schedule for your system. For example, you can perform incremental backups on weekdays and schedule a full backup over the weekend. I prefer removable media-based backup equipment such as 8mm tape drives or DAT drives. A removable backup medium enables you to store the information in a secure offsite location. Periodically check that your backup mechanism is functioning as expected, and make sure you can restore files from randomly selected backup media. You may recycle backup media, but know the usage limits that the media manufacturer claims.

Another type of "backup" you should do is backtracking your work as a system administrator. Document everything you do, especially work that you do as a superuser. This documentation enables you to trace problems that often arise while you are solving another.

I keep a large history setting (a shell feature that remembers the last N commands), and I often print the history to a file or on paper. The script command also can record everything you do while using privileged accounts.

Using the dump and restore utilities
The dump and restore utilities can back up files onto tape drives, backup disks, or other removable media. The dump command can perform incremental backups, which makes the backup process much more manageable than simply copying all files on a routine basis. The restore command can restore the files backed up by the dump command. To learn more about these utilities, visit http://dump.sourceforge.net.

You need the ext2 filesystem utilities to compile the dump/restore suite. The ext2 filesystem utilities (e2fsprogs) contain all of the standard utilities for creating, fixing, configuring, and debugging ext2 filesystems. Visit http://sourceforge.net/projects/e2fsprogs for information on these utilities.
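To make the weekday-incremental, weekend-full schedule suggested above concrete, here is a hedged sketch using the dump utility just described; the filesystem (/dev/hda1) and tape device (/dev/st0) are placeholder names for your own hardware, and the -u option records each run in /etc/dumpdates:

# Root's crontab entries (edit with crontab -e as root)
# Full (level 0) dump every Sunday at 2:00 a.m.
0 2 * * 0 /sbin/dump -0u -f /dev/st0 /dev/hda1
# Incremental (level 1) dumps Monday through Friday at 2:00 a.m.
0 2 * * 1-5 /sbin/dump -1u -f /dev/st0 /dev/hda1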
Creating a Permission Policy
Most user problems on Unix and Unix-like systems are caused by file permissions. If something that was working yesterday and the day before suddenly stops working today, first suspect a permission problem. One of the most common causes of permission problems is the root account. Inexperienced system administrators often access files and programs via the superuser (root) account. The problem is that when the root user runs a program, the files created are often set with root ownership; in effect, that's an immediate and unintended gap in your security.

Setting configuration file permissions for users
Each user's home directory houses some semi-hidden files that start with a period (or dot). These files often execute commands at user login. For example, shells (csh, tcsh, bash, and so on) read their settings from a file such as .cshrc or .bashrc. If a user doesn't maintain file permissions properly, another not-so-friendly user can cause problems for the naive user. For example, if one user's .cshrc file is writable by a second user, the latter can play a silly trick such as putting a logout command at the beginning of the .cshrc file so that the first user is logged out as soon as she logs in. Of course, the silly trick could develop into other tricks that violate a user's file privacy in the system. Therefore, you may want to watch for such situations on a multiuser system. If you have only a few users, you can quickly perform simple checks like the following:

find /home -type f -name ".*rc" -exec ls -l {} \;

This command displays permissions for all the dot files ending in "rc" in the /home directory hierarchy. If your users' home directories are kept in /home, this shows you which users may have a permission problem.

Setting default file permissions for users
As a system administrator, you can define the default permission settings for all the user files that get created on your system. The umask command sets the default permissions for new files.
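To make the umask behavior concrete, here is a small sketch; 022 and 077 are common choices rather than mandates, and you would normally set the default in a system-wide login script such as /etc/profile:

# With umask 022, new files are created mode 644, new directories 755
umask 022
touch newfile
mkdir newdir
ls -ld newfile newdir   # shows -rw-r--r-- and drwxr-xr-x

# A stricter umask keeps new files private to their owner
umask 077               # new files 600, new directories 700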
Setting executable file permissions
Only the owner should have write permission for program files run by regular users. For example, the program files in /usr/bin should have permission settings such that only root can read, write, and execute; the settings for everyone else should include only read and execute. When others besides the owner can write to a program file, serious security holes are possible. For example, if someone other than the root user can write to a program such as /usr/bin/zip, a malicious user can replace the real zip program with a Trojan horse program that compromises system security, damaging files and directories wherever it goes. So, always check the program files on your systems for proper permissions, and run COPS frequently to detect permission-related problems.

Summary
Improper file and directory permissions are often the cause of many user support incidents and the source of many security problems. An understanding of file and directory permissions is critical to the administration of a Linux system. By setting default permissions for files, dealing with world-accessible and set-UID and set-GID files, taking advantage of the advanced ext2 filesystem security features, and using file integrity checkers such as Tripwire, AIDE, and ICU, you can enhance your system security.

Chapter 10
PAM

IN THIS CHAPTER
- What is PAM?
- How does PAM work?
- Enhancing security with PAM modules

PLUGGABLE AUTHENTICATION MODULES (PAM) were originally developed for the Solaris operating system by Sun Microsystems. The Linux-PAM project made PAM available for the Linux platform. PAM is a suite of shared libraries that grants privileges to PAM-aware applications.

What is PAM?
You may wonder how programs such as chsh, chfn, ftp, imap, linuxconf, rlogin, rexec, rsh, su, login, and passwd suddenly understand the shadow password scheme (see Chapter 12) and use the /etc/shadow password file instead of the /etc/passwd file for authentication. They can do so because Red Hat distributes these programs with shadow password capabilities. Actually, Red Hat ships these programs with a much grander authentication scheme: PAM. These PAM-aware programs can enhance your system security by using both the shadow password scheme and virtually any other authentication scheme.

Traditionally, authentication schemes are built into programs that grant privileges to users. Programs such as login or passwd have the necessary code for authentication. Over time, this approach proved virtually unscalable, because incorporating a new authentication scheme required updating and recompiling the privilege-granting programs. To relieve the privilege-granting software developer from writing secure authentication code, PAM was developed. Figure 10-1 shows how PAM works with privilege-granting applications.

[Figure 10-1: How PAM-aware applications work. The diagram shows a user interacting with a PAM-aware privilege-granting application, which calls the PAM library; the library reads the application's configuration file from the /etc/pam.d/ directory, loads the required authentication module(s), and uses the application's conversation functions to request information from the user.]

When a privilege-granting application such as /bin/login is made into a PAM-aware application, it typically works in the manner shown in Figure 10-1 and described in the following list:

1. A user invokes such an application to access the service it offers.
2. The PAM-aware application calls the underlying PAM library to perform the authentication.
3. The PAM library looks up an application-specific configuration file in the /etc/pam.d/ directory. This file tells PAM what type of authentication is required for this application. (In case of a missing configuration file, the configuration in the /etc/pam.d/other file is used.)
4. The PAM library loads the required authentication module(s).
5. These modules let PAM communicate with the conversation functions available in the application.
6. The conversation functions request information from the user. For example, they ask the user for a password or a retina scan.
7. The user responds to the request by providing the requested information.
8. The PAM authentication modules supply the application with an authentication status message via the PAM library.
9. If the authentication process is successful, the application grants the requested privileges to the user; otherwise, it informs the user that the process failed.
9. Depending on the outcome of the authentication process, the application does one of the following:

I Grants the requested privileges to the user
I Informs the user that the process failed

Think of PAM as a facility that takes the burden of authentication away from the applications and stacks multiple authentication schemes for one application. For example, the PAM configuration file for the rlogin application is shown in Listing 10-1.

Listing 10-1: The /etc/pam.d/rlogin file

#%PAM-1.0
auth required /lib/security/pam_securetty.so
auth sufficient /lib/security/pam_rhosts_auth.so
auth required /lib/security/pam_stack.so service=system-auth
auth required /lib/security/pam_nologin.so
account required /lib/security/pam_stack.so service=system-auth
password required /lib/security/pam_stack.so service=system-auth
session required /lib/security/pam_stack.so service=system-auth

In this file, multiple pluggable authentication modules from /lib/security authenticate the user.

Working with a PAM configuration file

Listing 10-1 shows what a PAM configuration file for an application looks like. Blank lines and lines starting with a leading # character are ignored. A configuration line has the following fields:

module-type control-flag module-path module-args

Currently, four module types exist, which are described in Table 10-1.

TABLE 10-1: PAM MODULE TYPES

auth: Does the actual authentication. Typically, an auth module requires a password or other proof of identity from a user.

account: Handles all the accounting aspects of an authentication request. Typically, an account module checks whether the user access meets all the access guidelines. For example, it can check whether the user is accessing the service from a secure host and during a specific time.

password: Sets passwords.

session: Handles session management tasks, such as refreshing session tokens.

The control flag defines how the PAM library handles a module's response. Four control flags, described in Table 10-2, are currently allowed.

TABLE 10-2: PAM MODULE CONTROL FLAGS

required: This control flag tells the PAM library to require the success of the module specified in the same line. When a module returns a response indicating a failure, the authentication definitely fails, but PAM continues with other modules (if any). This prevents users from detecting which part of the authentication process failed, because knowing that information may aid a potential attacker.

requisite: This control flag tells the PAM library to abort the authentication process as soon as the PAM library receives a failure response.

sufficient: This control flag tells the PAM library to consider the authentication process complete if it receives a success response. Proceeding with other modules in the configuration file is unnecessary.

optional: This control flag is hardly used. It removes the emphasis on the success or failure response of the module.
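To see how the fields fit together, here is a minimal configuration for a hypothetical service, /etc/pam.d/myservice (an illustration only; the modules mirror Listing 10-1):

#%PAM-1.0
# auth stack: both modules carry the required flag, so both must succeed
auth required /lib/security/pam_securetty.so
auth required /lib/security/pam_stack.so service=system-auth
# account and session stacks delegate to the central configuration
account required /lib/security/pam_stack.so service=system-auth
session required /lib/security/pam_stack.so service=system-auth

Reading any line from left to right gives the module type (auth), the control flag (required), the module path, and optional module arguments (service=system-auth).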
The control-flag field also permits conditional flags. The conditional flags take the following form:

[key1=value1 key2=value2 ...]

The key in this key-value list can be one of the following:

open_err, symbol_err, service_err, system_err, buf_err, perm_denied, auth_err, cred_insufficient, authinfo_unavail, user_unknown, maxtries, new_authtok_reqd, acct_expired, session_err, cred_unavail, cred_expired, cred_err, no_module_data, conv_err, authtok_err, authtok_recover_err, authtok_lock_busy, authtok_disable_aging, try_again, ignore, abort, authtok_expired, module_unknown, bad_item, and default.

The value in this key-value list can be one of the following:

ignore, ok, done, bad, die, reset, or a positive integer

If a positive integer is used, PAM skips that many records of the same type. The module path is the path of a pluggable authentication module. Red Hat Linux stores all the PAM modules in the /lib/security directory. You can supply each module with optional arguments, as well.

In Listing 10-1, the PAM library calls the pam_securetty.so module, which must return a response indicating success for successful authentication. If the module's response indicates failure, PAM continues processing the other modules so that the user (who could be a potential attacker) doesn't know where the failure occurred. If the next module (pam_rhosts_auth.so) returns a success response, the authentication process is complete, because the control flag is set to sufficient. However, if the previous module (pam_securetty.so) doesn't fail but this one fails, the authentication process continues and the failure doesn't affect the final result. In the same fashion, the PAM library processes the rest of the modules.

The order of execution exactly follows the way the modules appear in the configuration. However, each type of module (auth, account, password, and session) is processed in stacks. In other words, in Listing 10-1, all the auth modules are stacked and processed in the order of appearance in the configuration file. The rest of the modules are processed in a similar fashion.
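The bracketed conditional syntax shown earlier is intended for fine-grained policies, but it also clarifies what the standard flags mean. According to the Linux-PAM documentation, a required entry can be written out explicitly; the following two lines are equivalent (shown here with pam_unix.so purely as an illustration):

auth required /lib/security/pam_unix.so
auth [success=ok new_authtok_reqd=ok ignore=ignore default=bad] /lib/security/pam_unix.so

Any return code not named in the list falls through to default, which here marks the stack as failed without aborting it.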
Establishing a PAM-aware Application

Every program that requires user authentication under Red Hat Linux can use PAM. In fact, virtually all such programs include their own PAM configuration file in the /etc/pam.d directory. Because each application has its own configuration file, custom authentication requirements are easily established for them. However, too many custom authentication requirements are probably not a good thing for management. This configuration-management issue has been addressed with the recent introduction of a PAM module called pam_stack.so. This module can jump to another PAM configuration while in the middle of one. This can be better explained with an example. Listing 10-2 shows /etc/pam.d/login, the PAM configuration file for the login application.

Listing 10-2: The /etc/pam.d/login file

#%PAM-1.0
auth required /lib/security/pam_securetty.so
auth required /lib/security/pam_stack.so service=system-auth
auth required /lib/security/pam_nologin.so
account required /lib/security/pam_stack.so service=system-auth
password required /lib/security/pam_stack.so service=system-auth
session required /lib/security/pam_stack.so service=system-auth
session optional /lib/security/pam_console.so

When the PAM layer is invoked by the login application, it looks up this file and organizes four different stacks:

N Auth stack
N Account stack
N Password stack
N Session stack

In this example, the auth stack consists of the pam_securetty, pam_stack, and pam_nologin modules. PAM applies each of the modules in a stack in the order they appear in the configuration file. In this case, the pam_securetty module must (because of the "required" control flag) respond with a success for the authentication to succeed. After the pam_securetty module is satisfied, the auth processing moves to the pam_stack module. This module makes PAM read the configuration file specified in the service=configuration argument. Here, the system-auth configuration is provided as the argument; therefore, it's loaded. The default version of this configuration file is shown in Listing 10-3.

Listing 10-3: The /etc/pam.d/system-auth file

#%PAM-1.0
# This file is auto-generated.
# User changes are destroyed the next time authconfig is run.
auth sufficient /lib/security/pam_unix.so likeauth nullok md5 shadow
auth required /lib/security/pam_deny.so
account sufficient /lib/security/pam_unix.so
account required /lib/security/pam_deny.so
password required /lib/security/pam_cracklib.so retry=3
password sufficient /lib/security/pam_unix.so nullok use_authtok md5 shadow
password required /lib/security/pam_deny.so
session required /lib/security/pam_limits.so
session required /lib/security/pam_unix.so

As shown, this configuration has its own set of auth, account, password, and session stacks. Because the pam_stack module can jump to a central configuration file like this one, it enables a centralized authentication configuration, which leads to better management of the entire process. You can simply change the system-auth file and affect all the services that use the pam_stack module to jump to it. For example, you can enforce time-based access control using a module called pam_time (the "Controlling access by time" section explains this module) for every type of user access that understands PAM. Simply add the necessary pam_time configuration line in the appropriate stack in the system-auth configuration file.
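As a preview, such a change might look like the following (a sketch only; pam_time and its /etc/security/time.conf file are explained in the section mentioned above, and the exact policy line here is hypothetical):

# Added to the account stack in /etc/pam.d/system-auth, before the
# sufficient pam_unix.so line so that it is always consulted:
account required /lib/security/pam_time.so

# In /etc/security/time.conf (field order: services;ttys;users;times),
# allow non-root logins on weekdays between 08:00 and 18:00 only:
login;tty*;!root;Wk0800-1800

Because every service that jumps to system-auth shares this stack, the single edit enforces the time policy across all of them.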
Typically, when you are establishing a new PAM-aware application on Red Hat Linux, it should include its own PAM configuration file. If it doesn't include one, or if it includes one that doesn't use the centralized configuration just discussed, you can try the following:

1. If you have a PAM configuration file for this application, rename it to /etc/pam.d/myapp.old, where myapp is the name of your current PAM configuration file.

2. Create a new file called /etc/pam.d/myapp so that it has the following lines:

auth required /lib/security/pam_stack.so service=system-auth
auth required /lib/security/pam_nologin.so
account required /lib/security/pam_stack.so service=system-auth
password required /lib/security/pam_stack.so service=system-auth
session required /lib/security/pam_stack.so service=system-auth

3. The preceding PAM configuration delegates actual configuration to the /etc/pam.d/system-auth file.

4. Access the application as usual. If you have no problem accessing it, you just created a centrally managed PAM configuration file for the myapp application.

5. If you run into a problem, run the tail -f /var/log/messages command on a shell or xterm and try myapp as usual. Watch the log messages that PAM generates.

PAM-generated log messages usually have PAM_modulename strings in them, where modulename is the name of the PAM module that is attempting a task. The log information should show why the application isn't working as usual. If you still can't fix it and have an old configuration, simply rename the old configuration file back to myapp so that you can use the application. In such a case, your application doesn't work with the system-auth configuration and you can't do much to change that.

Most PAM-aware applications are shipped with their own PAM configuration files. But even if you find one that is not, it's still using PAM. By default, when PAM can't find a specific configuration file for an application, it uses the default /etc/pam.d/other configuration. This configuration file is shown in Listing 10-4.

Listing 10-4: The /etc/pam.d/other file

#%PAM-1.0
auth required /lib/security/pam_deny.so
account required /lib/security/pam_deny.so
password required /lib/security/pam_deny.so
session required /lib/security/pam_deny.so

This configuration simply denies access using the pam_deny module, which always returns a failure status. I recommend that you keep this file the way it is so that you have a "deny everyone access unless access is permitted by configuration" type of security policy.

Using Various PAM Modules to Enhance Security

Red Hat Linux ships with many PAM modules. Two examples:

N pam_access.so

This module uses the /etc/security/access.conf configuration file. Each line of this configuration file has the following format:

<+ or -> : <user(s)> : <origin(s)>

N pam_console.so

This module manages device permissions and privileged commands for users who log in at the console. Device permissions are set according to entries in the /etc/security/console.perms file, such as these:

<console> 0660 <floppy> 0660 root.floppy
<console> 0600 <cdrom> 0600 root.disk

Also, the <console>, <floppy>, and <cdrom> aliases (also known as classes) must point to the desired devices. The default values for these aliases are also found in the same file. They are shown below:

<console>=tty[0-9][0-9]* :[0-9]\.[0-9] :[0-9]
<floppy>=/dev/fd[0-1]*
<cdrom>=/dev/cdrom* /dev/cdwriter*

As shown, the values contain wildcards and simple regular expressions. The default values should cover most typical situations.

The pam_console module also controls which PAM-aware, privileged commands, such as /sbin/shutdown, /sbin/halt, and /sbin/reboot, an ordinary user can run. Let's take a look at what happens when an ordinary user runs the shutdown command.

N The user enters the shutdown -r now command at the console prompt to reboot the system.

N The /usr/bin/shutdown script, which is what the user runs, runs a program called consolehelper. This program in turn uses a program called userhelper that runs the /sbin/reboot program. In this process, the PAM configuration for the reboot program (stored in /etc/pam.d/reboot) is applied.

N In the /etc/pam.d/reboot file you will see that the pam_console module is used as an auth module, which then checks for the existence of a file called /etc/security/console.apps/reboot. If this file exists and the user meets the authentication and authorization requirements of the /etc/pam.d/reboot configuration, the reboot command is executed.
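For reference, a stock /etc/pam.d/reboot file looks roughly like the following (a representative sketch, not necessarily identical to your version; check the actual file on your system):

#%PAM-1.0
auth sufficient /lib/security/pam_rootok.so
auth required /lib/security/pam_console.so
account required /lib/security/pam_permit.so

The pam_rootok line lets root reboot unconditionally; any other user must satisfy pam_console, which performs the console-ownership and console.apps checks described in the preceding list.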
\nN In the /etc/pam.d/reboot file you will see that the pam_console module is\nused as an auth module, which then checks for the existence of a file\ncalled /etc/security/console.apps/reboot. If this file exists and the\nuser meets the authentication and authorization requirements of the\n/etc/pam.d/reboot configuration, the reboot command is executed.\n260\nPart III: System Security\n" }, { "page_number": 284, "text": "If the user runs the shutdown command using the -h option, the\n/usr/bin/shutdown script uses the /sbin/halt program in place of\n/sbin/reboot and uses halt-specific PAM configuration files.\nConsider these security scenarios:\nN Prevent an ordinary console user from rebooting or halting by removing\nthe /etc/security/console.apps/reboot or /etc/security/console.\napps/halt file accordingly. However, console users are typically trusted\nunless the console is located in an unsecured place.\nN If you house your system in an ISP co-location facility or other unsecured\nplaces, consider restricting access to the shutdown, reboot, and halt\ncommands by modifying the /etc/pam.d/reboot, /etc/pam.d/halt, and\n/etc/pam.d/shutdown files to the following line:\nauth required /lib/security/pam_stack.so service=system-auth\nN This makes sure that even if someone can access a user account or opened\nshell (perhaps you didn’t log out when you walked away from the system),\nhe must know the user’s password to shut down, reboot, or halt the\nmachine. In my recent security analysis experience, I found instances where\nmany organizations housed their Web servers in ISP co-location facilities,\nwhich are very secured from outside. However, many of the servers had\nphysical consoles attached to them and often had opened shell running\nsimple stats programs such as top and vmstat. Anyone could stop these\nprograms and simply pull a prank by typing shutdown, reboot, or, even\nworse—halt! It is essential in these situations to require the password, using\nthe configuration line discussed in the preceding text.\nIt’s a big step towards security management that Red Hat Linux ships with PAM\nand PAM-aware applications. To follow the PAM happenings, visit the primary\nPAM distribution site at www.us.kernel.org/pub/linux/libs/pam/ frequently.\nSummary\nPAM is a highly configurable authentication technology that introduces a layer of\nmiddleware between the application and the actual authentication mechanism. In\naddition to this, PAM can handle account and session data, which is something that\nnormal authentication mechanisms don’t do very well. Using various PAM modules,\nyou can customize authentication processes for users, restrict user access to console\nand applications based on such properties as username, time, and terminal location.\nChapter 10: PAM\n261\n" }, { "page_number": 285, "text": "" }, { "page_number": 286, "text": "Chapter 11\nOpenSSL\nIN THIS CHAPTER\nN Understanding how SSL works\nN Installing and configuring OpenSSL\nN Understanding server certificates\nN Getting a server certificate from a commercial CA\nN Creating a private certificate authority\nONLY A FEW YEARS AGO, the Internet was still what it was initially intended to be —\na worldwide network for scientists and engineers. By virtue of the Web, however,\nthe Internet is now a network for everyone. These days, it seems as though every-\none and everything is on the Internet. 
It’s also the “new economy” frontier; thou-\nsands of businesses, large and small, for better or worse, have set up e-commerce\nsites for customers around the world. Customers are cautious, however, because\nthey know that not all parts of the Internet are secured.\nTo eliminate this sense of insecurity in the new frontier, the Netscape\nCorporation invented a security protocol that ensures secured transactions between\nthe customer’s Web browser and the Web server. Netscape named this protocol\nSecured Sockets Layer (SSL). Quickly SSL found its place in many other Internet\napplications, such as e-mail and remote access. Because SSL is now part of the\nfoundation of the modern computer security infrastructure, it’s important to know\nhow to incorporate SSL in your Linux system. This chapter shows you how.\nUnderstanding How SSL Works\nThe foundation of SSL is encryption. When data travels from one point of the\nInternet to another, it goes through a number of computers such as routers, gateways,\nand other network devices. \nAs you can see, the data must travel through many nodes. Although data packets\ntravel at a high speed (usually reaching their destination in milliseconds), intercep-\ntion is still a possibility at one of these nodes — which is why we need a secured\nmechanism for exchanging sensitive information. This security is achieved through\nencryption.\n263\n" }, { "page_number": 287, "text": "Technically speaking, encryption is the mathematical encoding scheme that\nensures that only the intended recipient can access the data; it hides the data from\neavesdroppers by sending it in a deliberately garbled form. Encryption schemes\noften restrict access to resources. For example, if you log on to a Unix or Windows\nNT system, the passwords or keys you use are typically stored in the server com-\nputer in an encrypted format. On most Unix systems, a user’s password is encrypted\nand matched with the encrypted password stored in an /etc/passwd file. If this\ncomparison is successful, the user is given access to the requested resource. Two\nkinds of encryption schemes are available.\nSymmetric encryption\nSymmetric encryption is like the physical keys and locks you probably use every\nday. Just as you would lock and unlock your car with the same key, symmetric\nencryption uses one key to lock and unlock an encrypted message. \nBecause this scheme uses one key, all involved parties must know this key for\nthe scheme to work. \nAsymmetric encryption\nAsymmetric encryption works differently from symmetric encryption. This scheme\nhas two keys:\nN A public key\nN A private key\nThe extra key is the public key (so this scheme is also known as public key\nencryption). \nWhen data is encrypted with the public key, it can only be decrypted using the\nprivate key, and vice versa. Unlike symmetric encryption, this scheme doesn’t\nrequire that the sender know the receiver’s private key to unlock the data. The pub-\nlic key is widely distributed, so anyone who needs a secure data communication\ncan use it. The private key is never distributed; it’s always kept secret.\nSSL as a protocol for data encryption\nUsing both symmetric and asymmetric encryption schemes, Netscape developed the\nopen, nonproprietary protocol called Secured Socket Layer (SSL) for data encryption,\nserver authentication, data integrity, and client authentication for TCP/IP-based\ncommunication. \nThe SSL protocol runs above TCP/IP and below higher-level, application-layer\nprotocols such as HTTP, FTP, and IMAP. 
SSL as a protocol for data encryption

Using both symmetric and asymmetric encryption schemes, Netscape developed the open, nonproprietary protocol called Secured Socket Layer (SSL) for data encryption, server authentication, data integrity, and client authentication for TCP/IP-based communication.

The SSL protocol runs above TCP/IP and below higher-level, application-layer protocols such as HTTP, FTP, and IMAP. It uses TCP/IP on behalf of the application-layer protocols. Doing so accomplishes the following:

N Allows an SSL-enabled server to authenticate itself to an SSL-enabled client
N Allows the client to authenticate itself to the server
N Allows both machines to establish an encrypted connection

HOW DOES SSL WORK?

In an SSL-based transaction, the server sends a certificate (defined later in this chapter) to the client system.

1. A certificate is typically issued by a well-known digital-certificate issuing company known as a Certificate Authority (CA). The Certificate Authority encrypts the certificate using its private key. The client decrypts the certificate using the public key provided by the Certificate Authority. Because the certificate contains the CA server's public key, the client can now decrypt any encrypted data sent by the server.

2. The server sends a piece of data identifying itself as the entity mentioned in the certificate. It then creates a digest message of the same data it sent to identify itself earlier. The digest is then encrypted using the server's private key. The client now has the following information:

I The certificate from a known CA stating what the server's public key should be
I An identity message from the server
I An encrypted digest version of the identity message

3. Using the server's public key, the client can decrypt the digest message. The client then creates a digest of the identity message and compares it with the digest sent by the server. A match between the digest and the original message confirms the identity of the server. Why? The server initially sent a certificate signed by a known CA, so the client is absolutely sure to whom this public key belongs. However, the client needed proof that the server that sent the certificate is the entity that it claims to be, so the server sent a simple identification message along with a public-key-encrypted digest of the same message. If the sending server hadn't had the appropriate private key, it would have been unable to produce the same digest that the client computed from the identification message.

If this seems complex, it is — intentionally so — and it doesn't end here. The client can now send a symmetric encryption key to the server, using the server's public key to encrypt the new message. The server can then use this new key to encrypt data and transmit it to the client. Why do that all over again? Largely because symmetric encryption is much faster than asymmetric encryption.

Asymmetric encryption (using private and public keys) safely transmits a randomly generated symmetric key from the client to the server; this key is later used for a fast, secured communication channel.

If an impostor sits between the client and the server system, and is capable of intercepting the transmitted data, what damage can it do? It doesn't know the secret symmetric key that the client and the server use, so it can't determine the content of the data; at most, it can introduce garbage in the data by injecting its own data into the data packets.

To avoid this, the SSL protocol allows for a message-authentication code (MAC). A MAC is simply a piece of data computed by using the symmetric key and the transmitted data. Because the impostor doesn't know the symmetric key, it can't compute the correct value for the MAC. For example, a well-known cryptographic digest algorithm called MD5 (developed by RSA Data Security, Inc.) can generate 128-bit MAC values for each transmitted data packet. The chance of an attacker successfully guessing the correct MAC value, given the computing power and time such guessing would require, is almost nonexistent. SSL makes secure commerce possible on the Internet.
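You can get a feel for how a keyed digest works using the openssl tool (a toy illustration of the MAC idea, not the exact construction SSL uses; the -hmac option is available in recent OpenSSL builds):

# Compute an MD5-based keyed digest over a message file. Without the
# shared secret key, an eavesdropper cannot reproduce this value.
echo "transfer 100 to account 42" > message.txt
openssl dgst -md5 -hmac "sharedsecret" message.txt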
OBTAINING SSL

For many years, SSL was available mainly in commercial Linux software such as Stronghold, an Apache-based, commercial Web server. Because of patent and US export restrictions, no open-source versions of SSL for Linux were available for a long time. Recently, the OpenSSL Project has changed all that.

Understanding OpenSSL

The OpenSSL Project is an open-source community collaboration to develop commercial-grade SSL, Transport Layer Security (TLS), and full-strength, general-purpose cryptography library packages. The current implementation of SSL is also called OpenSSL. OpenSSL is based on the SSLeay library, which was developed by Eric A. Young and Tim J. Hudson. The OpenSSL software package license allows both commercial and noncommercial use of the software.

Uses of OpenSSL

SSL can be used in many applications to enhance and ensure transactional data security: OpenSSL simply makes that capability more widely available. This section examines using OpenSSL for the following security tasks:

N Securing transactions on the Web using Apache-SSL (see Chapter 15 for details)
N Securing user access for remote access to your Linux computer
N Securing Virtual Private Network (VPN) connections via PPP, using OpenSSL-based tunneling software (see Chapter 20 for details)
N Securing e-mail services (IMAP, POP3) via tunneling software that uses OpenSSL (see Chapter 20 for details)

Getting OpenSSL

OpenSSL binaries are currently shipped with the Red Hat Linux distribution in RPM packages. So you can either use the RPM version supplied by Red Hat or you can simply download the source code from the official OpenSSL Web site at www.openssl.org/source.

As mentioned throughout the book, I prefer that security software be installed from a source distribution downloaded from an authentic Web or FTP site. So, in the following section I discuss the details of compiling and installing OpenSSL from the official source distribution downloaded from the OpenSSL Web site.

If you must install OpenSSL from the RPM, use a trustworthy, binary RPM distribution, such as the one found on the official Red Hat CD-ROM. To install OpenSSL binaries from an RPM package, simply run the rpm -ivh openssl-packagename.rpm command.

Installing and Configuring OpenSSL

The OpenSSL Web site offers the OpenSSL source in a gzip compressed tar file. The latest version as of this writing is openssl-0.9.6.tar.gz. Before you can start with the compilation process, you must ensure that your system meets the prerequisites.

OpenSSL prerequisites

The OpenSSL source distribution requires that you have Perl 5 and an ANSI C compiler. I assume that you installed both Perl 5 and gcc (C compiler) when you set up your Linux system.
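Before you start, a quick sanity check confirms that both prerequisites are present (simple commands; the package names are as found on stock Red Hat):

# Confirm that a C compiler and Perl 5 are available
gcc --version
perl -v

# Alternatively, query the RPM database
rpm -q gcc perl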
Compiling and installing OpenSSL

Compiling OpenSSL is a simple task. Follow the steps given below.

1. Log in to your Linux system as root from the console.

2. Copy the OpenSSL source tar ball into the /usr/src/redhat/SOURCES directory.

3. Extract the source distribution by running the tar xvzf openssl-version.tar.gz command. For example, to extract the openssl-0.9.6.tar.gz file, I can run the tar xvzf openssl-0.9.6.tar.gz command. The tar command creates a directory called openssl-version, which in my example is openssl-0.9.6.

You can delete the tar ball at this point if disk space is an issue for you. First, however, make sure you have successfully compiled and installed OpenSSL.

4. Make the newly created directory your current directory.

At this point, feel free to read the README or INSTALL files included in the distribution. The next step is to configure the installation options; certain settings are needed before you can compile the software.

To install OpenSSL in the default /usr/local/ssl directory, run the following command:

./config

However, if you must install it in a different directory, append the --prefix and --openssldir flags to the preceding command. For example, to install OpenSSL in the /opt/security/ssl directory, the preceding command line looks like this:

./config --prefix=/opt/security

You can use many other options with the config or Configure script to prepare the source distribution for compilation. These options are listed and explained in Table 11-1.

TABLE 11-1: CONFIGURATION OPTIONS FOR COMPILING OPENSSL

--prefix=DIR
This option installs OpenSSL in the DIR directory. It creates subdirectories such as DIR/lib, DIR/bin, and DIR/include/openssl. The configuration files are stored in DIR/ssl unless you use the --openssldir option to specify this directory.

--openssldir=DIR
This option specifies the configuration-files directory. If the --prefix option isn't used, all files are stored in this directory.

rsaref
This option forces building of the RSAREF toolkit. To use the RSAREF toolkit, make sure you have the RSAREF library (librsaref.a) in your default library search path.

no-threads
This option disables support for multithreaded applications.

threads
This option enables support for multithreaded applications.

no-shared
This option disables the creation of a shared library.

shared
This option enables the creation of a shared library.

no-asm
This option disables the use of assembly code in the source tree. Use this option only if you are experiencing problems in compiling OpenSSL.

386
Use this only if you are compiling OpenSSL on an Intel 386 machine. (Not recommended for newer Intel machines.)

no-<cipher>
OpenSSL uses many cryptographic ciphers such as bf, cast, des, dh, dsa, hmac, md2, md5, mdc2, rc2, rc4, rc5, rsa, and sha. If you want to exclude a particular cipher from the compiled binaries, use this option.

-Dxxx, -lxxx, -Lxxx, -fxxx, -Kxxx
These options enable you to specify various system-dependent options. For example, Dynamic Shared Objects (DSO) flags, such as -fpic, -fPIC, and -KPIC, can be specified on the command line. This way one can compile OpenSSL libraries with Position Independent Code (PIC), which is needed for linking them into DSOs. Most likely you won't need any of these options to compile OpenSSL. However, if you have problems compiling it, you can try some of these options with appropriate values. For example, if you can't compile because OpenSSL complains about missing library files, try specifying the system library path using the -L option.
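If you accept the default configuration, the entire build-and-install session boils down to this short sequence, which the following paragraphs describe in more detail (run it from the openssl-version directory as root):

./config
make
make test
make install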
After you have run the config script without any errors, run the make utility. If the make command is successful, run make test to test the newly built binaries. Finally, run make install to install OpenSSL on your system.

If you have problems compiling OpenSSL, one source of the difficulty may be a library-file mismatch — not unusual if the latest version of software like OpenSSL is being installed on an old Linux system. Or the problem may be caused by an option, specified on the command line, that's missing an essential component. For example, if you don't have the RSAREF library (not included in Red Hat Linux) installed on your system and you are trying to use the rsaref option, the compilation fails when it tries to build the binaries. Here some traditional programming wisdom comes in handy: Make sure you know exactly what you're doing when you use specific options. If neither of these approaches resolves the problem, try searching the OpenSSL FAQ page at www.openssl.org/support/faq.html. Or simply install the binary RPM package for OpenSSL.

Understanding Server Certificates

Before you can use OpenSSL with many SSL-capable applications (such as OpenSSH and Apache-SSL), you must create appropriate server certificates.

What is a certificate?

In an SSL transaction, a certificate is a body of data placed in a message to serve as proof of the sender's authenticity. It consists of encrypted information that associates a public key with the true identity of an individual, server, or other entity, known as the subject. It also includes the identification and electronic signature of the issuer of the certificate. The issuer is known as a Certificate Authority (CA).

A certificate may contain other information that helps the CA manage certificates (such as a serial number and the period of time when the certificate is valid). Using an SSL-enabled Web browser (such as Netscape Navigator or Microsoft Internet Explorer), you can view a server's certificate easily.

The identified entity in a certificate is represented by distinguished name fields (as defined in the X509 standard). Table 11-2 lists the common distinguished name fields.

TABLE 11-2: DISTINGUISHED NAME FIELDS

Common Name (CN): Certified entity is known by this name.
Organization or Company (O): Entity is associated with this organization.
Organizational Unit (OU): Entity is associated with this organizational unit.
City/Locality (L): Entity is located in this city.
State/Province (ST): Entity is located in this state or province.
Country (C): Entity is located in this country (2-digit ISO country code).

The certificate is usually transmitted in binary code or as encrypted text.

What is a Certificate Authority (CA)?

A Certificate Authority (CA) is a trusted organization that issues certificates for both servers and clients (that is, users). To understand the need for such an organization, consider the following scenario.

One of your clients wants secure access to a Web application on your extranet Web server. She uses the HTTPS protocol to access your extranet server, say:

https://extranet.domain.com/login.servlet

Her Web browser initiates the SSL connection request.
Your extranet Web server\nuses its private key to encrypt data it sends to her Web browser — which decrypts\nthe data using your Web server’s public key. \nBecause the Web server also sends the public key to the Web browser, there’s no\nway to know whether the public key is authentic. What stops a malicious hacker\nfrom intercepting the information from your extranet server and sending his own\npublic key to your client? That’s where the CA comes in to play. After verifying\ninformation regarding your company in the offline world, a CA has issued you a\nserver certificate — signed by the CA’s own public key (which is well known).\nGenuine messages from your server carry this certificate. When the Web browser\nreceives the server certificate, it can decrypt the certificate information using the\nwell-known CA’s public key. This ensures that the server certificate is authentic. The\nWeb browser can then verify that the domain name used in the authentic certificate\nis the same as the name of the server it’s communicating with.\nChapter 11: OpenSSL\n271\n" }, { "page_number": 295, "text": "Similarly, if you want to ensure that a client is really who she says she is, you could\nenforce a client-side certificate restriction, creating a closed-loop secured process\nfor the entire transaction.\nIf each party has a certificate that validates the other’s identity, confirms the\npublic key,and is signed by a trusted agency,then they both are assured that\nthey are communicating with whom they think they are.\nTwo types of Certificate Authority exist:\nN Commercial CA\nN Self-certified private CA\nCommercial CA\nA commercial Certificate Authority’s primary job is to verify the authenticity of\nother companies’ messages on the Internet. After a CA verifies the offline authen-\nticity of a company by checking various legal records (such as official company\nregistration documents and letters from top management of the company), one of\nits appropriately empowered officers can sign the certificate. Only a few commer-\ncial CAs exist; the two best known are\nN Verisign (www.verisign.com)\nN Thawte (www.thawte.com)\nVerisign recently acquired Thawte Consulting,which created an over-\nwhelming monopoly in the digital-certificate marketplace.\nSelf-certified, private CA\nA private CA is much like a root-level commercial CA: It’s self-certified. However, a\nprivate CA is typically used in a LAN or WAN environment (or in experimenting with\nSSL). For example, a university with a WAN that interconnects departments may\ndecide on a private CA instead of a commercial one. If you don’t expect an unknown\nuser to trust your private CA, you can still use it for such specific purposes.\n272\nPart III: System Security\n" }, { "page_number": 296, "text": "Getting a Server Certificate\nfrom a Commercial CA\nYou can get a certificate from a commercial CA or create your own CA to certify\nyour servers and clients. To get a signed certificate from a commercial CA, you\nmust meet its requirements. Commercial CAs have two requirements:\nN Prove that you are the entity you claim to be.\nTo meet this requirement, usually you follow the CA’s guidelines for veri-\nfying individuals or organizations. 
Consult with your chosen CA to find\nout how to proceed.\nN Submit a Certificate Signing Request (CSR) in electronic form.\nTypically, if you plan to get your Web server certified, be prepared to submit\ncopies of legal documents such as business registration or incorporation papers.\nHere, I show you how you can create a CSR using OpenSSL.\nGENERATING A PRIVATE KEY\nThe very first step to creating a CSR is creating a private key for your server.\nTo generate an encrypted private key for a Web server host called\nwww.domain.com, for example, you would run the following command:\nopenssl genrsa -des3 -out www.domain.com.key 1024 -rand /dev/urandom. \nAfter running this command, you are asked for a pass phrase (that is, password)\nfor use in encrypting the private key. Because the private key is encrypted using the\ndes3 cipher, you are asked for the pass phrase every time your server is started. If\nthis is undesirable, you can create an unencrypted version of the private key by\nremoving the –des3 option in the preceding command line.\nTo ensure a high level of security, use an encrypted private key. You don’t\nwant someone else who has access to your server to see (and,possibly,later\nuse) your private key.\nThe content of the www.domain.com.key file is shown in Listing 11-1.\nListing 11-1: The content of www.domain.com.key file\n-----BEGIN RSA PRIVATE KEY-----\nProc-Type: 4,ENCRYPTED\nDEK-Info: DES-EDE3-CBC,C48E9F2F597AF968\nContinued\nChapter 11: OpenSSL\n273\n" }, { "page_number": 297, "text": "Listing 11-1 (Continued)\n47f4qGkVrfFfTNEygEs/uyaPOeAqksOnALtKUvADHKL7BhaB+8BrT/Haa7MHwEzU\njjaRd1XF1k1Ej3qH6d/Zl0AwVfYiAYvO1H3wQB2pllSuxui2sm7ZRkYUOpRMjxZI\n/srHn/DU+dUq11pH3vJRw2hHNVjHUB0cuCszZ8GOhICa5MFGsZxDR+cKP0T2Uvf5\njlGyiMroBzN0QF0v8sqwZoSOsuKHU9ZKdA/Pcbu+fwyDWFzNfr8HPNTImlaMjGEt\ni9LWZikzBW2mmaw79Pq6xSyqL+7dKXmiQL6d/bYiH0ZUYHjMkJtqUp1fNXxJd4T6\nkB8xVbvjPivo1AyvYK0qmmVQp7WDnEyrrYUZVyRu0a+1O50aTG2GnfSy32YGuNTY\nlMB3PH5BuocSRp+9SsKKTVoW0a01n0RtgVk/EZTO2Eo94qPcsZes6YyAwY4fFVAw\ngG/G3ZJCPdjBI2YLmvhua3bvp9duc5CXmKDxOO49VvjbEB/yvi9pLbuj8KuAt4ht\nfZcZB94wxrR/EMGODs2xgNhH+SwEf5Pc/bPUMRCq/0t6F/HJ47jVnUf17tdtoTT7\nUbQQVyAsr9tKSFzsRKMOGBO4VoenkD5CzUUF3iO/NaXSs/EFu9HG1ctWRKZEVIp/\nMSJBe3jYDXbmeGdQGNJUExpY64hv1XoNd0pAJk0E622o2al1raFusl2PotNvWYdI\nTShgoIHSmNgQQLCfssJH5TABKyLejsgQy5Rz/Vp3kDzkWhwEC0hI42p0S8sr4GhM\n6YEdASb51uP3ftn2ivKshueZHpFOvS1pCGjnEYAEdY4QLJkreznM8w==\n-----END RSA PRIVATE KEY-----\nGENERATING A CERTIFICATE SIGNING REQUEST\nYou generate the Certificate Signing Request as follows:\n1. Run the following command:\nopenssl req -new -key www.domain.com.key -out\nwww.domain.com.csr\nDon’t forget to change www.domain.com with your server’s hostname.\n2. If you encrypted the private key earlier, you are asked for the pass phrase\nfor the private key. Enter the appropriate pass phrase. Then you are asked\nfor country name, state, city, organization name, organization unit/\ndepartment name, common name (that is, your name if the certificate\nrequest is for yourself) or your server’s hostname, as well as e-mail address\nand some optional information (such as a challenge password and an\noptional company name).\n3. When you have filled in the necessary information, you submit your CSR\nto a Certificate Authority such as Thawte. The certification process then\nturns to verifying your individual or business-identity documents; such\nverification may take from a few days to a few weeks or even months. (In\nthe upcoming section, I use Thawte as the chosen CA in the examples.)\n4. 
If you are in a rush to get the certificate so you can start testing your sys-\ntem and its online security — or have other reasons to get a temporary cer-\ntificate fast — ask the officers of your CA. They may have a way for you to\nget a temporary, untrusted certificate. For example, Thawte allows you to\nsubmit your CSR via the Web for a temporary certificate, which you\nreceive in minutes via e-mail.\n274\nPart III: System Security\n" }, { "page_number": 298, "text": "Creating a Private Certificate\nAuthority\nIf you aren’t interested in getting a signed certificate from a commercial CA, you\ncan create your own CA — and certify entities such as your servers or users — at any\ntime. \nIt may be possible to get a cross-linked certificate for your private CA from a\ncommercial CA.In such a case, your private CA is chained to the commercial\nCA — and everyone should trust any certificate you issue.However,the com-\nmercial CA may limit your certificate-granting authority to your own organi-\nzation to ensure that you don’t become a competitor.\nIt is quite easy to create a private, self-certified CA using OpenSSL. Simply\ndownload the latest ssl.ca-version.tar.gz script distribution version from the\nuser-contributed software section (www.openssl.org/contrib) of the OpenSSL\nWeb site. Extract this file to a directory of your choice. A subdirectory called\nssl.ca-version is created. You find a set of sh scripts in the directory.\nHere is how you can create server and client certificates using your own CA:\nN Run the new-root-ca.sh script to create a self-signed root certificate for\nyour private CA. You are asked for a pass phrase. This pass phrase is\nrequired to sign future certificates.\nN Creating a server certificate\nRun the new-server-cert.sh www.domain.com script to create a server’s\nprivate and public keys. You are asked for distinguished name fields for\nthe new server certificate. The script also generates a CSR, which you can\nsend to a commercial CA later if you so choose.\nN Signing a server certificate\nRun the sign-server-cert.sh script to approve and sign the server cer-\ntificate you created using the new-server-cert.sh script.\nN Creating a user or client certificate\nRun the new-user-cert.sh script to create a user certificate. User certifi-\ncates when signed by a commercial certificate authority can be used with\nWeb browsers to authenticate users to remote services. However, user cer-\ntificates have not yet become common because of lack of understanding\nand availability of both client and server software.\nChapter 11: OpenSSL\n275\n" }, { "page_number": 299, "text": "N Signing a user or client certificate\nRun the sign-user-cert.sh script to sign a user certificate. Also, run the\np12.sh script to package the private key, the signed key, and the CA’s\nPublic key into a file with a .p12 extension. This file can then be\nimported into applications such as e-mail clients for use.\nNow you can use OpenSSL with various applications. \nSummary\nOpenSSL is an integral part of security. The more you get used to OpenSSL, the\nmore easily you can incorporate it in many services. 
You learn about using\nOpenSSL with Apache and other applications to enhance security, in many chapters\nin this book.\n276\nPart III: System Security\n" }, { "page_number": 300, "text": "Chapter 12\nShadow Passwords and\nOpenSSH\nIN THIS CHAPTER\nN Understanding user-access risks\nN Using shadow passwords\nN Exploring OpenSSH\nN Securing user access\nN Creating a user-access policy\nN Monitoring user access\nMOST SECURITY BREAK-INS VIA the Internet follow this sequence:\nN A hacker launches a program to exploit a known bug in an Internet ser-\nvice daemon process.\nN The exploit program tricks the buggy daemon to change system files for\nroot access.\nN The hacker logs on to the system using an ordinary user account, which\nhe or she either created or stole using the exploit program.\nN The hacker changes more system files and installs trojan programs, which\nensure back-door access for a later time.\nEver wonder what would it be like if you could remove all nonconsole user\naccess from your Internet server — or from the Linux system in your LAN? If a user\nhad only one way to gain shell access to your Linux system — via the console —\nperhaps the number of break-ins would drop substantially. Of course, that would\nturn Linux into Windows NT! Or would it?\nActually, removing user access altogether isn’t quite practical for most Linux\ninstallations. So you must understand the risks involving user accounts and reduce\nthe risks as much as you can. In this chapter you learn exactly that. Typically, a\nuser accesses a Linux system via many means such as Web, Telnet, FTP, rlogin, rsh,\nor rexec. Here I discuss only the non-anonymous types of user access that require\nLinux user accounts.\n277\n" }, { "page_number": 301, "text": "Understanding User Account Risks\nTypically, a user gains non-anonymous access to a Linux system via a username\nand a password. She enters the username and password at the prompt of a commu-\nnication program and gains access. Unfortunately (in most cases), the client\nmachine transmits both the username and password to the Linux server without\nany encryption, in clear text. A malicious hacker could use network packet-sniffing\nhardware/software to sniff out the username and password — with no special effort\nrequired beyond being part of the same network. For example, let’s say that joe1 is\na user of a large ISP called DummyISP and connects to his Linux server (which is\ncolocated at the ISP facility). A hacker who hacked into another colocated server on\nthe same ISP network can now sniff IP packets on their way in and out of their net-\nwork — and find Joe’s username and password if he uses services such as Telnet or\nFTP to connect to his system. Clear-text passwords are indeed a big risk, especially\nwhen the password travels over an untrusted public network: the Internet.\nIf you run a Linux system that allows shell access to many users, make sure\nthe /var/log directory and its files aren’t readable by ordinary users. I\nknow of many incidents when ordinary “unfriendly” users gained access to\nother user accounts by simply browsing the /var/log/messages log file.\nEvery time login fails because of a username and/or password mismatch,the\nincident is recorded in the /var/log/messages file. 
Because many users\nwho get frustrated with the login process after a few failed attempts often\ntype their passwords in the login: prompt instead of the password:\nprompt, there may be entries in the messages file that show their pass-\nwords.For example,log entries may show that user mrfrog failed to log in a\nfew times, then got in via Telnet, but one entry (in bold) reveals the user’s\npassword when he mistakenly entered the password as a response to the\nlogin: prompt.\nlogin: FAILED LOGIN 2 FROM neno FOR mrfrog,\nAuthentication failure\nPAM_unix: Authentication failure; (uid=0) ->\nmysecretpwd for system-auth service\nlogin: FAILED LOGIN 3 FROM neno FOR mysecretpwd,\nAuthentication failure\nPAM_unix: (system-auth) session opened for user mrfrog\nby (uid=0)\nNow if anyone but the root user can access such a log file, disaster may\nresult.Never let anyone but the root account access your logs!\n278\nPart III: System Security\n" }, { "page_number": 302, "text": "Although passing clear-text usernames and passwords over a network is a big\nconcern, many more security issues are tied to user access. For example, a great\nsecurity risk arises in systems that allow users to pick or change their passwords;\nmost users tend to choose easy passwords that they can remember. If you survey\nyour user base, hardly anyone has passwords like “x86nOop916”. In most cases you\nfind that people choose passwords from dictionary words, names, and numbers that\nthey use every day.\nIn addition, as a result of a long-lasting Unix tradition, Linux systems store the\npassword in a world-readable /etc/passwd file. Although the password entries\naren’t stored in clear text, the file has been the primary target of many security\nexploits, decade after decade. A typical hacker simply tries to retrieve this file in\norder to run a password-guessing program like crack to find weak passwords.\nIf you combine easy-to-guess, clear-text passwords with a world-readable\n/etc/passwd storage location, the result is a major security risk in your user-\nauthentication process.\nSecuring User Accounts\nTo manage your accounts to reduce risks in your user-authentication process, give\neach entry in the /etc/passwd file the following format:\nusername:password:uid:gid:fullname:homedir:shell\nTable 12-1 describes each of these fields.\nTABLE 12-1: /ETC/PASSWD FIELDS\nField Name\nFunction\nUsername\nLogin name of the account\nPassword\nEncoded password\nUID\nUnique user ID \nGID\nGroup ID\nFullname\nTypically used to store a user’s real-world full name but can store short\ncomments\nHomedir\nUser’s home directory path\nShell\nUser’s login shell\nChapter 12: Shadow Passwords and OpenSSH\n279\n" }, { "page_number": 303, "text": "As mentioned before, /etc/passwd is a world readable text file that holds all user\npasswords in an encoded form. The password file should be world-readable; after all,\nmany applications depend on user information such as user ID, group ID, full name,\nor shell for their services. To improve the security of your user-authentication\nprocess, however, you can take several measures immediately. The upcoming sections\ndescribe them.\nUsing shadow passwords and groups\nEnsure that /etc/passwd can’t give away your user secrets. For this you need\nshadow passwords. Luckily, by default Red Hat Linux uses a shadow-password\nscheme — and it begins by storing the user passwords someplace other than the\n/etc/passwd file. 
Instead, the passwords are stored in the /etc/shadow file, which\nhas the following format:\nusername:password:last:may:must:warn:expire:disable:reserved\nTable 12-2 describes each of these fields.\nTABLE 12-2: /ETC/SHADOW FIELDS\nField Name\nFunction\nusername\nThe username\npassword\nThe encoded password\nlast\nDays since January 1, 1970 that password was last changed\nmay\nMinimum days a user must wait before she can change the password\nsince her last change\nmust\nMaximum number of days that the user can go on without changing her\npassword\nwarn\nNumber of days when the password change reminder starts\nexpire\nDays after password expires that account is disabled\ndisable\nDays since Jan. 1, 1970 that account is disabled\nreserved\nA reserved field\nThe /etc/passwd file format remains exactly the same as it was — except\nthe password field is always set to ‘x’ instead of the encoded user password.\n280\nPart III: System Security\n" }, { "page_number": 304, "text": "An example entry of the /etc/shadow password file looks like this:\nmrfrog:$1$ar/xabcl$XKfp.T6gFb6xHxol4xHrk.:11285:0:99999:7:::\nThis line defines the account settings for a user called mrfrog. Here mrfrog has\nlast changed his password 11285 days since January 1, 1970. Because the minimum\nnumber of days he must wait before he can change the password is set to 0, he can\nchange it at any time. At the same time, this user can go on for 99,999 days with-\nout changing the password.\nAGING YOUR USER ACCOUNTS\nAlthough a shadow-password file could allow users to go on without changing\ntheir passwords, good security demands otherwise. Therefore, the shadow-password\nmechanism can incorporate the concept of password aging so the users must\nchange passwords at a set interval.\nUnder a shadow-password scheme, when you create a new user account, the\nuser entry in /etc/shadow is created using the default values stored in the\n/etc/login.defs configuration file. The default version of this file contains the\nfollowing entries:\nPASS_MAX_DAYS 99999\nPASS_MIN_DAYS 0\nPASS_MIN_LEN 5\nPASS_WARN_AGE 7\nThe PASS_MAX_DAYS entry dictates how long a user can go on without changing\nher password. The default value is 99999, which means that a user can go for\napproximately 274 years before changing the password. I recommend changing\nthis to a more realistic value. An appropriate value probably is anywhere from 30\nto 150 days for most organizations. If your organization frequently faces password\nsecurity problems, use a more restrictive number in the 15- to 30-day range.\nThe PASS_MIN_DAYS entry dictates how long the user must wait before she can\nchange her password since her last change. The default value of 0 lets the user\nchange the password at any time. This user flexibility can be good if you can\nensure that your users choose hard-to-guess passwords. The PASS_MIN_LEN entry\nsets the minimum password length. The default value reflects the frequently used\nminimum size of 5. The PASS_WARN_AGE entry sets the reminder for the password\nchange. 
I use the following settings in many systems that I manage:

PASS_MAX_DAYS 150
PASS_MIN_DAYS 0
PASS_MIN_LEN 5
PASS_WARN_AGE 7

Before changing a system configuration file such as /etc/login.defs, /etc/passwd, or /etc/shadow, back up the file.

After you modify the /etc/login.defs file, make sure your aging policy works as expected.

TESTING YOUR NEW ACCOUNT AGING POLICY IN ACTION

Create a test user account using the useradd testuser command and set the password using the passwd testuser command. Then verify that the default values from /etc/login.defs are used in the /etc/shadow file. To simulate aging, you can simply modify the last-password-change day count. The following shows the entry in my /etc/shadow file for testuser:

testuser:$1$/fOdEYFo$qcxNCerBbSE6unDn2uaCb1:11294:0:150:7:::

Here the last password change was on Sunday, December 3, 2000, which makes 11,294 days since January 1, 1970. Now, if I want to see what happens after 150 days have elapsed since the last change, I can simply subtract 150+1 from 11,294 and set the last-change value like this:

testuser:$1$/fOdEYFo$qcxNCerBbSE6unDn2uaCb1:11143:0:150:7:::

Now, if I try to log in to the system using this account, I must change the password because it has aged. Once you have tested your settings by changing appropriate values in the /etc/shadow file, you have a working password-aging policy. Remove the test user account using the userdel testuser command.

Checking password consistency

When you work with password files like /etc/passwd and /etc/shadow, be very careful:

N Back up these files before modification.
N Confirm the validity of your files by running a consistency checker.

The pwck command can do exactly that. This command performs integrity checking for both of the password files; its companion, the grpck command, does the same for the /etc/group file.
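For example, to inspect the files without letting the checker change anything, use read-only mode:

# Report malformed or inconsistent entries in /etc/passwd and /etc/shadow
pwck -r

# Do the same for the group files
grpck -r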
Although shadow passwords and password aging are great ways to fight user security risks, the clear-text password risk still remains. To eliminate that risk, stop using shell access that requires clear-text passwords.

Normally you should have only one superuser (that is, root) account in your /etc/passwd and /etc/shadow files. For security, periodically scan these files so you know there's only one root entry. The grep ':x:0:' /etc/passwd command displays all users who have root access.

Eliminating risky shell services

Telnet, which uses clear-text passwords, is the primary culprit in shell-related security incidents. Unfortunately, Red Hat Linux comes with the Telnet service turned on. Don't use Telnet for accessing your Linux system. To disable Telnet, do the following:

Don't continue if you are currently using Telnet to access the server. You must follow the steps below from the console.

N Log in to your Linux system as root from the console.

N Using vi or another text editor, open the /etc/services file. Search for the string telnet, and you should see a line such as the following:

telnet 23/tcp

N Insert a # character before the word telnet, which should make the line look like this:

#telnet 23/tcp

N Save the /etc/services file.

N Modify the /etc/xinetd.conf file by adding a disabled = telnet line in the defaults section.

For more about configuring xinetd, see Chapter 14.

N If you have a file called /etc/xinetd.d/telnet, make sure it contains a disable = yes line, so that the Telnet service definition looks like the following:

service telnet
{
disable = yes
flags = REUSE
socket_type = stream
wait = no
user = root
server = /usr/sbin/in.telnetd
log_on_failure += USERID
}

N Restart xinetd using the killall -USR1 xinetd command. This command disables the Telnet service immediately. Verify that the Telnet service is no longer available by running the telnet localhost 23 command; you should get the following error message:

Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused

If you don't get this error message, xinetd hasn't been restarted properly in the last step. Retry that step and verify again.

As an added security precaution, remove the /usr/sbin/in.telnetd Telnet daemon.

Although Telnet is the most frequently used method for accessing a remote system, you may also have the rlogin, rsh, or rexec services turned on. Check the files in the /etc/xinetd.d/ directory carefully. If you don't see a disable = yes line in a service definition, add one in each of these files and then restart xinetd.

If it isn't practical to access the system via the console, use Secure Shell (SSH) for remote access. SSH encrypts all your traffic, including your passwords, when you connect to another machine over the network, effectively eliminating the risks associated with eavesdropping on a network connection.

Using OpenSSH for Secured Remote Access

The OpenSSH suite of tools implements the SSH1 and SSH2 protocols. These protocols allow a cryptographically secure connection between the server running the OpenSSH daemon and the client machine.

Getting and installing OpenSSH

You can download OpenSSH from http://www.openssh.com; the latest version as of this writing is 2.3.0. Download the following RPM packages:

openssh-version.rpm
openssh-clients-version.rpm
openssh-server-version.rpm
openssh-version.src.rpm

You need only the first three RPMs if you want to install the OpenSSH binaries. OpenSSH uses OpenSSL (see Chapter 11) and the general-purpose, in-memory compression/decompression library called Zlib. Red Hat supplies Zlib RPMs, which should be already installed on your system. You can check this using the rpm -qa | grep zlib command. If you don't already have Zlib installed, download and install the Zlib RPM packages (zlib-version.rpm, zlib-devel-version.rpm) from a Red Hat RPM site.
You can also download the Zlib source code from\nftp://ftp.freesoftware.com/pub/infozip/zlib/, then compile and install it.\nOnce your system meets all the OpenSSH prerequisites, you can install OpenSSH.\nI downloaded the following RPM packages:\nopenssh-2.3.0p1-1.i386.rpm\nopenssh-clients-2.3.0p1-1.i386.rpm\nopenssh-server-2.3.0p1-1.i386.rpm\nopenssh-2.3.0p1-1.src.rpm\nTo avoid or reduce future debugging time,it’s better to install the client soft-\nware on the server and thus remove the issues that occur because of remote\naccess. Running the client from the server ensures that you aren’t likely to\nface DNS issues or other network issues.Once you get the client working on\nthe server, you can try a remote client knowing that the software works and\nany problem probably is related to network configuration and availability.\nChapter 12: Shadow Passwords and OpenSSH\n285\n" }, { "page_number": 309, "text": "I like to have source code available — so I installed all the preceding packages\nusing the rpm -ivh openssh*.rpm command. If you decide to compile the source\ncode (openssh-version.src.rpm), see the following instructions after you run the\nrpm –ivh openssh-version.src.rpm command:\nBecause the source distribution doesn’t install all the necessary configura-\ntion files, be sure to install all the binary RPMs first — and then compile and\ninstall the source on top of them.\n1. Make /usr/src/redhat/SOURCES your current directory.\n2. Extract the OpenSSH tar ball by using the tar xvzf openssh-version.tar.gz\ncommand. This extracts the source code and creates a new directory called\nopenssh-version. Make openssh-version your current directory.\n3. Run ./configure, then make, and finally make install to install the\nOpenSSH software.\n4. Replace the binary RPM installed lines in the /etc/pam.d/sshd file with\nthe lines shown in the listing that follows. The new file tells the SSH dae-\nmon to use the system-wide authentication configuration (found in the\n/etc/pam.d/system-auth file).\n#%PAM-1.0\nauth required /lib/security/pam_stack.so service=system-auth\nauth required /lib/security/pam_nologin.so\naccount required /lib/security/pam_stack.so service=system-auth\npassword required /lib/security/pam_stack.so service=system-auth\nsession required /lib/security/pam_stack.so service=system-auth\nNow you can configure OpenSSH.\nConfiguring OpenSSH service\nThe RPM version of the OpenSSH distribution creates a directory called /etc/ssh.\nThis directory contains the following files:\nssh_host_dsa_key\nssh_host_dsa_key.pub\nssh_host_key\nssh_host_key.pub\nsshd_config\n286\nPart III: System Security\n" }, { "page_number": 310, "text": "Files ending with a .pub extension store the public keys for the OpenSSH server.\nThe files with the .key extension store the private keys. The private keys shouldn’t\nbe readable by anyone but the root user. The very last file, sshd_config, is the\nconfiguration file. 
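If you ever need to restore safe permissions on the key files (for example, after recovering /etc/ssh from a backup), commands such as the following, using the stock file names listed above, do the job:

# Private keys: readable by root only; public keys: world-readable
chown root:root /etc/ssh/ssh_host_key /etc/ssh/ssh_host_dsa_key
chmod 600 /etc/ssh/ssh_host_key /etc/ssh/ssh_host_dsa_key
chmod 644 /etc/ssh/ssh_host_key.pub /etc/ssh/ssh_host_dsa_key.pub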
Listing 12-1 shows the default version of the sshd_config file (slightly modified for brevity).

Listing 12-1: /etc/ssh/sshd_config
# /etc/ssh/sshd_config file
# This is the sshd server systemwide configuration file.
Port 22
ListenAddress 0.0.0.0
HostKey /etc/ssh/ssh_host_key
ServerKeyBits 768
LoginGraceTime 600
KeyRegenerationInterval 3600
PermitRootLogin yes
IgnoreRhosts yes
StrictModes yes
X11Forwarding no
X11DisplayOffset 10
PrintMotd yes
KeepAlive yes
SyslogFacility AUTH
LogLevel INFO
RhostsAuthentication no
RhostsRSAAuthentication no
RSAAuthentication yes
PasswordAuthentication yes
PermitEmptyPasswords no
CheckMail no

These directives may require changes:

N Port specifies the port number to which sshd binds to listen for connections.
I The default value of 22 is standard.
I You can add multiple Port directives to make sshd listen to multiple ports.

A non-standard port for SSH (a port other than 22) can stop some port scans.

N ListenAddress specifies the IP address to listen on.
I By default, sshd listens to all the IP addresses bound to the server.
I The Port directive must come before the ListenAddress directive.
N HostKey specifies the fully qualified path of the private RSA host key file.
N ServerKeyBits specifies the number of bits in the server key.
N LoginGraceTime specifies the grace period for a login request to complete.
N KeyRegenerationInterval specifies the time interval for regenerating the server key.
N PermitRootLogin, when set to yes, allows the root user to log in via sshd. Set this to no unless you have used the /etc/hosts.allow and /etc/hosts.deny files (discussed in a later section) to restrict sshd access.
N IgnoreRhosts, when set to yes, makes sshd ignore the .rhosts file found in a user's home directory. Leave the default as is.
N StrictModes, when set to yes, has sshd enable a strict mode of operation. Normally sshd doesn't allow connections for users whose home directory (or other important files such as .rhosts) are world-readable. Leave the default as is.
N X11Forwarding, if set to yes, allows X Window System forwarding. I typically don't use the X Window System, so I set this to no.
N X11DisplayOffset specifies the first X Window System display available to SSH for forwarding. Leave the default as is.
N PrintMotd, when set to yes, has sshd print the /etc/motd file when a user logs in. This is a relatively minor option.
N KeepAlive, when set to yes, has sshd use the KeepAlive protocol for reducing connection overhead. Leave the default as is.
N SyslogFacility specifies which syslog facility is used by sshd. Leave the default as is.
N LogLevel specifies which log level is used for syslog. Leave the default as is.
N RhostsAuthentication, when set to no, has sshd disable any authentication based on .rhosts or /etc/hosts.equiv. Leave the default as is.
N RhostsRSAAuthentication, when set to no, has sshd disable .rhosts-based authentication even if RSA host authentication is successful. Leave the default as is.
N RSAAuthentication specifies whether RSA-based authentication is allowed. Leave the default as is.
N PasswordAuthentication specifies whether password-based authentication is allowed.
Leave the default as is.
N PermitEmptyPasswords specifies whether empty passwords are okay. Leave the default as is.
N CheckMail, upon successful login, has sshd check whether the user has e-mail. Leave the default as is.

Once you've made all the necessary changes to the /etc/ssh/sshd_config file, you can start sshd. The next subsections discuss the two ways you can run sshd:

N standalone service
N xinetd service

STANDALONE SERVICE
The standalone method is the default method for running sshd. In this method, the daemon is started at server startup, using the /etc/rc.d/init.d/sshd script. This script is called from the appropriate run-level directory. For example, if you boot your Linux system in run-level 3 (the default for Red Hat Linux), the script is called via the /etc/rc.d/rc3.d/S55sshd link, which points to the /etc/rc.d/init.d/sshd script.

To run sshd in standalone mode, you must install the openssh-server-version.rpm package. If you have installed sshd only by compiling the source code, follow these steps:

1. Create a script named /etc/rc.d/init.d/sshd, as shown in Listing 12-2. This script is supplied by Red Hat in the binary RPM package for the sshd server.

Listing 12-2: /etc/rc.d/init.d/sshd
#!/bin/bash
# Init file for OpenSSH server daemon
# chkconfig: 2345 55 25
# description: OpenSSH server daemon
# processname: sshd
# config: /etc/ssh/ssh_host_key
# config: /etc/ssh/ssh_host_key.pub
# config: /etc/ssh/ssh_random_seed
# config: /etc/ssh/sshd_config
# pidfile: /var/run/sshd.pid
# source function library
. /etc/rc.d/init.d/functions
RETVAL=0
# Some functions to make the below more readable
KEYGEN=/usr/bin/ssh-keygen
RSA_KEY=/etc/ssh/ssh_host_key
DSA_KEY=/etc/ssh/ssh_host_dsa_key
PID_FILE=/var/run/sshd.pid
do_rsa_keygen() {
if $KEYGEN -R && ! test -f $RSA_KEY ; then
echo -n "Generating SSH RSA host key: "
if $KEYGEN -q -b 1024 -f $RSA_KEY -C '' -N '' >&/dev/null; then
success "RSA key generation"
echo
else
failure "RSA key generation"
echo
exit 1
fi
fi
}
do_dsa_keygen() {
if ! test -f $DSA_KEY ; then
echo -n "Generating SSH DSA host key: "
if $KEYGEN -q -d -b 1024 -f $DSA_KEY -C '' -N '' >&/dev/null; then
success "DSA key generation"
echo
else
failure "DSA key generation"
echo
exit 1
fi
fi
}
case "$1" in
start)
# Create keys if necessary
do_rsa_keygen;
do_dsa_keygen;
echo -n "Starting sshd: "
if [ ! -f $PID_FILE ] ; then
sshd
RETVAL=$?
if [ "$RETVAL" = "0" ] ; then
success "sshd startup"
touch /var/lock/subsys/sshd
else
failure "sshd startup"
fi
fi
echo
;;
stop)
echo -n "Shutting down sshd: "
if [ -f $PID_FILE ] ; then
killproc sshd
[ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/sshd
fi
echo
;;
restart)
$0 stop
$0 start
RETVAL=$?
;;
condrestart)
if [ -f /var/lock/subsys/sshd ] ; then
$0 stop
$0 start
RETVAL=$?
fi
;;
status)
status sshd
RETVAL=$?
;;
*)
echo "Usage: sshd {start|stop|restart|status|condrestart}"
exit 1
;;
esac
exit $RETVAL

2.
Link this script to your run-level directory, using the following command:\nln –s /etc/rc.d/init.d/sshd /etc/rc.d/rc3.d/S55sshd\nThis form of the command assumes that your run level is 3, which is\ntypical.\nThe openssh-server-version.rpm package contains the preceding script along\nwith other files, making it easy to administer the SSH daemon. If you installed this\npackage earlier, you can start the daemon this way:\n/etc/rc.d/init.d/sshd start\nWhen you run the preceding command for the very first time, you see output\nlike this:\nGenerating SSH RSA host key: [ OK ]\nGenerating SSH DSA host key: [ OK ]\nStarting sshd: [ OK ]\nBefore the SSH daemon starts for the very first time, it creates both public and\nprivate RSA and DSA keys — and stores them in the /etc/ssh directory. Make sure\nthat the key files have the permission settings shown here:\n-rw------- 1 root root 668 Dec 6 09:42 ssh_host_dsa_key\n-rw-rw-r-- 1 root root 590 Dec 6 09:42 ssh_host_dsa_key.pub\n-rw------- 1 root root 515 Dec 6 09:42 ssh_host_key\n-rw-rw-r-- 1 root root 319 Dec 6 09:42 ssh_host_key.pub\n-rw------- 1 root root 1282 Dec 3 16:44 sshd_config\nThe files ending in _key are the private key files for the server and must not be\nreadable by anyone but the root user. To verify that the SSH daemon started, run\nps aux | grep sshd, and you should see a line like this one:\nroot 857 0.0 0.6 3300 1736 ? S 09:29 0:00 sshd\nOnce the SSH daemon starts, SSH clients can connect to the server. Now, if you\nmake configuration changes and want to restart the server, simply run the\n/etc/rc.d/init.d/sshd restart command. If you want to shut down the sshd\nserver for some reason, run the /etc/rc.d/init.d/sshd stop command.\nYou can safely run the /etc/rc.d/init.d/sshd stop command,even if\nyou are currently connected to your OpenSSH server via an SSH client.You\naren’t disconnected.\n292\nPart III: System Security\n" }, { "page_number": 316, "text": "RUNNING SSHD AS XINETD SERVICE\nEvery time sshd runs, it generates the server key — which is why sshd is typically\nrun only once (in standalone mode) during server startup. However, to use xinetd’s\naccess control features for the ssh service, you can run it as xinetd service. Here’s\nhow:\n1. Create a service file for xinetd called /etc/xinetd.d/sshd, as shown in\nthe following listing:\nservice ssh\n{\nsocket_type = stream\nwait = no\nuser = root\nserver = /usr/local/sbin/sshd\nserver_args = -i\nlog_on_success += DURATION USERID\nlog_on_failure += USERID\nnice = 10\n}\n2. Run the ps auxw | grep sshd command to check whether sshd is\nalready running. If it’s running, stop it by using the /etc/rc.d/init.d/\nsshd stop command.\n3. Force xinetd to load its configuration using the killall –USR1 xinetd\ncommand.\nNow you can set up SSH clients. Typically, most people who access a Linux\nserver are running sshd from another Linux system (or from a PC running\nWindows or some other operating system).\nConnecting to an OpenSSH server\nA Linux system can connect to an OpenSSH server. 
To run the OpenSSH client on a\nLinux system install OpenSSL (see Using OpenSSL chapter) and the following\nOpenSSH packages.\nopenssh-version.rpm\nopenssh-clients-version.rpm\nTry the client software on the server itself so that you know the entire\nclient/server environment is working before attempting to connect from a\nremote client system.\nChapter 12: Shadow Passwords and OpenSSH\n293\n" }, { "page_number": 317, "text": "If you are following my recommendations, then you already have these two\npackages installed on your server. If that is the case, go forward with the configu-\nration as follows:\n1. Log on to your system as an ordinary user.\n2. Generate a public and private key for yourself, which the client uses on\nyour behalf.\nTo generate such keys run the /usr/bin/ssh-keygen command.\nThis command generates a pair of public and private RSA keys, which are\nneeded for default RSA authentication. The keys are stored in a subdirec-\ntory called .ssh within your home directory.\nI\nidentity.pub is the public key.\nI\nidentity is the private key.\n3. To log in to the OpenSSH server, run the ssh -l username hostname\ncommand, where username is your username on the server and the host-\nname is the name of the server.\nFor example, to connect to a server called k2.nitec.com, I can run the\nssh –l kabir k2.nitec.com command.\n4. The first time you try to connect to the OpenSSH server, you see a mes-\nsage that warns you that ssh, the client program, can’t establish the\nauthenticity of the server. An example of this message is shown here:\nThe authenticity of host ‘k2.nitec.com’ can’t be established.\nYou are asked whether you want to continue. Because you must trust your\nown server, enter yes to continue. You are warned that this host is perma-\nnently added to your known host list file. This file, known_hosts, is cre-\nated in the .ssh directory.\n5. You are asked for the password for the given username. Enter appropriate\npassword.\nTo log in without entering the password,copy the identity.pub file from\nyour workstation to a subdirectory called .ssh in the home directory on\nthe OpenSSH server. On the server, rename this identity.pub file to\nauthorized_keys, using the mv identity.pub authorized_keys\ncommand. Change the permission settings of the file to 644, using the\nchmod 644 authorized_keys command. Doing so ensures that only\n294\nPart III: System Security\n" }, { "page_number": 318, "text": "you can change your public key and everyone else can only read it. This\nallows the server to authenticate you by using the your public key, which is\nnow available on both sides.\n6. Once you enter the correct password, you are logged in to your OpenSSH\nserver using the default SSH1 protocol. To use the SSH2 protocol:\nI Use the -2 option\nI Create RSA keys using the ssh-keygen command.\nIf you enter a pass phrase when you generate the keys using ssh-keygen pro-\ngram, you are asked for the pass phrase every time ssh accesses your private key\n(~/.ssh/identity) file. 
To save yourself from repetitively typing the pass phrase,\nyou can run the script shown in Listing 12-3.\nListing 12-3: ssh-agent.sh script\n#!/bin/sh\n# Simple script to run ssh-agent only once.\n# Useful in a multi-session environment (like X),\n# or if connected to a machine more than once.\n# Written by: Danny Sung \n# Released under the GPL\n# Sat May 22 23:04:19 PDT 1999\n# $Log: ssh-agent.sh,v $\n# Revision 1.4 1999/05/23 07:52:11 dannys\n# Use script to print, not ssh-agent.\n# Revision 1.3 1999/05/23 07:44:59 dannys\n# Added email address to comments.\n# Added GPL license.\n# Revision 1.2 1999/05/23 07:43:04 dannys\n# Added ability to kill agent.\n# Added csh/sh printouts for kill statement.\n# Revision 1.1.1.1 1999/05/23 06:05:46 dannys\n# SSH utilities/scripts\n#\nSSHDIR=”${HOME}/.ssh”\nHOSTNAME=”`hostname`”\nLOCKFILE=”${SSHDIR}/agent/${HOSTNAME}”\nSHELL_TYPE=”sh”\nRUNNING=0\nparse_params()\n{\nContinued\nChapter 12: Shadow Passwords and OpenSSH\n295\n" }, { "page_number": 319, "text": "Listing 12-3 (Continued)\nwhile [ $# -ge 1 ]; do\ncase “$1” in\n-s)\nSHELL_TYPE=”sh”\n;;\n-c)\nSHELL_TYPE=”csh”\n;;\n-k)\nkill_agent\n;;\n*)\necho “[-cs] [-k]”\nexit 0\n;;\nesac\nshift\ndone\n}\nsetup_dir()\n{\nif [ ! -e “${SSHDIR}/agent” ]; then\nmkdir “${SSHDIR}/agent”\nfi\n}\nget_pid()\n{\nif [ -e “${LOCKFILE}” ]; then\nPID=`cat “${LOCKFILE}” | grep “echo” | sed ‘s/[^0-9]*//g’`\nelse\nPID=””\nfi\n}\ncheck_stale_lock()\n{\nRUNNING=”0”\nif [ ! -z “$PID” ]; then\nps_str=`ps auxw | grep $PID | grep -v grep`\nif [ -z “$ps_str” ]; then\nrm -f “${LOCKFILE}”\nelse\n# agent already running\nRUNNING=”1”\nfi\nfi\n296\nPart III: System Security\n" }, { "page_number": 320, "text": "}\nstart_agent()\n{\nif [ “$RUNNING” = “1” ]; then\n. “${LOCKFILE}” > /dev/null\nelse\nssh-agent -s > “${LOCKFILE}”\n. “${LOCKFILE}” > /dev/null\nfi\n}\nkill_agent()\n{\ncheck_stale_lock\nif [ -e “${LOCKFILE}” ]; then\n. “${LOCKFILE}” > /dev/null\ncase “$SHELL_TYPE” in\nsh)\nPARAMS=”-s”\n;;\ncsh)\nPARAMS=”-c”\n;;\n*)\nPARAMS=””\n;;\nesac\nssh-agent ${PARAMS} -k > /dev/null\nrm -f “${LOCKFILE}”\nfi\nprint_kill\nexit 0\n}\nprint_agent()\n{\ncase “$SHELL_TYPE” in\ncsh)\necho “setenv SSH_AUTH_SOCK $SSH_AUTH_SOCK;”\necho “setenv SSH_AGENT_PID $SSH_AGENT_PID;”\n;;\nsh)\necho “SSH_AUTH_SOCK=$SSH_AUTH_SOCK; export SSH_AUTH_SOCK;”\necho “SSH_AGENT_PID=$SSH_AGENT_PID; export SSH_AGENT_PID;”\n;;\nesac\necho “echo Agent pid $PID”\nContinued\nChapter 12: Shadow Passwords and OpenSSH\n297\n" }, { "page_number": 321, "text": "Listing 12-3 (Continued)\n}\nprint_kill()\n{\ncase “$SHELL_TYPE” in\ncsh)\necho “unsetenv SSH_AUTH_SOCK;”\necho “unsetenv SSH_AGENT_PID;”\n;;\nsh)\necho “unset SSH_AUTH_SOCK;”\necho “unset SSH_AGENT_PID;”\n;;\nesac\necho “echo Agent pid $PID killed”\n}\nsetup_dir\nget_pid\nparse_params $*\ncheck_stale_lock\nstart_agent\nget_pid\nprint_agent\nWhen you run this script once, you can use ssh multiple times without entering\nthe pass phrase every time. For example, after you run this script you can start the\nX Window System as usual using startx or other means you use. If you run ssh\nfor remote system access from xterm, the pass phrase isn’t required after the very\nfirst time. This can also be timesaving for those who use ssh a lot.\nManaging the root Account\nIn most cases, an intruder with a compromised user account tries for root access as\nsoon as possible. This is why it’s very important to know how to manage your root\naccount.\nTypically, the root account is the Holy Grail of all break-in attempts. 
Once the\nroot account is compromised, the system is at the mercy of an intruder. By simply\nrunning a command such as rm –rf /, an intruder can wipe out everything on the\nroot filesystem or even steal business secrets. So if you have root access to your\nsystem, be very careful how you use it. Simple mistakes or carelessness can create\nserious security holes that can cause great harm. Each person with root privileges\nmust follow a set of guidelines. Here are the primary guidelines that I learned from\nexperienced system administrators:\n298\nPart III: System Security\n" }, { "page_number": 322, "text": "N Be root only if you must. Having root access doesn’t mean you should\nlog in to your Linux system as the root user to read e-mail or edit a text\nfile. Such behavior is a recipe for disaster! Use a root account only to\nI Modify a system file that can’t be edited by an ordinary user account\nI Enable a service or to do maintenance work, such as shutting down the\nserver\nN Choose a very difficult password for root.\nroot is the Holy Grail for security break-ins.Use an unusual combination of\ncharacters,pun\\ctuation marks,and numbers.\nN Cycle the root password frequently. Don’t use the same root password\nmore than a month. Make a mental or written schedule to change the\nroot password every month.\nN Never write down the root password. In a real business, usually the\nroot password is shared among several people. So make sure you notify\nappropriate coworkers of your change, or change passwords in their pres-\nence. Never e-mail the password to your boss or colleagues.\nLimiting root access\nFortunately, the default Red Hat Linux system doesn’t allow login as root via\nTelnet or any other remote-access procedure. This magic is done using the\n/etc/securetty file. This file lists a set of TTY devices that are considered secure\nfor root access. The default list contains only vc/1 through vc/11 and tty1\nthrough tty11; that is, virtual consoles 1 through 11, which are tied to tty1\nthrough tty11. This is why you can log in directly as root only from the physical\nconsole screen using a virtual console session. The big idea here is that if you are at\nthe system console, you are okay to be the root user.\nIf you look at the /etc/inittab file,you notice that it has lines such as the\nfollowing:\n# Run gettys in standard runlevels\n1:2345:respawn:/sbin/mingetty tty1\n2:2345:respawn:/sbin/mingetty tty2\n3:2345:respawn:/sbin/mingetty tty3\n4:2345:respawn:/sbin/mingetty tty4\nChapter 12: Shadow Passwords and OpenSSH\n299\n" }, { "page_number": 323, "text": "5:2345:respawn:/sbin/mingetty tty5\n6:2345:respawn:/sbin/mingetty tty6\nThese lines tie vc/1 through vc/6 to tty1 through tty6.You can remove\nthe rest of the unused virtual consoles and TTYs from the\n/etc/securetty file (the lines for vc/7 through vc/11 and tty7\nthrough tty11).\nThe /etc/securetty file must not be readable by anyone other than the root\nuser account itself. Because login-related processes run as root, they can access the\nfile to verify that root-account access is authorized for a certain tty device. If\npseudo-terminal devices such as pts/0, pts/1, and pts/3 are placed in this file,\nyou can log in as the root user — which means that anyone else can try brute-force\nhacks to break in, simply by trying to log in as root. 
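For reference, an /etc/securetty trimmed this way, matching the six virtual consoles that /etc/inittab actually defines, contains only these entries:

vc/1
vc/2
vc/3
vc/4
vc/5
vc/6
tty1
tty2
tty3
tty4
tty5
tty6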
To ensure that this file has the\nappropriate permission settings that don’t allow others to change the file, run the\nchown root /etc/securetty and chmod 600 /etc/securetty commands.\nThe OpenSSH daemon, sshd, doesn’t use the /etc/securetty file to\nrestrict access to the root account.\nIt uses a directive called\nPermitRootLogin in the /etc/ssh/sshd_config file to control root\nlogins. If this directive is set to yes then direct root login from remote sys-\ntems is allowed. Disable this option by setting it to no and restarting the\ndaemon (using the /etc/rc.d/init.d/sshd restart command).\nYou can’t log in as root because of /etc/securetty (or the PermitRootLogin =\nno line in the /etc/ssh/sshd_config file). So if you need to be the root user and\ncan’t access the machine from the physical console, you can use the su command.\nUsing su to become root or another user\nThe su command can run a shell with a user and group ID other than those you\nused to log in. For example, if you are logged in as user kabirmj and want to\nbecome user gunchy, simply run su gunchy.\nTo run the su session as a login session of the new user,use the - option.For\nexample, su – gunchy switches to the user gunchy and runs such files as\n.login, .profile, and .bashrc files as if the user had logged in\ndirectly.\n300\nPart III: System Security\n" }, { "page_number": 324, "text": "Similarly, to become the root user from an ordinary user account, run the su\nroot command. You are asked for the root password. Once you enter the appropri-\nate password, you are in.\nA common shortcut switch to root is to run the su command without any\nusername.\nYou can switch back and forth between your root session and the original ses-\nsion by using the suspend and fg commands. For example, you can su to root\nfrom an ordinary user account and then if you must return to the original user\nshell, simply run the suspend command to temporarily stop the su session. To\nreturn to the su session run the fg command.\nThe su command is a PAM-aware application and uses the /etc/pam.d/su con-\nfiguration file as shown in Listing 12-4.\nListing 12-4: /etc/pam.d/su\n#%PAM-1.0\nauth sufficient /lib/security/pam_rootok.so\n# Uncomment the following line to implicitly trust users in the “wheel” group.\n#auth sufficient /lib/security/pam_wheel.so trust use_uid\n# Uncomment the following line to require a user to be in the “wheel” group.\n#auth required /lib/security/pam_wheel.so use_uid\nauth required /lib/security/pam_stack.so service=system-auth\naccount required /lib/security/pam_stack.so service=system-auth\npassword required /lib/security/pam_stack.so service=system-auth\nsession required /lib/security/pam_stack.so service=system-auth\nsession optional /lib/security/pam_xauth.so\nThe preceding configuration file allows the root user to su to any other user\nwithout a password, which makes sense because going from high privilege to low\nprivilege isn’t insecure by design. However, the default version of this file also per-\nmits any ordinary user who knows the root password to su to root. No one but the\nroot user should know his or her password; making the root account harder to\naccess for unauthorized users who may have obtained the password makes good\nsecurity sense. 
Simply uncomment (that is, remove the # character from) the fol-\nlowing line:\n#auth required /lib/security/pam_wheel.so use_uid\nNow the users who are listed in the wheel group in the /etc/group file can use\nthe su command to become root.\nChapter 12: Shadow Passwords and OpenSSH\n301\n" }, { "page_number": 325, "text": "An ordinary user can su to other ordinary user accounts without being a\nmember of the wheel group. The wheel group restrictions apply only to\nroot account access.\nNow, if you want to enable a user to become root via the su facility, simply add\nthe user into the wheel group in the /etc/group file. For example, the following\nline from my /etc/group file shows that only root and kabir are part of the\nwheel group.\nwheel:x:10:root,kabir\nDon’t use a text editor to modify the /etc/group file. Chances of making\nhuman mistakes such as typos or syntax errors are too great and too risky.\nSimply issue the usermod command to modify a user’s group privileges.For\nexample, to add kabir to the wheel group, run the usermod -G wheel\nkabir command.\nThe su command is great to switch over from an ordinary user to root but it’s\nan all-or-nothing type of operation. In other words, an ordinary user who can su to\nroot gains access to all that root can do. This is often not desirable. For example,\nsay you want a coworker to be able to start and stop the Web server if needed. If\nyou give her the root password so that she can su to root to start and stop the\nWeb server, nothing stops her from doing anything else root can do. Thankfully,\nthere are ways to delegate selected root tasks to ordinary users without giving\nthem full root access.\nUsing sudo to delegate root access\nThere are two common ways to delegate root tasks. You can change file permis-\nsions for programs that normally can only be run by root. Typically, you use set-\nUID for this so that an ordinary user can act as the root user. Using set-UID is\ndiscussed in a later chapter (see Securing Filesystems.) This method, though, is very\nunsafe and cumbersome to manage. The other option is called sudo, which is short\nfor superuser do.\nThe sudo suite of programs can let users (or user groups) run selected commands\nas root. When an ordinary user uses sudo to execute a privileged command, sudo\nlogs the command and the arguments so that a clear audit trail is established.\nBecause the sudo package isn’t in the standard Red Hat Linux distribution, you\nmust install it yourself.\n302\nPart III: System Security\n" }, { "page_number": 326, "text": "COMPILING AND INSTALLING SUDO\nThe official Web site for the sudo package is http://www.courtesan.com/sudo/.\nYou can download the sudo source distribution from there. Or, you can download\nthe RPM version of sudo from the very useful RPM Finder Web site at\nhttp://rpmfind.net. Search for sudo to locate the sudo RPMs at this site.\nBecause I prefer to compile and install software, I recommend that you down-\nload the sudo source RPM package. As of this writing the latest source sudo RPM is\nsudo-1.6.3-4.src.rpm. The version that you download may be different, so make\nsure you replace the version number (1.6.3-4) wherever I refer to it in the follow-\ning section.\nTo install the latest sudo binary RPM package suitable for your Red Hat\nLinux architecture (such as i386, i686, or alpha), download it from the RPM\nFinder Web site and install it using the rpm command. For example, the lat-\nest binary RPM distribution for i386 (Intel) architecture is sudo-\n1.6.3-4.i386.rpm. 
Run the rpm –ivh sudo-1.6.3-4.i386.rpm\ncommand to install the package.\nAfter downloading the source RPM package, complete the following steps to\ncompile and install sudo on your system.\n1. su to root.\n2. Run rpm –ivh sudo-1.6.3-4.src.rpm command to extract the sudo tar\nball in /usr/src/redhat/SOURCES directory.\nChange your current directory to /usr/src/redhat/SOURCES. If you run\nls –l sudo* you see a file such as the following:\n-rw-r--r-- 1 root root 285126 Apr 10 2000 sudo-\n1.6.3.tar.gz\n3. Extract the sudo-1.6.3.tar.gz file using the tar xvzf sudo-1.6.3.\ntar.gz command. This creates a subdirectory called sudo-1.6.3. Change\nyour current directory to sudo-1.6.3.\n4. Run the ./configure --with-pam script to configure sudo source code\nfor your system. The --with-pam option specifies that you want to build\nsudo with PAM support.\n5. Run make to compile. If you don’t get any compilation errors, you can run\nmake install to install the software.\n6. Run cp sample.pam /etc/pam.d/sudo to rename the sample PAM con-\nfiguration file; then copy it to the /etc/pam.d directory.\nChapter 12: Shadow Passwords and OpenSSH\n303\n" }, { "page_number": 327, "text": "Modify the /etc/pam.d/sudo file to have the following lines:\n#%PAM-1.0\nauth required /lib/security/pam_stack.so\nservice=system-auth\naccount required /lib/security/pam_stack.so\nservice=system-auth\npassword required /lib/security/pam_stack.so\nservice=system-auth\nsession required /lib/security/pam_stack.so\nservice=system-auth\n7. Run the make clean command to remove unnecessary object files.\nCONFIGURING AND RUNNING SUDO\nThe sudo configuration file is called /etc/sudoers. Use the visudo program as\nroot to edit this file. The visudo command\nN Locks the /etc/sudoers file to prevent simultaneous changes by multiple\nroot sessions.\nN Checks for configuration syntax.\nBy default, the visudo command uses the vi editor. If you aren’t a vi fan\nand prefer emacs or pico,you can set the EDITOR environment variable to\npoint to your favorite editor, which makes visudo run the editor of your\nchoice.\nFor example,\nif you use the pico\neditor,\nrun export\nEDITOR=/usr/bin/pico for a bash shell, or run setenv \nEDITOR\n/usr/bin/pico editor for csh, tcsh shells. Then run the visudo com-\nmand to edit the /etc/sudoers contents in the preferred editor.\nThe default /etc/sudoers file has one configuration entry as shown below:\nroot ALL=(ALL) ALL\nThis default setting means that the root user can run any command on any host\nas any user. The /etc/sudoers configuration is quite extensive and often confus-\ning. The following section discusses a simplified approach to configuring sudo for\npractical use.\nTwo types of configuration are possible for sudo:\n304\nPart III: System Security\n" }, { "page_number": 328, "text": "N Aliases. An alias is a simple name for things of the same kind. There are\nfour types of aliases supported by sudo configuration.\nI Host_Alias = list of one or more hostnames. For example, WEB-\nSERVERS = k2.nitec.com, everest.nitec.com defines a host alias\ncalled WEBSERVERS, which is a list of two hostnames.\nI User_Alias = list of one or more users. For example, JRADMINS =\ndilbert, catbert defines a user alias called JRADMIN, which is a list\nof two users.\nI Cmnd_Alias = list of one or more commands. For example,\nCOMMANDS = /bin/kill, /usr/bin/killall defines a command\nalias called COMMANDS, which is a list of two commands.\nN User specifications. 
A user specification defines who can run what command as which user. For example:

JRADMINS WEBSERVER=(root) COMMANDS

This user specification says sudo allows the users in JRADMINS to run the programs in COMMANDS on the WEBSERVER systems as root. In other words, it specifies that users dilbert and catbert can run the /bin/kill and /usr/bin/killall commands on k2.nitec.com and everest.nitec.com as root.

Listing 12-5 is an example configuration.

Listing 12-5: /etc/sudoers sample configuration file
Host_Alias WEBSERVER = www.nitec.com
User_Alias WEBMASTERS = sheila, kabir
Cmnd_Alias KILL = /bin/kill, /usr/bin/killall
WEBMASTERS WEBSERVER=(root) KILL

The preceding configuration authorizes users sheila and kabir to run (via sudo) the kill commands (/bin/kill and /usr/bin/killall) as root on www.nitec.com. In other words, these two users can kill any process on www.nitec.com. How is this useful? Let's say that user sheila discovered that a program called oops.pl, which the system administrator (root) ran before going to lunch, has gone haywire and is bogging down the Web server. She can kill the process without waiting for the sysadmin to return. User sheila can run the ps auxww | grep oops.pl command to check whether the oops.pl program is still running. The output of the command is:

root 11681 80.0 0.4 2568 1104 pts/0 S 11:01 0:20 perl /tmp/oops.pl

She tries to kill it using the kill -9 11681 command, but the system returns an 11681: Operation not permitted error message. She realizes that the process is owned by root (as shown in the ps output) and runs sudo kill -9 11681 to kill it. Because she is running the sudo command for the very first time, she receives the following message from the sudo command.

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these two things:
#1) Respect the privacy of others.
#2) Think before you type.
Password:

At this point she is asked for her own password (not the root password), and once she successfully provides the password, sudo runs the requested command, which kills the culprit process immediately. She then verifies that the process is no longer running by rerunning the ps auxww | grep oops.pl command. As this example shows, sudo can safely delegate system tasks to junior-level administrators or coworkers. After all, who likes calls during lunch? Listing 12-6 presents a practical sudo configuration that I use to delegate some of the Web server administration tasks to junior administrators.

Listing 12-6: Kabir's /etc/sudoers for a Web server
# sudoers file.
# This file MUST be edited with the 'visudo'
# command as root.
# See the sudoers man page for the details on how
# to write a sudoers file.
# Host alias specification
Host_Alias WEBSERVER = www.intevo.com
# User alias specification
User_Alias WEBMASTERS = wsajr1, wsajr2
# Cmnd alias specification
Cmnd_Alias APACHE = /usr/local/apache/bin/apachectl
Cmnd_Alias KILL = /bin/kill, /usr/bin/killall
Cmnd_Alias REBOOT = /usr/sbin/shutdown
Cmnd_Alias HALT = /usr/sbin/halt
# User privilege specification
WEBMASTERS WEBSERVER=(root) APACHE, KILL, REBOOT, HALT

This configuration allows two junior Web administrators (wsajr1 and wsajr2) to start, restart, and stop the Apache Web server using the /usr/local/apache/bin/apachectl command. They can also kill any process on the server and even reboot or halt the server if need be.
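To see this delegation in action, suppose wsajr1 needs to restart the Web server. With the preceding configuration in place, the session looks something like this (the apachectl output line is illustrative):

$ sudo /usr/local/apache/bin/apachectl restart
Password:
/usr/local/apache/bin/apachectl restart: httpd restarted

sudo prompts for wsajr1's own password (not the root password), logs the command, and then runs it as root.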
All this can happen without having the full\nroot access.\n306\nPart III: System Security\n" }, { "page_number": 330, "text": "Chapter 12: Shadow Passwords and OpenSSH\n307\nCommands that allow shell access (such as editors like vi or programs like\nless) shouldn’t run via the sudo facility,because a user can run any com-\nmand via the shell and gain full root access intentionally or unintentionally.\nThe configuration I use is quite simple compared to what is possible with sudo.\n(Read the sudoers man pages for details.) However, it’s a good idea to keep your\n/etc/sudoers configuration as simple as possible. If the program you want to give\naccess to others is complex or has too many options, consider denying it com-\npletely. Don’t give out sudo access to users you don’t trust. Also, get in the habit of\nauditing sudo-capable users frequently using the logs.\nAUDITING SUDO USERS\nBy default, sudo logs all attempts to run any command (successfully or unsuccess-\nfully) via the syslog. You can run sudo –V to find which syslog facility sudo uses\nto log information. You can also override the default syslog facility configuration\nin /etc/sudoers. For example, adding the following line in /etc/sudoers forces\nsudo to use the auth facility of syslog.\nDefaults syslog=auth\nTo keep a separate sudo log besides syslog managed log files, you can add a\nline such as the following to /etc/sudoers:\nDefaults log_year, logfile=/var/log/sudo.log\nThis forces sudo to write a log entry to the /var/log/sudo.log file every time\nit’s run.\nMonitoring Users\nThere are some simple tools that you can use every day to keep yourself informed\nabout who is accessing your system. These tools aren’t exactly monitoring tools by\ndesign, but you can certainly use them to query your system about user activity.\nOften I have discovered (as have many other system administrators) unusual activ-\nity with these tools, perhaps even by luck, but why quibble? The tools have these\ncapabilities; an administrator should be aware of them. In this section I introduce\nsome of them.\n" }, { "page_number": 331, "text": "Finding who is on the system\nYou can use the who or w commands to get a list of users who are currently on your\nsystem. Here is some sample output from who:\nswang pts/1 Dec 10 11:02\njasont pts/2 Dec 10 12:01\nzippy pts/3 Dec 10 12:58\nmimi pts/0 Dec 10 8:46\nIf you simply want a count of the users, run who –q. The w command provides\nmore information than who does. Here’s an example output of the w command.\nUSER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT\nswang pts/1 reboot.nitec.co 11:02am 12.00s 0.29s 0.15s pine\njasont pts/2 k2.nitec.co 12:01pm 2.00s 0.12s 0.02s vi .plan\nzippy pts/3 reboot.nitec.co 12:58pm 17:45 0.04s 0.04s -tcsh\nmimi pts/0 gatekeeper.nitec.co 8:46am 0.00s 1.02s 0.02s lynx\nHere user swang appears to read e-mail using pine, user jasont is modifying his\n.plan file, user zippy seems to be running nothing other than the tcsh shell, and\nuser mimi is running the text-based Web browser Lynx.\nWhen you have a lot of users (in the hundreds or more), running w or who can\ngenerate more output than you want to deal with. 
Instead of running the who or w\ncommands in such cases, you can run the following script from Listing 12-7 to\ncheck how many unique and total users are logged in.\nListing 12-7: The who.sh script\n#!/bin/sh\n# Purpose: this simple script uses the common Linux\n# utilities to determine the total and unique number\n# of users logged on the system\n# Version: 1.0\n#\nWHO=/usr/bin/who\nGREP=/bin/grep\nAWK=/bin/awk\nSORT=/bin/sort\nWC=/usr/bin/wc\nSED=/bin/sed\necho -n “Total unique users:”;\n# Filter the output of the who command using awk to\n# extract the first column and then uniquely sort the\n# columns using sort. Pipe the sorted output to wc for\n# line count. Finally remove unnecessary white spaces\n308\nPart III: System Security\n" }, { "page_number": 332, "text": "# from the output of wc using sed\n$WHO | $AWK ‘{print $1}’ | $SORT -u | $WC -l | $SED ‘s/ */ /g’;\n# Use grep to filter the output of the who command to\n# find the line containing user count.\n# Then print out the user count using awk.\n$WHO -q | $GREP users | $AWK ‘BEGIN{FS=”=”;} {printf(“\\nTotal user sessions:\n%d\\n\\n”, $2);}’;\n# Exit\nexit 0;\nYou can run this script from the command line as sh who.sh at any time to\ncheck how many total and unique users are logged in. Also, if you want to run the\ncommand every minute, use the watch –n 60 sh /path/to/who.sh command.\nThis command runs the who.sh script every 60 seconds. Of course, if you want to\nrun it at a different interval, change the number accordingly.\nFinding who was on the system\nYou can run the last command to find users who have already logged out. Last\nuses the /var/log/wtmp file to display a list of users logged in (and out) since that\nfile was created. You specify a username, and it displays information only for the\ngiven user. You can also use the finger username command to see when someone\nlast logged in.\nTo use the finger command on the local users;you don’t need the finger\ndaemon.\nAll the commands (who, w, last, finger) discussed in this section depend on\nsystem files such as /var/log/wtmp and /var/run/utmp files. Make sure that these\nfiles aren’t world-writeable; otherwise a hacker disguised as an ordinary user can\nremove his tracks.\nCreating a User-Access\nSecurity Policy\nSystem and network administrators are often busy beyond belief. Any administra-\ntor that manages ten or more users know that there’s always something new to take\ncare of every day. I often hear that user administration is a thankless job, but it\ndoesn’t have to be. With a little planning and documentation, an administrator can\nmake life easier for herself and everyone else involved. If every administrator\nChapter 12: Shadow Passwords and OpenSSH\n309\n" }, { "page_number": 333, "text": "would craft a tight security policy and help users understand and apply it, user-\naccess-related security incidents would subside dramatically. Follow these guide-\nlines for creating a user security policy.\nN Access to a system is a privilege. This privilege comes with responsibil-\nity; one must take all precautions possible to ensure the access privilege\ncan’t be easily exploited by potential vandals. Simply knowing who may\nbe watching over your shoulder when you enter a password can increase\nuser-access security.\nN Passwords aren’t a personal preference. A user must not consider her\npassword as something that she has a lot of control over when it comes to\nchoosing one.\nN Passwords expire. 
A user must accept that passwords aren't forever.

N Passwords are organizational secrets. A user must never share or display passwords. A user must not store passwords in a handheld PC, which can get lost and fall into the wrong hands. Never give passwords to anyone over the phone.

N Not all passwords are created equal. Just having a password isn't good enough. A good password is hard to guess and often hard to remember. A user must make great efforts to memorize the password.

Creating a User-Termination Security Policy
It is absolutely crucial that your organization create a user-termination security policy to ensure that people who leave the organization can't become potential security liabilities. By enforcing a policy upon user termination, you can make sure your systems remain safe from any ill-conceived action taken by an unhappy employee.

When a user leaves your organization, you have two alternatives for a first response:

N Remove the person's account by using the userdel username command.
N Disable the user account so it can't log in to the system, using the usermod -s /bin/true username command. The command modifies the user account called username in the /etc/passwd file and changes the login shell to /bin/true, which doesn't allow the user to log in interactively.

To display a message such as Sorry, you are no longer allowed to access our systems, you can create a file called /bin/nologin this way:

#!/bin/sh
echo "Sorry, you are no longer allowed to access our systems.";
exit 0;

Set the nologin script's ownership to root with the chown root /bin/nologin command. Make it executable for everyone by using the chmod 755 /bin/nologin command. Then run the usermod -s /bin/nologin username command. When a terminated user tries to log in, the script runs and displays the intended message.

Summary
This chapter examined the risks associated with user access and some responses to those risks, such as using shadow passwords, securing the user-authentication process with an OpenSSH service, restricting the access granted to the root user account, and delegating root tasks to ordinary users in a secure manner.

Chapter 13
Secure Remote Passwords
IN THIS CHAPTER
N Setting up Secure Remote Password (SRP)
N Securing Telnet using SRP

SECURE REMOTE PASSWORD (SRP) is an open-source password-based authentication protocol. SRP-enabled client/server suites don't transmit passwords (encrypted or in clear text) over the network, which entirely removes the possibility of password spoofing. SRP also doesn't use encryption to perform authentication, which makes it faster than the public/private key-based authentication schemes currently available. To learn more about this protocol, visit the official SRP Web site at http://srp.stanford.edu.

Setting Up Secure Remote Password Support
As of this writing there is no RPM package available for SRP. You need to download the source distribution from http://srp.stanford.edu, then compile and install it. In this section I discuss how you can do that.

1. Download the latest SRP source distribution from the preceding Web site. As of this writing the source distribution is called srp-1.7.1.tar.gz.
As\nusual, make sure that you replace the version number (1.7.1) with the\nappropriate version number of the distribution you are about to install.\n2. Once downloaded, su to root and copy the .tar file in the /usr/src/\nredhat/SOURCES directory.\n3. Extract the source distribution in the /usr/src/redhat/SOURCES direc-\ntory using the tar xvzf srp-1.7.1.tar.gz command. This creates a\nsubdirectory called srp-1.7.1. Change your current directory to this new\nsubdirectory.\n313\n" }, { "page_number": 337, "text": "4. Run the configure script with these options:\n--with-openssl\n--with-pam\nI assume that you have extracted and compiled OpenSSL source in the\n/usr/src/redhat/SOURCES/openssl-0.9.6 directory. Run the config-\nure script as shown below:\n./configure --with-openssl=/usr/src/redhat/SOURCES/openssl-0.9.6 \\\n--with-pam\n5. Once the SRP source is configured for OpenSSL and PAM support by the\noptions used in the preceding command, run the make and make install\ncommands to install the software.\nAt this point you have compiled and installed SRP, but you still need the\nExponential Password System (EPS) support for SRP applications. \nEstablishing Exponential\nPassword System (EPS)\nThe SRP source distribution includes the EPS source, which makes installation easy.\nHowever, the default installation procedure didn’t work for me, so I suggest that\nyou follow my instructions below.\n1. su to root.\n2. Change the directory to /usr/src/redhat/SOURCES/srp-\n1.7.1/base/pam_eps.\n3. Install the PAM modules for EPS in the /lib/security directory with the\nfollowing command:\ninstall -m 644 pam_eps_auth.so pam_eps_passwd.so\n/lib/security\n4. Run the /usr/local/bin/tconf command. You can also run it from the\nbase/src subdirectory of the SRP source distribution.\nThe tconf command generates a set of parameters for the EPS password\nfile.\n5. Choose the predefined field option. \nThe tconf utility also creates /etc/tpasswd and /etc/tpasswd.conf\nfiles.\n314\nPart III: System Security\n" }, { "page_number": 338, "text": "Select the predefined field number 6 or above.The number 6 option is 1,024\nbits.If you choose a larger field size,the computation time to verify the para-\nmeters used by EPS increases.\nThe more bits that you require for security, the more verification time costs\nyou.\nAt this point, you have the EPS support installed but not in use. Thanks to the\nPAM technology used by Linux, upgrading your entire (default) password authenti-\ncation to EPS is quite easy. You modify a single PAM configuration file.\nUsing the EPS PAM module\nfor password authentication\nTo use the EPS PAM module for password authentication, do the following:\n1. As root, create a backup copy of your /etc/pam.d/system-auth file.\n(You’ll need this if you run into problems with the EPS.) You can simply\nswitch back to your old PAM authentication by overwriting the modified\nsystem-auth file with the backed-up version.\n2. 
Modify the system-auth file as shown in Listing 13-1.\nListing 13-1: /etc/pam.d/system-auth\n#%PAM-1.0\n# This file is auto-generated.\n# User changes are destroyed the next time authconfig is run.\n# DON’T USE authconfig!\nauth required /lib/security/pam_unix.so likeauth nullok md5 shadow\nauth sufficient /lib/security/pam_eps_auth.so\nauth required /lib/security/pam_deny.so\naccount sufficient /lib/security/pam_unix.so\naccount required /lib/security/pam_deny.so\npassword required /lib/security/pam_cracklib.so retry=3\npassword required /lib/security/pam_eps_passwd.so\nContinued\nChapter 13: Secure Remote Passwords\n315\n" }, { "page_number": 339, "text": "Listing 13-1 (Continued)\npassword sufficient /lib/security/pam_unix.so nullok use_authtok md5\nshadow\npassword required /lib/security/pam_deny.so\nsession required /lib/security/pam_limits.so\nsession required /lib/security/pam_unix.so\n3. Notice the lines in bold. The first bold line indicates that the ESP auth\nmodule for PAM can satisfy authentication requirements. The second bold\nline specifies that the pam_eps_passwd.so PAM module for EPS is used\nfor password management. The placement of these lines (in bold) is very\nimportant. No line with sufficient control fag can come before the\npam_eps_auth.so or pam_eps_passwd.so lines.\nNow you can convert the passwords in /etc/passwd (or in /etc/shadow) to EPS\nformat.\nConverting standard passwords to EPS format\nUser passwords are never stored in /etc/passwd or in /etc/shadow, so there is no\neasy way to convert all of your existing user passwords to the new EPS format.\nThese two files store only encrypted versions of password verification strings gen-\nerated by a one-way hash algorithm used in the crypt() function.\nSo the best way for converting to ESP passwords is by making users change their\npasswords using the passwd command as usual. If your /etc/pam.d/passwd file\nstill uses the default settings, as shown in Listing 13-2, the pam_eps_passwd.so\nmodule used in /etc/pam.d/system-auth configuration writes an EPS version of\nthe password verification string (not the user’s actual password) in the\n/etc/tpasswd file.\nListing 13-2: /etc/pam.d/passwd\n#%PAM-1.0\nauth required /lib/security/pam_stack.so service=system-auth\naccount required /lib/security/pam_stack.so service=system-auth\npassword required /lib/security/pam_stack.so service=system-auth\nOrdinary user passwords may need to be changed by using the root account\nonce before map_eps_passwd.so will write to the /etc/tpasswd file.\nThis bug or configuration problem may be already corrected for you if you\nare using a newer version.\n316\nPart III: System Security\n" }, { "page_number": 340, "text": "Once you have converted user passwords in this manner you can start using the\nSRP version of applications such as Telnet.\nUsing SRP-Enabled Telnet Service\nThe SRP distribution includes SRP-enabled Telnet server and client software. To\ninstall the SRP-enabled Telnet client/server suite, do the following:\n1. su to root and change the directory to the Telnet subdirectory of your\nSRP source distribution, which for my version is\n/usr/src/redhat/SOURCES/srp-1.7.1/telnet.\n2. Run make and make install to the Telnet server (telnetd) software in\n/usr/local/sbin and the Telnet client (telnet) in /usr/local/bin.\n3. Change the directory to /etc/xinetd.conf. Move your current Telnet\nconfiguration file for xinetd to a different directory if you have one.\n4. 
Create a Telnet configuration file called /etc/xinetd.d/srp-telnetd, as\nshown in Listing 13-3.\nListing 13-3: /etc/xinetd.d/srp-telnetd\n# default: on\n# description: The SRP Telnet server serves Telnet connections.\n# It uses SRP for authentication.\nservice telnet\n{\nsocket_type = stream\nwait = no\nuser = root\nserver = /usr/local/sbin/telnetd\nlog_on_success += DURATION USERID\nlog_on_failure += USERID\nnice = 10\ndisable = no\n}\n5. Restart xinetd using the killall -USR1 xinetd command.\n6. Create or modify the /etc/pam.d/telnet file as shown in Listing 13-4.\nChapter 13: Secure Remote Passwords\n317\n" }, { "page_number": 341, "text": "Listing 13-4: /etc/pam.d/telnet\n#%PAM-1.0\nauth required /lib/security/pam_listfile.so item=user \\\nsense=deny file=/etc/telnetusers onerr=succeed\nauth required /lib/security/pam_stack.so service=srp-telnet\nauth required /lib/security/pam_shells.so\naccount required /lib/security/pam_stack.so service=srp-telnet\nsession required /lib/security/pam_stack.so service=srp-telnet\nIf you have modified the /etc/pam.d/system-auth file as shown in\nListing 13-1, you can replace the service=srp-telnet option in the pre-\nceding listing to service=system-auth. This can keep one systemwide\nPAM configuration file, which eases your authentication administration.\nAlso,you can skip step 7.\n7. Create a file called /etc/pam.d/srp-telnet as shown in Listing 13-5.\nListing 13-5: /etc/pam.d/srp-telnet\n#%PAM-1.0\nauth required /lib/security/pam_unix.so likeauth nullok md5 shadow\nauth sufficient /lib/security/ pam_eps_auth.so\nauth required /lib/security/pam_deny.so\naccount sufficient /lib/security/pam_unix.so\naccount required /lib/security/pam_deny.so\npassword required /lib/security/pam_cracklib.so retry=3\npassword required /lib/security/pam_eps_passwd.so\npassword sufficient /lib/security/pam_unix.so nullok use_authtok md5\nshadow\npassword required /lib/security/pam_deny.so\nsession required /lib/security/pam_limits.so\nsession required /lib/security/pam_unix.so\nNow you have an SRP-enabled Telnet server. Try the service by running the\nSRP-enabled Telnet client (found in the /usr/local/bin directory) using the\n/usr/local/bin/telnet localhost command. When prompted for the username\nand password, use an already SRP-converted account. The username you use to\nconnect to the SRP-enabled Telnet server via this client must have an entry in\n/etc/tpasswd, or the client automatically fails over to non-SRP (clear-text pass-\nword) mode. Here’s a sample session:\n318\nPart III: System Security\n" }, { "page_number": 342, "text": "$ telnet localhost 23\nTrying 127.0.0.1...\nConnected to localhost.intevo.com (127.0.0.1).\nEscape character is ‘^]’.\n[ Trying SRP ... ]\nSRP Username (root): kabir\n[ Using 1024-bit modulus for ‘kabir’ ]\nSRP Password:\n[ SRP authentication successful ]\n[ Input is now decrypted with type CAST128_CFB64 ]\n[ Output is now encrypted with type CAST128_CFB64 ]\nLast login: Tue Dec 26 19:30:08 from reboot.intevo.com\nTo connect to your SRP-enabled Telnet server from other Linux workstations,\nyou must install SRP support and the SRP Telnet client software on them. Also,\nthere are many SRP-enabled non-Linux versions of Telnet clients available, which\nmay come in handy if you have a heterogeneous network using multiple operating\nsystems.\nUsing SRP-enabled Telnet clients \nfrom non-Linux platforms\nMany SRP-enabled Telnet clients exist for the other popular operating systems. You\ncan find a list of these at http://srp.stanford.edu. 
One SRP-enabled Telnet\nclient works on any system that supports Java, which covers just about every mod-\nern operating system.\nUsing SRP-Enabled FTP Service\nThe SRP distribution includes an SRP-enabled FTP server and FTP client software.\nTo install the SRP-enabled FTP service do the following:\n1. su to root and change the directory to the FTP subdirectory of your SRP\nsource distribution, which for my version is /usr/src/redhat/SOURCES/\nsrp-1.7.1/ftp.\n2. Run make and make install to the FTP server (ftpd) software in /usr/\nlocal/sbin and the FTP client (ftp) in /usr/local/bin.\n3. Change the directory to /etc/xinetd.conf. Move your current FTP con-\nfiguration file for xinetd to a different directory if you have one.\n4. Create an FTP configuration file called /etc/xinetd.d/srp-ftpd, as\nshown in Listing 13-6.\nChapter 13: Secure Remote Passwords\n319\n" }, { "page_number": 343, "text": "Listing 13-6: /etc/xinetd.d/srp-ftpd\n# default: on\n# description: The SRP FTP server serves FTP connections.\n# It uses SRP for authentication.\nservice ftp\n{\nsocket_type = stream\nwait = no\nuser = root\nserver = /usr/local/sbin/ftpd\nlog_on_success += DURATION USERID\nlog_on_failure += USERID\nnice = 10\ndisable = no\n}\nIf you don’t want to fall back to regular FTP authentication (using a clear-text\npassword) when SRP authentication fails, add server_args = -a line\nafter the socket_type line in the preceding configuration file.\n5. Restart xinetd using the killall -USR1 xinetd command.\n6. Create or modify the /etc/pam.d/ftp file as shown in Listing 13-7.\nListing 13-7: /etc/pam.d/ftp\n#%PAM-1.0\nauth required /lib/security/pam_listfile.so item=user \\\nsense=deny file=/etc/ftpusers onerr=succeed\nauth required /lib/security/pam_stack.so service=srp-ftp\nauth required /lib/security/pam_shells.so\naccount required /lib/security/pam_stack.so service=srp-ftp\nsession required /lib/security/pam_stack.so service=srp-ftp\nIf you have modified the /etc/pam.d/system-auth file as shown in\nListing 13-1 you can replace the service=srp-ftp option in the listing to\nservice=system-auth. This keeps one systemwide PAM configuration\nfile,which eases your authentication administration.Also,you can skip step 7.\n320\nPart III: System Security\n" }, { "page_number": 344, "text": "7. Create a file called /etc/pam.d/srp-ftp as shown in Listing 13-8.\nListing 13-8: /etc/pam.d/srp-ftp\n#%PAM-1.0\nauth required /lib/security/pam_unix.so likeauth nullok md5 shadow\nauth sufficient /lib/security/pam_eps_auth.so\nauth required /lib/security/pam_deny.so\naccount sufficient /lib/security/pam_unix.so\naccount required /lib/security/pam_deny.so\npassword required /lib/security/pam_cracklib.so retry=3\npassword required /lib/security/pam_eps_passwd.so\npassword sufficient /lib/security/pam_unix.so nullok use_authtok md5\nshadow\npassword required /lib/security/pam_deny.so\nsession required /lib/security/pam_limits.so\nsession required /lib/security/pam_unix.so\nNow you have an SRP-enabled FTP server. Try the service by running the SRP-\nenabled FTP client (found in the /usr/local/bin\ndirectory) using the\n/usr/local/bin/ftp localhost command. When prompted for the username\nand password, use an already SRP-converted account. The username you use to\nconnect to the SRP-enabled FTP server via this client must have an entry in\n/etc/tpasswd, or the client automatically fails over to non-SRP (clear-text pass-\nword) mode. 
Here's a sample session:

$ /usr/local/bin/ftp localhost
Connected to localhost.intevo.com.
220 k2.intevo.com FTP server (SRPftp 1.3) ready.
SRP accepted as authentication type.
Name (localhost:kabir): kabir
SRP Password:
SRP authentication succeeded.
Using cipher CAST5_CBC and hash function SHA.
200 Protection level set to Private.
232 user kabir authorized by SRP.
230 User kabir logged in.
Remote system type is UNIX.
Using binary mode to transfer files.

The SRP-enabled FTP service supports the following cryptographic ciphers:
NONE (1)
BLOWFISH_ECB (2)
BLOWFISH_CBC (3)
BLOWFISH_CFB64 (4)
BLOWFISH_OFB64 (5)
CAST5_ECB (6)
CAST5_CBC (7)
CAST5_CFB64 (8)
CAST5_OFB64 (9)
DES_ECB (10)
DES_CBC (11)
DES_CFB64 (12)
DES_OFB64 (13)
DES3_ECB (14)
DES3_CBC (15)
DES3_CFB64 (16)
DES3_OFB64 (17)

Also, the MD5 and SHA hash functions are supported. By default, the CAST5_CBC cipher and the SHA hash function are used. To specify a different cipher, use the -c option. For example, the /usr/local/bin/ftp -c blowfish_cfb64 localhost command uses the BLOWFISH_CFB64 cipher instead of CAST5_CBC. To use the MD5 hash function, use the -h option. The /usr/local/bin/ftp -h md5 localhost command uses the MD5 hash function instead of SHA.

Details of these ciphers and hash functions are beyond the scope of this book. You can learn about them at security-related Web sites; see Appendix C for online resources.

To connect to your SRP-enabled FTP server from other Linux workstations, install SRP support along with the SRP-enabled FTP client on them. There are also SRP-enabled FTP clients for non-Linux systems.

Using SRP-enabled FTP clients from non-Linux platforms
Kermit 95 — available for Windows 95, 98, ME, NT, and 2000, and OS/2 — is SRP enabled and has a built-in FTP client. Visit http://www.columbia.edu/kermit/k95.html for details.

Summary
Transmitting plain-text passwords over a network such as the Internet is very risky. The Secure Remote Password (SRP) protocol provides you with an alternative to sending plain-text passwords over the network. Using SRP, you can secure the authentication aspect of such protocols as Telnet and FTP.

Chapter 14
xinetd
IN THIS CHAPTER
- What is xinetd?
- Compiling and installing xinetd
- Restricting access to common Internet services
- Preventing denial-of-service attacks
- Redirecting services

AS A SECURE REPLACEMENT for the inetd daemon, xinetd offers greater flexibility and control. The xinetd daemon has the same functionality as inetd, but adds access control, port binding, and protection from denial-of-service attacks.

One drawback is its poor support for Remote Procedure Call (RPC)-based services (listed in /etc/rpc). Because most people don't run RPC-based services, this doesn't matter too much. If you need RPC-based services, you can use inetd to run those services while running xinetd to manage your Internet services in a secure, controlled manner. In this chapter I discuss how you can set up xinetd and manage various services with it in a secure manner.

What Is xinetd?
Typically, Internet services on Linux run either in stand-alone or xinetd-run mode. Figure 14-1 shows a diagram of what the stand-alone mode looks like.
As shown, in stand-alone mode a parent or master server is run at all times.
This master server:
- Listens to ports on network interfaces.
- Preforks multiple child servers that wait for requests from the master server.

When the master server receives a request from a client system, it simply passes the request information to one of its ready-to-run child server processes. The child server interacts with the client system and provides the necessary service. A child server stays around for a while to service multiple clients, and a pool of child servers is kept alive so multiple clients can be serviced quickly.

Figure 14-1: Stand-alone mode Internet service diagram

Apache Web Server is typically stand-alone, though it can run as a xinetd-run server.

Figure 14-2 shows a diagram of how a xinetd-run service works.

Figure 14-2: xinetd-run Internet service diagram

There is no master server other than the xinetd server itself. The xinetd server is responsible for listening on all the necessary ports for all the services it manages. Once a connection for a particular service arrives, xinetd forks the appropriate server program, which in turn services the client and exits. If the load is high, xinetd services multiple requests by running multiple servers.

However, because xinetd must fork a server as requests arrive, the penalty is too great for anything that receives (or can receive) heavy traffic. For example, running Apache as a xinetd service is practical only for experimentation and internal purposes. It isn't feasible to run Apache as a xinetd-run service for a high-profile Web site: the overhead of forking and establishing a new process for each request is too much of a load and a waste of resources.

For services where load usually isn't an issue, though, xinetd works well. Running FTP and POP3 service via xinetd, for example, is quite feasible even for large organizations.

Setting Up xinetd
By default, xinetd gets installed on your Red Hat Linux 7.x system. Make sure, though, that you always have the latest version installed. In the following sections I show installation of, and configuration for, the latest version of xinetd.

Getting xinetd
As with all open-source Linux software, you have two choices for sources of xinetd:
- Install a binary RPM distribution of xinetd from the Red Hat CD-ROM or download it from a Red Hat RPM site such as http://rpmfind.net.
- Download the source RPM distribution, then compile and install it yourself.

I prefer the source distribution, so I recommend that you try this approach, too. However, if you must install the binary RPM, download it and run rpm -ivh xinetd-version-architecture.rpm to install it.
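Whichever route you take, it helps to confirm what you're working with first. These are standard rpm commands; the package file name below is illustrative, so substitute the one you downloaded:

# Show the currently installed xinetd version, if any
rpm -q xinetd
# Verify the signature of a downloaded package before installing it
rpm --checksig xinetd-version-architecture.rpm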
In the following section, I show how you can compile and install xinetd.

Compiling and installing xinetd
After downloading the source RPM distribution, follow these steps to compile xinetd.

1. Run rpm -ivh xinetd-version.src.rpm, where xinetd-version.src.rpm is the name of the source RPM distribution file.
This places a tar archive called xinetd-version.tar.gz in the /usr/src/redhat/SOURCES directory.

2. Go to the /usr/src/redhat/SOURCES directory and run the tar xvzf xinetd-version.tar.gz command to extract the source tree.
The tar program extracts the source into a subdirectory called xinetd-version.

3. Go to xinetd-version and run the ./configure script.
To control the maximum load placed on the system by any particular xinetd-managed service, use the --with-loadavg option with the script.

To configure the TCP wrapper (tcpd) for xinetd, run the configure script with the --with-libwrap=/usr/lib/libwrap.a option. This controls access by using the /etc/hosts.allow and /etc/hosts.deny files. Choose this option only if you have invested a great deal of time in creating these two files.

4. Run make and make install if you don't receive an error during step 3.
If you get an error message and can't resolve it, use the binary RPM installation.

5. Create a directory called /etc/xinetd.d.
This directory stores the xinetd configuration files for each service you want to run via xinetd.

6. Create the primary xinetd configuration file called /etc/xinetd.conf as shown in Listing 14-1.
The binary RPM xinetd package comes with this file.

Listing 14-1: The /etc/xinetd.conf file
# Simple configuration file for xinetd
# Some defaults, and include /etc/xinetd.d/
defaults
{
instances = 60
log_type = SYSLOG authpriv
log_on_success = HOST PID
log_on_failure = HOST RECORD
}
includedir /etc/xinetd.d

At startup, xinetd reads this file, which accordingly should be modified for greater security. (See "Strengthening the Defaults in /etc/xinetd.conf," later in this chapter, for details.)

7. Create a script called /etc/rc.d/init.d/xinetd as shown in Listing 14-2.
This script — needed to start xinetd from an appropriate run level — is supplied by Red Hat in the binary distribution only.

Listing 14-2: The /etc/rc.d/init.d/xinetd file
#! /bin/sh
# xinetd        This starts and stops xinetd.
# chkconfig: 345 56 50
# description: xinetd is a powerful replacement
# for inetd. xinetd has access control mechanisms,
# extensive logging capabilities, the ability to
# make services available based on time, and can
# place limits on the number of servers that can
# be started, among other things.
# processname: /usr/sbin/xinetd
# config: /etc/sysconfig/network
# config: /etc/xinetd.conf
# pidfile: /var/run/xinetd.pid
PATH=/sbin:/bin:/usr/bin:/usr/sbin
# Source function library.
. /etc/init.d/functions
# Get config.
test -f /etc/sysconfig/network && . /etc/sysconfig/network
# Check that networking is up.
[ ${NETWORKING} = "yes" ] || exit 0
[ -f /usr/sbin/xinetd ] || exit 1
[ -f /etc/xinetd.conf ] || exit 1
RETVAL=0
start(){
echo -n "Starting xinetd: "
daemon xinetd -reuse -pidfile /var/run/xinetd.pid
RETVAL=$?
echo
touch /var/lock/subsys/xinetd
return $RETVAL
}
stop(){
echo -n "Stopping xinetd: "
killproc xinetd
RETVAL=$?
echo
rm -f /var/lock/subsys/xinetd
return $RETVAL
}
reload(){
echo -n "Reloading configuration: "
killproc xinetd -USR2
RETVAL=$?
echo
return $RETVAL
}
restart(){
stop
start
}
condrestart(){
[ -e /var/lock/subsys/xinetd ] && restart
return 0
}
# See how we were called.
case "$1" in
start)
start
;;
stop)
stop
;;
status)
status xinetd
;;
restart)
restart
;;
reload)
reload
;;
condrestart)
condrestart
;;
*)
echo "Usage: xinetd {start|stop|status|restart|condrestart|reload}"
RETVAL=1
esac
exit $RETVAL

8. Change the directory to /etc/rc.d/rcN.d, where N is your run-level number (between 1 and 5). In most cases, your default run level is 3, so you would change to /etc/rc.d/rc3.d.
If you don't know your run level, run the runlevel command; the number it returns is your current run level.

9. Create a symbolic link called S50xinetd that points to the /etc/rc.d/init.d/xinetd script. Run the ln -s /etc/rc.d/init.d/xinetd S50xinetd command to create this link.

To automatically run xinetd in other run levels you may choose to use, create a similar link in the appropriate run-level directory.

Configuring xinetd for services
After compiling and installing xinetd, configure each service that you want to manage via xinetd. When xinetd is started, the /etc/xinetd.conf file, shown in Listing 14-1, is loaded. This file sets some defaults and instructs xinetd to load additional service configuration files from the /etc/xinetd.d directory. The xinetd daemon parses each file in this directory and loads all the services that are configured properly.

If you have an inetd.conf file that you want to convert to xinetd.conf, run the xinetd/xconv.pl < /etc/inetd.conf > /tmp/xinetd.conf command from the directory where you extracted the xinetd source RPM. In my example, this directory is /usr/src/redhat/SOURCES/xinetd-2.1.8.9pre11/xinetd.

The default values section enclosed within the curly braces {} has the following syntax:

attribute operator value [value ...]

The following are common xinetd service attributes and their options.

- bind IP_address
See "Creating an Access-Discriminative Service" in this chapter.

- cps connections_per_second wait_seconds
See "Limiting the rate of connections" in this chapter.

- flags keyword
This attribute can specify eight flags:
  - REUSE — Sets the SO_REUSEADDR flag on the service socket.
  - IDONLY — Accepts connections only from clients that have an identification (identd) server.
  - NORETRY — Instructs the server not to fork a new service process again if the server fails.
  - NAMEINARGS — This flag specifies that the first value in the server_args attribute is used as the first argument when starting the specified service.
This is most useful when using tcpd: you specify tcpd in the server attribute and the real service (ftpd -l, for example) in the server_args attribute.
  - INTERCEPT — Tells the server to intercept packets to verify that a source's IP address is acceptable. (It is not applicable to all situations.)
  - NODELAY — Sets the TCP_NODELAY flag for the socket.
  - DISABLE — Marks the service as disabled, so xinetd doesn't start it.
  - KEEPALIVE — Sets the SO_KEEPALIVE flag in the socket for TCP-based services.

- id
Identifies the service. By default, the service's name is the same as the id attribute.

- instances number
Specifies the maximum number of servers that can run concurrently.

- log_type
This takes one of two forms:
  - log_type SYSLOG facility
  - log_type FILE path [soft_limit [hard_limit]]
When xinetd starts a service, it writes a log entry in the specified file or syslog facility. See "Limiting log file size" in this chapter.

- log_on_success keyword
Specifies the information that is logged upon successful start of a service. This attribute can take five optional values:
  - PID — The server's PID (if it's an internal xinetd service, the PID is 0).
  - HOST — The client's IP address.
  - USERID — The identity of the remote user.
  - EXIT — The exit status code of the service.
  - DURATION — The session duration of the service.

- log_on_failure keyword
As with log_on_success, xinetd logs an entry when a service can't be started. This attribute can take four values as arguments:
  - HOST — The client's IP address.
  - USERID — The identity of the remote user.
  - ATTEMPT — Records the access attempt.
  - RECORD — Logs everything that xinetd knows about the client.

- max_load number
See "Limiting load" in this chapter.

- nice number
Sets the process priority of the service run by xinetd.

- no_access [IP address] [hostname] [network/netmask]
Defines a list of IP addresses, hostnames, networks, and/or netmasks that are denied access to the service. (For details, see Appendix A.)

- only_from [IP address] [hostname] [network/netmask]
Specifies a list of IP addresses, hostnames, networks, and/or netmasks allowed to access the service (see Appendix A). If you supply this attribute without a value, the service is denied to everyone.

- per_source number | UNLIMITED
See "Limiting the number of servers" in this chapter.

- port number
Specifies the port number for a service. Use this only if your service port isn't defined in /etc/services.

- protocol keyword
Specifies the protocol name, which must exist in /etc/protocols. Normally a service's default protocol is used.

- redirect IP_address port, or redirect hostname port
See "Redirecting and Forwarding Clients" in this chapter.

- server path
Specifies the path to the server executable file.

- server_args [arg1] [arg2] [arg3]...
Specifies the list of arguments that are passed on to the server.

- socket_type keyword
Specifies any of four socket types: stream (TCP), dgram (UDP), raw, and seqpacket.

- type keyword
Specifies the service type. xinetd can manage three different types of services:
  - INTERNAL — Services that are directly managed by xinetd.
  - RPC — xinetd isn't (yet) good at handling RPC services that are defined in /etc/rpc. Use inetd instead for RPC-based services.
  - UNLISTED — Services that aren't listed in /etc/services or in /etc/rpc.

- wait yes | no
If the service you want xinetd to manage is multithreaded, set this attribute to yes; otherwise set it to no.
  - When wait is set to yes, xinetd starts only one server.
  - When wait is set to no, xinetd starts a new server process for each request.

The following table lists the three possible assignment operators.

Assignment Operator   Description
=                     Assigns a value to the attribute
+=                    Adds a value to the list of values assigned to a given attribute
-=                    Removes a value from the list of values for a given attribute
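To see how these attributes and operators fit together, here is a minimal sketch of a per-service file; the service name and server path are hypothetical, and the values are starting points rather than recommendations:

service myservice
{
socket_type = stream
protocol = tcp
wait = no
user = root
server = /usr/local/sbin/myserviced
instances = 5
only_from = 192.168.1.0/24
log_on_success += DURATION USERID
}

Here the += operator extends whatever log_on_success values the defaults section already defines, while the plain = assignments set or override the other attributes for this service only.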
The default attributes in the /etc/xinetd.conf file apply to each managed service. As shown in Listing 14-1, the defaults section:
- Tells xinetd to allow 60 instances of the same service to run. This means that when xinetd is in charge of managing the FTP service, it allows 60 FTP sessions to go on simultaneously.
- Tells xinetd to use syslog (the authpriv facility) to log information.
- Instructs xinetd to log
  - the hostname (HOST) and process ID (PID) upon successful start of a service,
  - the hostname and all available information (RECORD) when a service doesn't start.

As mentioned earlier, each service has its own configuration file (found in the /etc/xinetd.d directory), and that's what you normally use to configure it. For example, a service called myservice would be managed by creating a file called /etc/xinetd.d/myservice, which has lines such as the following:

service myservice
{
attribute1 operator value1, value2, ...
attribute2 operator value1, value2, ...
. . .
attributeN operator value1, value2, ...
}

You can start quickly with only the default configuration found in the /etc/xinetd.conf file. However, a good deal of per-service configuration should be done (discussed in later sections) before your xinetd configuration is complete.

Starting, Reloading, and Stopping xinetd
If you followed the installation instructions in the previous section, xinetd starts automatically when you reboot the system. You can also start it manually without rebooting. To start xinetd, run the /etc/rc.d/init.d/xinetd start command.

Any time you add, modify, or delete /etc/xinetd.conf (or any other file in the /etc/xinetd.d directory), tell xinetd to reload the configuration. To do so, use the /etc/rc.d/init.d/xinetd reload command.

If you prefer the kill command, you can run kill -USR1 PID (where PID is the xinetd process ID) or killall -USR1 xinetd to soft-reconfigure xinetd. A soft reconfiguration using the SIGUSR1 signal makes xinetd reload the configuration files and adjust accordingly. To do a hard reconfiguration of the xinetd process, simply replace USR1 with USR2 (the SIGUSR2 signal). This forces xinetd to reload the configuration and remove currently running services.

To stop xinetd, run the /etc/rc.d/init.d/xinetd stop command.

Strengthening the Defaults in /etc/xinetd.conf
The defaults section, shown in Listing 14-1, isn't ideal for strong security. It doesn't obey the prime directive of a secured access configuration: "Deny everyone; allow only those who should have access." So add an attribute that fixes this insecurity:

no_access = 0.0.0.0/0

The 0.0.0.0/0 IP address range covers the entire IP address space. The no_access attribute set to such a range disables access from all possible IP addresses — that is, everyone. You must then open access on a per-service basis.

Here is how you can fine-tune the default configuration:
- The default configuration allows 60 instances of a service to run if necessary because of load. This number seems high. I recommend scaling it back to 15 or 20; you can change it later as needed. For example, if you find that your server gets more than 20 FTP requests simultaneously, you can change the /etc/xinetd.d/ftp service file to set instances to a number greater than 20.
- The default configuration doesn't restrict how many connections one remote host can make to a service. Set this to 10, using the per_source attribute.
- Disable all the r* services (such as rlogin, rsh, and rexec); they are considered insecure and shouldn't be used. You can disable them in the defaults section by using the disabled attribute.

Now the defaults section looks like this:

defaults
{
instances = 20
log_type = SYSLOG authpriv
log_on_success = HOST PID
log_on_failure = HOST RECORD
# Maximum number of connections allowed from
# a single remote host.
per_source = 10
# Deny access to all possible IP addresses. You MUST
# open access using only_from attribute in each service
# configuration file in /etc/xinetd.d directory.
no_access = 0.0.0.0/0
# Disable services that are not to be used
disabled = rlogin rsh rexec
}

After you create the defaults section as shown here, you can start xinetd. You can then create service-specific configuration files and simply reload your xinetd configuration as needed.

Running an Internet Daemon Using xinetd
An Internet service that runs via xinetd is defined using an /etc/xinetd.d/service file, where the filename is the name of the service. Listing 14-3 shows a simple configuration for an Internet service called myinetservice.

Listing 14-3: /etc/xinetd.d/myinetservice
service myinetservice
{
socket_type = stream
wait = no
user = root
server = /path/to/myinetserviced
server_args = arg1 arg2
}

To set up services such as FTP, Telnet, and finger, all you need is a skeleton configuration like the preceding listing; change the values as needed. For example, Listing 14-4 shows /etc/xinetd.d/ftp, the FTP configuration file.

Listing 14-4: /etc/xinetd.d/ftp
service ftp
{
socket_type = stream
wait = no
user = root
server = /usr/sbin/in.ftpd
server_args = -l -a
}

Here the server attribute points to /usr/sbin/in.ftpd, the server_args attribute is set to -l and -a, and everything else is the same as in the skeleton configuration. You can enhance such a configuration by adding more attributes as needed. For example, say you want to log more than what the defaults section provides for the FTP service, such that a successful login (log_on_success) logs not only the HOST and PID but also the DURATION and USERID.
You can simply use the += operator to add these log options:

log_on_success += DURATION USERID

When reloaded, the xinetd daemon sees this line as

log_on_success = HOST PID DURATION USERID

You are adding values to the list already specified by the log_on_success attribute in the defaults section of /etc/xinetd.conf. Similarly, you can override a default value in your service configuration. Say you don't want to log via syslog and prefer to log to a file in the /var/log directory. You can override the default log_type setting this way:

log_type = FILE /var/log/myinetdservice.log

Also, you can add new attributes as needed. For example, to control the FTP server's priority by using the nice attribute, you can add it into your configuration. The completed example configuration is shown in Listing 14-5.

Listing 14-5: /etc/xinetd.d/ftp
service ftp
{
socket_type = stream
wait = no
user = root
server = /usr/sbin/in.ftpd
server_args = -l -a
log_on_success += DURATION USERID
nice = 10
}

Controlling Access by Name or IP Address
It is common practice to control access to certain services by name (that is, hostname) or IP address. Previously (in the inetd days) this was possible only by using the TCP wrapper program called tcpd, which uses the /etc/hosts.allow and /etc/hosts.deny files to control access. Now xinetd comes with this feature built in.

If you want your Telnet server accessible only within your organization's LAN, use the only_from attribute. For example, if your network address and netmask in CIDR format are 192.168.0.0/24, you can add the following line to the /etc/xinetd.d/telnet configuration file:

# Only allow access from the 192.168.0.0/24 subnet
only_from = 192.168.0.0/24

This makes sure that only the computers in the 192.168.0.0 network can access the Telnet service.

If you want to limit access to one or a few IP addresses instead of a full network, you can list the IP addresses as values for the only_from attribute, as shown in this example:

# Only allow access from two known IP addresses
only_from = 192.168.0.100 172.20.15.1

Here, access to the Telnet service is limited to two IP addresses.

If you want to allow connections from a network such as 192.168.0.0/24 but don't want a subnet 192.168.0.128/27 to access the service, add the following lines to the configuration file:

# Only allow access from the 192.168.0.0/24 subnet
only_from = 192.168.0.0/24
# Don't allow access from the 192.168.0.128/27 subnet
no_access = 192.168.0.128/27

Although only_from makes the service available to all usable IP addresses ranging from 192.168.0.1 to 192.168.0.254, the no_access attribute disables the IP addresses that fall under the 192.168.0.128/27 subnet.

If you want to allow access to the service from the 192.168.0.0/24 network but also want to block three hosts (with IP addresses 192.168.0.100, 192.168.0.101, and 192.168.0.102), this configuration does the job:

# Only allow access from the 192.168.0.0/24 subnet
only_from = 192.168.0.0/24
# Don't allow access from these three hosts
no_access = 192.168.0.100 192.168.0.101 192.168.0.102
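The only_from and no_access attributes also accept names, so you can express the same kind of policy by hostname or domain. A sketch, assuming the pattern syntax in which a leading dot matches every host in a domain (the names here are illustrative):

# Allow any host in the office domain, except one untrusted machine
only_from = .office.example.com
no_access = badhost.office.example.com

Keep in mind that name-based rules depend on DNS lookups; when reverse DNS for a client can't be trusted, address-based rules are the safer choice.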
Controlling Access by Time of Day
Sooner or later, most security administrators have to restrict access to a service for a certain period of time. Typically, the need for such restriction comes from services that must be restarted (or go into maintenance mode) during a 24-hour cycle. For example, if you're running a database server, you may find that performing a backup requires taking all database access offline because of the locks. Luckily, if such a service is managed by xinetd, you can control access by using the access_times attribute — and may not have to deny access while you're creating the backup. For example, you can restrict access to your FTP server to office hours by adding the following configuration:

# Allow access only during office hours
access_times = 08:00-17:00

When a user tries connecting to the service before or after these hours, access is denied.

Reducing Risks of Denial-of-Service Attacks
Denial-of-Service (DoS) attacks are very common these days. A typical DoS attacker diminishes your system resources in such a way that your system denies responses to valid user requests. Although it's hard to make a server foolproof against such attacks, precautionary measures help you fight DoS attacks effectively. In this section, I discuss how xinetd can reduce the risk of DoS attacks for the services it manages.

Limiting the number of servers
To control how many servers are started by xinetd, use the instances attribute. This attribute lets you specify the maximum number of server instances that xinetd can start when multiple requests are received, as shown in this example:

# Only 10 connections at a time
instances = 10

Here, xinetd starts a maximum of ten servers to service multiple requests. If the number of connection requests exceeds ten, the excess requests are refused until at least one server exits.

Limiting log file size
Many attackers know that most services write access log entries. They often send many requests to daemons that write lots of log entries, trying to fill disk space in /var or other partitions. Therefore, setting a maximum log size for services is a good idea.

By default, xinetd writes log entries using the daemon.info facility of syslog (syslogd). You can use the log_type attribute to change the syslog facility this way:

log_type SYSLOG facility

To use the authpriv.info facility of syslog, use:

log_type SYSLOG authpriv.info

Also, xinetd can write logs to a file of your choice. The log_type syntax for writing logs to a file is:

log_type FILE /path/to/logfile [soft_limit [hard_limit]]

For example, to limit the log file /var/log/myservice.log for a service to at most 10,485,760 bytes (10MB), and to receive a warning in syslog when the size approaches 8,388,608 bytes (8MB), use the log_type attribute this way:

log_type FILE /var/log/myservice.log 8388608 10485760

When the log file reaches 8MB, you see an alert entry in syslog; when the log file reaches the 10MB limit, xinetd stops any service that uses the log file.

Limiting load
You can use the max_load attribute to specify the system load at which xinetd stops accepting connections for a service.
This attribute has the following syntax:

max_load number

The number specifies the load at which the server stops accepting connections; the value is based on a one-minute CPU load average, as in this example:

# Stop accepting connections when load average exceeds 2.9
max_load = 2.9

When the system load average goes above 2.9, this service is temporarily disabled until the load average lowers.

To use the max_load attribute, compile xinetd with the --with-loadavg option.

The nice attribute sets the process priority of the server started by xinetd, as shown in this example:

# Be low priority
nice = 15

This ensures that the service started by xinetd has a low priority.

To set a high priority, use a smaller number. The highest priority is -20.

Limiting the rate of connections
The cps attribute controls how many servers for a service xinetd starts per second. The first number specifies the maximum number of connections per second; the second number (in seconds) specifies how long xinetd waits after reaching that limit, as in this example:

# Only 10 connections per second
cps = 10 60

Here xinetd starts a maximum of 10 servers per second and waits 60 seconds if this limit is reached. During the wait period, the service isn't available to any new client, and requests for the service are denied.
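The load average that max_load is compared against is the one-minute figure reported by uptime or /proc/loadavg; the numbers below are illustrative output, not values you should expect to match:

$ cat /proc/loadavg
0.42 0.31 0.22 1/64 1234

The first field is the one-minute load average; once it climbs past your max_load setting, xinetd temporarily refuses new connections for that service.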
Creating an Access-Discriminative Service
Occasionally, a service like HTTP or FTP has to run on a server in a way that discriminates according to where the access request came from. This access discrimination allows tight control over how the service is available to the end user.

For example, if you have a system with two interfaces (eth0 connected to the local LAN and eth1 connected to an Internet router), you can provide FTP service with a different set of restrictions on each interface. You can limit the FTP service on the public (that is, eth1), Internet-bound interface to allow FTP connections only during office hours, when a system administrator is on duty, and let the FTP service run unrestricted when requested by users on the office LAN. Of course, you don't want to let Internet users access your FTP site after office hours, but you want hardworking employees who are working late to access the server via the office LAN at any time.

You can accomplish this by using the bind attribute to bind an IP address to a specific service. Because systems with multiple network interfaces have multiple IP addresses, this attribute can offer different functionality on each interface (that is, IP address) of the same machine.

Listing 14-6 shows the /etc/xinetd.d/ftp-worldwide configuration file used for the public FTP service.

Listing 14-6: /etc/xinetd.d/ftp-worldwide
service ftp
{
id = ftp-worldwide
wait = no
user = root
server = /usr/sbin/in.ftpd
server_args = -l
instances = 10
cps = 5
nice = 10
only_from = 0.0.0.0/0
bind = 169.132.226.215
access_times = 08:00-17:00
}

The preceding configuration does the following:
- The id field sets a name ("ftp-worldwide") for the FTP service that is available to the entire world (the Internet).
- This service is bound to the IP address on the eth1 interface (169.132.226.215). It's open to everyone because the only_from attribute allows any IP address in the entire IP address space (0.0.0.0/0) to access it.
- Access is restricted to the hours 08:00-17:00 using the access_times attribute.
- Only ten instances of the FTP server can run at a time.
- Only five instances of the server can be started per second.
- The service runs with a low process-priority level (10), using the nice attribute.

Listing 14-7 shows the private (that is, office-LAN-only) FTP service configuration file, /etc/xinetd.d/ftp-office.

Listing 14-7: /etc/xinetd.d/ftp-office
service ftp
{
id = ftp-office
socket_type = stream
wait = no
user = root
server = /usr/sbin/in.ftpd
server_args = -l
only_from = 192.168.1.0/24
bind = 192.168.1.215
}

Here the private FTP service is named ftp-office — using the id attribute — and it's bound to the 192.168.1.0 network. Every host on this Class C network can access this FTP server, but no external host (for example, one on the Internet) has access to it.

Redirecting and Forwarding Clients
Using port redirection, you can point clients to a different port on the server or even forward them to a different system. The redirect attribute can redirect or forward client requests to a different system or a different port on the local system. The redirect attribute has the following syntax:

redirect IP_address_or_hostname port

When xinetd receives a connection for a service with the redirect attribute, it spawns a process and connects to the port on the IP address or hostname specified as the value of the redirect attribute. Here's how you can use this attribute.

Say that you want to redirect all Telnet traffic destined for the Telnet server (running on IP address 169.132.226.215) to 169.132.226.232. The machine with the 169.132.226.215 IP address needs the following /etc/xinetd.d/telnet configuration:

service telnet
{
flags = REUSE
socket_type = stream
protocol = tcp
wait = no
user = root
bind = 169.132.226.215
redirect = 169.132.226.232 23
}

Here the redirect attribute redirects Telnet requests for 169.132.226.215 to another host with IP address 169.132.226.232. Any time you run telnet 169.132.226.215 from a machine, the request forces xinetd to launch a process and act as a Telnet proxy between 169.132.226.215 and 169.132.226.232. You don't need a server or server_args attribute here.

The 169.132.226.232 machine doesn't even have to be a Linux system. However, if the destination of the redirect (169.132.226.232) is a Linux system — and you want to run Telnet on a nonstandard port such as 2323 on that machine — you can create a configuration file for xinetd called /etc/xinetd.d/telnet2323. It would look like this:

service telnet2323
{
id = telnet2323
flags = REUSE
socket_type = stream
protocol = tcp
wait = no
user = root
bind = 169.132.226.232
port = 2323
server = /usr/sbin/in.telnetd
}

Here the id field distinguishes the special service, and the port attribute lets xinetd know that you want to run the Telnet daemon on port 2323 on 169.132.226.232.
In such a case, change redirect = 169.132.226.232 23 to redirect = 169.132.226.232 2323 in /etc/xinetd.d/telnet on the machine with the IP address 169.132.226.215.

Figure 14-3 illustrates how you can use this redirection feature to access a private network from the Internet.

Figure 14-3: Using redirection to access a private network

As shown in the figure, the neon.nitec.com system is a Linux gateway between the Internet (on eth1, 169.132.226.215) and a private network, 192.168.1.0/24 (on eth0, 192.168.1.215). The gateway implements the redirect this way:

service telnet
{
flags = REUSE
socket_type = stream
protocol = tcp
wait = no
user = root
bind = 169.132.226.215
redirect = 192.168.1.215 23
}

When a Telnet request such as telnet 169.132.226.215 is received by the xinetd daemon on 169.132.226.215, it launches a process to proxy all data between 169.132.226.215 and 192.168.1.215.

Using TCP Wrapper with xinetd
When xinetd is compiled with TCP wrapper support (using the --with-libwrap configuration option), all services can use the /etc/hosts.allow and /etc/hosts.deny files. For example, say you want to run the finger server via xinetd and control it via the TCP wrapper. Here's what you do:

1. Modify the /etc/xinetd.d/finger file as shown below:

service finger
{
flags = REUSE NAMEINARGS
protocol = tcp
socket_type = stream
wait = no
user = nobody
server = /usr/sbin/tcpd
server_args = /usr/sbin/in.fingerd
}

2. To control access to the finger daemon, modify /etc/hosts.allow and /etc/hosts.deny as needed. For example, to deny everyone access to the finger daemon except the host 192.168.1.123, you can create an entry in /etc/hosts.allow this way:

in.fingerd: 192.168.1.123

3. Modify /etc/hosts.deny this way:

in.fingerd: ALL

This makes xinetd run the TCP wrapper (/usr/sbin/tcpd) with the command-line argument /usr/sbin/in.fingerd (which is the finger daemon).

Running sshd as xinetd
Every time sshd runs, it generates a server key. This is why sshd is typically a stand-alone server (that is, started once during server start-up). However, to use xinetd's access control features for SSH service, you can run it as a xinetd service. Here's how:

1. Create a xinetd service file called /etc/xinetd.d/sshd as shown in the following listing:

service ssh
{
socket_type = stream
wait = no
user = root
server = /usr/local/sbin/sshd
server_args = -i
log_on_success += DURATION USERID
log_on_failure += USERID
nice = 10
}

2. Run ps auxw | grep sshd to check whether sshd is already running. If it's running, stop it by using /etc/rc.d/init.d/sshd stop.

3. Force xinetd to load its configuration, using killall -USR1 xinetd.

You can use an SSH client to access the server as usual.
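To confirm that xinetd — rather than a stand-alone daemon — is now answering SSH connections, watch a connection attempt verbosely (the exact output varies by SSH version):

$ ssh -v localhost

Because sshd runs here in inetd-compatible mode (the -i flag) and generates its server key for each connection, expect connection setup to be noticeably slower than with a stand-alone sshd.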
Using xadmin
The xinetd daemon provides an internal administrative service called xadmin, which provides information about the services xinetd is running. You can set up this service configuration file, /etc/xinetd.d/xadmin, this way:

service xadmin
{
type = INTERNAL UNLISTED
port = 9100
protocol = tcp
socket_type = stream
wait = no
instances = 1
only_from = localhost
cps = 1
}

The configuration tells xinetd to run only one instance of the xadmin service on the nonstandard (that is, not listed in /etc/services) port 9100, using TCP. Because xadmin shows information about the xinetd-run services, it isn't advisable to make this service available to the public. That's why the configuration makes this service available only on localhost (that is, 127.0.0.1); you must log on to the system locally to access this service. Only one connection per second is allowed for this service.

To run this service from localhost, run the telnet localhost 9100 command. Listing 14-8 shows a sample session when connected to this service.

Listing 14-8: Sample xadmin session
Trying 127.0.0.1...
Connected to localhost.intevo.com.
Escape character is '^]'.
> help
xinetd admin help:
show run : shows information about running services
show avail: shows what services are currently available
bye, exit : exits the admin shell
> show run
Running services:
service run retry attempts descriptor
ftp server
pid = 2232
start_time = Thu Dec 14 21:56:47 2000
Connection info:
state = CLOSED
service = ftp
descriptor = 11
flags = 9
remote_address = 172.20.15.100,2120
Alternative services =
log_remote_user = YES
writes_to_log = YES
xadmin server
pid = 0
start_time = Fri Dec 15 19:00:00 2000
Connection info:
state = OPEN
service = xadmin
descriptor = 11
flags = 0x9
remote_address = 127.0.0.1,1534
Alternative services =
log_remote_user = YES
writes_to_log = NO
> show avail
Available services:
service port bound address uid redir addr redir port
xadmin 9100 0.0.0.0 0
ftp 21 0.0.0.0 0
telnet 23 0.0.0.0 0
shell 514 0.0.0.0 0
login 513 0.0.0.0 0
finger 79 0.0.0.0 99
> bye
bye bye
Connection closed by foreign host.

The xadmin commands entered at the > prompt are shown in boldface. The help command lists the available xadmin commands. The show run command shows information about the currently running services that xinetd started — in the example, ftp and xadmin are the only services being run by xinetd. The show avail command shows the configured services.

Summary
The xinetd daemon is a secure replacement for the traditional inetd daemon. It allows each service to have its own configuration file and provides greater flexibility in controlling access to the services it manages.
It offers good support for defending against many denial-of-service attacks as well.

Part IV
Network Service Security
Chapter 15: Web Server Security
Chapter 16: DNS Server Security
Chapter 17: E-Mail Server Security
Chapter 18: FTP Server Security
Chapter 19: Samba and NFS Server Security

Chapter 15
Web Server Security
IN THIS CHAPTER
- Understanding Web risks
- Configuring sensible security for Apache
- Reducing CGI risks
- Reducing SSI risks
- How to log everything
- How to restrict access to sensitive sections of the Web site
- How to use SSL with Apache

APACHE, THE DEFAULT WEB-SERVER program for Red Hat Linux, is the most widely used Web server in the world. Apache developers pay close attention to Web security issues, which keeps Apache in good shape for keeping security holes to a minimum in server code. However, most security issues surrounding Web sites exist because of software misconfiguration or misunderstanding of the underlying server technology. This chapter examines some common Web security risks — and some ways to reduce or eliminate them.

Understanding Web Risks
The very first step in protecting your Web server from vandals is understanding and identifying security risks. Not long ago, Web sites served only static HTML pages, which made them less prone to security risks. The only way a vandal could hack into such Web sites was to break into the server by gaining illegal access. This was typically done by using weak passwords (passwords that are easily guessed or are dictionary words) or by tricking another server program.

These days most Web sites no longer serve static HTML pages; typically they serve dynamic content, personalized for a rich user experience. Many Web sites tie in applications for valuable customer service or perform e-commerce activities — and that's where they also take some (usually inadvertent) risks.

Most Web sites that have been hacked by vandals are not vandalized because of the Web server software; they are hacked because holes in their applications or scripts are exploited.

Most Web-security experts agree that scripts or applications running on a Web server are the biggest risk factors. Because CGI scripts are generally responsible for creating dynamic content, they often cause the most damage. This chapter examines security risks associated with CGI scripts and shows how you can reduce such risks. First — appropriately for the most-used Web server — is a look at how you can configure Apache for enhanced security.

Configuring Sensible Security for Apache
Sensible security configuration for Apache includes creating dedicated user and group accounts, using a security-friendly directory structure, establishing permissions and index files, and disabling risky defaults. The following sections provide a closer look.

Using a dedicated user and group for Apache
Apache can be run as a standalone or an inetd-run service. If you run Apache as an inetd service, don't worry about the User and Group directives. If you run Apache as a standalone server, however, make sure you create a dedicated user and group for Apache. Don't use the nobody user or the nogroup group, especially if your system has already defined these; there are likely other services or other places where your system uses them. Instead, create a new user and group dedicated to Apache.
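Creating the dedicated pair takes two commands. A sketch using the account names this chapter assumes (httpd for both the user and the group; adjust the names to taste):

groupadd httpd
useradd -M -g httpd -s /bin/false httpd

Giving the account no home directory or usable login shell ensures that it exists only to run the server, never to be logged in to.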
For clarity, this chapter refers to the Apache-dedicated user and group accounts as the httpd user and the httpd group; you may want to use different names for these accounts.

When you use a dedicated user and group for Apache, permission-specific administration of your Web content becomes simpler: Just ensure that only the Apache user has read access to your Web content. If you want to create a directory to which some CGI scripts may write data, enable write permissions for only the Apache user.

Using a safe directory structure
Most Apache installations have four main directories:
- ServerRoot stores server configuration (conf subdirectory), binary (bin subdirectory), and other server-specific files.
- DocumentRoot stores Web site content such as HTML, JavaScript, and images.
- ScriptAlias stores CGI scripts.
- CustomLog and ErrorLog store access and error log files. You can specify two different directories for each of these directives, but keeping one log directory for all the log files is usually more manageable in the long run.

I recommend a directory structure in which all four primary directories are independent of each other — meaning no primary directory is a subdirectory of any other.
- ServerRoot should point to a directory that can be accessed only by the root user.
- The DocumentRoot directory needs access permission for:
  - Users who maintain your Web site
  - The Apache user or group (specified using the User and Group directives in the httpd.conf file)
- The ScriptAlias directory should be accessible only to script developers and the Apache user or group.
- The CustomLog or ErrorLog directory should be accessible only by the root user.

Not even the Apache user or group should have access to the log directory. The following example shows such a directory structure:

/
+---home
|   +---httpd   (ServerRoot)
+---www
    +---htdocs  (DocumentRoot)
    +---cgi-bin (ScriptAlias)
    +---logs    (CustomLog and ErrorLog)

This directory structure is quite safe in many ways. To understand why, first look at the following Apache configuration in httpd.conf:

ServerRoot /home/httpd
DocumentRoot /www/htdocs
ScriptAlias /cgi-bin/ "/www/cgi-bin/"
CustomLog /www/logs/access.log common
ErrorLog /www/logs/error.log

Because all these major directories are independent (not one is a subdirectory of another), they are safe. A permissions mistake in one directory doesn't affect the others.

Using appropriate file and directory permissions
ServerRoot should be accessible only by the root user, because no one but root should configure or run Apache. DocumentRoot should be accessible to users who manage the contents of your Web site and to the Apache user (specified using the User directive) or the Apache group (specified using the Group directive).

For example, if you want a user called htmlguru to publish content on your Web site and you run Apache as the httpd user, here's how you give both Apache and the named user access to the DocumentRoot directory:

1. Create a new group called webteam with this command:
groupadd webteam

2. Add htmlguru to the webteam group with this command:
usermod -G webteam htmlguru

3. Change the ownership of the DocumentRoot directory (and all the subdirectories below it) with this command:
chown -R httpd.webteam /www/htdocs
This command sets the directory ownership to Apache (that is, the httpd user) and sets the group ownership to webteam, which includes the htmlguru user. This means both the Apache and htmlguru accounts can access the document tree.

4. Change the permissions of the DocumentRoot directory (and all the subdirectories below it) this way:
chmod -R 2570 /www/htdocs
This command makes sure that the files and subdirectories under the DocumentRoot are readable and executable by the Apache user and that the webteam group can read, write, and execute everything. It also ensures that whenever a new file or directory is created in the document tree, the webteam group has access to it.
One great advantage of this method is that adding a new user to the webteam is as simple as running the following command (substitute the actual account name for username):
usermod -G webteam username

5. To remove an existing user from the webteam group, run:
usermod -G group1,group2,... username
In this command, group1, group2, and so on are the groups (excluding the webteam group) that this user currently belongs to.

You can find which group(s) a user belongs to by running the groups username command.

ScriptAlias should be accessible only to the CGI developers and the Apache user. I recommend that you create a new group called webdev for the developer(s). Although the developer group (webdev) needs read, write, and execute access for the directory, the Apache user requires only read and execute access. Don't allow the Apache user to write files in this directory. For example, say you have the following ScriptAlias in httpd.conf:

ScriptAlias /cgi-bin/ "/www/cgi-bin/"

If httpd is your Apache user and webdev is your developer group, set the permissions for /www/cgi-bin like this:

chown -R httpd.webdev /www/cgi-bin
chmod -R 2570 /www/cgi-bin

Alternatively, if you want only one user (say, cgiguru) to develop CGI scripts, you can set the file and directory permissions this way:

chown -R cgiguru.httpd /www/cgi-bin
chmod -R 750 /www/cgi-bin

Here the user cgiguru owns the directory, and the group (specified by the Group directive) used for the Apache server is the group owner of the directory and its files.

The log directory used in the CustomLog and ErrorLog directives should be writable only by the root user. The recommended permission setting for such a directory (say, /www/logs) is:

chown -R root.root /www/logs
chmod -R 700 /www/logs

Don't allow anyone (including the Apache user or group) to read, write, or execute files in the log directory specified in the CustomLog and ErrorLog directives.

Whenever implementing an access policy for a Web directory, remember:
- Take a conservative approach to allowing access to new directories that are accessible via the Web.
- Don't allow Web visitors to view any directory listings. You can hide your directory listings using the methods discussed below.

Using a directory index file
Whenever a user requests access to a directory via the Web, Apache does the following:

1. Apache checks whether the directory is accessible. If it is accessible, Apache continues; if it is not, Apache displays an error message.
2. If the directory is accessible, Apache looks for a directory index file, specified using the DirectoryIndex directive. By default, this file is index.html.
  - If it can read this file in the requested directory, the contents of the file are displayed.
  - If such a file doesn't exist, Apache checks whether it can create a dynamic listing for the directory. If that action is allowed, Apache creates a dynamic listing and displays the contents of the directory to the user.

Any directory listing dynamically generated by Apache provides potential bad guys with clues about your directory structure; you shouldn't allow such listings. The simplest way to avoid creating a dynamic directory listing is to specify the filenames of your directory listings in the DirectoryIndex directive. For example, Apache first looks for index.html in the requested directory of the URL; then it looks for index.htm if index.html is missing — provided you set DirectoryIndex with this command:

DirectoryIndex index.html index.htm

One common reason that many Web sites have an exposed directory or two is that someone creates a new directory and forgets to create the index file — or uploads an index file in the wrong case (INDEX.HTML or INDEX.HTM, for example). If this happens frequently, a CGI script can automatically redirect users to your home page or perhaps to an internal search-engine interface. Simply modify the DirectoryIndex directive so it looks like this:

DirectoryIndex index.html index.htm /cgi-bin/index.pl

Now add a CGI script such as the one shown in Listing 15-1 to the ScriptAlias-specified directory.

Listing 15-1: index.pl
#!/usr/bin/perl
# Purpose: this script is used to redirect
# users who enter a URL that points to a
# directory without an index.html page.
#
# The CGI module supplies the header and redirect functions.
use CGI qw(:standard);
# Set the automatic redirect URL
my $AUTO_REDIRECT_URL = '/';
# Get the current URL path
my $curDir = $ENV{REQUEST_URI};
# If the current URL path isn't the home page (/), then
# redirect the user to the home page
if ($curDir ne '/'){
    print redirect($AUTO_REDIRECT_URL);
# If the home page is also missing the index page,
# we can't redirect back to the home page (to avoid
# recursive redirection), so display an error message.
} else {
    print header;
    print "HOME PAGE NOT FOUND!";
}
exit 0;

This script runs if Apache doesn't find the directory index files (index.html or index.htm). The script simply redirects a user whose URL points to a directory with no index file to the home page of the Web site.

Change /cgi-bin/ in the DirectoryIndex path if you use another alias name.

If you don't want to display any directory listings at all, you can simply disable directory listings by setting the following configuration:

<Directory />
    Options -Indexes
</Directory>

The Options directive tells Apache to disable all directory-index processing.

You may also want to tell Apache not to allow symbolic links; they can expose parts of the disk that you don't want to make public. To do so, use the minus sign when you set the Options directive, so it includes -FollowSymLinks.

Disabling default access
A good security model dictates that no access exists by default; get into the habit of permitting no access at first. Permit specific access only to specific directories.
To implement no-default access, use the following configuration segment in httpd.conf:

<Directory />
    Order deny,allow
    Deny from all
</Directory>

This segment disables all access first. For access to a particular directory, use the <Directory> container again to open that directory. For example, if you want to permit access to /www/htdocs, add the following configuration:

<Directory /www/htdocs>
    Order deny,allow
    Allow from all
</Directory>

This method — opening only what you need — is highly recommended as a preventive security measure.

Don't allow users to change any directorywide configuration options using a per-directory configuration file (.htaccess) in directories that are open for access.

Disabling user overrides
To disable users' override capability for configuration settings that use the per-directory configuration file (.htaccess) in any directory, do the following:

<Directory />
    AllowOverride None
</Directory>

This disallows user overrides and speeds up processing, because the server no longer looks for a per-directory access control file (.htaccess) for each request.

Using Paranoid Configuration
Want to go a few steps further into the land of paranoia in search of security? Here's what I consider a "paranoid" configuration for Apache.
- No CGI script support. CGI scripts are typically the cause of most Web security incidents.
- No SSI support. SSI pages are often problematic, since some SSI directives that can be incorporated in a page allow running CGI programs.
- No standard World Wide Web URLs. Allowing Web sites to use the www.domain.com/~username URL scheme for individual users introduces many security issues, such as:
  - Users may not take appropriate cautions to reduce the risk of exposing filesystem information to the rest of the world.
  - Users can make mistakes that make nonpublic disk areas of the server publicly accessible.
- No status information via the Web. Apache provides a status module that offers valuable status information about the server via the Web. This information can give clues to vandals. Not installing the module in the first place is the paranoid way of making sure vandals can't access such information.

The preceding paranoid configuration can be achieved using the following configuration command:

./configure --prefix=/home/apache \
--disable-module=include \
--disable-module=cgi \
--disable-module=userdir \
--disable-module=status

Once you have run the preceding configuration command from the src directory of the Apache source distribution, you can make and install Apache (in /home/apache) using this paranoid configuration.

Many "paranoid" administrators run Apache on nonstandard ports such as 8080 or 9000. To run Apache on such ports, change the Port directive in httpd.conf. Vandals typically use port-scanner software to detect HTTP ports. However, using nonstandard ports also makes legitimate users work harder to reach the benefits of your site, because they must know and type the port number (www.domain.com:port) at the end of the URL used to enter your Web site.

Reducing CGI Risks
CGI isn't inherently insecure, but poorly written CGI scripts are a major source of Web security holes. The simplicity of the CGI specification makes it easy for many inexperienced programmers to write CGI scripts.
Reducing CGI Risks
CGI isn't inherently insecure, but poorly written CGI scripts are a major source of Web security holes. The simplicity of the CGI specification makes it easy for many inexperienced programmers to write CGI scripts. These inexperienced programmers, unaware of the security aspects of internetworking, may create applications or scripts that work but may also create unintentional back doors and holes on the system.
I consider CGI applications and CGI scripts to be interchangeable terms.
Information leaks
Vandals can make many CGI scripts leak information about users or the resources available on a Web server. Such a leak helps vandals break into a system. The more information a vandal knows about a system, the better informed the break-in attempt, as in the following example:
http://unsafe-site.com/cgi-bin/showpage.cgi?pg=/doc/article1.html
Say this URL displays /doc/article1.html using the showpage.cgi script. A vandal may try something like
http://unsafe-site.com/cgi-bin/showpage.cgi?pg=/etc/passwd
This displays the user password file for the entire system if the showpage.cgi author does not protect the script from such leaks.
Consumption of system resources
A poorly written CGI script can also consume system resources such that the server becomes virtually unresponsive, as shown in this example:
http://unsafe-site.com/cgi-bin/showlist.pl?start=1&stop=15
Say that this URL allows a site visitor to view a list of classified advertisements in a Web site. The start=1 and stop=15 parameters control the number of records displayed. If the showlist.pl script relies only on the supplied start and stop values, then a vandal can edit the URL and supply a larger number for the stop parameter to make showlist.pl display a larger list than usual. The vandal's modification can overload the Web server with requests that take longer to process, making real users wait (and, in the case of e-commerce, possibly move on to a competitor's site).
Spoofing of system commands via CGI scripts
Vandals can trick an HTML form-based mailer script into running a system command or giving out confidential system information. For example, say you have a Web form that visitors use to sign up for your services or provide you with feedback. Most of these Web forms use CGI scripts to process the visitors' requests and send thank-you notes via e-mail. The script may perform a process like the following to send the e-mail:
system("/bin/mail -s $subject $emailAddress < $thankYouMsg");
In this case, the system call runs the /bin/mail program, supplies it the value of variable $subject as the subject header and the value of variable $emailAddress as the e-mail address of the user, and redirects the contents of the file named by the $thankYouMsg variable. This works, and no one should normally know that your application uses such a system call. However, a vandal interested in breaking into your Web site may examine everything she has access to, and try entering irregular values for your Web form. For example, if a vandal enters vandal@emailaddr < /etc/passwd; as the e-mail address, it fools the script into sending the /etc/passwd file to the vandal-specified e-mail address.
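One defensive rewrite of such a mailer is sketched below. This is my illustration rather than a listing from this book; the /usr/lib/sendmail path and the email field name are assumptions you should adapt. The script validates the address and then talks to the mailer through a pipe opened in list form, so the user's input never reaches a shell:
#!/usr/bin/perl -T
# Purpose: a safer form-mailer sketch that validates user
# input and never hands it to a shell.
use strict;
use CGI qw(:standard);
# Taint mode (-T) insists on a trusted PATH before any
# external program is run.
$ENV{PATH} = '/bin:/usr/bin';
delete @ENV{'IFS', 'CDPATH', 'ENV', 'BASH_ENV'};
my $emailAddress = param('email') || '';
# Accept only a plain user@host address; anything containing
# shell metacharacters such as ; < > | or spaces is rejected.
unless ($emailAddress =~ /^[\w.-]+\@[\w.-]+$/) {
print header('text/plain'), "Invalid e-mail address.\n";
exit 0;
}
# Fork and exec sendmail directly (list form): no shell, no
# redirection, so vandal@emailaddr < /etc/passwd; is useless here.
open(MAIL, '|-') || exec '/usr/lib/sendmail', '-t';
print MAIL "To: $emailAddress\n";
print MAIL "Subject: Thank you\n\n";
print MAIL "Thanks for signing up!\n";
close(MAIL);
print header('text/plain'), "Thank-you note sent.\n";
exit 0;
The list-form pipe used here is explained in detail later in this section.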
If you use the system() function in your CGI script, use the -T option in your #!/path/to/perl line to enable Perl's taint-checking mode, and also set the PATH environment variable with $ENV{PATH} = '/path/to/commands/you/call/via/system' to increase security.
Keeping user input from making system calls unsafe
Most security holes created by CGI scripts are caused by inappropriate user input. Use of certain system calls in CGI scripts is quite unsafe. For example, in Perl (a widely used CGI programming language) such a call could be made using the system(), exec(), piped open(), and eval() functions. Similarly, in C the popen() and system() functions are potential security hazards. All these functions/commands typically invoke a subshell (such as /bin/sh) to process the user command.
Even shell scripts that use system() and exec() calls can open a port of entry for vandals. Backtick quotes (features available in shell interpreters and Perl that capture program output as text strings) are also dangerous.
To illustrate the importance of careful use of system calls, take a look at this innocent-looking Perl code segment:
#!/usr/bin/perl
# Purpose: to demonstrate security risks in
# a poorly written CGI script.
# Get the domain name from the query string
# environment variable.
my $domain = $ENV{QUERY_STRING};
#
# Print the appropriate content type.
# Since whois output is in plain text
# we choose to use text/plain as the content-type here.
print "Content-type: text/plain\n\n";
# Here is the bad system call
system("/usr/bin/whois $domain");
# Here is another bad system call using backticks.
# my $output = `/usr/bin/whois $domain`;
# print $output;
exit 0;
This little Perl script is meant to be a Web-based whois gateway. If this script is called whois.pl, and it's kept in the cgi-bin directory of a Web site called unsafe-site.com, a user can call this script this way:
http://unsafe-site.com/cgi-bin/whois.pl?domain=anydomain.com
The script takes anydomain.com as the $domain variable via the QUERY_STRING variable and launches the /usr/bin/whois program with the $domain value as the argument. This returns the data from the whois database that InterNIC maintains. This is all very innocent, but the script is a disaster waiting to happen. Consider the following request:
http://unsafe-site.com/cgi-bin/whois.pl?domain=nitec.com;ps
This does a whois lookup on a domain called nitec.com and then provides the output of the Unix ps utility that shows process status. This reveals information about the system that shouldn't be available to the requesting party. Using this technique, anyone can find out a great deal about your system. For example, replacing the ps command with df (a common Unix utility that prints a summary of disk space) enables anyone to determine what partitions you have and how full they are. I leave to your imagination the real dangers this security hole could pose.
Don't trust any input. Don't make system calls an easy target for abuse.
Two overall approaches are possible if you want to make sure your user input is safe:
N Define a list of acceptable characters. Replace or remove any character that isn't acceptable. The list of valid input values is typically a predictable, well-defined set of manageable size. This approach is less likely to let unacceptable characters through; the programmer must ensure that only acceptable characters are identified. Building on this philosophy, the Perl program presented earlier could be sanitized to contain only those characters allowed, for example:
#!/usr/bin/perl -w
# Purpose: This is a better version of the previous
# whois.pl script.
# Assign a variable the acceptable character
# set for domain names.
my $DOMAIN_CHAR_SET = '-a-zA-Z0-9_.';
# Get the domain name from the query string
# environment variable.
my $domain = $ENV{QUERY_STRING};
# Now remove any character that doesn't
# belong to the acceptable character set.
$domain =~ s/[^$DOMAIN_CHAR_SET]//g;
# Print the appropriate content type.
# Since whois output is in plain text we
# choose to use text/plain as the content-type here.
print "Content-type: text/plain\n\n";
# Here is the system call
system("/usr/bin/whois $domain");
# Here is another system call using backticks.
# my $output = `/usr/bin/whois $domain`;
# print $output;
exit 0;
The $DOMAIN_CHAR_SET variable holds the acceptable character set, and the user input variable $domain is searched for anything that doesn't fall in the set. Any unacceptable character is removed.
N Scan the input for illegal characters and replace or remove them. For example, for the preceding whois.pl script, you can add the following line:
$domain =~ s/[\/ ;\[\]\<\>&\t]//g;
This is an inadvisable approach, however. The programmer must know all possible combinations of characters that could cause trouble. If the user creates input not predicted by the programmer, the program may be used in a manner not intended by the programmer.
The best way to handle user input is by establishing rules to govern it, clarifying
N What you expect
N How you can determine if what you have received is acceptable
If (for example) you are expecting an e-mail address as input (rather than just scanning it blindly for shell metacharacters), use a regular expression such as the following to detect the validity of the input as a possible e-mail address:
$email = param('email-addr');
if ($email =~ /^[\w-\.]+\@[\w-\.]+$/) {
print "Possibly valid address.";
}
else {
print "Invalid email address.";
}
Just sanitizing user input isn't enough. Be careful about how you invoke external programs; there are many ways you can invoke external programs in Perl. Some of these methods include:
N Backtick. You can capture the output of an external program:
$list = `/bin/ls -l /etc`;
This command captures the /etc directory listing.
N Pipe. A typical pipe looks like this:
open (FP, "| /usr/bin/sort");
N Invoking an external program. You have a couple of options with external programs:
I system(), which waits for the program to return:
system "/usr/bin/lpr data.dat";
I exec(), which doesn't wait for the program to return:
exec "/usr/bin/sort < data.dat";
All these constructions can be risky if they involve user input that may contain shell metacharacters.
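Perl's taint mode, mentioned in the Tip earlier, enforces this input discipline mechanically: tainted data can't reach system() or exec() until you extract it from a successful pattern match. Here's a minimal taint-mode version of the whois gateway; it's my sketch, not a listing from this book, and it uses the shell-free list-form call explained next:
#!/usr/bin/perl -T
# Purpose: the whois gateway again, under taint checks.
use strict;
# Taint mode refuses to launch external programs until the
# script itself sets a trusted PATH.
$ENV{PATH} = '/bin:/usr/bin';
delete @ENV{'IFS', 'CDPATH', 'ENV', 'BASH_ENV'};
my $input = $ENV{QUERY_STRING} || '';
print "Content-type: text/plain\n\n";
# Capturing from a regex match is the only way Perl will
# treat the data as untainted.
if ($input =~ /^([-\w.]+)$/) {
my $domain = $1;                   # $1 is untainted
system '/usr/bin/whois', $domain;  # list form: no subshell
} else {
print "Invalid domain name.\n";
}
exit 0;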
For system() and exec(), there's a somewhat obscure syntactical feature that calls external programs directly rather than through a shell. If you pass the arguments to the external program not in one long string but as separate elements in a list, Perl doesn't go through the shell, and shell metacharacters have no unwanted side effects, as follows:
system "/usr/bin/sort", "data.dat";
You can use this feature to open a pipe without using a shell. By calling open with the character sequence -|, you fork a copy of Perl and open a pipe to the copy. Then the child copy immediately forks another program, using the first argument of the exec function call.
To read from a pipe without opening a shell, you can use the -| character sequence:
open(GREP, "-|") || exec "/usr/bin/grep", $userpattern, $filename;
while (<GREP>) {
print "match: $_";
}
close GREP;
These forms of open() are more secure than the piped open()s. Use these whenever applicable.
Many other obscure features in Perl can call an external program and lie to it about its name. This is useful for calling programs that behave differently depending on the name by which they were invoked. The syntax is
system $real_name "fake_name", "argument1", "argument2";
Vandals sometimes alter the PATH environment variable so it points to the program they want your script to execute, rather than the program you're expecting. Invoke programs using full pathnames rather than relying on the PATH environment variable. That is, instead of the following fragment of Perl code
system("cat /tmp/shopping.cart.txt");
use this:
system "/bin/cat", "/tmp/shopping.cart.txt";
If you must rely on the path, set it yourself at the beginning of your CGI script, like this:
$ENV{PATH} = '/bin:/usr/bin:/usr/local/bin';
Remember these guidelines:
N Include the previous line toward the top of your script whenever you use taint checks. Even if you don't rely on the path when you invoke an external program, there's a chance that the invoked program does.
N Adjust the line as necessary for the list of directories you want searched.
N It's not a good idea to put the current directory into the path.
User modification of hidden data in HTML pages
HTTP is a stateless protocol. Many Web developers keep state information or other important data in cookies or in hidden tags. Because users can turn off cookies and creating unique temporary files per user is cumbersome, hidden tags are frequently used. A hidden tag looks like the following:
<input type="hidden" name="fieldname" value="value">
For example:
<input type="hidden" name="state" value="CA">
Here the hidden tag stores state=CA, which can be retrieved by the same application in a subsequent call. Hidden tags are common in multiscreen Web applications. Because users can manually change hidden tags, they shouldn't be trusted at all.
A developer can use two ways of protecting against altered data:\n366\nPart IV: Network Service Security\n" }, { "page_number": 390, "text": "N Verify the hidden data before each use.\nN Use a security scheme to ensure that data hasn’t been altered by the user.\nIn the following example CGI script, shown in Listing 15-2, I demonstrate the\nMD5 message digest algorithm to protect hidden data.\nThe details of the MD5 algorithm are defined in RFC 1321.\nListing 15-2: hidden-md5.eg\n#!/usr/bin/perl -w\n# Purpose: this script demonstrates the use of\n# MD5 message digest in a multiscreen\n# Web application.\n# CVS: $Id$\n######################################################\nuse strict;\nuse CGI qw(:standard);\nmy $query = new CGI;\n# Call the handler subroutine to process user data\n&handler;\n# Terminate\nexit 0;\nsub handler{\n#\n# Purpose: determine which screen to display\n# and call the appropriate subroutine to\n# display it.\n#\n# Get user-entered name (if any) and email address\n# (if any) and initialize two variables using given\n# name and e-mail values. Note, first time we will\n# not have values for these variables.\nmy $name = param(‘name’);\nmy $email = param(‘email’);\n# Print the appropriate Content-Type header and\n# also print HTML page tags\nprint header,\nstart_html(-title => ‘Multiscreen Web Application Demo’);\n# If we don’t have value for the $name variable,\nContinued\nChapter 15: Web Server Security\n367\n" }, { "page_number": 391, "text": "Listing 15-2 (Continued)\n# we have not yet displayed screen one so show it.\nif ($name eq ‘’){\n&screen1;\n# if we have value for the $name variable but the\n# $email variable is empty then we need to show\n# screen 2.\n} elsif($email eq ‘’) {\n&screen2($name);\n# We have value for both $name and $email so\n# show screen 3.\n} else {\n&screen3($name, $email);\n}\n# Print closing HTML tag for the page\nprint end_html;\n}\nsub screen1{\n#\n# Purpose: print an HTML form that asks the\n# user to enter her name.\n#\nprint h2(“Screen 1”),\nhr({-size=>0,-color=>’black’}),\nstart_form,\n‘Enter name: ‘,\ntextfield(-name => ‘name’, -size=>30),\nsubmit(-value => ‘ Next ‘),\nend_form;\n}\nsub screen2{\n#\n# Purpose: print an HTML form that asks the\n# user to enter her email address. It also\n# stores the name entered in the previous screen.\n#\n# Get the name\nmy $name = shift;\n# Create an MD5 message disgest for the name\nmy $digest = &create_message_digest($name);\n# Insert the digest as a new CGI parameter so\n# that we can store it using CGI.pm’s hidden()\n# subroutine.\nparam(‘digest’, $digest);\n# Now print the second screen and insert\n# the $name and the $digest values as hidden data.\n368\nPart IV: Network Service Security\n" }, { "page_number": 392, "text": "print h2(“Screen 2”),\nhr({-size=>0,-color=>’black’}),\nstart_form,\n‘Enter email: ‘,\ntextfield(-name => ‘email’, -size=>30),\nhidden(‘name’),\nhidden(‘digest’),\nsubmit(-value => ‘ Next ‘),\nend_form;\n}\nsub screen3{\n#\n# Purpose: print a message based on the data gathered\n# in screen 1 and 2. However, print the message\n# only if the entered data has not been altered.\n#\n# Get name and email address\nmy ($name, $email) = @_;\n# Get the digest of the $name value\nmy $oldDigest = param(‘digest’);\n# Create a new digest of the value of the $name variable\nmy $newDigest = &create_message_digest($name);\n# If both digests are not same then (name) data has been altered\n# in screen 2. 
Display an alert message and stop processing
# in such a case.
if ($oldDigest ne $newDigest){
return (0, alert('Data altered. Aborted!'));
}
# Since data is good, process as usual.
print h2("Screen 3"),
hr({-size=>0,-color=>'black'}),
p('Your name is ' . b($name) .
' and your email address is ' . b($email) . '.'),
a({-href=>"$ENV{SCRIPT_NAME}"},'Restart');
}
sub create_message_digest{
#
# Purpose: create a message digest for the
# given data. To make the digest hard
# to reproduce by a vandal, this subroutine
# uses a secret key.
#
my $data = shift;
my $secret = 'ID10t'; # Change this key if you like.
# We need the following line to tell Perl that
# we want to use the Digest::MD5 module.
use Digest::MD5;
# Create a new MD5 object
my $ctx = Digest::MD5->new;
# Add data
$ctx->add($data);
# Add secret key
$ctx->add($secret);
# Create a Base64 digest
my $digest = $ctx->b64digest;
# Return the digest
return $digest;
}
sub alert{
#
# Purpose: display an alert dialog box
# using JavaScript
#
# Get the message that we need to display
my $msg = shift;
# Create JavaScript that uses the alert()
# dialog box function to display a message
# and then return the browser to the previous screen
print <<JAVASCRIPT;
<script language="JavaScript">
alert("$msg");
history.back();
</script>
JAVASCRIPT
}
This is a simple multiscreen CGI script that asks the user for a name in the first screen and an e-mail address in the following screen, and finally prints out a message. When the user moves from one screen to another, the data from the previous screen is carried to the next screen through hidden tags. Here's how this script works.
The first screen asks the user for her name. Once the user enters her name, the following screen asks for the user's e-mail address. The HTML source of this screen is shown in Listing 15-3.
Listing 15-3: HTML source for screen 2 of hidden-md5.eg
<html><head>
<title>Multiscreen Web Application Demo</title>
</head>
<body>

<h2>Screen 2</h2>
<hr size="0" color="black">
<form method="post" action="/cgi-bin/hidden-md5.eg"
enctype="application/x-www-form-urlencoded">
Enter email: <input type="text" name="email" size="30">
<input type="hidden" name="name" value="Cynthia">
<input type="hidden" name="digest" value="IzrSJlLrsWlYHNfshrKw/A">
<input type="submit" name=".submit" value=" Next ">
</form>
</body>
</html>
Notice that the hidden data is stored using the following lines:
<input type="hidden" name="name" value="Cynthia">
<input type="hidden" name="digest" value="IzrSJlLrsWlYHNfshrKw/A">
The first hidden data tag line stores name=Cynthia and the second one stores digest=IzrSJlLrsWlYHNfshrKw/A. The second piece of data is the message digest generated for the name entered in screen 1. When the user enters her e-mail address in the second screen and continues, the final screen is displayed.
However, before the final screen is produced, a message digest is computed for the name field entered in screen 1. This digest is compared against the digest created earlier to verify that the value entered for the name field in screen 1 hasn't been altered in screen 2. Because the MD5 algorithm creates the same message digest for a given data set, any differences between the new and old digests raise a red flag, and the script displays an alert message and refuses to complete processing. Thus, if a vandal decides to alter the data stored in screen 2 (shown in Listing 15-3) and submits the data for final processing, the digest mismatch allows the script to detect the alteration and take appropriate action. In your real-world CGI scripts (written in Perl) you can use the create_message_digest() subroutine to create a message digest for anything.
You can download and install the latest version of Digest::MD5 from CPAN by using the perl -MCPAN -e shell command, followed by the install Digest::MD5 command at the CPAN shell prompt.
Wrapping CGI Scripts
The best way to reduce CGI-related risks is to not run any CGI scripts at all, but in these days of dynamic Web content, that's unrealistic. Perhaps you can centralize all CGI scripts in one location and closely monitor their development to ensure that they are well written.
In many cases, especially on ISP systems, all users with Web sites want CGI access. In this situation, it may be a good idea to run CGI scripts under the UID of the user who owns the CGI script. By default, CGI scripts that Apache runs use the Apache UID. If you run these applications using the owner's UID, all possible damage is limited to what the UID is permitted to access. This way, a bad CGI script run with a UID other than the Apache server UID can damage only the user's files. The user responsible for the CGI script will now be more careful, because the possible damage affects his or her content solely. In one shot, you get increased user responsibility and awareness and (simultaneously) a limit on the area that could suffer potential damage.
To run a CGI script using a UID other than that of the Apache server, you need a special type of program called a wrapper, which can run a CGI script as the user who owns the file rather than as the Apache server user. Some CGI wrappers do other security checks before they run the requested CGI scripts.
suEXEC
Apache includes a support application called suEXEC that lets Apache users run CGI and SSI programs under UIDs that are different from the UID of Apache. suEXEC is a setuid wrapper program that is called when an HTTP request is made for a CGI or SSI program that the administrator designates to run as a UID other than that of the Apache server. In response to such a request, Apache provides the suEXEC wrapper with the program's name and the UID and GID.
suEXEC runs the program using the given UID and GID.
Before running the CGI or SSI command, the suEXEC wrapper performs a set of tests to ensure that the request is valid.
N This testing procedure ensures that the CGI script is owned by a user who can run the wrapper and that the CGI directory or the CGI script isn't writable by anyone but the owner.
N After the security checks are successful, the suEXEC wrapper changes the UID and the GID to the target UID and GID via setuid and setgid calls, respectively.
N The group-access list is also initialized with all groups in which the user is a member. suEXEC cleans the process's environment by
I Establishing a safe execution path (defined during configuration).
I Passing through only those variables whose names are listed in the safe environment list (also created during configuration).
The suEXEC process then becomes the target CGI script or SSI command and executes.
CONFIGURING AND INSTALLING SUEXEC
If you are interested in installing suEXEC support in Apache, run the configure (or config.status) script like this:
./configure --prefix=/path/to/apache \
--enable-suexec \
--suexec-caller=httpd \
--suexec-userdir=public_html \
--suexec-uidmin=100 \
--suexec-gidmin=100 \
--suexec-safepath="/usr/local/bin:/usr/bin:/bin"
Here's the detailed explanation of this configuration:
N --enable-suexec enables suEXEC support.
N --suexec-caller=httpd changes httpd to the UID you use for the User directive in the Apache configuration file. This is the only user account permitted to run the suEXEC program.
N --suexec-userdir=public_html defines the subdirectory under users' home directories where suEXEC executables are kept. Change public_html to whatever you use as the value for the UserDir directive, which specifies the document root directory for a user's Web site.
N --suexec-uidmin=100 defines the lowest UID permitted to run suEXEC-based CGI scripts. This means UIDs below this number can't run CGI or SSI commands via suEXEC. Look at your /etc/passwd file to make sure the range you chose doesn't include the system accounts, which usually have UIDs below 100.
N --suexec-gidmin=100 defines the lowest GID permitted as a target group. This means GIDs below this number can't run CGI or SSI commands via suEXEC. Look at your /etc/group file to make sure that the range you chose doesn't include the system account groups, which usually have GIDs below 100.
N --suexec-safepath="/usr/local/bin:/usr/bin:/bin" defines the PATH environment variable that is passed to the CGI scripts and SSI commands run via suEXEC.
ENABLING AND TESTING SUEXEC
After you install both the suEXEC wrapper and the new Apache executable in the proper location, restart Apache, which writes a message like this:
[notice] suEXEC mechanism enabled (wrapper: /usr/local/sbin/suexec)
This tells you that suEXEC is active. Now, test suEXEC's functionality. In the httpd.conf file, add the following lines:
UserDir public_html
AddHandler cgi-script .pl
UserDir sets the document root of a user's Web site as ~username/public_html, where username can be any user on the system. The second directive associates the cgi-script handler with .pl files. This runs Perl scripts with .pl extensions as CGI scripts. For this test, you need a user account.
In this exam-\nple, I use the host wormhole.nitec.com and a user called kabir. Try the script\nshown in Listing 15-4 in a file called test.pl and put it in a user’s public_html\ndirectory. In my case, I put the file in the ~kabir/public_html directory.\nListing 15-4: A CGI script to test suEXEC support\n#!/usr/bin/perl\n# Make sure the preceding line is pointing to the\n# right location. Some people keep perl in\n# /usr/local/bin.\nmy ($key,$value);\nprint “Content-type: text/html\\n\\n”;\nprint “

<html><head><title>Test of suEXEC</title></head><body>”;
foreach $key (sort keys %ENV){
$value = $ENV{$key};
print “$key = $value<br>
”;\n}\nexit 0;\nTo access the script via a Web browser, I request the following URL:\nhttp://wormhole.nitec.com/~kabir/test.pl\nA CGI script is executed only after it passes all the security checks performed by\nsuEXEC. suEXEC also logs the script request in its log file. The log entry for my\nrequest is\n374\nPart IV: Network Service Security\n" }, { "page_number": 398, "text": "[200-03-07 16:00:22]: uid: (kabir/kabir) gid: (kabir/kabir) cmd: test.pl\nIf you are really interested in knowing that the script is running under the user’s\nUID, insert a sleep command (such as sleep(10);) inside the foreach loop, which\nslows the execution and allows commands such as top or ps on your Web server\nconsole to find the UID of the process running test.pl. You also can change the\nownership of the script using the chown command, try to access the script via your\nWeb browser, and see the error message that suEXEC logs. For example, I get a\nserver error when I change the ownership of the test.pl script in the ~kabir/\npublic_html directory as follows:\nchown root test.pl\nThe log file shows the following line:\n[200-03-07 16:00:22]: uid/gid (500/500) mismatch with directory (500/500) or\nprogram (0/500)\nHere, the program is owned by UID 0, and the group is still kabir (500), so\nsuEXEC refuses to run it, which means suEXEC is doing what it should do.\nTo ensure that suEXEC will run the test.pl program in other directories, I cre-\nate a cgi-bin directory in ~kabir/public_html and put test.cgi in that direc-\ntory. After determining that the user and group ownership of the new directory and\nfile are set to user ID kabir and group ID kabir, I access the script by using the fol-\nlowing command:\nhttp://wormhole.nitec.com/~kabir/cgi-bin/test.pl\nIf you have virtual hosts and want to run the CGI programs and/or SSI commands\nusing suEXEC, use User and Group directives inside the \ncontainer. Set these directives to user and group IDs other than those the Apache\nserver is currently using. If only one, or neither, of these directives is specified for a\n container, the server user ID or group ID is assumed.\nFor security and efficiency, all suEXEC requests must remain within either a top-\nlevel document root for virtual host requests or one top-level personal document\nroot for userdir requests. For example, if you have four virtual hosts configured,\nstructure all their document roots from one main Apache document hierarchy if\nyou plan to use suEXEC for virtual hosts.\nCGIWrap\nCGIWrap is like the suEXEC program because it allows CGI scripts without compro-\nmising the security of the Web server. CGI programs are run with the file owner’s\npermission. In addition, CGIWrap performs several security checks on the CGI script\nand isn’t executed if any checks fail.\nChapter 15: Web Server Security\n375\n" }, { "page_number": 399, "text": "CGIWrap is written by Nathan Neulinger; the latest version of CGIWrap is\navailable from the primary FTP site on ftp://ftp.cc.umr.edu/pub/cgi/\ncgiwrap. CGIWrap is used via a URL in an HTML document. As distributed,\nCGIWrap is configured to run user scripts that are located in the ~/public_html/\ncgi-bin/ directory.\nCONFIGURING AND INSTALLING CGIWRAP\nCGIWrap is distributed as a gzip-compressed tar file. You can uncompress it by\nusing gzip and extract it by using the tar utility.\nRun the Configure script, which prompts you with questions. Most of these\nquestions are self-explanatory.\nA feature in this wrapper differs from suEXEC. 
It enables allow and deny files that can restrict access to your CGI scripts. Both files have the same format; each line contains either a username alone or a username with one or more subnet/mask pairs:
username
username@subnet1/mask1,subnet2/mask2...
You can have
N A username (nonnumeric UID) on a line by itself
N A username@subnet/mask line, where one or more subnet/mask pairs can be defined
For example, if the following line is found in the allow file (you specify the filename):
kabir@192.168.1.0/255.255.255.0
user kabir's CGI scripts can be run by hosts that belong to the 192.168.1.0 network with netmask 255.255.255.0.
After you run the Configure script, you must run the make utility to create the CGIWrap executable file.
ENABLING CGIWRAP
To use the wrapper application, copy the CGIWrap executable to the user's cgi-bin directory. This directory must match what you have specified in the configuration process. The simplest starting method is keeping the ~username/public_html/cgi-bin type of directory structure for the CGI script directory.
1. After you copy the CGIWrap executable, change the ownership and permission bits like this:
chown root CGIWrap
chmod 4755 CGIWrap
2. Create three hard links or symbolic links called nph-cgiwrap, nph-cgiwrapd, and cgiwrapd to CGIWrap in the cgi-bin directory as follows:
ln [-s] CGIWrap cgiwrapd
ln [-s] CGIWrap nph-cgiwrap
ln [-s] CGIWrap nph-cgiwrapd
On my Apache server, I specified only the .cgi extension as a CGI script; therefore, I renamed my CGIWrap executable to cgiwrap.cgi. If you have similar restrictions, you may try this approach or make a link instead.
3. Execute a CGI script like this:
http://www.yourdomain.com/cgi-bin/cgiwrap/username/scriptname
To access user kabir's CGI script, test.cgi, on the wormhole.nitec.com site, for example, I must use the following:
http://wormhole.nitec.com/cgi-bin/cgiwrap/kabir/test.cgi
4. To see debugging output for your CGI, specify cgiwrapd instead of cgiwrap, as in the following URL:
http://www.yourdomain.com/cgi-bin/cgiwrapd/username/scriptname
5. If the script is an nph-style script, you must run it using the following URL:
http://www.yourdomain.com/cgi-bin/nph-cgiwrap/username/scriptname
Hide clues about your CGI scripts
The fewer clues you provide about your system to potential vandals, the less likely your Web site is to be the next victim. Here's how you can hide some important CGI scripts:
N Use a nonstandard script alias. Use of the cgi-bin alias has become overwhelmingly popular. This alias is set using the ScriptAlias directive in httpd.conf for Apache, as shown in this example:
ScriptAlias /cgi-bin/ "/path/to/real/cgi/directory/"
You can use nearly anything to create an alias like this. For example, try
ScriptAlias /apps/ "/path/to/real/cgi/directory/"
Now the apps in the URL serve the same purpose as cgi-bin. Thus, you can use something nonstandard like the following to confuse vandals:
ScriptAlias /dcon/ "/path/to/real/cgi/directory/"
Many vandals use automated programs to scan Web sites for features and other clues.
A nonstandard script alias such as the one in the preceding example usually isn't incorporated in any automated manner.
N Use nonextension names for your CGI scripts. Many sites boldly showcase what type of CGI scripts they run, as in this example:
http://www.domain.com/cgi-bin/show-catalog.pl
The preceding URL provides two clues about the site: It supports CGI scripts, and it runs Perl scripts as CGI scripts. If, instead, that site uses
http://www.domain.com/ext/show-catalog
then it becomes quite hard to determine anything from the URL. Avoid using the .pl and .cgi extensions.
To change an existing script's name from a .pl, .cgi, or other risky extension type to a nonextension name, simply rename the script. You don't have to change or add any new Apache configuration to switch to nonextension names.
Reducing SSI Risks
SSI scripts pose a few security risks. If you run external applications using SSI commands such as exec, the security risk is virtually the same as with CGI scripts. However, you can disable this command very easily under Apache, using the following Options directive:
<Directory />
Options IncludesNOEXEC
</Directory>
This disables the exec command, and the use of include to run CGI programs, everywhere on your Web space. You can enable these commands whenever necessary by defining a directory container with narrower scope. See the following example:
<Directory />
Options IncludesNOEXEC
</Directory>
<Directory "/ssi">
Options +Includes
</Directory>
This configuration segment disables the exec command everywhere but the /ssi directory.
Avoid using the printenv command, which prints out a listing of all existing environment variables and their values, as in this example:
<!--#printenv -->
This command displays all the environment variables available to the Web server, on a publicly accessible page, which certainly gives away clues to potential bad guys. Use this command only when you are debugging SSI calls, never in a production environment.
As shown, there are a great many configuration and policy decisions (what to allow and how to allow it) that you must make to ensure Web security. Many administrators become frustrated after implementing a set of security measures, because they don't know what else is required. Once you have implemented a set of measures, such as controlled CGI and SSI requests as explained above, focus your efforts on logging.
Logging Everything
A good Web administrator closely monitors server logs, which provide clues to unusual access patterns. Apache can log access requests that are successful and that result in error in separate log files, as shown in this example:
CustomLog /logs/access.log common
ErrorLog /logs/error.log
The first directive, CustomLog, logs each incoming request to your Web site, and the second directive, ErrorLog, records only the requests that generated an error condition. The error log is a good place to check problems that are reported by your Web server. You can use a robust log analysis program like Wusage (www.boutell.com) to routinely analyze and monitor log files.
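Between runs of your log analysis program, a quick manual scan can flag hostile CGI requests early. The following one-liner is a rough sketch; adjust the log path to match your CustomLog setting. It looks for shell metacharacters inside query strings:
grep -E '/cgi-bin/[^ ]*\?[^ ]*[;|<>`]' /logs/access.log
Treat this as a crude first pass; vandals often URL-encode such characters (%3B for ;, for example), so it complements, rather than replaces, a real log analyzer.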
If you notice, for example, someone trying to supply unusual parameters to your CGI scripts, consider it a hostile attempt and investigate the matter immediately. Here's a process that you can use:
1. Get the complete URL used in trying to fool a CGI script.
2. If you didn't write the script, ask the script author what happens when someone passes such a URL (that is, parameters within the URL after ?) to the script. If there's a reason to worry, proceed forward or stop investigating at this point, but make a note of the IP address in a text file along with the URL and a time and date stamp.
3. If the URL makes the script do something it shouldn't, consider taking the script offline until it's fixed, so that the URL can't pose a threat to the system.
4. Use host to detect the hostname of the bad guy's IP address. Sometimes host can't find the hostname. In such a case, try traceroute and identify the ISP owning the IP address.
5. Do a whois domain lookup for the ISP and find the technical contact listed in the whois output. You may have to go to a domain register's Web site to perform the whois lookup if you don't have the whois program installed. Try locating an appropriate domain register from InterNIC at www.internic.net.
6. Send an e-mail to the technical contact address at the ISP regarding the incident and supply the log snippet for his review. Write your e-mail in a polite and friendly manner. The ISP at the other end is your only line of defense at this point. Politely request a speedy resolution or response.
7. If you can't take the script offline because it's used too heavily by other users, you can decide to ban the bad guy from using it. Say you run your script under the script alias ext, which is set up as follows:
ScriptAlias /ext/ "/some/path/to/cgi/scripts/"
Change the preceding line of code to the following:
Alias /ext/ "/some/path/to/cgi/scripts/"
Add the following lines after the above line:
<Directory "/some/path/to/cgi/scripts/">
SetHandler cgi-script
Options -Indexes +ExecCGI
AllowOverride None
Order allow,deny
Allow from all
Deny from 192.168.1.100
</Directory>
Replace 192.168.1.100 with the IP address of the bad guy. This configuration runs your script as usual for everyone but the user at the IP address given in the Deny from line. However, if the bad guy's ISP uses dynamically allocated IP addresses for its customers, then locking out the exact IP address isn't useful, because the bad guy can come back with a different IP address next time. In such a case, you must consider locking out the entire IP network. For example, if the ISP uses 192.168.1.0, then you must remove the 100 from the Deny from line to block the entire ISP. This is a drastic measure and may block a lot of innocent users at the ISP from using this script, so exercise caution when deciding to block.
8. Wait a few days for the technical contact to respond. If you don't hear from him, try to contact him through the Web site. If the problem persists, contact your legal department to determine what legal actions you can take to require action from the ISP.
Logs are great, but they're useless if the bad guys can modify them. Protect your log files. I recommend keeping log files in their own partition, where no one but the root user has access to make any changes.
Make sure that the directories specified by the ServerRoot, CustomLog, and ErrorLog directives aren't writable by anyone but the root user. Apache users and groups don't need read or write permission in log directories. Enabling anyone other than the root user to write files in the log directory can cause a major security hole.
To ensure that only the root user has access to the log files in a directory called /logs, do the following:
1. Change the ownership of the directory and all the files within it to root user and root group by using this command:
chown -R root:root /logs
2. Change the directory's permission by using this command:
chmod -R 750 /logs
Logging access requests lets you monitor and analyze who is requesting information from your Web site. Sometimes access to certain parts of your Web site must be restricted so that only authorized users or computers can access the contents.
Restricting Access to Sensitive Contents
You can restrict access by IP or hostname or use username/password authentication for sensitive information on your Web site. Apache can restrict access to certain sensitive contents using two methods:
N IP-based or hostname-based access control
N An HTTP authentication scheme
Using IP or hostname
In this authentication scheme, access is controlled by the hostname or the host's IP address. When a request for a certain resource arrives, the Web server checks whether the requesting host is allowed access to the resource; then it acts on the findings.
The standard Apache distribution includes a module called mod_access, which bases access control on the Internet hostname of a Web client. The hostname can be
N A fully qualified domain name
N An IP address
The module supports this type of access control by using three Apache directives:
N allow
N deny
N order
The allow directive can define a list of hosts (containing hosts or IP addresses) that can access a directory. When more than one host or IP address is specified, they should be separated with space characters. Table 15-1 shows the possible values for the directive.
TABLE 15-1 POSSIBLE VALUES FOR THE ALLOW DIRECTIVE
N all. Example: allow from all. This reserved word allows access for all hosts.
N A fully qualified domain name (FQDN) of a host. Example: allow from wormhole.nitec.com. Only the host that has the specified FQDN is allowed access. The allow directive in the example allows access only to wormhole.nitec.com. This compares whole components; toys.com would not match etoys.com.
N A partial domain name of a host. Example: allow from .mainoffice.nitec.com. Only the hosts that match the partial hostname have access. The example permits all the hosts in the .mainoffice.nitec.com network access to the site. For example, developer1.mainoffice.nitec.com and developer2.mainoffice.nitec.com have access to the site. However, developer3.baoffice.nitec.com isn't allowed access.
N A full IP address of a host. Example: allow from 192.168.1.100. Only the specified IP address is allowed access. The example shows a full IP address (all four octets are present), 192.168.1.100, that is allowed access.
N A partial IP address. Examples: allow from 192.168.1 and allow from 130.86. When not all four octets of an IP address are present in the allow directive, the partial IP address is matched from left to right, and hosts that have the matching IP address pattern (that is, are part of the same subnet) have access. In the first example, all hosts with IP addresses in the range of 192.168.1.1 to 192.168.1.255 have access. In the second example, all hosts from the 130.86 network have access.
N A network/netmask pair. Example: allow from 192.168.1.0/255.255.255.0. This can specify a range of IP addresses by using the network and netmask addresses. The example allows only the hosts with IP addresses in the range of 192.168.1.1 to 192.168.1.254 to have access.
N A network/n CIDR specification. Example: allow from 192.168.1.0/24. This is like the previous entry, except the netmask consists of n high-order 1 bits. The example is equivalent to allow from 192.168.1.0/255.255.255.0. This feature is available in Apache 1.3 and later.
The deny directive is the exact opposite of the allow directive. It defines a list of hosts that can't access a specified directory. Like the allow directive, it can accept all the values shown in Table 15-1.
The order directive controls how Apache evaluates both allow and deny directives. For example:
<Directory "/www/htdocs">
order deny,allow
deny from myboss.mycompany.com
allow from all
</Directory>
This example denies the host myboss.mycompany.com access and gives all other hosts access to the directory. The value for the order directive is a comma-separated list, which indicates which directive takes precedence. Typically, the one that affects all hosts (in the preceding example, the allow directive) is given lowest priority.
Although allow, deny and deny, allow are the most widely used values for the order directive, another value, mutual-failure, can indicate that only those hosts appearing on the allow list but not on the deny list are granted access. In all cases, every allow and deny directive is evaluated.
If you are interested in blocking access to a specific HTTP request method, such as GET, POST, or PUT, you can use the <Limit> container, as shown in this example:
<Directory "/www/cgi-bin">
<Limit POST>
order deny,allow
deny from all
allow from yourdomain.com
</Limit>
</Directory>
This example allows POST requests to the cgi-bin directory only if they are made by hosts in the yourdomain.com domain. This means if this site has some HTML forms that send user input data via the HTTP POST method, only the users in yourdomain.com can use these forms effectively. Typically, CGI applications are stored in the cgi-bin directory, and many sites feature HTML forms that dump data to CGI applications through the POST method. Using the preceding host-based access control configuration, a site can allow anyone to run a CGI script but allow only a certain site (in this case, yourdomain.com) to actually post data to CGI scripts. This gives the CGI access in such a site a bit of read-only character. Everyone can run applications that generate output without taking any user input, but only users of a certain domain can provide input.
Using an HTTP authentication scheme
Standard mod_auth module-based basic HTTP authentication confirms authentication with usernames, groups, and passwords stored in text files. This approach works well if you're dealing with a small number of users. However, if you have a lot of users (thousands or more), using mod_auth may exact a performance penalty, in which case you can use something more advanced, such as DBM files, Berkeley DB files, or even a dedicated SQL database.
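For example, Apache ships a mod_auth_dbm module (not compiled in by default) that keeps users in a DBM file, which Apache can search much faster than a large flat file. A minimal sketch, assuming mod_auth_dbm is available and using an illustrative file location:
AuthName "Readers Only"
AuthType Basic
AuthDBMUserFile /www/secrets/users.db
require valid-user
The companion dbmmanage utility creates and maintains the file; for example, dbmmanage /www/secrets/users.db adduser reader prompts for a password and adds the user.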
The next section presents a\nfew examples of basic HTTP authentication.\nREQUIRING A USERNAME AND PASSWORD\nThis example creates a restricted directory that requires a username and a pass-\nword for access. I assume the following are settings for a Web site called\napache.nitec.com:\nDocumentRoot “/www/htdocs”\nAccessFileName .htaccess\nAllowOverride All\nAssume also that you want to restrict access to the following directory, such that\nonly a user named reader with the password bought-it can access the directory:\n/www/htdocs/readersonly\nChapter 15: Web Server Security\n385\n" }, { "page_number": 409, "text": "The following steps create the appropriately restricted access:\n1. Create a user file by using htpasswd.\nA standard Apache distribution includes a utility called htpasswd, which\ncreates the user file needed for the AuthUserFile directive. Use the pro-\ngram like this:\nhtpasswd -c /www/secrets/.htpasswd reader\nThe htpasswd utility asks for the password of reader. Enter bought-it\nand then reenter the password to confirm. After you reenter the password,\nthe utility creates a file called .htpasswd in the /www/secrets directory.\nNote the following:\nI The -c option tells htpasswd that you want a new user file. If you\nalready had the password file and wanted to add a new user, you\nwould not want this option.\nI Place the user file outside the document root directory of the\napache.nitec.com site, as you don’t want anyone to download it via\nthe Web.\nI Use a leading period (.) in the filename so it doesn’t appear in the out-\nput on your Unix system. Doing so doesn’t provide any real benefits\nbut can help identify a Unix file because its use is a traditional Unix\nhabit. Many configuration files in Unix systems have leading periods\n(.login and .profile).\n2. Execute the following command:\ncat /www/secrets/.htpasswd\nThis should show a line like the following (the password won’t be exactly\nthe same as this example):\nreader:hulR6FFh1sxK6\nThis command confirms that you have a user called reader in the\n.htpasswd file. The password is encrypted by the htpasswd program,\nusing the standard crypt() function.\n3. Create an .htaccess file.\nUsing a text editor, add the following lines to a file named\n/www/htdocs/readersonly/.htaccess:\nAuthName “Readers Only”\nAuthType Basic\nAuthUserFile /www/secrets/.htpasswd\nrequire user reader\nThe preceding code works this way:\n386\nPart IV: Network Service Security\n" }, { "page_number": 410, "text": "I\nAuthName sets the realm of the authentication.\nThis is really just a label that goes to the Web browser so that the user\nis provided with some clue about what she will access. In this case, the\n“Readers Only” string indicates that only readers can access this\ndirectory.\nI\nAuthType specifies the type of authentication.\nBecause only basic authentication is supported, AuthType is always\nBasic.\nI\nAuthUserFile specifies the filename and path for the user file.\nI require specifies that a user named reader is allowed access to this\ndirectory.\n4. Set file permissions.\nAfter the .htaccess and .htpasswd files are created, make sure that only\nthe Apache user can read the files.\nNo users except the file owner and Apache should have access to these files.\n5. Use a Web browser to access the following URL:\nhttp://apache.nitec.com/readersonly\nApache sends the 401 status header and WWW-Authenticate response\nheader to the browser with the realm (set in AuthName) and authentication-\ntype (set in AuthType) information. 
The browser displays a pop-up dialog\nbox that requests a username and password.\nCheck whether a user can get in without a username or password — enter\nnothing in the entry boxes in the dialog box and click OK. This should\nresult in an authentication failure and an error message. The browser\nreceives the same authentication challenge again, so it displays another\ndialog box.\nClicking Cancel results in the browser showing the standard\nAuthentication Required error message from Apache.\nClicking Reload or refresh in the browser requests the same URL again,\nand the browser receives the same authentication challenge from the\nserver. This time enter reader as the username and bought-it as the pass-\nword, and click OK. Apache now gives you directory access.\nChapter 15: Web Server Security\n387\n" }, { "page_number": 411, "text": "You can change the Authentication Required message if you want by using the\nErrorDocument directive:\nErrorDocument 401 /nice_401message.html\nInsert this line in your httpd.conf file and create a nice message in the\nnice_401message.html file to make your users happy.\nALLOWING A GROUP OF USERS TO ACCESS A DIRECTORY\nInstead of allowing one user called reader to access the restricted area (as demon-\nstrated in the previous example), try allowing anyone belonging to the group\nnamed smart_readers to access the same directory. Assume this group has two\nusers: pikejb and bcaridad.\nFollow these steps to give the users in the group smart_readers directory\naccess.\n1. Create a user file by using htpasswd.\nUsing the htpasswd utility, create the users pikejb and bcaridad.\n2. Create a group file.\nUsing a text editor such as vi (available on most Unix systems), create a\nfile named /www/secrets/.htgroup. This file has one line:\nsmart_readers: pikejb bcaridad\n3. Create an .htaccess file in /www/htdocs/readersonly.\nUsing a text editor, add the following lines to a file called /data/web/\napache/public/htdocs/readersonly/.htaccess:\nAuthName “Readers Only”\nAuthType Basic\nAuthUserFile /www/secrets/.htpasswd\nAuthGroupFile /www/secrets/.htgroup\nrequire group smart_readers\nThis addition is almost the same configuration that I discussed in the pre-\nvious example, with two changes:\nI A new directive, AuthGroupFile, points to the .htgroup group file\ncreated earlier.\nI The require directive line requires a group called smart_readers.\nThis means Apache allows access to anyone that belongs to the group.\n4. Make sure .htaccess, .htpasswd, and .htgroup files are readable only by\nApache, and that no one but the owner has write access to the files.\n388\nPart IV: Network Service Security\n" }, { "page_number": 412, "text": "MIXING HOST-BASED ACCESS CONTROL\nWITH HTTP AUTHENTICATION\nIn this example, you see how you can mix the host-based access control scheme\nwith the basic HTTP authentication method found in Apache. Say you want to\nallow the smart_readers group access to the same directory as it has in the second\npreceding example, “Allowing a group of users to access a directory,” and you want\nanyone coming from a domain called classroom.nitec.com without a username\nand password to have access to the same directory.\nThis means if a request for the URL http://apache.nitec.com/readersonly\ncomes from a domain named classroom.nitec.com, the request is processed with-\nout HTTP authentication because you perform the following steps:\n1. 
Modify the .htaccess file (from the preceding example) to look like this:\nAuthName “Readers Only”\nAuthType Basic\nAuthUserFile /www/secrets/.htpasswd\nAuthGroupFile /www/secrets/.htgroup\nrequire group smart_readers\norder deny, allow\ndeny from all\nallow from classroom.nitec.com\nThis adds three host-based access control directives (discussed in earlier\nsections).\nI The order directive tells Apache to evaluate the deny directive before\nit does the allow directive.\nI The deny directive tells Apache to refuse access from all hosts.\nI The allow directive tells Apache to allow access from\nclassroom.nitec.com.\nThis third directive effectively tells Apache that any hosts in the\nclassroom.nitec.com domain are welcome to this directory.\n2. Using a Web browser from a host called user01.classroom.nitec.com,\nif you try to access http://apache.nitec.com/readersonly, your\nbrowser displays the username and password authentication dialog box.\nThis means you must authenticate yourself.\nThis isn’t what you want to happen. So what’s going on? Apache assumes\nthat both host-based and basic HTTP authentication are required for this\ndirectory — so it denies access to the directory unless it can pass both\nmethods. A solution to this problem is the satisfy directive, which you\ncan use like this:\nAuthName “Readers Only”\nAuthType Basic\nChapter 15: Web Server Security\n389\n" }, { "page_number": 413, "text": "AuthUserFile /www/secrets/.htpasswd\nAuthGroupFile /www/secrets/.htgroup\nrequire group smart_readers\norder deny, allow\ndeny from all\nallow from classroom.nitec.com\nsatisfy any\nThe satisfy directive takes either the all value or the any value. Because\nyou want the basic HTTP authentication activated only if a request comes\nfrom any host other than the classroom.nitec.com domain, specify any\nfor the satisfy directive. This effectively tells Apache to do the following:\nIF (REMOTE_HOST NOT IN .classroom.nitec.com DOMAIN) THEN\nBasic HTTP authentication Required\nENDIF\nIf you want only users of the classroom.nitec.com subdomain to access\nthe directory with basic HTTP authentication, specify all for the satisfy\ndirective; this tells Apache to enforce both authentication methods for all\nrequests.\nControlling Web Robots\nYour Web site isn’t always accessed by human users. Many search engines index\nyour Web site by using Web robots — programs that traverse Web sites for indexing\npurposes. These robots often index information they shouldn’t — and sometimes\ndon’t index what they should. The following section examines ways to control\n(most) robot access to your Web site.\nFrequently used search engines such as Yahoo!, AltaVista, Excite, and Infoseek\nuse automated robot or spider programs that search Web sites and index their con-\ntents. This is usually desirable, but on occasion, you may find yourself wanting to\nstop these robots from accessing a certain part of your Web site.\nIf content in a section of your Web site frequently expires (daily, for example),\nyou don’t want the search robots to index it. When a user at the search-engine site\nclicks a link to the old content and finds that the link doesn’t exist, she isn’t happy.\nThat user may then go to the next link without returning to your site.\nSometimes you may want to disable the indexing of your content (or part of it),\nbecause the robots can overwhelm Web sites by requesting too many documents\ntoo rapidly. Efforts are underway to create standards of behavior for Web robots. 
In\nthe meantime, the Robot Exclusion Protocol enables Web site administrators to\nplace a robots.txt file on their Web sites, indicating where robots shouldn’t go.\n390\nPart IV: Network Service Security\n" }, { "page_number": 414, "text": "For example, a large archive of bitmap images is useless to a robot that is trying to\nindex HTML pages. Serving these files to the robot wastes resources on your server\nand at the robot’s location.\nThis protocol is currently voluntary, and etiquette is still evolving for robot\ndevelopers as they gain experience with Web robots. The most popular search\nengines, however, abide by the Robot Exclusion Protocol. Here is what a robot or\nspider program does:\n1. When a compliant Web robot visits a site called www.domain.com, it first\nchecks for the existence of the URL:\nhttp://www.domain.com/robots.txt\n2. If this URL exists, the robot parses its contents for directives that instruct\nthe robot to index the site. As a Web server administrator, you can create\ndirectives that make sense for your site. Only one robots.txt file may\nexist per site; this file contains records that may look like the following:\nUser-agent: *\nDisallow: /cgi-bin/\nDisallow: /tmp/\nDisallow: /~kabir/\nIn the preceding code\nI The first directive tells the robot that the following directives should be\nconsidered by any robots.\nI The following three directives (Disallow) tell the robot not to access\nthe directories mentioned in the directives.\nYou need a separate Disallow line for every URL prefix you want to\nexclude.For example,your command line should not read like this:\nDisallow: /cgi-bin/ /tmp/ /~kabir/\nYou should not have blank lines in a record. They delimit multiple records.\nRegular expressions aren’t supported in the User-agent and Disallow lines. The\nasterisk in the User-agent field is a special value that means any robot.\nSpecifically, you can’t have lines like either of these:\nDisallow: /tmp/*\nDisallow: *.gif\nChapter 15: Web Server Security\n391\n" }, { "page_number": 415, "text": "Everything not explicitly disallowed is considered accessible by the robot (some\nexamples follow).\nTo exclude all robots from the entire server, use the following configuration:\nUser-agent: *\nDisallow: /\nTo permit all robots complete access, use the following configuration:\nUser-agent: *\nDisallow:\nYou can create the same effect by deleting the robots.txt file. To exclude a sin-\ngle robot called WebCrawler, add these lines:\nUser-agent: WebCrawler\nDisallow: /\nTo allow a single robot called WebCrawler to access the site, use the following\nconfiguration:\nUser-agent: WebCrawler\nDisallow:\nUser-agent: *\nDisallow: /\nTo forbid robots to index a single file called /daily/changes_to_often.html,\nuse the following configuration:\nUser-agent: *\nDisallow: /daily/changes_to_often.html\nContent Publishing Guidelines\nIf you’ve applied the preceding steps, your Web site is reasonably fortified for secu-\nrity. Even so (as mentioned before), be sure to monitor log activities to detect\nunusual access. Remember, too, that the human components of your Web site (such\nas content publishers and script developers) need training for site security. 
Content publishers and script developers should know and adhere to the following guidelines:

- Whenever storing a content file, such as an HTML file, image file, sound file, or video clip, the publisher must ensure that the file is readable by the Web server (that is, the username specified by the User directive). No one but the publisher user should have write access to the new file. (A quick audit sketch for this and the next two guidelines appears at the end of this section.)

- Any file or directory that can’t be displayed directly in the Web browser — because it contains information accessed indirectly by an application or script — shouldn’t be located under a DocumentRoot-specified directory. For example, if one of your scripts needs access to a data file that shouldn’t be directly accessed from the Web, don’t keep the data file inside the document tree. Keep the file outside the document tree and have your script access it from there.

- Any time a script needs a temporary file, the file should never be created inside the document tree. In other words, don’t have a Web server-writable directory within your document tree. All temporary files should be created in one subdirectory outside the document tree where only the Web server has write access. This ensures that a bug in a script doesn’t accidentally write over any existing file in the document tree.

- To fully enforce copyright, include both visible and embedded copyright notices on the content pages. The embedded copyright message should be kept at the beginning of a document, if possible. For example, in an HTML file you can use a pair of comment tags to embed the copyright message at the beginning of the file; a comment such as <!-- Copyright (c) 2002 YourCompany. All rights reserved. --> can be embedded in every page.

- If you have many images that you want to protect from copyright theft, look into watermarking technology. This technique invisibly embeds information in images to protect the copyright. The idea is that if you detect a site that’s using your graphical contents without permission, you can verify the theft by looking at the hidden information. If the information matches your watermark ID, you can clearly identify the thief and proceed with legal action. (That’s the idea, at least. I question the strength of currently available watermarking tools; many programs can easily remove the original copyright owner’s watermarks. Watermark technology is worth investigating, however, if you worry about keeping control of your graphical content.)

Creating a policy is one thing and enforcing it is another. Once you create your own publishing policy, discuss it with the people you want to have using it. Get their feedback on each policy item — and, if necessary, refine your policy to make it useful.
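Compliance with the first few guidelines is easy to spot-check with a couple of find commands. The following is a minimal sketch, not part of the setup described above: it assumes your document tree is /www/htdocs and that the Web server runs as an unprivileged user (such as nobody) that relies on the world-readable permission bit. Substitute your own DocumentRoot path and permissions model.

#!/bin/sh
# Hypothetical document root; set this to your DocumentRoot value.
DOCROOT=/www/htdocs

# List content files the Web server may not be able to read
# (files missing the world-readable bit).
find $DOCROOT -type f ! -perm -004 -print

# List files and directories that are group- or world-writable,
# which violate the "no write access for anyone but the publisher"
# and "no Web server-writable directory in the tree" guidelines.
find $DOCROOT \( -perm -020 -o -perm -002 \) -print

Run nightly from cron, a script like this can mail you a report before a sloppy permission setting turns into an incident.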
Using Apache-SSL

I want to point out a common misunderstanding about Secure Sockets Layer (SSL). Many people are under the impression that having an SSL-enabled Web site automatically protects them from all security problems. Wrong! SSL protects data traffic only between the user’s Web browser and the Web server. It ensures that data isn’t altered during transit. It can’t enhance your Web site’s security in any other way.

Apache doesn’t include an SSL module in the default distribution, but you can enable SSL for Apache by using the Apache-SSL source patch. The Apache-SSL source patch kit can be downloaded from www.apache-ssl.org; it turns Apache into an SSL server based on either SSLeay or OpenSSL. In the following section, I assume that you have already learned to install OpenSSL (if not, see Chapter 11), and that you use OpenSSL here.

Compiling and installing Apache-SSL patches

As mentioned before, you need OpenSSL installed for Apache-SSL to work. I assume that you have done the following:

- Installed OpenSSL in the /usr/local/ssl directory as recommended in Chapter 11
- Extracted the Apache source tree into the /usr/src/redhat/SOURCES/apache_x.y.zz directory

For example, the Apache source path for Apache 2.0.01 is /usr/src/redhat/SOURCES/apache_2.0.01.

Here’s how you can set up Apache for SSL support.

1. su to root.

2. Change the directory to the Apache source distribution (/usr/src/redhat/SOURCES/apache_x.y.zz).

3. Copy the Apache-SSL patch kit (apache_x.y.zz+ssl_x.y.tar.gz) into the current directory and extract it by using the tar xvzf apache_x.y.zz+ssl_x.y.tar.gz command.

4. Run patch -p1 < SSLpatch to patch the source files.

5. Change the directory to src and edit the Configuration.tmpl file to have the following lines along with the other, unchanged lines.

SSL_BASE=/usr/local/ssl
SSL_APP_DIR=$(SSL_BASE)/bin
SSL_APP=/usr/local/ssl/bin/openssl

6. Change your current directory back to the top level of the source distribution by running the cd .. command.

7. Run the ./configure command with any command-line arguments that you typically use. For example, to install Apache in /usr/local/apache, run this script with the --prefix=/usr/local/apache option.

8. Run make and make install to compile and install Apache. This compiles and installs both standard (httpd) and SSL-enabled (httpsd) Apache. Now you need a server certificate for Apache.

Creating a certificate for your Apache-SSL server

See Chapter 11 for details on creating a certificate for your Apache server. To create a temporary certificate to get going quickly, you can simply do the following:

1. Change directory to the src subdirectory of your Apache source distribution (for example, /usr/src/redhat/SOURCES/apache_x.y.zz/src).

2. Run the make certificate command to create a temporary certificate for testing purposes only. The make certificate command uses the /usr/local/ssl/bin/openssl program to create a server certificate for you. You are asked a few self-explanatory questions.
Here’s an example session of this command.

ps > /tmp/ssl-rand; date >> /tmp/ssl-rand; \
RANDFILE=/tmp/ssl-rand /usr/local/ssl/bin/openssl req -config \
../SSLconf/conf/ssleay.cnf \
-new -x509 -nodes -out ../SSLconf/conf/httpsd.pem \
-keyout ../SSLconf/conf/httpsd.pem; \
ln -sf httpsd.pem ../SSLconf/conf/`/usr/local/ssl/bin/openssl \
x509 -noout -hash < ../SSLconf/conf/httpsd.pem`.0; \
rm /tmp/ssl-rand
Using configuration from ../SSLconf/conf/ssleay.cnf
Generating a 1024 bit RSA private key
..................++++++
...............................................++++++
writing new private key to ‘../SSLconf/conf/httpsd.pem’
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter ‘.’, the field will be left blank.
-----
Country Name (2 letter code) [GB]:US
State or Province Name (full name) [Some-State]:California
Locality Name (eg, city) []:Sacramento
Organization Name (eg, company; recommended) []:MyORG
Organizational Unit Name (eg, section) []:CS
server name (eg. ssl.domain.tld; required!!!) []:shea.intevo.com
Email Address []:kabir@intevo.com

The certificate called httpsd.pem is created in the SSLconf/conf subdirectory of your Apache source distribution. For example, if the path to the directory containing your Apache source distribution is /usr/src/redhat/SOURCES/apache_x.y.zz, then the fully qualified path — which you use to configure Apache in the following section — is as follows:

/usr/src/redhat/SOURCES/apache_x.y.zz/SSLconf/conf/httpsd.pem

Now you can configure Apache.

Configuring Apache for SSL

When you ran make install in the “Compiling and installing Apache-SSL patches” section, you created an httpsd.conf file in the conf subdirectory of your Apache installation directory. For example, if you used --prefix=/usr/local/apache to configure Apache, you find the httpsd.conf file in /usr/local/apache/conf. Rename it to httpd.conf, using the following command:

mv /usr/local/apache/conf/httpsd.conf /usr/local/apache/conf/httpd.conf

Make sure you replace /usr/local/apache/conf with the appropriate pathname if you installed Apache in a different directory.

You have two choices when it comes to using SSL with Apache: you can enable SSL either for the main server or for virtual Web sites. Here I show you how you can enable SSL for your main Apache server. Modify the httpd.conf file as follows.

1. By default, Web browsers send SSL requests to port 443 of your Web server, so if you want to turn the main Apache server into an SSL-enabled server, change the Port directive line to be

Port 443

2. Add the following lines to tell Apache how to generate random data needed for encrypting SSL connections:

SSLRandomFile file /dev/urandom 1024
SSLRandomFilePerConnection file /dev/urandom 1024

3. If you want to reject all requests but the secure requests, insert the following directive:

SSLRequireSSL

4. To enable SSL service, add the following directive:

SSLEnable

5. By default, the cache server used by SSL-enabled Apache is created in the src/modules/ssl directory of the Apache source distribution.
Set this directory as shown below:

SSLCacheServerPath \
/path/to/apache_x.y.zz/src/modules/ssl/gcache

6. Add the following directives to set the cache server port and cache timeout values:

SSLCacheServerPort logs/gcache_port
SSLSessionCacheTimeout 15

7. Tell Apache where you are keeping the server certificate file.

- If you created the server certificate by using the instructions in Chapter 11, your server certificate should be in /usr/local/ssl/certs.
- If you apply the test certificate now (using the make certificate command discussed earlier), then your test certificate is in /path/to/apache_x.y.zz/SSLconf/conf, and it’s called httpsd.pem.

Set the following directive to the fully qualified path of your server certificate, as shown with the following code.

SSLCertificateFile \
/path/to/apache_x.y.zz/SSLconf/conf/httpsd.pem

8. Set the following directives as shown, and save the httpd.conf file.

SSLVerifyClient 3
SSLVerifyDepth 10
SSLFakeBasicAuth
SSLBanCipher NULL-MD5:NULL-SHA

To SSL-enable a virtual host called myvhost.intevo.com on port 443, use the following configuration:

Listen 443
<VirtualHost myvhost.intevo.com:443>
SSLEnable
SSLCertificateFile /path/to/myvhost.certificate.cert
</VirtualHost>

Now you can test your SSL-enabled Apache server.

Testing the SSL connection

If you have installed Apache in the /usr/local/apache directory, run the /usr/local/apache/bin/httpsdctl start command to start the SSL-enabled Apache server. If you get an error message, check the log file for details. A typo or a missing path in the httpd.conf file is the most common cause of errors. Once the server is started, you can access it by using the HTTPS protocol. For example, to access an SSL-enabled Apache server called shea.intevo.com, I can point a Web browser to https://shea.intevo.com. If you use the test certificate or a home-grown CA-signed certificate (see Chapter 11 for details), the Web browser displays a warning message stating that the certificate can’t be verified. This is normal, because the certificate isn’t signed by a well-known certificate authority. Accept the certificate, and browse your SSL-enabled Web site.
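Besides a browser, you can also exercise the SSL handshake from the command line. Here is a quick sketch using the openssl binary installed in Chapter 11; the hostname is the example server used above, so substitute your own:

/usr/local/ssl/bin/openssl s_client -connect shea.intevo.com:443

A successful handshake prints the server certificate and the negotiated cipher. With the test certificate, expect a certificate verification error in the output, for the same reason the browser displays a warning. Press Ctrl+C to close the connection.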
Summary

Web servers are often the very first target of most hack attacks. By fortifying your Web server using techniques that reduce CGI and SSI risks and by logging everything, you can ensure the security of your Web sites. Not allowing spiders and robots to index sensitive areas of your Web site and restricting access by username or IP address can be quite helpful in combating Web vandalism.

Chapter 16

DNS Server Security

IN THIS CHAPTER

- Checking DNS configuration using Dlint
- Using Transaction Signatures (TSIG) to handle zone transfers
- Limiting DNS queries
- Creating a chroot jail for the DNS server
- Using DNSSEC for authentication

ACCORDING TO A RECENT Men & Mice Domain Health Survey, three out of four Internet domains have incorrect DNS configurations. Incorrect DNS configuration often leads to security break-ins. This chapter examines correcting, verifying, and securing DNS configuration using various techniques.

Understanding DNS Spoofing

DNS spoofing (attack by falsifying information) is a common DNS security problem. When a DNS server is tricked into accepting — and later using — incorrect, nonauthoritative information from a malicious DNS server, the first DNS server has been spoofed. Spoofing attacks can cause serious security problems for vulnerable DNS servers — like directing users to the wrong Internet sites or routing e-mail to unauthorized mail servers.

Hackers employ many methods to spoof a DNS server, including these two favorites:

- Cache poisoning. A malicious hacker manipulates DNS queries to insert data into an unprotected DNS server’s cache. This poisoned data is later given out in response to client queries. Such data can direct clients to hosts that are running Trojan Web servers or mail servers, where the hackers may retrieve valuable information from users.

- DNS ID prediction scheme. Each DNS packet has a 16-bit ID number associated with it, which DNS servers use to determine what the original query was. A malicious hacker attacks DNS server A by placing a recursive query that makes server A perform queries on a remote DNS server, B, whose information will be spoofed. By performing a denial-of-service (DoS) attack on server B and predicting the DNS ID sequence, the hacker can place query responses to A before the real server B can respond. This type of attack is hard but not impossible, because the ID space is only 16 bits, and DoS attack tools are common hackerware these days.

How can you protect your DNS server from spoofing attacks? Begin with the following two principles:

- Keep your DNS configuration secure and correct.
- Ensure that you are running the latest release version of DNS server software.

Running the latest stable DNS server software is as simple as getting the source or binary distribution of the software from the server vendor and installing it. Most people run the Berkeley Internet Name Domain (BIND) server. The latest version of BIND is at www.isc.org/products/BIND. Keeping your DNS configuration correct and secure is the challenge.

Checking DNS Configuration Using Dlint

Poorly configured DNS servers are great security risks because they’re easily exploited. However, a free tool called Dlint can help: it analyzes any DNS zone and produces reports on many common configuration problems, including these:

- Hostnames that have A records must also have PTR records. DNS configurations that have A records but no corresponding PTR records can’t be verified by servers that want to perform reverse DNS lookups on a host. Dlint checks for missing PTR records for A records found in your configuration.

- For each PTR record in the in-addr.arpa zone there should be an equivalent A record. Dlint reports missing A records for PTR records.

- Dlint recursively traverses subdomains (subzones) and looks for configuration problems in them, too.

- Common typos or misplaced comments can create incorrect configuration; Dlint tries to catch such errors.

Getting Dlint

You can download Dlint from www.domtools.com/dns/dlint.shtml. As of this writing the latest version of Dlint is 1.4.0. You can also use an online version of Dlint at www.domtools.com/cgi-bin/dlint/nph-dlint.cgi. The online version has time restrictions, so I recommend it only for trying out the tool.

Installing Dlint

Dlint requires DiG and Perl 5. DiG is a DNS query utility found in the BIND distribution. Most likely you have it installed. Run the dig localhost any command to find out. If you don’t have it, you can get DiG from www.isc.org/bind.html.
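If you aren’t sure whether both prerequisites are present, a quick check such as the following will tell you (a sketch; paths and versions vary by system):

which dig perl
dig localhost any > /dev/null && echo "DiG works"
perl -v | head -2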
I assume that you have both DiG and Perl 5 installed on your Linux system. To install Dlint, do the following:

1. su to root.

2. Extract the Dlint source package into a suitable directory. I extracted the dlint1.4.0.tar package in the /usr/src/redhat/SOURCES directory using the tar xvf dlint1.4.0.tar command. A new subdirectory gets created when you extract the source distribution; change your current directory to the new directory, which in my case is dlint1.4.0. Make sure you substitute the appropriate Dlint version number (of the source distribution you downloaded) in all the instructions given here.

3. Run the which perl command to see where the Perl interpreter is installed.

4. Run the head -1 digparse command to see the very first line of the digparse Perl script used by Dlint. If the path shown after #! matches the path shown by the which perl command, don’t change it. If the paths don’t match, modify this file using a text editor, and replace the path after #! with the path of your Perl interpreter.

5. Run the make install command to install Dlint, which installs the dlint and digparse scripts in /usr/local/bin.

Now you can run Dlint.

Running Dlint

The main script in the Dlint package is called dlint. You can run this script using the following command:

/usr/local/bin/dlint domain | in-addr.arpa-domain

For example, to run dlint for a domain called domain.com, you can execute /usr/local/bin/dlint domain.com. Listing 16-1 shows an example output.

Listing 16-1: Sample output from dlint

;; dlint version 1.4.0, Copyright (C) 1998 Paul A. Balyoz
;; Dlint comes with ABSOLUTELY NO WARRANTY.
;; This is free software, and you are welcome to redistribute it
;; under certain conditions. Type ‘man dlint’ for details.
;; command line: /usr/local/bin/dlint domain.com
;; flags: normal-domain recursive.
;; using dig version 8.2
;; run starting: Fri Dec 29 13:34:07 EST 2000
;; ============================================================
;; Now linting domain.com
;; Checking serial numbers per nameserver
;; 1997022700 ns2.domain.com.
;; 1997022700 ns1.domain.com.
;; All nameservers agree on the serial number.
;; Now caching whole zone (this could take a minute)
;; trying nameserver ns1.domain.com.
;; 3 A records found.
ERROR: “ns1.domain.com. A 172.20.15.1”, but the PTR record for
1.15.20.172.in-addr.arpa. is “k2.domain.com.”
One of the above two records are wrong unless the host is a name server
or mail server.
To have 2 names for 1 address on any other hosts, replace the A record
with a CNAME record:
ns1.domain.com. IN CNAME k2.domain.com.
ERROR: “ns2.domain.com. A 172.20.15.1”, but the PTR record for
1.15.20.172.in-addr.arpa. is “k2.domain.com.”
One of the above two records are wrong unless the host is a name server
or mail server.
To have 2 names for 1 address on any other hosts, replace the A record
with a CNAME record:
ns2.domain.com. IN CNAME k2.domain.com.
;; ============================================================
;; Now linting domain.com.
;; Checking serial numbers per nameserver
;; 1997022700 ns1.domain.com.
;; 1997022700 ns2.domain.com.
;; All nameservers agree on the serial number.
;; Now caching whole zone (this could take a minute)
;; trying nameserver ns1.domain.com.
;; 3 A records found.
ERROR: “ns1.domain.com. A 172.20.15.1”, but the PTR record for
1.15.20.172.in-addr.arpa. is “k2.domain.com.”
One of the above two records are wrong unless the host is a name
server or mail server.
To have 2 names for 1 address on any other hosts, replace the A record
with a CNAME record:
ns1.domain.com. IN CNAME k2.domain.com.
ERROR: “ns2.domain.com. A 172.20.15.1”, but the PTR record for
1.15.20.172.in-addr.arpa. is “k2.domain.com.”
One of the above two records are wrong unless the host is a name
server or mail server.
To have 2 names for 1 address on any other hosts, replace the A record
with a CNAME record:
ns2.domain.com. IN CNAME k2.domain.com.
;; no subzones found below domain.com., so no recursion will take place.
;; ============================================================
;; dlint of domain.com. run ending with errors.
;; run ending: Fri Dec 29 13:34:09 EST 2000
;; ============================================================
;; dlint of domain.com run ending with errors.
;; run ending: Fri Dec 29 13:34:09 EST 2000

As you can see, dlint is verbose. The lines that start with a semicolon are comments; all other lines are warnings or errors. Here domain.com has a set of problems: ns1.domain.com has an A record, but the PTR record points to k2.domain.com instead. Similarly, the ns2.domain.com host has the same problem. This means the domain.com configuration has the following lines:

ns1 IN A 172.20.15.1
ns2 IN A 172.20.15.1
k2 IN A 172.20.15.1

The configuration also has the following PTR record:

1 IN PTR k2.domain.com.

The dlint program suggests using CNAME records to resolve this problem. This means the configuration should be:

ns1 IN A 172.20.15.1
ns2 IN CNAME ns1
k2 IN CNAME ns1

The PTR record should be:

1 IN PTR ns1.domain.com.

After fixing the errors in the appropriate DNS configuration files for domain.com, the following output is produced by the /usr/local/bin/dlint domain.com command.

;; dlint version 1.4.0, Copyright (C) 1998 Paul A. Balyoz
;; Dlint comes with ABSOLUTELY NO WARRANTY.
;; This is free software, and you are welcome to redistribute it
;; under certain conditions. Type ‘man dlint’ for details.
;; command line: /usr/local/bin/dlint domain.com
;; flags: normal-domain recursive.
;; using dig version 8.2
;; run starting: Fri Dec 29 13:38:00 EST 2000
;; ============================================================
;; Now linting domain.com
;; Checking serial numbers per nameserver
;; 1997022700 ns2.domain.com.
;; 1997022700 ns1.domain.com.
;; All nameservers agree on the serial number.
;; Now caching whole zone (this could take a minute)
;; trying nameserver ns1.domain.com.
;; 1 A records found.
;; ============================================================
;; Now linting domain.com.
;; Checking serial numbers per nameserver
;; 1997022700 ns1.domain.com.
;; 1997022700 ns2.domain.com.
;; All nameservers agree on the serial number.
;; Now caching whole zone (this could take a minute)
;; trying nameserver ns1.domain.com.
;; 1 A records found.
;; no subzones found below domain.com., so no recursion will take place.
;; ============================================================
;; dlint of domain.com. run ending normally.
;; run ending: Fri Dec 29 13:38:01 EST 2000
;; ============================================================
;; dlint of domain.com run ending normally.
;; run ending: Fri Dec 29 13:38:01 EST 2000

As shown, no error messages are reported. Of course, Dlint (dlint) can’t catch all errors in your configuration, but it’s a great tool for performing a level of quality control when you create, update, or remove DNS configuration information.

Securing BIND

BIND is the most widely used DNS server for Linux. BIND was recently overhauled for scalability and robustness. Many DNS experts consider earlier versions of BIND (prior to 9.0) to be mostly patchwork. Fortunately, BIND 9.0 is written by a large team of professional software developers to support the next generation of DNS protocol evolution. The new BIND supports back-end databases, authorization and transactional security features, SNMP-based management, and IPv6 capability. The code base of the new BIND is written in a manner that supports frequent audits by anyone who is interested. The new BIND also supports the DNSSEC and TSIG standards.

Using Transaction Signatures (TSIG) for zone transfers

Transaction Signatures (TSIG) can authenticate and verify the DNS data exchange. This means you can use TSIG to control zone transfers for domains you manage. Typically, zone transfers are from primary to secondary name servers. In the following named.conf segment of a primary name server, only the IP addresses listed in the access control list (acl) called dns-ip-list can transfer the zone information for the yourdomain.com domain.

acl "dns-ip-list" {
    172.20.15.100;
    172.20.15.123;
};

zone "yourdomain.com" {
    type master;
    file "mydomain.dns";
    allow-query { any; };
    allow-update { none; };
    allow-transfer { dns-ip-list; };
};

Unfortunately, malicious hackers can use IP spoofing tricks to trick a DNS server into performing zone transfers. Avoid this by using Transaction Signatures. Let’s say that you want to limit the zone transfer for a domain called yourdomain.com to two secondary name servers with IP addresses 172.20.15.100 (ns1.yourdomain.com) and 172.20.15.123 (ns2.yourdomain.com). Here’s how you can use TSIG to ensure that IP spoofing tricks can’t force a zone transfer between your DNS server and a hacker’s DNS server.

Make sure that the DNS servers involved in TSIG-based zone-transfer authentication keep the same system time. You can create a cron job entry to synchronize each machine with a remote time server using rdate or ntp tools.

1. Generate a shared secret key to authenticate the zone transfer.

2. Change the directory to /var/named.

3. Use the /usr/local/sbin/dnssec-keygen command to generate a set of public and private keys as follows:

dnssec-keygen -a hmac-md5 -b 128 -n HOST zone-xfr-key

The public key file is called Kzone-xfr-key.+157+08825.key, and the private key file is Kzone-xfr-key.+157+08825.private. If you view the contents of the private key file, you see something like the following:

Private-key-format: v1.2
Algorithm: 157 (HMAC_MD5)
Key: YH8Onz5x0/twQnvYPyh1qg==

4. Using the key string displayed by the preceding step, create the following statement in the named.conf file of both ns1.yourdomain.com and ns2.yourdomain.com. Use the actual key string found in the file you generated; don’t use the key from this example.

key zone-xfr-key {
    algorithm hmac-md5;
    secret "YH8Onz5x0/twQnvYPyh1qg==";
};

5. Add the following statement in the /etc/named.conf file of the ns1.yourdomain.com server:

server 172.20.15.123 {
    keys { zone-xfr-key; };
};

6. Add the following statement in the /etc/named.conf file of the ns2.yourdomain.com server:

server 172.20.15.100 {
    keys { zone-xfr-key; };
};

7. The full /etc/named.conf configuration segment of the yourdomain.com zone for the primary DNS server ns1.yourdomain.com is shown in Listing 16-2.

Listing 16-2: yourdomain.com configuration for primary DNS server

acl "dns-ip-list" {
    172.20.15.100;
    172.20.15.123;
};

key zone-xfr-key {
    algorithm hmac-md5;
    secret "YH8Onz5x0/twQnvYPyh1qg==";
};

server 172.20.15.123 {
    keys { zone-xfr-key; };
};

zone "yourdomain.com" {
    type master;
    file "mydomain.dns";
    allow-query { any; };
    allow-update { none; };
    allow-transfer { dns-ip-list; };
};

8. The full /etc/named.conf configuration segment of the yourdomain.com zone for the secondary DNS server ns2.yourdomain.com is shown in Listing 16-3.

Listing 16-3: yourdomain.com configuration for secondary DNS server

acl "dns-ip-list" {
    172.20.15.100;
    172.20.15.123;
};

key zone-xfr-key {
    algorithm hmac-md5;
    secret "YH8Onz5x0/twQnvYPyh1qg==";
};

server 172.20.15.100 {
    keys { zone-xfr-key; };
};

zone "yourdomain.com" {
    type slave;
    masters { 172.20.15.100; };
    file "mydomain.dns";
    allow-query { any; };
    allow-transfer { dns-ip-list; };
};

9. Restart named on both systems.
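With both servers restarted, you can also request a signed transfer by hand before relying on the restart test described next. The following sketch uses the dig utility shipped with BIND 9; the key name and secret are the example values generated above, so substitute your own:

# Ask the primary (ns1) for a full zone transfer, signing the
# request with the shared TSIG key:
dig -y zone-xfr-key:YH8Onz5x0/twQnvYPyh1qg== \
    @172.20.15.100 yourdomain.com axfr

Run from one of the secondaries listed in dns-ip-list, the signed request should return the complete zone; a transfer request from a host outside dns-ip-list is refused.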
The preceding steps ensure that zone transfers between the given hosts occur in a secure manner. To test that a shared TSIG key is used for zone-transfer authentication, you can do the following:

- Delete the yourdomain.com domain’s zone file on the secondary DNS server (ns2.yourdomain.com).
- Restart the secondary name server.
- The secondary DNS server should transfer the missing zone file from the primary DNS server. You should see the zone file created in the appropriate directory. If for some reason this file isn’t created, look at /var/log/messages for errors, fix the errors, and redo this verification process.

Watch for these problems:

- If you change the shared TSIG key on either host by even one character, the zone transfer isn’t possible. You get an error message in /var/log/messages stating that TSIG verification failed because of a bad key.
- Because the named.conf file on both machines now contains a secret key, ensure that the file isn’t readable by ordinary users.

If you want to allow dynamic updates of the DNS configuration when the request is signed with a TSIG key, use the allow-update { key keyname; }; statement. For example, the allow-update { key zone-xfr-key; }; statement allows dynamic updates between the hosts discussed here. If the public and private key files for a key named zone-xfr-key are in the /var/named/keys directory, you can run /usr/local/bin/nsupdate -k /var/named/keys:zone-xfr-key to update DNS zone information for the yourdomain.com domain.
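Once the key is in place, a dynamic update session looks something like the following sketch. The interactive update and send commands are standard nsupdate usage; the record being added is purely illustrative, so substitute your own name and address:

/usr/local/bin/nsupdate -k /var/named/keys:zone-xfr-key
> update add www.yourdomain.com. 3600 IN A 172.20.15.50
> send

nsupdate signs the update request with the TSIG key, and named accepts the change only when the key matches the one named in the allow-update statement.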
Running BIND as a non-root user

On Linux kernel 2.3.99 and later, you can run BIND as a non-root user using the -u option. For example, the /usr/local/sbin/named -u nobody command starts BIND as the nobody user.

Hiding the BIND version number

Because software bugs are associated with certain versions, version information is a valuable piece of information for malicious hackers. By finding out what version of BIND you run, a hacker can figure out what exploits (if any) exist for it and try to break in. So it’s wise not to give out your version number willingly. You can simply override the version information given by BIND by adding the version statement in the options section. For example, the following configuration segment tells named to display Unsupported on this platform when version information is requested.

options {
    # other global options go here
    version "Unsupported on this platform";
};

As with the version number, you don’t want to give out your host information. In the spirit of making a potential attacker’s job harder, I recommend that you don’t use HINFO or TXT resource records in your DNS configuration files.

Limiting Queries

Anyone can perform a query with most DNS servers on the Internet. This is absolutely unacceptable for a secure environment. A DNS spoof attack usually relies on this fact: an attacker can ask your DNS server to resolve a query for which it can’t produce an authoritative answer — one that requires it to get data from the hacker’s own DNS server. For example, say a hacker runs a DNS server for the id10t.com domain, and your DNS server is authoritative for the yourdomain.com domain. Now, if you allow anyone to query your server for anything, the hacker can ask your server to resolve gotcha.id10t.com. Your DNS server gets data from the hacker’s machine, and the hacker plays his spoofing tricks to poison your DNS cache.

Now, say that your network address is 192.168.1.0. The following statement makes sure that no one outside your network can query your DNS server for anything but the domains it manages.

options {
    allow-query { 192.168.1.0/24; };
};

The allow-query directive makes sure that all the hosts in the 192.168.1.0 network can query the DNS server. If your DNS server is authoritative for the yourdomain.com zone, you can have the following /etc/named.conf segment:

options {
    allow-query { 192.168.1.0/24; };
};

zone "yourdomain.com" {
    type master;
    file "yourdomain.com";
    allow-query { any; };
};

zone "1.168.192.in-addr.arpa" {
    type master;
    file "db.192.168.1";
    allow-query { any; };
};

This makes sure that anyone from anywhere can query the DNS server for yourdomain.com, but only the users in the 192.168.1.0 network can query the DNS server for anything else.

Don’t allow anyone outside your network to perform recursive queries. To disable recursive queries for everyone but your network, add this line to the global options section:

allow-recursion { 192.168.1.0/24; };
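Taken together, the query and recursion restrictions plus the version override from the preceding sections fit in a single global options block. Here is a sketch; the network address and version text simply repeat the examples above, so adjust them for your site:

options {
    # Answer queries only from the local network ...
    allow-query { 192.168.1.0/24; };
    # ... and recurse only for the local network.
    allow-recursion { 192.168.1.0/24; };
    # Tell version probers nothing useful.
    version "Unsupported on this platform";
};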
You can also disable recursion completely, for everyone, by using the following option in the global options section:

recursion no;

You can’t disable recursion on a name server if other name servers use it as a forwarder.

Ideally, you should set your authoritative name server(s) to perform no recursion. Only the name server(s) responsible for resolving DNS queries for your internal network should perform recursion. This type of setup is known as a split DNS configuration.

For example, say that you have two name servers — ns1.yourdomain.com (primary) and ns2.yourdomain.com (secondary) — responsible for a single domain called yourdomain.com. At the same time you have a DNS server called ns3.yourdomain.com, which is responsible for resolving DNS queries for your 192.168.1.0 network. In a split DNS configuration, you can set both the ns1 and ns2 servers to use no recursion for any domain other than yourdomain.com and allow recursion on ns3 using the allow-recursion statement discussed earlier.

Turning off glue fetching

When a DNS server returns a name server record for a domain and doesn’t have an A record for that name server, it attempts to retrieve one. This is called glue fetching, and spoofing attackers can abuse it. If you run BIND 8, turning off glue fetching is as simple as adding the following statement to the global options section of /etc/named.conf (BIND 9 never fetches glue, so it needs no such option):

options {
    fetch-glue no;
};

chrooting the DNS server

The 9.x version of BIND simplifies creating a chroot jail for the DNS server. Here’s how you can create a chroot jail for BIND.

1. su to root.

2. Create a new user called dns by using the useradd dns -d /home/dns command.

3. Run the mkdir -p /home/dns/var/log /home/dns/var/run /home/dns/var/named /home/dns/etc command to create all the necessary directories.

4. Copy the /etc/named.conf file, using the cp /etc/named.conf /home/dns/etc/ command.

5. Copy everything from /var/named to /home/dns/var/named, using the cp -r /var/named/* /home/dns/var/named/ command.

6. Run the chown -R dns:dns /home/dns command to make sure that all files and directories needed by named are owned by user dns and its private group called dns. (If you plan to run named as root, use root:root instead of dns:dns as the username:groupname in this command.)

Now you can run the name server using the following command:

/usr/local/sbin/named -t /home/dns -u dns

If you plan to run named as root, don’t specify the -u dns option.
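After starting the chrooted server, it’s worth confirming that the jail actually took effect. A quick check, based on the paths used in the steps above:

# The named process should be running as the dns user:
ps aux | grep named | grep -v grep

# Because named's root directory is now /home/dns, the
# configuration and zone files it reads live here:
ls /home/dns/etc/named.conf
ls /home/dns/var/named

If named fails to start, remember that every file it needs (configuration, zones, and log directories) must now live under /home/dns.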
\nThis command creates a 768-bit DSA-based private and public key pair. It\ncreates a public key file called Kdomain.com.+003+29462.key and a pri-\nvate key file called Kdomain.com.+003+29462.private.\nThe 29462 number is called a key tag, and it varies. Insert the public key\nin the zone file (domain.com.db) with a line like this at the beginning of\nthe file:\n$INCLUDE /var/named/Kdomain.com.+003+29462.key\n2. Create a key set using the /usr/local/sbin/dnssec-makekeyset -t\n3600 -e now+30 Kdomain.com.+003+29462 command. \nThis command creates a key set with a time-to-live value of 3,600 seconds\n(1 hour) and expiring in 30 days. This command creates a file called\ndomain.com.keyset.\n3. Sign the key set, using the /usr/local/sbin/dnssec-signkey\ndomain.com.keyset Kdomain.com.+003+29462 command. \nThis command creates a signed key file called domain.com.signedkey.\n4. Sign the zone file by using the /usr/local/sbin/dnssec-signzone -o\ndomain.com domain.db command, where domain.db is the name of the\nzone file in the /var/named directory. \nThis command creates a signed zone file called domain.db.signed.\n5. Replace the zone filename for domain.com in the /etc/named.conf file. \nFor example, the /etc/named.conf configuration segment in the follow-\ning code shows the zone declaration for domain.com.\nzone “domain.com” IN {\ntype master;\nfile “domain.db.signed”;\nallow-update { none; };\n};\nChapter 16: DNS Server Security\n413\n" }, { "page_number": 437, "text": "Summary\nEvery Internet request (Web, FTP, email) requires at least one DNS query. Since\nBIND is the most widely used DNS server available today, it is very important that\nyour BIND server is configured well for enhanced security. Checking the DNS con-\nfiguration using Dlint, using transaction signatures for zone transfer, and using\nDNSSEC ensures that your DNS server is as secure as it can be.\n414\nPart IV: Network Service Security\n" }, { "page_number": 438, "text": "Chapter 17\nE-Mail Server Security\nIN THIS CHAPTER\nN Securing open mail relay\nN Using procmail to secure e-mail\nN Securing IMAP\nN Securing POP3\nE-MAIL COMMUNICATION TAKES A leading role in today’s business-to-business (B2B),\nbusiness-to-consumer (B2C), and peer-to-peer (P2P) arenas. Many consider e-mail\nto be the “killer app” of the Internet era. I don’t doubt it for a bit.\nToday, over a billion e-mails are exchanged worldwide every day. Most of these\ne-mails are routed to their destinations by a select few Mail Transport Agents\n(MTAs). Sendmail, an MTA, has been around for many years and is usually the\ndefault MTA for most Unix and Unix-like distributions (which include Red Hat\nLinux).\nUnfortunately, as e-mail use becomes more common, it’s becomes a target for\nabuse and break-ins. The open-source and commercial software industries are\nresponding to a real-world need for secure e-mail services by updating Sendmail.\nAmong these updates are new MTAs especially designed for scalability and secu-\nrity. This chapter discusses e-mail-related security issues, focusing on popular\nMTAs and their roles in potential solutions.\nWhat Is Open Mail Relay?\nThe biggest problem in the world of e-mail is unsolicited mail or spam. The under-\nlying e-mail protocol, Simple Mail Transport Protocol (SMTP), is just that — simple.\nIt is not designed to be secure. Accordingly, the biggest abuse of e-mail service is\ncalled open mail relay.\nAn MTA receives mail for the domain it’s responsible for. 
It’s also able to relay messages to other MTAs responsible for other domains. When you write an e-mail using an e-mail client like Pine, Netscape Messenger, or Outlook, the mail is delivered to your local MTA, which then relays the message to the appropriate MTA of the destination address. So mail sent to kabir@nitec.com from a user called reader@her-isp.com is delivered from the MTA of her-isp.com to the MTA for nitec.com.

Traditionally, each MTA also allows anyone to relay messages to another MTA. For example, only a few years ago you could have configured your e-mail program to point to the mail server for the nitec.com domain and sent a message to your friend at someone@someplace-not-nitec.com. This means you could have simply used my mail server to relay a message to your friend. What’s wrong with that? Nothing — provided the relaying job doesn’t do the following:

- Take system resources from the MTA used
- Send e-mail to people who don’t want to receive it

Unfortunately, legitimate opportunity-seeking individuals and organizations weren’t the only ones to realize the power of e-mail as a mass-delivery medium. Scam artists started spamming people around the globe, using any open-relay-capable MTA they could find. They simply figured that by using the open-relaying capability built into MTAs around the world, they could distribute their junk messages for profit without incurring any cost proportional to the distribution capacity.

As spamming became more prevalent and annoying, the Internet community became worried about the abuse. Some users formed blacklists of known spammers, some filed legal actions, and some resorted to fixing MTAs. Why bother? Here are some good reasons:

- If an open mail-relay attack uses your MTA, your reputation can be tarnished. Many people receiving the spam via your e-mail server automatically assign you as the faulty party and possibly publicize the matter on the Internet or even in public mediums. This can be a public-relations disaster for an organization.

- An open relay attack can cost you money. If the spam attack is large, it may take your own e-mail service down. Your legitimate e-mail messages may get stuck in a queue, because the mail server is busy sending spam.

- An open relay attack can mean legal action against your organization. As legislators pass Internet-related laws, it is likely that open relays will soon become a legal liability.

- You may be blacklisted. People whose e-mail accounts are abused by a spammer using open mail relay can file your mail server information to be included in a blacklist. This can stop you from sending legitimate e-mail to domains that automatically check with blacklists such as the MAPS (Mail Abuse Prevention System) Realtime Blackhole List. (The MAPS RBL authority is quite reasonable about removing a blacklisted server from the list once the server authority demonstrates that the server is no longer an open mail relay.)

- Spammers use tools that search the Internet for open relays automatically.

If you want a secure, relatively hassle-free network, I recommend that you take action to stop open mail relaying via your e-mail servers.

Is My Mail Server Vulnerable?

To find out whether your mail server (or any mail server) is vulnerable to an open mail-relay attack, do the following test.
1. Log on to your Linux system (or any system that has nslookup and Telnet client tools).

2. Run the nslookup -q=mx domain.com command, where domain.com is the domain name for which you want to find the MX records. The MX records in a DNS database point to the mail servers of a domain. In this example, I use a fictitious domain called openrelay-ok.com as the example domain. Note the mail servers to which the MX records of the domain actually point. The domain should have at least one mail server configured for it. In this example, I assume the mail server pointed to by the MX record for the openrelay-ok.com domain is mail.openrelay-ok.com.

3. Run the telnet mailserver-host 25 command, where mailserver-host is a mail server hostname. I ran the telnet mail.openrelay-ok.com 25 command to connect to port 25 (the standard SMTP port) of the tested mail server.

4. Once connected, enter the ehlo localhost command to say (sort of) hello to the mail server. The mail server replies with a greeting message and waits for input.

5. Enter the mail from: you@hotmail.com command to tell the mail server that you want to send mail from a Hotmail address called you@hotmail.com. I recommend using an address outside your domain when replacing you@hotmail.com. The server acknowledges the sender address using a response such as 250 you@hotmail.com... Sender ok. If the server responds with a different message, stating that the sender’s e-mail address isn’t acceptable, make sure you are entering the command correctly. If you still get a negative response, the server isn’t accepting the sender address, which is a sign that the server probably has special MAIL FROM checking; that server probably won’t allow open relay at all. Most likely you get the okay message; if so, continue. At this point, you have instructed the mail server that you want to send an e-mail from you@hotmail.com.

6. Tell the server that you want to send it to you@yahoo.com using the rcpt to: you@yahoo.com command. If the server accepts it by sending a response such as 250 you@yahoo.com... Recipient ok, then you have found an open mail relay. This is because the mail server accepted mail from you@hotmail.com and agreed to send it to you@yahoo.com. Unless you are performing this test on a yahoo.com mail server, the mail server shouldn’t accept mail from just anyone outside the domain to send to someone else outside the domain. It’s an open mail relay; a spammer can use this mail server to send mail to people outside your domain, as shown in Listing 17-1.

Listing 17-1: Using an open mail relay

$ telnet mail.openrelay-ok.com 25
Trying 192.168.1.250...
Connected to mail.openrelay-ok.com.
Escape character is ‘^]’.
220 mail.openrelay-ok.com ESMTP Sendmail Pro-8.9.3/Pro-8.9.3;
Sun, 31 Dec 2000 11:04:33 -0800
EHLO localhost
250-mail.openrelay-ok.com Hello [192.168.1.250], pleased to meet you
250-EXPN
250-VERB
250-8BITMIME
250-SIZE
250-DSN
250-ONEX
250-ETRN
250-XUSR
250 HELP
mail from: you@hotmail.com
250 you@hotmail.com... Sender ok
rcpt to: you@yahoo.com
250 you@yahoo.com... Recipient ok
data
354 Enter mail, end with “.” on a line by itself
THIS MAIL SERVER CAN BE AN OPEN MAIL RELAY!
FUTURE SPAM WILL BE SERVED FROM HERE!
YOU NEED TO BLOCK THIS ASAP!
.
250 LAA15851 Message accepted for delivery

If you perform the preceding test on a mail server that doesn’t allow open mail relay, the output looks very different. Here’s an example Telnet session on port 25 of a mail server called mail.safemta.com, showing how the test does on a protected mail server; you get the following output:

[kabir@209 ~]$ telnet localhost 25
Trying 172.20.15.250...
Connected to mail.safemta.com.
Escape character is ‘^]’.
220 172.20.15.250 ESMTP Sendmail 8.11.0/8.11.0; Sun, 31 Dec 2000 14:07:15 -0500
ehlo localhost
250-172.20.15.250 Hello mail.safemta.com [172.20.15.250], pleased to meet you
250-ENHANCEDSTATUSCODES
250-8BITMIME
250-SIZE
250-DSN
250-ONEX
250-XUSR
250-AUTH DIGEST-MD5 CRAM-MD5
250 HELP
mail from: you@hotmail.com
250 2.1.0 you@hotmail.com... Sender ok
rcpt to: you@yahoo.com
550 5.7.1 you@yahoo.com... Relaying denied

The listing shows the mail server rejecting the recipient address given in the rcpt to: you@yahoo.com command. If your mail server doesn’t reject open mail relay requests, secure it now!

Securing Sendmail

Sendmail is the most widely distributed MTA and currently the default choice for Red Hat Linux. Fortunately, by default the newer versions of Sendmail don’t allow open relay functionality. Even so, I recommend that you download and install the newest version of Sendmail from either an RPM mirror site or directly from the official open-source Sendmail site at www.sendmail.org.

I strongly recommend that you download both the binary RPM version (from an RPM mirror site such as www.rpmfind.net) and the source distribution (from www.sendmail.org). Install the RPM version using the rpm -ivh sendmail-version.rpm command, where sendmail-version.rpm is the latest binary RPM of Sendmail. Installing the binary RPM version ensures that the configuration files and directories are automatically created for you. You can decide not to install the binary RPM and simply compile and install the source distribution from scratch, but the source distribution doesn’t have a fancy installation program, so creating and making configuration files and directories is a lot of work. To avoid too many manual configurations, I simply install the binary distribution and then compile and install the source on top of it.

Also, extract the source-distribution tar ball in a suitable directory, such as /usr/src/redhat/SOURCES. Then follow the instructions in the top-level readme file to compile and install Sendmail. In the following sections I discuss various Sendmail security and control features related to combating spam. To use these features, you must incorporate them in your Sendmail configuration file, found in /etc/mail/sendmail.cf. However, don’t directly modify this file. Instead, modify the whatever.mc file in the cf/cf directory, where whatever.mc is the name of the macro file per the top-level readme file. When you modify the whatever.mc file, make sure you generate the /etc/mail/sendmail.cf file using the m4 /path/to/whatever.mc > /etc/mail/sendmail.cf command. Remember these rules:

- Back up your current /etc/mail/sendmail.cf file first.
- Run this command from the cf/cf subdirectory of your source distribution.

I use the linux-dnsbl.mc configuration as the replacement for the whatever.mc file. The linux-dnsbl.mc file is in the source distribution of Sendmail, but it’s extracted outside the subdirectory that the tar command creates when extracting Sendmail. To use it, copy it to the cf/cf subdirectory in the newly created Sendmail source distribution directory.
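Putting those rules together, a typical regeneration session looks like the following sketch; the version number and macro filename follow the examples in this section, so adjust them for your installation:

cd /usr/src/redhat/SOURCES/sendmail-8.11.0/cf/cf
cp /etc/mail/sendmail.cf /etc/mail/sendmail.cf.bak
m4 linux-dnsbl.mc > /etc/mail/sendmail.cf
/etc/rc.d/init.d/sendmail restart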
In the following sections, when I mention a Sendmail feature using the FEATURE(featurename) syntax, add it to the appropriate whatever.mc file and re-create the /etc/mail/sendmail.cf file. In my example, I add it to /usr/src/redhat/SOURCES/sendmail-8.11.0/cf/cf/linux-dnsbl.mc, which is shown in Listing 17-2.

Listing 17-2: /usr/src/redhat/SOURCES/sendmail-8.11.0/cf/cf/linux-dnsbl.mc

divert(0)dnl
include(`../m4/cf.m4')dnl
VERSIONID(`$Id: Kitchen Sink 2000/08/09 ')
OSTYPE(linux)dnl
define(`confCON_EXPENSIVE', `true')dnl
define(`confDEF_USER_ID',`mail:mail')dnl
define(`confDONT_PROBE_INTERFACES',`true')dnl
define(`confPRIVACY_FLAGS',
`needmailhelo,noexpn,novrfy,restrictmailq,restrictqrun,noetrn,nobodyreturn')dnl
FEATURE(access_db)dnl
FEATURE(always_add_domain)dnl
FEATURE(blacklist_recipients)dnl
FEATURE(delay_checks)dnl
FEATURE(limited_masquerade)dnl
FEATURE(local_procmail)dnl
FEATURE(masquerade_entire_domain)dnl
FEATURE(relay_local_from_popip)dnl
FEATURE(redirect)dnl
FEATURE(relay_entire_domain)dnl
FEATURE(smrsh)dnl
FEATURE(use_ct_file)dnl
FEATURE(use_cw_file)dnl
FEATURE(domaintable)dnl
FEATURE(genericstable)dnl
FEATURE(mailertable)dnl
FEATURE(virtusertable)dnl
FEATURE(`dnsbl',`rbl.maps.vix.com')dnl
FEATURE(`dnsbl',`dul.maps.vix.com')dnl
FEATURE(`dnsbl',`relays.mail-abuse.org')dnl
dnl Remove to use orbs also
dnl FEATURE(`dnsbl',`relays.orbs.org')dnl
MAILER(smtp)dnl
MAILER(procmail)dnl

For example, if I recommend a feature called xyz using the FEATURE(xyz) notation, add the feature to the configuration file; then re-create the /etc/mail/sendmail.cf file using the preceding command.

Controlling mail relay

The latest versions of Sendmail allow you a high degree of control over the mail-relaying feature of your MTA. As mentioned before, mail relaying is disabled by default, but Sendmail offers controlled mail relay for legitimate uses. To enable the configuration controls discussed here, you need the following features in your m4 macro file to generate an appropriate /etc/mail/sendmail.cf.

USING FEATURE(ACCESS_DB)

This feature enables the access-control database, which is stored in the /etc/mail/access file.
The entries in this file have the following syntax:

LHS{tab}RHS

- Left-hand side (LHS) can be any item shown in Table 17-1.
- {tab} is a tab character.
- Right-hand side (RHS) can be any item shown in Table 17-2.

TABLE 17-1: LEFT-HAND SIDE OF AN ENTRY IN /ETC/MAIL/ACCESS

LHS                                   Meaning
user@host                             An e-mail address
IP address                            IP address of a mail server
Hostname or domain                    Hostname of a mail server
From:user@host                        Mail from an e-mail address called user@host
From:hostname or From:domain         Mail from a hostname or domain
To:user@host                          Mail to an e-mail address called user@host
To:hostname or To:domain             Mail to a hostname or domain
Connect:hostname or Connect:domain   Connection from hostname or any host in the given domain

TABLE 17-2: RIGHT-HAND SIDE OF AN ENTRY IN /ETC/MAIL/ACCESS

RHS                                   Meaning
RELAY                                 Enable mail relay for the host or domain named in the LHS
OK                                    Accept mail and ignore other rules
REJECT                                Reject mail
DISCARD                               Silently discard mail; don’t display an error message
ERROR RFC821-CODE text message        Display RFC 821 error code and a text message
ERROR RFC1893-CODE text message       Display RFC 1893 error code and a text message

When you create, modify, or delete an entry in the /etc/mail/access file, remember these rules:

- Run the makemap hash /etc/mail/access < /etc/mail/access command to create the database readable by Sendmail.
- Restart the server using the /etc/rc.d/init.d/sendmail restart command.

REJECTING MAIL FROM AN ENTIRE DOMAIN OR HOST

To reject messages from an entire domain (for example, from spamfactory.com), use the following entry:

spamfactory.com    REJECT

To reject mail from a host called bad.scam-artist.com, use

bad.scam-artist.com    REJECT

The preceding configuration doesn’t reject mail from other hosts.

REJECTING MAIL FROM AN E-MAIL ADDRESS

To reject mail from an e-mail address called newsletter@fromhell.com, use

From:newsletter@fromhell.com    REJECT

You don’t receive mail from the preceding address, but you can still send messages to the address.

RELAYING MAIL TO A DOMAIN OR HOST

To relay messages to a domain called busyrebooting.com, use

To:busyrebooting.com    RELAY

This entry allows relaying messages to the busyrebooting.com domain, but Sendmail doesn’t accept nonlocal messages from it; that is, Sendmail doesn’t relay mail coming from that domain.

RELAYING MAIL FROM A DOMAIN OR HOST

To relay messages from a domain called imbuddies.com, use

Connect:imbuddies.com    RELAY

ACCEPTING SELECTED E-MAIL ADDRESSES FROM A DOMAIN

Sometimes you want to disallow e-mail from all but a few users of a given domain. For example, to ban everyone but the e-mail addresses spy1@myfoe.net and spy2@myfoe.net for the myfoe.net domain, use

From:spy1@myfoe.net    OK
From:spy2@myfoe.net    OK
From:myfoe.net    REJECT

All e-mail except that from the first two addresses is rejected.

USING FEATURE(RELAY_ENTIRE_DOMAIN)

The database for this feature is stored in /etc/mail/relay-domains. Each line in this file lists one Internet domain. When this feature is set, it allows relaying for all hosts in a listed domain.
For example, if your /etc/mail/relay-domains file contains the following line, mail relaying to and from kabirsfriends.com is allowed:

kabirsfriends.com

USING FEATURE(RELAY_HOSTS_ONLY)

If you don’t want to enable open mail relay to and from an entire domain, you can use this feature to specify each host for which your server is allowed to act as a mail relay.

Enabling MAPS Realtime Blackhole List (RBL) support

MAPS RBL uses a set of modified DNS servers for access to a blacklist of alleged spammers, who are usually reported by spam victims. The simplest way to start using the RBL to protect your mail relay is arranging for it to make a DNS query (of a stylized name) whenever you receive an incoming mail message from a host whose spam status you don’t know.

When a remote mail server (say, 192.168.1.1) connects to your mail server, Sendmail checks for the existence of an address record (A) in the MAPS DNS server using a MAPS RBL rule set. Sendmail issues a DNS request for 1.1.168.192.blackholes.mail-abuse.org. If an address record (A) is found for 1.1.168.192.blackholes.mail-abuse.org, it is 127.0.0.2, which means that 192.168.1.1 is a blacklisted mail server. Your Sendmail server can then reject it.

To use the RBL, add the following features to your configuration (mc) file, regenerate /etc/mail/sendmail.cf, and restart the server.

FEATURE(`dnsbl',`rbl.maps.vix.com')dnl
FEATURE(`dnsbl',`dul.maps.vix.com')dnl
FEATURE(`dnsbl',`relays.mail-abuse.org')dnl

To test your RBL configuration, run the /usr/sbin/sendmail -bt command. An interactive Sendmail test session is started, as shown in the following listing:

ADDRESS TEST MODE (ruleset 3 NOT automatically invoked)
Enter <ruleset> <address>
To test your RBL configuration, run the /usr/sbin/sendmail -bt command. This starts an interactive Sendmail test session, as shown in the following listing:

ADDRESS TEST MODE (ruleset 3 NOT automatically invoked)
Enter <ruleset> <address>
> .D{client_addr}127.0.0.1
> Basic_check_relay <>
Basic_check_rela   input: < >
Basic_check_rela returns: OKSOFAR

Enter .D{client_addr}127.0.0.1, followed by Basic_check_relay <>, to check whether the address 127.0.0.1 is blacklisted. Because 127.0.0.1 is a special address (localhost), it isn't blacklisted, as the OKSOFAR message indicates.
Now test a blacklisted address, 127.0.0.2. Enter the same sequence of input, as shown in the following listing:

> .D{client_addr}127.0.0.2
> Basic_check_relay <>
Basic_check_rela   input: < >
Basic_check_rela returns: $# error $@ 5.7.1 $: "550 Mail from " 127.0.0.2 " refused by blackhole site rbl.maps.vix.com"

Here you can see that the address is blacklisted. Press Ctrl+Z to put the test session in the background, and then enter kill %1 to terminate it.
The current version of Sendmail supports the Simple Authentication and Security Layer (SASL), which lets it authenticate the user accounts that connect to it. Because a user must authenticate, spammers (who aren't likely to have user accounts on your system) can't use your server as an open mail relay. (This feature is not yet widely used.)
Before you can use SASL-based authentication, however, you must install the Cyrus SASL library package, as shown in the next section.

COMPILING AND INSTALLING CYRUS SASL
Download the source distribution (cyrus-sasl-1.5.24.tar.gz or the latest version) from ftp://ftp.andrew.cmu.edu/pub/cyrus-mail. When following these instructions, make sure you replace the SASL version number 1.5.24 with the version number you download. To compile and install the package, do the following:

1. Extract the source into /usr/src/redhat/SOURCES using the tar xvzf cyrus-sasl-1.5.24.tar.gz command. This creates a subdirectory called cyrus-sasl-1.5.24. Change directory to cyrus-sasl-1.5.24.
2. Run the ./configure --prefix=/usr command to configure the SASL source tree.
3. Run make and then make install to build and install the library.

If you change directory to /usr/lib and run the ls -l command, you see the installed SASL library files:

-rwxr-xr-x 1 root root    685 Dec 31 04:45 libsasl.la
lrwxrwxrwx 1 root root     16 Dec 31 04:45 libsasl.so -> libsasl.so.7.1.8
lrwxrwxrwx 1 root root     16 Dec 31 04:45 libsasl.so.7 -> libsasl.so.7.1.8
-rwxr-xr-x 1 root root 173755 Dec 31 04:45 libsasl.so.7.1.8
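If the Sendmail build described next complains that it can't find the SASL library, the shared-library cache may simply be out of date. This is standard ldconfig usage, nothing specific to this setup:

/sbin/ldconfig                     # refresh the shared-library cache
/sbin/ldconfig -p | grep libsasl   # confirm libsasl.so is registered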
Now you can compile Sendmail with SASL support.

COMPILING AND INSTALLING SENDMAIL WITH SASL SUPPORT
If you already have a working Sendmail installation, back up all the necessary files using the following commands:

cp -r /etc/mail /etc/mail.bak
cp /usr/sbin/sendmail /usr/sbin/sendmail.bak
cp /usr/sbin/makemap /usr/sbin/makemap.bak
cp /usr/bin/newaliases /usr/bin/newaliases.bak

Download the latest Sendmail source from www.sendmail.org. I downloaded sendmail.8.11.0.tar.gz, the latest source as of this writing. Make sure you replace the version information as appropriate when completing the following instructions.

1. Extract the Sendmail source distribution using the tar xvzf sendmail.8.11.0.tar.gz command. This creates a subdirectory called sendmail-8.11.0. Change to this subdirectory.
2. Run the following commands to extract and install the Sendmail configuration files in the appropriate directories:

mkdir -p /etc/mail
cp etc.mail.tar.gz /etc/mail
cp site.config.m4 sendmail-8.11.0/devtools/Site/
cp sendmail.init /etc/rc.d/init.d/sendmail

3. Follow the instructions in the INSTALL file and build Sendmail as instructed.
4. Add the following lines to the /usr/src/redhat/SOURCES/sendmail-8.11.0/devtools/Site/site.config.m4 file:

APPENDDEF(`confENVDEF', `-DSASL')
APPENDDEF(`conf_sendmail_LIBS', `-lsasl')
APPENDDEF(`confLIBDIRS', `-L/usr/local/lib/sasl')
APPENDDEF(`confINCDIRS', `-I/usr/local/include')

5. The resulting sendmail-8.11.0/devtools/Site/site.config.m4 file is shown in Listing 17-3.

Listing 17-3: The site.config.m4 file
define(`confDEPEND_TYPE', `CC-M')
define(`confEBINDIR', `/usr/sbin')
define(`confFORCE_RMAIL')
define(`confLIBS', `-ldl')
define(`confLDOPTS_SO', `-shared')
define(`confMANROOT', `/usr/man/man')
define(`confMAPDEF', `-DNEWDB -DMAP_REGEX -DNIS -DTCP_WRAPPERS')
define(`confMTLDOPTS', `-lpthread')
define(`confOPTIMIZE', `${RPM_OPT_FLAGS}')
define(`confSTDIR', `/var/log')
APPENDDEF(`confLIBSEARCH', `crypt nsl wrap')
APPENDDEF(`confENVDEF', `-DSASL')
APPENDDEF(`conf_sendmail_LIBS', `-lsasl')
APPENDDEF(`confLIBDIRS', `-L/usr/local/lib/sasl')
APPENDDEF(`confINCDIRS', `-I/usr/local/include')

6. Change directory to /usr/src/redhat/SOURCES/sendmail-8.11.0/sendmail and run the sh Build -c command to rebuild Sendmail.
7. Run the sh Build install command to install the new Sendmail binaries.
8. Run the /usr/sbin/sendmail -d0.1 -bv root command to check whether SASL support is built into your new Sendmail configuration. You should see output like the following:

Version 8.11.0
Compiled with: MAP_REGEX LOG MATCHGECOS MIME7TO8 MIME8TO7
NAMED_BIND NETINET NETUNIX NEWDB NIS QUEUE SASL SCANF SMTP
USERDB XDEBUG
============ SYSTEM IDENTITY (after readcf) ============
(short domain name) $w = 172
(canonical domain name) $j = 172.20.15.1
(subdomain name) $m = 20.15.1
(node name) $k = 172.20.15.1

SASL appears on the Compiled with: line of the preceding output.
9. Run the /usr/sbin/saslpasswd username command to create the /etc/sasldb.db password file.
10. Run the /etc/rc.d/init.d/sendmail start command to start the Sendmail daemon.
11. Run the telnet localhost 25 command to connect to your newly compiled Sendmail service. When connected, enter the EHLO localhost command. This displays output like the following:

220 209.63.178.15 ESMTP Sendmail 8.11.0/8.11.0; Sun, 31 Dec 2000 05:37:58 -0500
EHLO localhost
250-209.63.178.15 Hello root@localhost, pleased to meet you
250-ENHANCEDSTATUSCODES
250-8BITMIME
250-SIZE
250-DSN
250-ONEX
250-XUSR
250-AUTH DIGEST-MD5 CRAM-MD5
250 HELP

As shown, the newly built Sendmail supports the SMTP AUTH command and offers DIGEST-MD5 and CRAM-MD5 as authentication mechanisms. SMTP AUTH allows relaying for senders who successfully authenticate themselves. SMTP clients such as Netscape Messenger and Microsoft Outlook can use SMTP authentication via SASL.
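Before moving on, you can confirm that the SASL account database created in step 9 is usable. This is a hedged sketch: sasldblistusers is a utility shipped with the Cyrus SASL 1.5.x distribution (installed under the --prefix given earlier), and kabir is just an example username:

/usr/sbin/saslpasswd kabir    # prompts twice for the account's password
/usr/sbin/sasldblistusers     # lists the accounts stored in /etc/sasldb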
Sanitizing incoming e-mail using procmail
Most e-mail security incidents occur because users can attach files of any type to their messages. Attachments and embedded scripts are the primary vehicles of such attacks as e-mail viruses and malicious macros. A filtering tool called procmail can help.
procmail can scan the headers and body of each message for patterns based on custom rules, and it can take action when a rule matches. Here I show you how to sanitize incoming e-mail using a procmail-based rule set.
You can download the procmail rule set (procmail-sanitizer.tar.gz) from www.impsec.org/email-tools/procmail-security.html.
Make sure you have the following lines in your m4 macro file (they generate the /etc/mail/sendmail.cf file):

FEATURE(local_procmail)dnl
MAILER(procmail)dnl

For reliable operation, take the following two measures:

- Install procmail from an RPM distribution, either from your Red Hat CD-ROM or from an RPM mirror site such as http://www.rpmfind.net.
- Install the latest version of Perl on your system.

ESTABLISHING THE SANITIZER
Here's how to set up the rule set for local delivery:

1. su to root.
2. Run the mkdir /etc/procmail command to create a subdirectory in /etc.
3. Run the chown -R root:root /etc/procmail command to change the ownership of the directory to root.
4. Copy the procmail-sanitizer.tar.gz file to /etc/procmail and extract it using the tar xvzf procmail-sanitizer.tar.gz command.
5. Create an /etc/procmailrc file as shown in Listing 17-4.

Listing 17-4: /etc/procmailrc
LOGFILE=$HOME/procmail.log
PATH="/usr/bin:$PATH:/usr/local/bin"
SHELL=/bin/sh
POISONED_EXECUTABLES=/etc/procmail/poisoned
SECURITY_NOTIFY="postmaster, security-dude"
SECURITY_NOTIFY_VERBOSE="virus-checker"
SECURITY_NOTIFY_SENDER=/etc/procmail/local-email-security-policy.txt
SECRET="CHANGE THIS"
# this file must already exist, with
# proper permissions (rw--w--w-):
SECURITY_QUARANTINE=/var/spool/mail/quarantine
POISONED_SCORE=25
SCORE_HISTORY=/var/log/macro-scanner-scores
# Finished setting up, now run the sanitizer...
INCLUDERC=/etc/procmail/html-trap.procmail
# Reset some things to avoid leaking info to
# the users...
POISONED_EXECUTABLES=
SECURITY_NOTIFY=
SECURITY_NOTIFY_VERBOSE=
SECURITY_NOTIFY_SENDER=
SECURITY_QUARANTINE=
SECRET=

6. Run the touch /var/spool/mail/quarantine command to create the file that stores poisoned messages.
7. Run the touch /var/log/macro-scanner-scores command to create the file that stores historical macro-scanner scores.
8. Change the permissions of the /var/spool/mail/quarantine file using the chmod 622 /var/spool/mail/quarantine command.
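At this point you can verify that the quarantine file carries exactly the permissions the sanitizer expects (rw--w--w-, matching the comment in Listing 17-4 and the chmod 622 in step 8):

ls -l /var/spool/mail/quarantine
# expected output (owner, date, and size will vary):
# -rw--w--w-   1 root   root   0 Dec 31 05:10 /var/spool/mail/quarantine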
The /etc/procmailrc file sets a number of control (environment) variables that govern the sanitizer and then runs the sanitizer itself via the INCLUDERC setting. Finally, it resets those variables so that a user can't view their values by running the setenv or set commands, which is a safer arrangement. These control variables are:

- LOGFILE
Specifies the fully qualified path of the log file. The default value, $HOME/procmail.log, lets the sanitizer create a log file called procmail.log in each user's home directory.

- POISONED_EXECUTABLES
Specifies the fully qualified path of a file that lists the filenames and/or file extensions (with wildcards) that are considered poisoned when sent as attachments. The default file contains a list of widely known poisoned attachment filenames. When a new poisoned attachment filename is published by CERT (or another security authority), add the name to this file. The default value is /etc/procmail/poisoned.

- SECURITY_NOTIFY
Specifies a comma-separated list of e-mail addresses of the people to notify when the sanitizer traps a poisoned e-mail. Only the header part of the trapped message goes to this list. The default values are postmaster, security-dude.

- SECURITY_NOTIFY_VERBOSE
Also specifies a comma-separated list of e-mail addresses to notify when a poisoned e-mail is trapped. In contrast to SECURITY_NOTIFY, the trapped e-mail goes to this list in its entirety. The default value is virus-checker.

- SECURITY_NOTIFY_SENDER
Specifies a file whose contents are e-mailed to the sender of a poisoned message. If the variable points to a nonexistent file, a built-in message is sent instead. For this variable to take effect, SECURITY_NOTIFY must be set to at least one e-mail address. The default value is /etc/procmail/local-email-security-policy.txt.

- SECURITY_NOTIFY_SENDER_POSTMASTER
When set to a value such as YES, an e-mail also goes to the violator's postmaster address.

- SECURITY_NOTIFY_RECIPIENT
When set to a filename, the intended recipient receives the contents of the file as a notice that an offending e-mail has been quarantined.

- SECRET
Specifies a random set of characters used internally to make it hard for a vandal to bypass the sanitizer rule set. Change the default to something in the 10- to 20-character range. The default value is CHANGE THIS.

- SECURITY_QUARANTINE
Specifies the path of the file that quarantines poisoned attachments. The default value is /var/spool/mail/quarantine.

- SECURITY_QUARANTINE_OPTIONAL
When set to YES, a poisoned message is still sent to the intended recipient; when set to NO, the message is bounced.

- POISONED_SCORE
Specifies the score at which the sanitizer considers an embedded Microsoft Office macro (found in such applications as Word and Excel) poisoned. The sanitizer examines each embedded macro and tries to match macro fragments against known poisoned macro-fragment code; as it finds questionable fragments, it keeps a running score. When the score reaches the value specified by this variable, the macro is considered dangerous (that is, poisoned). The default value is 25.

- MANGLE_EXTENSIONS
Contains the list of filename extensions to mangle and possibly poison. The built-in list of extensions should be sufficient for most installations.

- DISABLE_MACRO_CHECK
Disables scanning of Microsoft Office file attachments for dangerous macros. The sanitizer contains a rudimentary scanner that checks Microsoft Office document attachments (such as Word documents, Excel spreadsheets, and PowerPoint presentations) for embedded Visual Basic for Applications (VBA) macros that appear to modify security settings, change the Registry, or write macros to the standard document template. Documents are scanned for macros even if their extensions don't appear in the MANGLE_EXTENSIONS list. This means you can remove the .doc and .xls extensions from the MANGLE_EXTENSIONS list to make your users happy and still be protected by the scanner against macro-based attacks.
- SCORE_HISTORY
If you want to keep a history of macro scores for profiling, to see whether your POISONED_SCORE is a reasonable value, set SCORE_HISTORY to the name of a file. The score of each scanned document is saved to that file. The default value is /var/log/macro-scanner-scores.

- SCORE_ONLY
When this variable is set to YES, the sanitizer scores macros but takes no action when it detects one.

- SECURITY_STRIP_MSTNEF
Microsoft Outlook and Exchange support sending e-mail in a format called Outlook Rich Text. Among other things, this bundles all file attachments, as well as other data, into a proprietary Microsoft-format attachment, usually named winmail.dat. The format is called MS-TNEF and generally isn't understood by non-Microsoft mail programs. MS-TNEF attachments can't be scanned or sanitized and may contain hazardous content that the sanitizer can't detect; Microsoft itself recommends that MS-TNEF attachments be used only within your intranet, not on the Internet. If you set SECURITY_STRIP_MSTNEF to any value, these attachments are stripped from the message, which is then delivered to the intended recipient with a notice that this happened. The message isn't poisoned.

- DEFANG_WEBBUGS
Disables inline images. Web bugs are small images (typically only one pixel in size) used to track an e-mail message: identifying information is embedded in the image URL, and when an HTML-enabled mail program attempts to display the message, the location of the message can be tracked and logged. If you consider this a violation of your privacy, set DEFANG_WEBBUGS to any value, and the sanitizer mangles the image tag. You can still retrieve the URL from the message and decide whether to view the image.

- SECURITY_TRUST_STYLE_TAGS
Disables